WO2013132242A1 - Touchless user interfaces - Google Patents

Touchless user interfaces

Info

Publication number
WO2013132242A1
Authority
WO
WIPO (PCT)
Prior art keywords
movement
predetermined
display screen
screen
user
Application number
PCT/GB2013/050536
Other languages
French (fr)
Inventor
David Vagenes
Erik FORSSTRÖM
Tobias Gulden Dahl
Cato SYVERSRUD
Tom KAVLI
Original Assignee
Elliptic Laboratories As
Samuels, Adrian
Application filed by Elliptic Laboratories As, Samuels, Adrian filed Critical Elliptic Laboratories As
Publication of WO2013132242A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/043Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using propagating acoustic waves
    • G06F3/0436Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using propagating acoustic waves in which generating transducers and detecting transducers are attached to a single acoustic waves transmission substrate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1615Constructional details or arrangements for portable computers with several enclosures having relative motions, each enclosure supporting at least one I/O or computing function
    • G06F1/1616Constructional details or arrangements for portable computers with several enclosures having relative motions, each enclosure supporting at least one I/O or computing function with folding flat displays, e.g. laptop computers or notebooks having a clamshell configuration, with body parts pivoting to an open position around an axis parallel to the plane they define in closed position
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1626Constructional details or arrangements for portable computers with a single-body enclosure integrating a flat display, e.g. Personal Digital Assistants [PDAs]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1637Details related to the display arrangement, including those related to the mounting of the display in the housing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/169Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated pointing device, e.g. trackball in the palm rest area, mini-joystick integrated between keyboard keys, touch pads or touch stripes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0483Interaction with page-structured environments, e.g. book metaphor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485Scrolling or panning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2200/00Indexing scheme relating to G06F1/04 - G06F1/32
    • G06F2200/16Indexing scheme relating to G06F1/16 - G06F1/18
    • G06F2200/163Indexing scheme relating to constructional details of the computer
    • G06F2200/1633Protecting arrangement for the entire housing of the computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04802 3D-info-object: information is displayed on the internal or external surface of a three dimensional manipulable object, e.g. on the faces of a cube that can be rotated by the user

Definitions

  • This invention relates to the use of reflected signals to estimate the movement of an input object such as a hand, for example to provide an input to an electronic device. It relates particularly, but not exclusively, to the use of reflected ultrasonic signals for estimating the motion of a hand in a touchless human interface to an electronic device.
  • There have been a number of proposals stretching back many years to use ultrasound signals as a way of enabling electronic devices to be controlled through movements of the human hand without needing to touch the device.
  • ultrasonic systems represent a kind of distributed sensing technology.
  • the components of the sensing system, which can be transmitters, receivers or transceivers, are in most cases mounted at a distance relative to one another (although there are some examples of the opposite, such as when using densely spaced elements forming arrays).
  • These are easier to liken to optical systems such as cameras, and their performance can even be explained in similar terms as with other apertures.
  • the present invention provides an electronic device comprising at least one ultrasonic transmitter and at least one ultrasonic receiver, the device being arranged to transmit ultrasonic signals from the transmitter and to receive ultrasonic signals reflected from an input object, wherein the device is adapted to give a predetermined response only to a predetermined movement or set of predetermined movements of the input object wherein said movement(s) is/are defined at least partially by at least one of a predetermined path through which the object moves and a predetermined spatial zone in which the movement takes place.
  • the transmitters and receivers are preferably mounted adjacent to a display screen, for example within five centimetres, but preferably within one or two centimetres of the edge of the display screen. These may be mounted beside the screen, on or level with the surface of the screen, or alternatively may be mounted behind the screen.
  • the more that can be known a priori about the motion one is looking for, the more accurately such movement can be detected, even in the presence of substantial levels of noise, unwanted motion and interference.
  • if movement is restricted to belong to a particular, finite set of possible motions, or limited to be detected within a small set of motion spaces such as lines, graphs or curves, the potential for accurate detection is vastly improved. Such compromises are particularly valuable when the number of sensors is low.
  • the restricted set of movements can be application-specific, so that the supported modes and the resulting robustness and accuracy improvements can be designed specifically for an application. Picking the right interaction mode(s) for a specific device or application will always involve compromises and tradeoffs. To be able to strike the right one, it is necessary to have a sufficient number of basis ideas from which more complete concepts can grow.
  • the predetermined path of movement may include the angle of the movement relative to a reference plane or reference axis defined relative to the device, and/or the direction of the movement along the trajectory (i.e. at which end of the path the movement starts and at which it finishes). Additionally or alternatively it may include the nonlinear shape of the path. The angle between the direction of movement and a reference plane or axis (defined relative to the device), and the direction of movement along the trajectory, are therefore possible parameters of the predetermined path of movement of the input object.
  • the device displays an on-screen object which is indicative of the interactions available - e.g. through its shape and/or position.
  • the on-screen object is arranged to move in accordance with movement of the input object in accordance with the predetermined set of movements.
  • the predetermined response may comprise the execution of a function of the device, for example a change in what is displayed on the screen.
  • the present invention further comprises a method of operating an electronic device comprising an ultrasonic transmitter, an ultrasonic receiver and a display screen, the method comprising:
  • predetermined spatial zone in which the movement takes place, said predetermined spatial zone being indicated by a graphical object on said display screen.
  • the invention also extends to computer software for operating an electronic device comprising an ultrasonic transmitter, an ultrasonic receiver and a display screen, the software comprising:
  • the software comprising logic for displaying said graphical object.
  • the present invention provides an electronic device comprising at least one ultrasonic transmitter and at least one ultrasonic receiver, the device being arranged to transmit ultrasonic signals from the transmitter and to receive ultrasonic signals reflected from an input object, wherein the device is adapted to give a predetermined response only to a predetermined movement or set of predetermined movements of the input object wherein said movement(s) is/are defined at least partially by at least one of a predetermined path through which the object moves and a predetermined spatial zone in which the movement takes place.
  • Fig. 1 is a perspective view of a laptop computer embodying the invention
  • Fig. 2 contains perspective views of a tablet computer and stand embodying the invention
  • Fig. 3 is a perspective view of a tablet computer and sleeve embodying the invention.
  • Fig. 4 contains figurative drawings of a user interacting with a device according to an embodiment of the invention
  • Fig. 5 contains figurative drawings of a user interacting with a device according to an embodiment of the invention
  • Fig. 6 contains figurative drawings of a user interacting with a device according to an embodiment of the invention
  • Fig. 7 contains figurative drawings of a user interacting with a device according to an embodiment of the invention.
  • Fig. 8 is a perspective view of a laptop computer embodying the invention.
  • Fig. 9 contains figurative drawings of a user interacting with a device according to an embodiment of the invention.
  • Fig. 10 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention
  • Fig. 11 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention
  • Fig. 12 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention
  • Fig. 13 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention
  • Fig. 14 is a figurative drawing of a user interacting with an object on a display screen according to an embodiment of the invention.
  • Fig. 15 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention
  • Fig. 16 is a front view of a display screen of a device embodying the invention
  • Fig. 17 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention
  • Fig. 18 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention
  • Fig. 19 contains figurative drawings of a device embodying the invention.
  • Fig. 20 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention
  • Fig. 21 contains figurative drawings of a user interacting with a display screen and a number of virtual screens according to an embodiment of the invention
  • Fig. 22 is a figurative drawing of a user interacting with an object on a display screen according to an embodiment of the invention.
  • Fig. 23 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention.
  • Fig. 1 shows the idea of a limited gesture space in a touchless-enabled electronic device, namely a laptop computer 2. Although not shown, this has a number of ultrasonic transmitters and receivers placed around the screen 4. These may be dedicated transducers or may also be ordinary audio loudspeakers and microphones.
  • the number and placement of the transducers may be chosen to suit the application, e.g. the cost and power constraints, the required accuracy and reliability etc.
  • a virtual "cross slider” 6 At the bottom left of the screen 4 is a virtual "cross slider" 6.
  • the cross slider 6 is tuned to detect movements along the lines that are visible, i.e. a localized x- and y-axis.
  • the four virtual dots 8 in the top right corner could be buttons that will be 'pushed', for example, when a user holds his hand/finger in that zone/area for a given time.
  • although an ultrasonic tracking system can in theory track a finger or a hand in 3D, this is a complicated problem requiring a lot of resources.
  • this problem is simplified.
  • Figs. 2A, 2B and 3 show some possible physical embodiments of devices in accordance with the invention.
  • a tablet computer 10 is associated with a peripheral device such as a stand 12 or a closely-fitting sleeve 14.
  • the stand 12 would have a number of ultrasonic transmitter-receiver pairs, e.g. eight or more pairs.
  • the tablet computer 10 may have a physical or wireless connection to the stand 12 to allow it to use the transmitters and receivers to operate a touchless interface.
  • the sleeve 14 shown in Fig. 3 could have an internal docking connector.
  • the pairs of transmitters 16 and receivers 18 can be seen in this embodiment around the edge of the sleeve. Of course other peripherals can be used or the necessary transducers can be provided integrated with the device at manufacture.
  • Interaction options for touch based devices can be extended using touchless technology. It is perhaps best not to replace all functionality by touchless mode counterparts. New touchless sensing options can add functionality rather than replacing it, creating a richer user experience.
  • a 'boundary breaking interface' as illustrated in Figs. 4A, 4B and 4C.
  • Touchless interaction is enabled on the boundaries of the screen 20.
  • the boundaries are located at the edges of the screen 20, i.e. left, right, top and bottom edges (there could also be a boundary in the depth dimension).
  • a working example would be a 'hidden' menu, which is available at one side of the screen, say the left side. To invoke the menu, the user performs a touchless gesture moving his hand from left to right, from the outside of the screen 20 and over to the left part of the screen, as shown in Figure 4B.
  • Figure 5 shows a sequence representing pulling a hidden menu 22 onto a screen by a rightwards movement at the left of this screen 20 until the menu 22' is visible.
  • a touchless gesture moving from outside the right hand side of the screen in a leftwards motion could invoke a different menu.
  • a similar action moving in towards the screen is shown in Fig. 4C.
  • the localized gestures were performed crossing the edges of the screen. In general, such gestures can be performed in other areas; in front of or in the vicinity of an input device (or even on the back).
  • Navigating a file system e.g. directory tree structure
  • WIMP (window, icon, menu, and pointing device)
  • the traditional navigation and on-screen representations are not the only options anymore.
  • We can imagine navigating a file system or a menu more like a visual tree search process. This is shown in Figs. 7a-h.
  • a file system is a tree of nodes connected with branches.
  • a node could either be a folder, containing other folders and files, or a file (image, video etc).
  • the user starts off at a given node 32 (say, home or My Documents) - Fig.7a.
  • Subfolders and files are shown on a display screen as circles 34 connected by straight lines 36 to a circle 32 representing the starting node (Fig. 7b).
  • the starting node 32 is bigger than the other nodes, and is located at the middle of the screen.
  • the user can move his hand in the direction of one of the branches (Fig. 7c) until his hand/finger is on the top of a new node 34 (Fig. 7d).
  • By hovering over that node 34 for a given time the node is selected (Fig. 7e). Once selected, the circle 34 moves towards the centre of the screen (Fig. 7f) and the circles 38 representing its folders and files (nodes) are expanded and shown on the screen. The previously visited node 32 is made smaller, so as to show that it is no longer selected or active.
  • the tree is provided with terminal branches 38', the purpose of which is to complete the navigation (e.g. to select a file) or complete an action. For example if the nodes represent folders of music files, selecting the terminal branch 38' could begin to play that file.
  • branches could be grown on the tree while interaction happens.
  • That branch may glow. This gives the user feedback, and tells him that the system is recognizing his movement.
  • Several branches can be made to glow at the same time indicating that the system is uncertain which branch the user is following. This helps the user to be clearer about what he or she wants to do.
  • the classic cursor-tracking approach is circumvented.
  • a particular advantage of ultrasonic gesture recognition systems is their ability to sense motion within a wide sensing zone; e.g. a full hemisphere in front of a display screen. This can be termed 'dome proximity' and is a direct consequence of the propagation properties of ultrasound in air. Using a transmitter with small diameter relative to the ultrasonic wavelength will facilitate a broad ultrasonic wave front, propagating out from the transmitting device.
  • One application is to let a device sense the presence of the user or his or her hand or other body part, no matter what the direction of approach is.
  • Fig. 8 shows that a sensing space 42 can cover the whole screen area of a laptop and slightly beyond. This means that the ultrasonic system is capable of detecting a hand anywhere in the zone 42.
  • Figure 9 shows one example of a use of dome proximity. In Figure 9a a user's hand is approaching the screen. Once inside the proximity dome the system reacts to this instantaneously by starting a predefined animation that zooms the content out. This zoom level is kept while the user interacts with the system. Once the user removes his or her hand the system responds to this by returning to its initial state (zoom-level wise).
  • the resolution of an ultrasonic recognition system can be limited by a number of factors. These include the number of sensors used, the level of background noise or interference, or the update rate of the processing unit. In such situations, it is valuable to limit the number of possible movements that are being tracked, by reducing the tracking resolution from a full 2D or 3D tracked cursor to a more limited set of movements.
  • GUI (graphical user interface)
  • a floating 3D button in the form of a cube 44, as shown in Fig. 10, can indicate to the user that a more complex interaction is possible.
  • Moving a hand or finger over the object 44 in either a left, right, up or down motion causes the computer to rotate the cube 44 in the same direction, revealing another face of the cube.
  • This could in turn be linked to specific functionalities. Examples would be the opening up of more menus, displaying of more cubes, starting or stopping an application or other commonly used or important features. Lines and rows of buttons could be aligned, so that larger and smaller sweep motions could turn multiple cubes around, enabling even richer interaction experiences.
  • giving the object the shape of a cube has several advantages. First, it tells the user by its very shape what kinds of spins, turns or presses are naturally enabled or 'asked for' by the system. Next, since the user already knows what motions are being 'monitored', the system itself (i.e. on an algorithm or system level) can reduce the sensor resolution, or constrain a gesture-recognition engine, to only look for a few predefined movements or gestures. This is simpler and more robust than full 2D or 3D coordinate input or cursor motion.
  • the cube has limited size and is operated from a fairly close distance to the screen - e.g. 1, 3, 5 or 10 cm away from the screen, so the need for feedback effects other than the turning of the cube itself is limited, and could perhaps be limited to indicating relative levels of presence or proximity to the cube. This could be achieved by changing its size or colour. For example, when the user is not interacting with the system the cube could be semi-transparent and smaller than when interacting with it. When the user comes within a predefined area close to the cube on the screen it becomes bigger and opaque, indicating that this GUI element is now ready for interaction.
  • Figure 11 shows another embodiment where the shape of an on-screen object 46 is used to indicate the interactions allowed.
  • a cylindrical menu is shown. Given this type of menu, the user will be given information as to how he/she should interact with this menu, i.e. with up and downwards motions.
  • a wheel representation of the content can be shown on screen, as in Figure 12.
  • Item 1 is the item which is selected when the user moves his hand into the interaction zone.
  • Item 2 is then selected after the user has performed the gesture/motion.
  • in Figure 12c the user performs a leftwards motion, selecting item 10, the last item.
  • the on-screen wheel can be generalized into a cylindrical tree structure where different folders can be represented as different levels on the tree/branch. Up and down motions are used to move up or down one level of the tree as shown in Figs. 13a-d.
  • the cylindrical tree structure can be used for browsing through a folder structure.
  • the top level may be the highest level in the folder structure, e.g. Home, My Documents etc.
  • the user is at the third level in the folder hierarchy.
  • By performing an upwards gesture/motion the user "goes into" the folder represented by Item 1.
  • the user browses the contents of Item 1 with left and right gestures. Having not found what he was looking for the user moves up two levels with two consecutive downwards motions, as shown in Figure 13d.
  • Each of the items in Figure 13 could either be folders or files, mp3, video, pictures etc.
  • the file is selected by an upwards motion.
  • this is done by a downwards motion.
  • a wheel menu 48 can also be displayed in the corner or on the side of the screen 50. Only a piece of the wheel 48 might be displayed, preferably half or less.
  • Figure 14 shows the case where a wheel 48 is on the right side of the screen and each 'tooth' 52 on the wheel has an associated functionality. The user can perform up and down motions in the vicinity of the right of the screen 48 to cause a change to which menu item 52 is highlighted. A left movement lets the user select the highlighted menu icon 52'. A rightwards motion can cause the menu to be hidden, taking up less space on the screen.
  • A further embodiment, shown in Figs. 15a and 15b, is referred to as a string plucking interface. It offers a simple yet powerful method for selecting content in a touchless application. The user browses content with left and right movements while a small downwards and/or an upwards movement is performed to select a particular item. In the variant shown in Fig. 15a, there is one highlighted item. This could be the item that is in the centre of the screen. Left and right motions are used to highlight different items. The highlighted item is selected with an up/down motion as previously mentioned.
  • the screen is divided into multiple zones.
  • the middle or main part of the screen is the gesture or motion zone 56 where the user can perform gestures to browse content.
  • the left and right edges of the screen 58 can serve as 'fast scrolling zones' (the spaces beyond the edges of the screen can also be used for this purpose).
  • rapid scrolling of the content shown on the screen will occur.
  • the rate of scrolling may be faster than occurs when the user moves his hand in regular gestures in the middle part of the screen 56.
  • the scrolling can also speed up over time so that the user can efficiently browse a large collection of files (a sketch combining this acceleration with proximity control follows this list).
  • the scrolled edges may be located at each edge of the screen, i.e. left, right, top and bottom.
  • This interaction mode can also be combined with touch or touch-based swipe interaction as well as mouse or touchpad operations. Similar to a keyboard having both 'Up', 'Down' and 'Page Up', 'Page Down', such combined modes can provide both faster and more accurate location of content.
  • the speed of the scrolling can also be controlled with hand position over the scroll "buttons" 58.
  • the user's hand is hovering over the scroll button 58 on the right hand side of the screen, moving content from left to right.
  • speed can be increased.
  • Touchless interaction will sometimes be implemented with just a few gestures (e.g. left, right, up and down) rather than detailed pointing. In some situations it may be beneficial to define the gestures to be relatively large movements, covering a large portion of the screen. The challenge then becomes one of allowing a user to efficiently browse a lot of content using only four gestures.
  • One approach is to group the content on the screen.
  • An example of this is 'string folding' which is shown in Figs. 17a-d.
  • An upwards movement might 'fold' the content on the screen, making more content visible at the same time, while a downwards movement can be used to 'unfold' the content, initializing a more detailed view of the content.
  • a left or right movement can be used to scroll content left or right, making new content visible to the user.
  • Figure 17 shows the concept of string folding.
  • the GUI shows content where one item occupies a large portion of the screen.
  • the user can browse this content with left- and rightwards motions, as shown in Figure 17a.
  • the user can also execute a downward motion, which will "fold" the string of items, allowing more items to be shown.
  • the individual items are also shrunk to fit more items on the screen.
  • This transition is shown in Figure 17b.
  • This folding can be done more than once, making even more content visible on the screen.
  • Once the user has zoomed out to the level of content that he wants (Figure 17c), left and rightwards motions can be performed to browse the content in a more efficient manner, as shown in Figure 17d.
  • Upwards motions can be used to 'unfold' the data again if desired.
  • In Figure 18 a different kind of folding is shown. The goal is the same: to give the user the possibility of viewing and browsing more content at the same time.
  • the interaction concept is the same as for the folding shown in Figure 17.
  • Content can be browsed with left and rightwards motions, folded with downwards motions and unfolded with upwards motions.
  • the enabled set could be specific for a given stage of the interaction process or for a specific application that is running in the foreground.
  • LED lights 60 can be mounted on the edges of the screen 62, each indicative of a respective gesture the user can perform. For example, an LED light 60 on the left side of the screen could tell the user that a leftward gesture, or a slider on the left hand side is currently enabled.
  • Several LED lights 60 and corresponding gestures can be enabled simultaneously. Thus Fig. 19A shows lights only on the right hand side of the screen whereas Fig. 19B shows lights all round the screen.
  • a popular application for tablet-like devices is reading e-books. Touchless interaction can be used to turn pages when reading by waving a hand leftwards or rightwards over the screen. However, the actual turning of pages occupies only a small portion of the time spent using the application. Therefore, the user can be given feedback and reminded as to when the device actually is expecting touchless input, such as turning a page.
  • the system recognizes the movement.
  • the GUI changes its appearance, and effectively tells the user that the system is ready for a specific mode of interaction. For example, when the user is simply reading the on-screen content, the GUI has its traditional look, such as a classic PDF file layout as shown in Fig. 20A.
  • the GUI changes its appearance and becomes more similar to a real life book, to indicate that the user can turn the pages as shown in Fig. 20B.
  • the centrefold line may appear, indicated by shadows, folds and/or morphing of the text into a book-shape (i.e. to represent non-planar pages).
  • although the readability of the text may be reduced, the user is reminded of what a book looks like and therefore what kinds of interactions she could expect to be having with it.
  • Another embodiment of the invention is shown in Fig. 21.
  • a PC desktop 64 can end up jammed with information, showing icons for documents, web browsers, Word, Excel, Outlook and other applications.
  • Some PC users choose to extend the effective workspace for applications by using several virtual desktops or 'virtual screens' 66. Only one desktop is visible on the screen at a time, but the PC maintains the status of several desktops and the user can instruct the PC to switch to a particular one.
  • Such arrangements help to divide content in a more organized fashion.
  • invoking the different desktops requires the use of a mouse, sometimes with large movements, or the user must remember obscure keyboard shortcuts to switch between screens.
  • a better alternative is to use touchless interaction modes. By performing simple left and right motions in front of the screen, the user can swap between desktops.
  • Snap-to-grid functionality can be added to lock screens at full frames or half- or quarter frames. Although only horizontal rows of virtual desktops 66 are shown in Fig. 21 , the use of simple touchless gestures allows vertical as well as horizontal and diagonal desktop panning options to be implemented, or even larger 2D 'labyrinths' of pages.
  • Products such as digital picture frames or portable DVD players are usually designed around a set of physical buttons that allow basic navigation (Fig. 22). Using touchless technology, it is possible to increase the number of "buttons" available to the user by associating actions in space with particular functions of the device, thus enhancing an existing design. However, since there is usually a lot of thought behind designing a product with the exact number and placement of buttons, such a 'button extension mode' may not always be the best option.
  • the buttons 66 look like a copy of the physical buttons allocated on the device, e.g. around the edge of a digital picture frame. These 'virtual' buttons would have the same functionality as their hard-wired counterparts, but could be bigger and therefore easier to locate than the smaller 'physical' buttons (not shown). They could have a similar or a different co-arrangement. Upon approaching the device, proximity detection could be used to show an animation where the on-screen buttons 66 'spring out' from the physical buttons, communicating to the user that the two input systems are the same.
  • An on-screen or physical scroll wheel has become popular on touch-based devices such as mp3 players. Difficulties in replicating this in touchless sensing include the need for continuous tracking with high accuracy, and the inherent inability to "untouch" the surface to perform a "select" function.
  • One approach is to use a curved screen object 68 (see Fig. 23A) and perform a select function when a 'radial' movement is detected. More generally, a motion in a direction normal to any shaped curve (not just a circular wheel) can be used to perform a select action. Preferably the curve would be some closed form.
  • the shape of the closed form may be exploited by the motion detecting or tracking algorithm.
  • This algorithm can be tuned so that it is only sensitive to specific movements at specific positions, namely (a) movements along the curve, and (b) movements normal to the curve but still in a plane parallel to the screen (a sketch of this decomposition follows this list).
  • the second, perpendicular movement could further be limited to only inwards or only outwards motions.
  • the constrained space of motions helps the algorithm to perform robust and low-complexity motion recognition; it only needs to test a smaller set of motion hypotheses instead of performing a full 2D or 3D cursor tracking task.
  • the feedback to the user need also not follow the rules of moving a cursor. Instead, changing the lighting intensity along the contour of the curve (using a sparkle or glow) can be used (see Fig. 23B).
  • the feedback could also include moving a number of graphical objects along the contour of the curve (Fig. 23C).
  • An important aspect of gesture interaction design is to help the user understand which gestures are supported by the system in general or by specific applications.
  • On-screen information such as help screens is one option. However, these require the user to take focus away from the application she is working with. Additional text, icons or objects indicating the potential for motion tracking would be another option, but this too would steal attention from more important tasks. The user may be overwhelmed by the amount of information to be taken in.
  • the information revealed to the user may be indicative of the kind of motion the system can detect during a next step.
  • the information might be provided via a transparent or a background residing icon, such as an arrow pointing in a specific direction (or a double arrow or an arrow cross).
  • the cues could also be more subtle, such as wave motions morphing the onscreen object, along a specific direction. This would indicate that such an indicated motion could trigger further events.
  • Glowing particles could be used instead of waves. Multiple options for directional motion sensing could be indicated, simultaneously or in turn.
  • the 'cues' may gradually fade out, either as a function of the time elapsed, or when the finger, hand, or other object ceases to be present in the vicinity of the gesture/interaction zone.
  • Such gradual 'discovery' of gesture options allows the user to spend some time to take the new alternatives in.
  • the familiar desktop or application front end is available to the user without new and strange icons appearing or distracting.
  • a particular feature of touchless systems is their ability to measure depth. Ultrasonic systems are able to sense movements relatively close to the screen. This allows the user to keep focus on what happens on the screen and what she is doing with her hand at the same time, while thinking within the same coordinate system. This is different from using a mouse, where a relative motion mapping between the table and the cursor on the screen must exist somewhere in the user's mind. It is also different from touchless systems operating at further ranges, where the sense of a common coordinate system is harder to establish due to the distance to the screen.
  • One-to-one motion mapping, i.e. on-screen motion that closely matches the motion of the hand, is then possible.
  • the motion of the row of images could still match the motion of the hand close to a one-to-one degree; the relatively fast change of the objects along the screen is merely a function of the z-displacement of the whole string of images.
  • the pictures would be moved inwards to a lesser degree, and the browsing pace would be lower.
  • Such variable-speed browsing is enabled by using the z-dimension of the movement actively; a sketch of this mapping follows this list.
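
The curve-constrained selector discussed above (Figs. 23A-C) could be sketched as follows: a small hand displacement near the on-screen curve is decomposed into a component along the curve, used for scrolling, and a component normal to it, used as the 'select' action. This is an illustrative sketch only; the circle centre, radius, thresholds and function names are assumptions, not values from the patent.

```python
import math

CENTRE = (0.5, 0.5)          # centre of the on-screen circular curve (normalised units)
RADIUS = 0.25
SELECT_THRESHOLD = 0.06      # outward radial travel that counts as a selection


def decompose_near_curve(p0, p1):
    """Return (tangential, radial) components of the displacement p0 -> p1,
    measured relative to the circle at the starting point p0."""
    rx, ry = p0[0] - CENTRE[0], p0[1] - CENTRE[1]
    r = math.hypot(rx, ry)
    if r == 0.0:
        return 0.0, 0.0
    radial_dir = (rx / r, ry / r)                    # outward unit vector
    tangent_dir = (-radial_dir[1], radial_dir[0])    # 90 degrees anticlockwise
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    tangential = dx * tangent_dir[0] + dy * tangent_dir[1]
    radial = dx * radial_dir[0] + dy * radial_dir[1]
    return tangential, radial


def interpret(p0, p1):
    """Map a small displacement near the curve onto one of the allowed actions."""
    tangential, radial = decompose_near_curve(p0, p1)
    if radial >= SELECT_THRESHOLD:
        return "select"                  # outward 'pluck' normal to the curve
    if abs(tangential) > abs(radial):
        return "scroll_cw" if tangential < 0 else "scroll_ccw"
    return "ignore"                      # anything else is outside the allowed set


if __name__ == "__main__":
    on_curve = (0.75, 0.5)                       # rightmost point of the circle
    print(interpret(on_curve, (0.75, 0.55)))     # along the curve -> scroll
    print(interpret(on_curve, (0.83, 0.50)))     # outward radial  -> select
```

Because only two motion hypotheses are tested, the detector stays simple and robust, in line with the constrained-motion argument made in the items above.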
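The 'fast scrolling zones' and proximity-controlled scrolling speed described above (Fig. 16 and the related items) might be realised along the following lines: hovering in an edge zone scrolls the content, the rate grows with hover time, and bringing the hand closer to the screen increases it further. The zone widths, rates and distance mapping below are assumptions made for this sketch.

```python
EDGE_ZONE_WIDTH = 0.12        # fraction of the screen width on each side
BASE_RATE = 2.0               # items per second when scrolling starts
ACCELERATION = 1.5            # extra items per second for each second of hover
MAX_RATE = 20.0


def scroll_rate(x, z, hover_time):
    """x: horizontal hand position (0..1), z: hand-to-screen distance in metres,
    hover_time: seconds spent continuously inside the same edge zone.
    Returns a signed scroll rate in items per second (0 outside the edge zones)."""
    if x <= EDGE_ZONE_WIDTH:
        direction = -1.0                 # left zone scrolls content leftwards
    elif x >= 1.0 - EDGE_ZONE_WIDTH:
        direction = 1.0                  # right zone scrolls content rightwards
    else:
        return 0.0                       # the middle of the screen uses ordinary gestures
    rate = BASE_RATE + ACCELERATION * hover_time
    proximity_boost = max(0.0, 1.0 - z / 0.10)   # closer than ~10 cm speeds things up
    rate *= 1.0 + proximity_boost
    return direction * min(rate, MAX_RATE)


if __name__ == "__main__":
    print(scroll_rate(0.95, z=0.08, hover_time=0.0))   # just entered the right zone
    print(scroll_rate(0.95, z=0.03, hover_time=3.0))   # held closer and longer -> faster
    print(scroll_rate(0.50, z=0.05, hover_time=5.0))   # middle of the screen -> 0.0
```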
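Finally, the variable-speed browsing that uses the z-dimension of the movement could be sketched as a gain that grows as the hand approaches the screen, while lateral motion far from the screen maps roughly one-to-one. The distances and gain curve are illustrative assumptions only.

```python
MIN_Z = 0.02     # metres; closest useful hand-to-screen distance
MAX_Z = 0.15     # metres; at or beyond this distance, motion maps one-to-one


def browse_step(dx, z):
    """dx: lateral hand displacement (metres) since the last update,
    z: current hand-to-screen distance (metres).
    Returns how far the row of images should move, in metres of screen travel."""
    z = min(max(z, MIN_Z), MAX_Z)
    gain = MAX_Z / z          # 1.0 at MAX_Z, growing as the hand is pushed closer
    return dx * gain


if __name__ == "__main__":
    print(browse_step(0.05, z=0.15))   # far from the screen: one-to-one, 0.05
    print(browse_step(0.05, z=0.03))   # pushed in closer: the same motion browses 5x faster
```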

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Acoustics & Sound (AREA)
  • Position Input By Displaying (AREA)

Abstract

An electronic device comprises at least one ultrasonic transmitter (16), at least one ultrasonic receiver (18) and a display screen (10), and is arranged to transmit ultrasonic signals from the transmitter (16) and to receive ultrasonic signals reflected from an input object. The device is adapted to give a predetermined response only to a predetermined movement or set of predetermined movements of the input object, with said movement(s) defined at least partially by at least one of: a predetermined path through which the object moves, said predetermined path being indicated by a graphical object on said display screen (10); and a predetermined spatial zone in which the movement takes place, said predetermined spatial zone being indicated by a graphical object on said display screen (10).

Description

Touchless User Interfaces
This invention relates to the use of reflected signals to estimate the movement of an input object such as a hand, for example to provide an input to an electronic device. It relates particularly, but not exclusively, to the use of reflected ultrasonic signals for estimating the motion of a hand in a touchless human interface to an electronic device. There have been a number of proposals stretching back many years to use ultrasound signals as a way of enabling electronic devices to be controlled through movements of the human hand without needing to touch the device.
To understand and appreciate the range of interaction options afforded by ultrasound, it is important to recognize that ultrasonic systems represent a kind of distributed sensing technology. This means that the components of the sensing system, which can be transmitters, receivers or transceivers, are in most cases mounted at a distance relative to one another (although there are some examples of the opposite, such as when using densely spaced elements forming arrays; these are easier to liken to optical systems such as cameras, and their performance can even be explained in similar terms as with other apertures).
The number, the spacing and positioning of the sensors will to a large extent define which interaction modes are conceivable; we can say that form affects functioning. For engineers, this means that a working mental model of how a touchless ultrasonic system works is closer to that of a GPS system than that of a camera or touch-sensitive surface. For product and interaction designers, it means that compromises will have to be struck between the product's cost, appearance and the modes of use it enables. While this inherent distributional thinking requires some additional effort to get the design right, it also provides a set of new and interesting opportunities. There are important benefits in having multiple viewpoints on the object of interest, one being that it provides protection in the face of full or partial occlusions. The potential for tracking movements using a wide field of view, close to or just over a surface such as a touch pad or a screen, is another valuable asset. Moreover, such features can be realized while maintaining a flat design front end. On a more technical level, the ability to discern leading edges from trailing parts of an object, such as the fingers from the back of the hand, allows a set of natural, sensitive hand movements to be recognized. Reusing existing and low-cost audio equipment for touchless interaction makes ultrasound a natural yet exciting extension to the sensing capabilities of many electronic devices.
There are some additional factors that will affect the design rules for an ultrasonic touchless device. These include the environment the device is operating in, the number of sensors available for 'listening in' on the scene, the level of ultrasonic background noise, the degree to which the device is itself in motion, and the presence of other potentially interfering devices in the near surroundings.
Generally speaking, one can say that the more accurate, fine-tuned and more continuous the tracking and feedback of a system needs to be, the higher are the requirements on all of the above points. Considering the wide range of interaction options available, however, there are interesting and valuable compromises that can be struck to obtain the right level of interaction at the right cost.
When viewed from a first aspect the present invention provides an electronic device comprising at least one ultrasonic transmitter and at least one ultrasonic receiver, the device being arranged to transmit ultrasonic signals from the transmitter and to receive ultrasonic signals reflected from an input object, wherein the device is adapted to give a predetermined response only to a predetermined movement or set of predetermined movements of the input object wherein said movement(s) is/are defined at least partially by at least one of a predetermined path through which the object moves and a predetermined spatial zone in which the movement takes place.
The transmitters and receivers are preferably mounted adjacent to a display screen, for example within five centimetres, but preferably within one or two centimetres of the edge of the display screen. These may be mounted beside the screen, on or level with the surface of the screen, or alternatively may be mounted behind the screen. The more that can be known a priori about the motion one is looking for, the more accurately such movement can be detected, even in the presence of substantial levels of noise, unwanted motion and interference. As an example, if movement is restricted to belong to a particular, finite set of possible motions, or limited to be detected within a small set of motion spaces such as lines, graphs or curves, the potential for accurate detection is vastly improved. Such compromises are particularly valuable when the number of sensors is low. The restricted set of movements can be application-specific, so that the supported modes and the resulting robustness and accuracy improvements can be designed specifically for an application. Picking the right interaction mode(s) for a specific device or application will always involve compromises and tradeoffs. To be able to strike the right one, it is necessary to have a sufficient number of basis ideas from which more complete concepts can grow. The predetermined path of movement may include the angle of the movement relative to a reference plane or reference axis defined relative to the device, and/or the direction of the movement along the trajectory (i.e. at which end of the path the movement starts and at which it finishes). Additionally or alternatively it may include the nonlinear shape of the path. The angle between the direction of movement and a reference plane or axis (defined relative to the device), and the direction of movement along the trajectory, are therefore possible parameters of the predetermined path of movement of the input object.
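By way of illustration only, the following sketch shows one way such a restricted movement set could be matched in software: the net displacement of the tracked input object is compared against a small set of predetermined directions defined relative to the device's reference axis. The movement names, angular tolerance and minimum displacement are assumptions made for this sketch, not values taken from the patent.

```python
import math

# Hypothetical predetermined movements, each defined by an allowed direction
# (angle in degrees from the device's horizontal reference axis).
PREDETERMINED_MOVEMENTS = {
    "swipe_right": 0.0,
    "swipe_up": 90.0,
    "swipe_left": 180.0,
    "swipe_down": 270.0,
}
ANGLE_TOLERANCE_DEG = 25.0   # how far the motion may deviate from the template
MIN_DISPLACEMENT = 0.03      # metres; ignore jitter below this length


def classify_movement(track):
    """Match a tracked 2D path (list of (x, y) positions in metres, in the
    device's reference plane) against the predetermined set of directions.
    Returns the name of the matched movement, or None if nothing matches."""
    if len(track) < 2:
        return None
    dx = track[-1][0] - track[0][0]
    dy = track[-1][1] - track[0][1]
    if math.hypot(dx, dy) < MIN_DISPLACEMENT:
        return None                       # too small to be an intentional gesture
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    for name, template_angle in PREDETERMINED_MOVEMENTS.items():
        # Smallest angular difference, handling wrap-around at 360 degrees.
        diff = abs((angle - template_angle + 180.0) % 360.0 - 180.0)
        if diff <= ANGLE_TOLERANCE_DEG:
            return name
    return None                           # movement falls outside the allowed set


if __name__ == "__main__":
    # A roughly rightward hand movement is accepted; a diagonal one is rejected.
    print(classify_movement([(0.00, 0.00), (0.05, 0.01)]))   # -> swipe_right
    print(classify_movement([(0.00, 0.00), (0.04, 0.04)]))   # -> None
```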
In a set of embodiments the device displays an on-screen object which is indicative of the interactions available - e.g. through its shape and/or position. Preferably the on-screen object is arranged to move in accordance with movement of the input object in accordance with the predetermined set of movements.
The predetermined response may comprise the execution of a function of the device, for example a change in what is displayed on the screen.
When viewed from a second aspect, the present invention further comprises a method of operating an electronic device comprising an ultrasonic transmitter, an ultrasonic receiver and a display screen, the method comprising:
transmitting ultrasonic signals from said transmitter;
receiving at said receiver ultrasonic signals reflected from an input object; giving a predetermined response only to a predetermined movement or set of predetermined movements of the input object wherein said movement(s) is/are defined at least partially by at least one of:
a predetermined path through which the object moves, said predetermined path being indicated by a graphical object on said display screen; and
a predetermined spatial zone in which the movement takes place, said predetermined spatial zone being indicated by a graphical object on said display screen.
The invention also extends to computer software for operating an electronic device comprising an ultrasonic transmitter, an ultrasonic receiver and a display screen, the software comprising:
logic for transmitting ultrasonic signals from said transmitter;
logic for receiving at said receiver ultrasonic signals reflected from an input object;
logic for giving a predetermined response only to a predetermined movement or set of predetermined movements of the input object wherein said movement(s) is/are defined at least partially by at least one of:
a predetermined path through which the object moves, said predetermined path being indicated by a graphical object on said display screen, the software comprising logic for displaying said graphical object; and
a predetermined spatial zone in which the movement takes place, said predetermined spatial zone being indicated by a graphical object on said display screen, the software comprising logic for displaying said graphical object.
When viewed from a further aspect the present invention provides an electronic device comprising at least one ultrasonic transmitter and at least one ultrasonic receiver, the device being arranged to transmit ultrasonic signals from the transmitter and to receive ultrasonic signals reflected from an input object, wherein the device is adapted to give a predetermined response only to a predetermined movement or set of predetermined movements of the input object wherein said movement(s) is/are defined at least partially by at least one of a predetermined path through which the object moves and a predetermined spatial zone in which the movement takes place.
The software may or may not be provided on a carrier or other physical medium. A number of embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:
Fig. 1 is a perspective view of a laptop computer embodying the invention;
Fig. 2 contains perspective views of a tablet computer and stand embodying the invention;
Fig. 3 is a perspective view of a tablet computer and sleeve embodying the invention;
Fig. 4 contains figurative drawings of a user interacting with a device according to an embodiment of the invention;
Fig. 5 contains figurative drawings of a user interacting with a device according to an embodiment of the invention;
Fig. 6 contains figurative drawings of a user interacting with a device according to an embodiment of the invention;
Fig. 7 contains figurative drawings of a user interacting with a device according to an embodiment of the invention;
Fig. 8 is a perspective view of a laptop computer embodying the invention;
Fig. 9 contains figurative drawings of a user interacting with a device according to an embodiment of the invention;
Fig. 10 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention;
Fig. 11 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention;
Fig. 12 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention;
Fig. 13 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention;
Fig. 14 is a figurative drawing of a user interacting with an object on a display screen according to an embodiment of the invention;
Fig. 15 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention;
Fig. 16 is a front view of a display screen of a device embodying the invention;
Fig. 17 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention;
Fig. 18 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention;
Fig. 19 contains figurative drawings of a device embodying the invention;
Fig. 20 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention;
Fig. 21 contains figurative drawings of a user interacting with a display screen and a number of virtual screens according to an embodiment of the invention;
Fig. 22 is a figurative drawing of a user interacting with an object on a display screen according to an embodiment of the invention; and
Fig. 23 contains figurative drawings of a user interacting with an object on a display screen according to an embodiment of the invention.
Fig. 1 shows the idea of a limited gesture space in a touchless-enabled electronic device, namely a laptop computer 2. Although not shown, this has a number of ultrasonic transmitters and receivers placed around the screen 4. These may be dedicated transducers or may also be ordinary audio loudspeakers and microphones. The number and placement of the transducers may be chosen to suit the application, e.g. the cost and power constraints, the required accuracy and reliability etc.
At the bottom left of the screen 4 is a virtual "cross slider" 6. The cross slider 6 is tuned to detect movements along the lines that are visible, i.e. a localized x- and y-axis. The four virtual dots 8 in the top right corner could be buttons that will be 'pushed', for example, when a user holds his hand/finger in that zone/area for a given time. Although an ultrasonic tracking system can in theory track a finger or a hand in 3D, this is a complicated problem requiring a lot of resources. However, by limiting the tracker to only search for horizontal and vertical movements this problem is simplified. Further limiting this interaction to take place in a limited spatial area could potentially simplify the problem further: the amount of processing needed will be reduced since the device only needs to look for movements in certain parts of the field of view, corresponding to certain parts of the impulse response image. Impulse response images are described in greater detail in WO 2009/147398. Regarding the buttons 8 in the top right corner, the device will look for activity only in the parts of the impulse response images corresponding to these positions.
Figs. 2A, 2B and 3 show some possible physical embodiments of devices in accordance with the invention. In each case a tablet computer 10 is associated with a peripheral device such as a stand 12 or a closely-fitting sleeve 14. Although not shown, the stand 12 would have a number of ultrasonic transmitter-receiver pairs, e.g. eight or more pairs. The tablet computer 10 may have a physical or wireless connection to the stand 12 to allow it to use the transmitters and receivers to operate a touchless interface. The sleeve 14 shown in Fig. 3 could have an internal docking connector. The pairs of transmitters 16 and receivers 18 can be seen in this embodiment around the edge of the sleeve. Of course other peripherals can be used, or the necessary transducers can be provided integrated with the device at manufacture.
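As a rough software analogue of the zone-restricted detection and dwell-operated buttons described above for Fig. 1, the sketch below only processes hand positions that fall inside declared screen zones and reports a button 'push' once the hand has dwelt in a zone long enough. The zone coordinates, dwell time and class name are illustrative assumptions, not values from the patent.

```python
import time

# Hypothetical screen zones for the four dwell "buttons" of Fig. 1, given in
# normalised screen coordinates as (x0, y0, x1, y1).
BUTTON_ZONES = {
    "button_1": (0.78, 0.85, 0.83, 0.90),
    "button_2": (0.84, 0.85, 0.89, 0.90),
    "button_3": (0.90, 0.85, 0.95, 0.90),
    "button_4": (0.96, 0.85, 1.00, 0.90),
}
DWELL_SECONDS = 0.8   # how long the hand must stay in a zone to 'push' the button


class DwellButtonDetector:
    """Reports a 'push' once the estimated hand position has stayed inside a
    button zone for the dwell time. Positions outside all zones are ignored,
    mirroring the idea of only examining the parts of the impulse response
    image that correspond to the declared interaction areas."""

    def __init__(self, zones, dwell_seconds):
        self.zones = zones
        self.dwell = dwell_seconds
        self.entered_at = {}          # zone name -> time the hand entered it

    def update(self, x, y, now=None):
        now = time.monotonic() if now is None else now
        pushed = []
        for name, (x0, y0, x1, y1) in self.zones.items():
            inside = x0 <= x <= x1 and y0 <= y <= y1
            if not inside:
                self.entered_at.pop(name, None)     # hand left the zone: reset
                continue
            start = self.entered_at.setdefault(name, now)
            if now - start >= self.dwell:
                pushed.append(name)
                self.entered_at[name] = now         # re-arm after firing
        return pushed


if __name__ == "__main__":
    detector = DwellButtonDetector(BUTTON_ZONES, DWELL_SECONDS)
    detector.update(0.80, 0.87, now=0.0)            # hand enters button_1
    print(detector.update(0.80, 0.87, now=1.0))     # -> ['button_1'] after dwelling
```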
Interaction options for touch-based devices can be extended using touchless technology. It is perhaps best not to replace all functionality with touchless counterparts. New touchless sensing options can add functionality rather than replacing it, creating a richer user experience.
One such added functionality is a 'boundary breaking interface' as illustrated in Figs. 4A, 4B and 4C. Touchless interaction is enabled on the boundaries of the screen 20. The boundaries are located at the edges of the screen 20, i.e. the left, right, top and bottom edges (there could also be a boundary in the depth dimension). A working example would be a 'hidden' menu which is available at one side of the screen, say the left side. To invoke the menu, the user performs a touchless gesture moving his hand from left to right, from outside the screen 20 and over to the left part of the screen, as shown in Figure 4B. Figure 5 shows a sequence representing pulling a hidden menu 22 onto a screen by a rightwards movement at the left of this screen 20 until the menu 22' is visible. A touchless gesture moving from outside the right-hand side of the screen in a leftwards motion (Fig. 4A) could invoke a different menu. A similar action moving in towards the screen is shown in Fig. 4C.

At several points in this description we will touch upon the concept of localized gestures, so a general description follows. In the description just above ("boundary breaking interface"), the localized gestures were performed crossing the edges of the screen. In general, such gestures can be performed in other areas: in front of or in the vicinity of an input device (or even on its back).
The concept of localized gestures differs from that of global gestures, which can be applied anywhere in the sensing volume. As previously mentioned, significant benefits can be obtained by limiting the set of possible motions to a predefined set of movements or directions.
In Figure 6 localized gestures are used to navigate two adjacent menus on a screen. Upwards and downwards motions can toggle the selected menu icons (shown in black) in both menus independently of one another. Thus the first two pictures show a movement in the left half of the screen 24 moving a selector on a first menu 28 up and down respectively, with the second menu 30 remaining still, whilst the third and fourth pictures show the same movements performed in the right-hand half of the screen 26 operating the second menu 30. Only one menu can be interacted with at a time.
Navigating a file system (e.g. a directory tree structure) on a computer device often employs WIMP (window, icon, menu, pointing device) interfaces and the use of a mouse. As new and different types of interaction emerge, the traditional navigation and on-screen representations are no longer the only options. We can imagine navigating a file system or a menu more like a visual tree search process. This is shown in Figs. 7a-h.
At an abstract level, a file system is a tree of nodes connected by branches. A node could be either a folder, containing other folders and files, or a file (image, video etc.). The user starts off at a given node 32 (say, Home or My Documents) - Fig. 7a. Subfolders and files are shown on a display screen as circles 34 connected by straight lines 36 to a circle 32 representing the starting node (Fig. 7b). The starting node 32 is bigger than the other nodes, and is located at the middle of the screen. The user can move his hand in the direction of one of the branches (Fig. 7c) until his hand/finger is on top of a new node 34 (Fig. 7d). By hovering over that node 34 for a given time the node is selected (Fig. 7e). Once selected, the circle 34 moves towards the centre of the screen (Fig. 7f) and the circles 38 representing its folders and files (nodes) are expanded and shown on the screen. The previously visited node 32 is made smaller, to show that it is no longer selected or active.
The tree is provided with terminal branches 38', the purpose of which is to complete the navigation (e.g. to select a file) or complete an action. For example if the nodes represent folders of music files, selecting the terminal branch 38' could begin to play that file.
Throughout the process, new branches could be grown on the tree while interaction happens. When a user moves his hand in a direction parallel to a particular connection 36, that branch may glow. This gives the user feedback, and tells him that the system is recognizing his movement. Several branches can be made to glow at the same time indicating that the system is uncertain which branch the user is following. This helps the user to be clearer about what he or she wants to do. At the same time, the classic cursor-tracking approach is circumvented.
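As an illustration of the branch-glow feedback just described, the following sketch (not from the application; the branch angles, the cosine-based alignment measure and the threshold are assumptions) scores each branch by how well it aligns with the current hand motion, so that several branches can glow at once when the direction is ambiguous.

```python
# Sketch: make tree branches "glow" in proportion to how well the current hand
# motion direction matches each branch direction; several branches can glow
# when the direction is ambiguous. Branch angles and thresholds are assumed.
import math

BRANCHES = {"Pictures": 30.0, "Music": 90.0, "Documents": 150.0}  # degrees from +x axis
GLOW_THRESHOLD = 0.7   # assumed minimum alignment for a visible glow


def branch_glow(dx: float, dy: float) -> dict:
    """Return a glow intensity in [0, 1] per branch from the motion direction."""
    glow = {}
    norm = math.hypot(dx, dy) or 1.0
    for name, angle_deg in BRANCHES.items():
        bx, by = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
        alignment = (dx * bx + dy * by) / norm          # cosine of the angle between them
        glow[name] = alignment if alignment >= GLOW_THRESHOLD else 0.0
    return glow


# A movement up and slightly right lies between two branches, so both glow.
print(branch_glow(0.4, 0.7))
```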
A particular advantage of ultrasonic gesture recognition systems is their ability to sense motion within a wide sensing zone, e.g. a full hemisphere in front of a display screen. This can be termed 'dome proximity' and is a direct consequence of the propagation properties of ultrasound in air. Using a transmitter with a small diameter relative to the ultrasonic wavelength will facilitate a broad ultrasonic wave front propagating out from the transmitting device. One application is to let a device sense the presence of the user or his or her hand or other body part, no matter what the direction of approach is. This can create a feeling of a more sensitive device, for instance by turning the backlighting on in a gradual fashion, or letting a front menu on a mobile device gradually move towards the user from a hovering or floating position further away in a virtual 3D space (represented on a 2D screen).

Fig. 8 shows that a sensing space 42 can cover the whole screen area of a laptop and slightly beyond. This means that the ultrasonic system is capable of detecting a hand anywhere in the zone 42. Figure 9 shows one example of a use of dome proximity. In Figure 9a a user's hand is approaching the screen. Once the hand is inside the proximity dome the system reacts instantaneously by starting a predefined animation that zooms the content out. This zoom level is kept while the user interacts with the system. Once the user removes his or her hand the system responds by returning to its initial state (zoom-level wise).
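A minimal sketch of this dome-proximity behaviour follows, assuming a per-frame distance estimate from the ultrasonic system; the dome radius, the zoom levels and the class name ProximityZoom are illustrative assumptions rather than details from the application.

```python
# Minimal sketch of a proximity-dome driven zoom state, assuming a distance
# estimate (in cm) is available from the ultrasonic system each frame.
from typing import Optional


class ProximityZoom:
    def __init__(self, dome_radius_cm: float = 30.0,
                 zoom_inside: float = 0.6, zoom_outside: float = 1.0):
        self.dome_radius_cm = dome_radius_cm
        self.zoom_inside = zoom_inside
        self.zoom_outside = zoom_outside
        self.zoom = zoom_outside

    def update(self, hand_distance_cm: Optional[float]) -> float:
        """Set the zoom level depending on whether a hand is inside the dome.

        `hand_distance_cm` is None when nothing is detected in the sensing zone.
        """
        inside = hand_distance_cm is not None and hand_distance_cm <= self.dome_radius_cm
        self.zoom = self.zoom_inside if inside else self.zoom_outside
        return self.zoom


if __name__ == "__main__":
    view = ProximityZoom()
    for d in (80.0, 25.0, 25.0, None):   # hand approaches, stays, then is withdrawn
        print(d, "->", view.update(d))
```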
The resolution of an ultrasonic recognition system can be limited by a number of factors. These include the number of sensors used, the level of background noise or interference, or the update rate of the processing unit. In such situations, it is valuable to limit the number of possible movements that are being tracked, by reducing the tracking resolution from a full 2D or 3D tracked cursor to a more limited set of movements.
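One possible way to realise such a reduction, sketched below under the assumption that a coarse per-frame displacement estimate is available, is to quantise the motion into one of four permitted directions (or 'none') instead of maintaining a full cursor; the threshold value is an assumption.

```python
# Sketch: quantise an estimated displacement into one of four permitted
# movements instead of tracking a full 2D/3D cursor. The threshold is assumed.
import math

MIN_DISPLACEMENT = 0.02  # assumed minimum motion (arbitrary units) to report a gesture


def quantise_motion(dx: float, dy: float) -> str:
    """Map a displacement vector to 'left', 'right', 'up', 'down' or 'none'."""
    if math.hypot(dx, dy) < MIN_DISPLACEMENT:
        return "none"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"


print(quantise_motion(0.10, 0.03))   # right
print(quantise_motion(-0.01, 0.00))  # none
```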
Part of the challenge with this reduced set of options is to help the user understand which on-screen graphical user interface (GUI) objects she can interact with, and how. A solution is to present a number of relatively small 3D-effect buttons on the screen. In this embodiment, however, these do not act like ordinary GUI 'buttons', which afford only two options: being pressed or not being pressed.
By showing a floating 3D button in the form of a cube 44 as shown in Fig. 10, this can indicate to the user that a more complex interaction is possible. Moving a hand or finger over the object 44 in either a left, right, up or down motion causes the computer to rotate the cube 44 in the same direction, revealing another face of the cube. This could in turn be linked to specific functionalities. Examples would be the opening up of more menus, the display of more cubes, starting or stopping an application, or other commonly used or important features. Lines and rows of buttons could be aligned, so that larger and smaller sweep motions could turn multiple cubes around, enabling even richer interaction experiences.
Giving the object the shape of a cube has several advantages. First, it tells the user by its very shape what kinds of spins, turns or presses are naturally enabled or 'asked for' by the system. Next, since the user already knows what motions are being 'monitored', the system itself (i.e. on an algorithm or system level) can reduce the sensor resolution, or constrain a gesture-recognition engine, to only look for a few predefined movements or gestures. This is simpler and more robust than full 2D or 3D coordinate input or cursor motion.
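For illustration only, the cube of Fig. 10 can be modelled as a small state machine driven by the four constrained gestures. The face labels, the bound actions and the class name CubeButton are assumptions; the sketch merely shows how the visible face, and hence the invoked function, follows the swept direction.

```python
# Illustrative sketch: track which labelled face of a cube points at the user
# as it is spun by left/right/up/down sweeps. Face labels are assumed.
class CubeButton:
    def __init__(self):
        # Orientation: which label currently occupies each world-facing slot.
        self.faces = {"front": "menu", "back": "close", "left": "prev",
                      "right": "next", "top": "play", "bottom": "settings"}

    def sweep(self, direction: str) -> str:
        f = self.faces
        if direction == "left":      # cube spins left: the right-hand face comes to the front
            (f["front"], f["left"], f["back"], f["right"]) = (
                f["right"], f["front"], f["left"], f["back"])
        elif direction == "right":
            (f["front"], f["right"], f["back"], f["left"]) = (
                f["left"], f["front"], f["right"], f["back"])
        elif direction == "up":      # cube spins up: the bottom face comes to the front
            (f["front"], f["top"], f["back"], f["bottom"]) = (
                f["bottom"], f["front"], f["top"], f["back"])
        elif direction == "down":
            (f["front"], f["bottom"], f["back"], f["top"]) = (
                f["top"], f["front"], f["bottom"], f["back"])
        return f["front"]            # label of the face now shown to the user


cube = CubeButton()
print(cube.sweep("left"))   # 'next'
print(cube.sweep("up"))     # 'settings'
```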
The cube has limited size and is operated from a fairly close distance to the screen - e.g. 1, 3, 5 or 10 cm away from it - so the need for feedback effects other than the turning of the cube itself is limited, and could perhaps be confined to indicating relative levels of presence or proximity to the cube. This could be achieved by changing its size or colour. For example, when the user is not interacting with the system the cube could be semi-transparent and smaller than when he is interacting with it. When the user comes within a predefined area close to the cube on the screen, it becomes bigger and opaque, indicating that this GUI element is now ready for interaction.
Figure 11 shows another embodiment where the shape of an on-screen object 46 is used to indicate the interactions allowed. Here a cylindrical menu is shown. This type of menu tells the user how he or she should interact with it, i.e. with upwards and downwards motions.
To navigate content using left and right hand movements, a wheel representation of the content can be shown on screen, as in Figure 12. For example, if the user is browsing content by moving his hand in a leftwards motion, new content can continue to appear as long as new content is available in that folder. Optionally a wrap-around feature can be implemented, so that the user does not have to move all the way back to the start when content at the end is shown. Thus, considering Figure 12a, Item 1 is the item which is selected when the user moves his hand into the interaction zone. If the user performs a rightwards motion, as shown in Figure 12b, Item 2 is then selected after the user has performed the gesture/motion. In Figure 12c, by contrast, the user performs a leftwards motion, selecting Item 10, the last item.

The on-screen wheel can be generalized into a cylindrical tree structure where different folders can be represented as different levels on the tree/branch. Up and down motions are used to move up or down one level of the tree as shown in Figs. 13a-d. The cylindrical tree structure can be used for browsing through a folder structure. The top level may be the highest level in the folder structure, e.g. Home, My Documents etc. In Figure 13a the user is at the third level in the folder hierarchy. By performing an upwards gesture/motion the user "goes into" the folder represented by Item 1. The user then browses the contents of Item 1 with left and right gestures. Having not found what he was looking for, the user moves back up two levels with two consecutive downwards motions, as shown in Figure 13d.
Each of the items in Figure 13 could be either folders or files (mp3, video, pictures etc.). When the user "arrives" at a file rather than a folder, the file is selected by an upwards motion. When the user wants to return to the file browser, this is done by a downwards motion.
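The wheel browsing of Fig. 12 and the cylindrical tree of Fig. 13 can be sketched together as follows. The example folder structure, the wrap-around choice and the class name WheelBrowser are assumptions made purely for illustration.

```python
# Sketch of wheel browsing with optional wrap-around, plus up/down navigation
# of a folder tree as in Figs. 12-13. The example folder structure is assumed.
class WheelBrowser:
    def __init__(self, tree: dict, wrap: bool = True):
        self.tree = tree              # {folder name: list of item names}
        self.path = ["Home"]
        self.index = 0
        self.wrap = wrap

    @property
    def items(self):
        return self.tree[self.path[-1]]

    def move(self, direction: str) -> str:
        items = self.items
        if direction == "right":
            self.index = ((self.index + 1) % len(items) if self.wrap
                          else min(self.index + 1, len(items) - 1))
        elif direction == "left":
            self.index = ((self.index - 1) % len(items) if self.wrap
                          else max(self.index - 1, 0))
        elif direction == "up":       # go into the selected folder (if it is one)
            selected = items[self.index]
            if selected in self.tree:
                self.path.append(selected)
                self.index = 0
        elif direction == "down" and len(self.path) > 1:   # back up one level
            self.path.pop()
            self.index = 0
        return f"{'/'.join(self.path)} : {self.items[self.index]}"


tree = {"Home": ["Music", "Pictures"], "Music": ["Track 1", "Track 2", "Track 3"]}
browser = WheelBrowser(tree)
for g in ("right", "left", "up", "right", "down"):
    print(g, "->", browser.move(g))
```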
A wheel menu 48 can also be displayed in the corner or on the side of the screen 50. Only a part of the wheel 48 might be displayed, preferably half or less. Figure 14 shows the case where a wheel 48 is on the right side of the screen and each 'tooth' 52 on the wheel has an associated functionality. The user can perform up and down motions in the vicinity of the wheel 48 at the right of the screen to change which menu item 52 is highlighted. A left movement lets the user select the highlighted menu icon 52'. A rightwards motion can cause the menu to be hidden, taking up less space on the screen.
A further embodiment shown in Figs. 15a and 15b is referred to as a string plucking interface. It offers a simple yet powerful method for selecting content in a touchless application. The user browses content with left and right movements while a small downwards and/or an upwards movement is performed to select a particular item. In the variant shown in Fig. 15a, there is one highlighted item. This could be the item that is in the centre of the screen. Left and right motions are used to highlight different items. The highlighted item is selected with an up/down motion as previously mentioned.
In Figure 15b content is browsed by left and right movements as before; however, here there is no highlighted item in the centre of the screen. Instead, selection is performed by making a downwards motion touchlessly over the item that the user wants to select.
In a further embodiment, shown in Fig. 16, the screen is divided into multiple zones. The middle or main part of the screen is the gesture or motion zone 56 where the user can perform gestures to browse content. The left and right edges of the screen 58 can serve as 'fast scrolling zones' (the spaces beyond the edges of the screen can also be used for this purpose). When the user hovers his hand in these fast scrolling areas 58, or carries out a specific gesture in such an area, rapid scrolling of the content shown on the screen will occur. The rate of scrolling may be faster than occurs when the user makes regular gestures in the middle part of the screen 56. The scrolling can also speed up over time so that the user can efficiently browse a large collection of files. The fast scrolling zones may be located at any edge of the screen, i.e. left, right, top or bottom. This interaction mode can also be combined with touch or touch-based swipe interaction as well as mouse or touchpad operations. Just as a keyboard has both 'Up'/'Down' and 'Page Up'/'Page Down' keys, such combined modes can provide both faster and more accurate location of content.
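A hypothetical sketch of the fast scrolling zones follows; the zone width, the speed values and the position-dependent speed law (elaborated in the next paragraph) are all assumptions. The horizontal hand position is mapped to a scroll velocity that is zero in the central gesture zone and grows towards the screen edges.

```python
# Sketch: map a horizontal hand position (0.0 = left edge, 1.0 = right edge of
# the sensing zone) to a scroll velocity, with "fast scrolling zones" at the
# edges. Zone widths and speeds are assumed values.
EDGE_ZONE = 0.15        # outer 15 % of the width acts as a fast-scroll zone
BASE_SPEED = 1.0        # items per second at the inner edge of the fast zone
MAX_SPEED = 8.0         # speed reached at the very edge of the screen


def scroll_velocity(x_norm: float) -> float:
    """Positive values scroll right, negative scroll left, 0 in the middle zone."""
    if x_norm >= 1.0 - EDGE_ZONE:                       # right fast-scroll zone
        depth = (x_norm - (1.0 - EDGE_ZONE)) / EDGE_ZONE
        return BASE_SPEED + depth * (MAX_SPEED - BASE_SPEED)
    if x_norm <= EDGE_ZONE:                             # left fast-scroll zone
        depth = (EDGE_ZONE - x_norm) / EDGE_ZONE
        return -(BASE_SPEED + depth * (MAX_SPEED - BASE_SPEED))
    return 0.0                                          # middle: normal gestures only


print(scroll_velocity(0.50))   # 0.0  (gesture zone)
print(scroll_velocity(0.90))   # moderate rightward scroll
print(scroll_velocity(0.99))   # near-maximum rightward scroll
```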
The speed of the scrolling can also be controlled with the hand position over the scroll "buttons" 58. Consider the case where the user's hand is hovering over the scroll button 58 on the right-hand side of the screen, moving content from left to right. When the user moves his hand further to the right, i.e. even closer to the right edge of the screen, the speed can be increased.

Touchless interaction will sometimes be implemented with just a few gestures (e.g. left, right, up and down) rather than detailed pointing. In some situations it may be beneficial to define the gestures to be relatively large movements, covering a large portion of the screen. The challenge then becomes one of allowing a user to efficiently browse a lot of content using only four gestures.
One approach is to group the content on the screen. An example of this is 'string folding' which is shown in Figs. 17a-d. An upwards movement might 'fold' the content on the screen, making more content visible at the same time, while a downwards movement can be used to 'unfold' the content, initializing a more detailed view of the content. A left or right movement can be used to scroll content left or right, making new content visible to the user.
Figure 17 shows the concept of string folding. In its initial state the GUI shows content where one item occupies a large portion of the screen. The user can browse this content with left- and rightwards motions, as shown in Figure 17a. The user can also execute a downwards motion, which will "fold" the string of items, allowing more items to be shown. The individual items are also shrunk to fit more items on the screen. This transition is shown in Figure 17b. This folding can be done more than once, showing even more content on the screen, as shown in Figure 17c. Once the user has zoomed out to the level of content that he wants, left and rightwards motions can be performed to browse the content in a more efficient manner, as shown in Figure 17d. Upwards motions can be used to 'unfold' the data again if desired.

In Figure 18 a different kind of folding is shown. The goal is the same: to give the user the possibility of viewing and browsing more content at the same time. The interaction concept is the same as for the folding shown in Figure 17. Content can be browsed with left and rightwards motions, folded with downwards motions and unfolded with upwards motions.
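String folding can be sketched as a simple fold-level state, as below; the number of items shown per fold level and the class name FoldingStrip are assumed purely for illustration.

```python
# Sketch of "string folding": a fold level controls how many items are visible
# at once; left/right motions step through the content at the current level.
# The number of items per fold level is an assumed layout choice.
class FoldingStrip:
    def __init__(self, n_items: int, max_fold: int = 2):
        self.n_items = n_items
        self.fold = 0                 # 0 = one large item, higher = more, smaller items
        self.max_fold = max_fold
        self.position = 0

    def visible_count(self) -> int:
        return 4 ** self.fold         # assumed layout: 1, 4, 16 items shown

    def gesture(self, g: str):
        if g == "down" and self.fold < self.max_fold:
            self.fold += 1            # fold: shrink items, show more of them
        elif g == "up" and self.fold > 0:
            self.fold -= 1            # unfold: larger, more detailed items
        elif g in ("left", "right"):
            step = self.visible_count()
            delta = step if g == "right" else -step
            self.position = max(0, min(self.n_items - 1, self.position + delta))
        return self.fold, self.visible_count(), self.position


strip = FoldingStrip(n_items=60)
for g in ("down", "down", "right", "up"):
    print(g, "->", strip.gesture(g))
```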
When a device has touchless interaction enabled, not all of the possible gestures/movements need to be enabled at all times. The enabled set could be specific to a given stage of the interaction process or to a specific application that is running in the foreground. In order to help the user know what gestures are possible at a given time, LED lights 60 (or any other indicia) can be mounted on the edges of the screen 62, each indicative of a respective gesture the user can perform. For example, an LED light 60 on the left side of the screen could tell the user that a leftward gesture, or a slider on the left-hand side, is currently enabled. Several LED lights 60 and corresponding gestures can be enabled simultaneously. Thus Fig. 19A shows lights only on the right-hand side of the screen whereas Fig. 19B shows lights all round the screen.
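As a sketch of how such indicia could be driven (the per-application gesture sets and the LED names are assumptions), the device might simply light the edge LEDs corresponding to the gestures enabled by the foreground application.

```python
# Sketch: light the edge LEDs corresponding to the gestures enabled by the
# application currently in the foreground. Gesture sets per app are assumed.
ENABLED_GESTURES = {
    "ebook_reader": {"left", "right"},
    "music_player": {"left", "right", "up", "down"},
    "photo_frame":  {"right"},
}

LED_FOR_GESTURE = {"left": "LED_LEFT_EDGE", "right": "LED_RIGHT_EDGE",
                   "up": "LED_TOP_EDGE", "down": "LED_BOTTOM_EDGE"}


def leds_to_light(foreground_app: str) -> set:
    """Return the set of edge LEDs to switch on for the foreground application."""
    return {LED_FOR_GESTURE[g] for g in ENABLED_GESTURES.get(foreground_app, set())}


print(leds_to_light("ebook_reader"))   # {'LED_LEFT_EDGE', 'LED_RIGHT_EDGE'}
```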
A popular application for tablet-like devices is reading e-books. Touchless interaction can be used to turn pages when reading by waving a hand leftwards or rightwards over the screen. However, the actual turning of pages occupies only a small portion of the time spent using the application. Therefore, the user can be given feedback and reminded as to when the device is actually expecting touchless input, such as turning a page. When the user's hand approaches the screen, the system recognizes the movement. The GUI changes its appearance, and effectively tells the user that the system is ready for a specific mode of interaction. For example, when the user is simply reading the on-screen content, the GUI has its traditional look, such as a classic PDF file layout as shown in Fig. 20A. However, when the user's hand approaches the screen, the GUI changes its appearance and becomes more similar to a real-life book, to indicate that the user can turn the pages as shown in Fig. 20B. The centrefold line may appear, indicated by shadows, folds and/or morphing of the text into a book shape (i.e. to represent non-planar pages). Although the readability of the text may be reduced, the user is reminded of what a book looks like and therefore what kinds of interactions she could expect to be having with it. Note also that when turning a page over, there is no need for an exact one-to-one spatial correspondence between the page motion and the hand motion, since the mental model for turning a page is different from that for moving a solid object. This lessens the requirements on the sensing systems and algorithms relative to, for example, the situation where touchless swipes are used to navigate between pictures lined up on a string or in a folder.
Another embodiment of the invention is shown in Fig. 21. A PC desktop 64 can end up jammed with information, showing icons for documents, web browsers, Word, Excel, Outlook and other applications. Some PC users choose to extend the effective workspace for applications by using several virtual desktops or 'virtual screens' 66. Only one desktop is visible on the screen at a time, but the PC maintains the status of several desktops and the user can instruct the PC to switch to a particular one. Such arrangements help to divide content in a more organized fashion. However, invoking the different desktops requires the use of a mouse, sometimes with large movements, or the user must remember obscure keyboard shortcuts to switch between screens. A better alternative is to use touchless interaction modes. By performing simple left and right motions in front of the screen, the user can swap between desktops. Snap-to-grid functionality can be added to lock screens at full, half or quarter frames. Although only horizontal rows of virtual desktops 66 are shown in Fig. 21, the use of simple touchless gestures allows vertical as well as horizontal and diagonal desktop panning options to be implemented, or even larger 2D 'labyrinths' of pages.
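The snap-to-grid behaviour mentioned above could, for example, be realised as below; the quarter-frame grid and the function name snap_pan are assumptions made for illustration.

```python
# Sketch: snap a horizontal pan offset (in screen widths) across a row of
# virtual desktops to the nearest quarter frame. Grid granularity is assumed.
def snap_pan(offset: float, n_desktops: int, grid: float = 0.25) -> float:
    """Snap the pan to the nearest grid position and clamp it to the desktop row."""
    snapped = round(offset / grid) * grid
    return max(0.0, min(float(n_desktops - 1), snapped))


print(snap_pan(1.38, n_desktops=4))   # 1.5  (half-frame position)
print(snap_pan(3.90, n_desktops=4))   # 3.0  (clamped to the last desktop)
```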
A further embodiment which provides on-screen buttons 66 in certain
circumstances is shown in Fig. 22. Products such as digital picture frames or portable DVD players are usually designed around a set of physical buttons that allow basic navigation. Using touchless technology, it is possible to increase the number of "buttons" available to the user by associating actions in space with particular functions of the device, thus enhancing an existing design. However, since there is usually a lot of thought behind designing a product with the exact number and placement of buttons, such a 'button extension mode' may not always be the best option.
Another approach is to make on-screen buttons 66 look like a copy of the physical buttons located on the device, e.g. around the edge of a digital picture frame. These 'virtual' buttons would have the same functionality as their hard-wired counterparts, but could be bigger and therefore easier to locate than the smaller 'physical' buttons (not shown). They could have a similar or a different arrangement. Upon approaching the device, proximity detection could be used to show an animation where the on-screen buttons 66 'spring out' from the physical buttons, communicating to the user that the two input systems are the same.
From a user-friendliness point of view, it is not always easy to push a button on a device standing on a table without moving the device. Picking up a digital photo frame is not always desirable, such as when showing pictures from an album to friends or family members, and this embodiment overcomes this difficulty.
An on-screen or physical scroll wheel has become popular on touch-based devices such as mp3 players. Difficulties in replicating this in touchless sensing include the need for continuous tracking with high accuracy, and the inherent inability to "untouch" the surface to perform a "select" function. One approach is to use a curved screen object 68 (see Fig. 23A) and perform a select function when a 'radial' movement is detected. More generally, a motion in a direction normal to any shaped curve (not just a circular wheel) can be used to perform a select action. Preferably the curve would be some closed form.
The shape of the closed form may be exploited by the motion detecting or tracking algorithm. This algorithm can be tuned so that it is only sensitive to specific movements at specific positions, namely (a) movements along the curve, and (b) movements normal to the curve but still in a plane parallel to the screen. The second, perpendicular movement could further be limited to only inwards or only outwards motions. The constrained space of motions helps the algorithm to perform robust and low-complexity motion recognition; it only needs to test a smaller set of motion hypotheses instead of performing a full 2D or 3D cursor tracking task. The feedback to the user also need not follow the rules of moving a cursor. Instead, changing the lighting intensity along the contour of the curve (using a sparkle or glow) (see Fig. 23B) and gradually growing out an inwards or outwards branch when a normal motion is detected could provide valuable, if not one-to-one, feedback to the user. The feedback could also include moving a number of graphical objects along the contour of the curve (Fig. 23C).
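The constrained set of motion hypotheses can be sketched as follows for the simple case of a circular wheel (the geometry, the thresholds and the function name are assumptions): each in-plane displacement is decomposed into a component along the curve's tangent and a component along its outward normal, and only those two hypotheses are tested.

```python
# Sketch: decompose an in-plane hand displacement near a circular on-screen
# wheel into tangential (scroll) and radial (select) components. The circle
# geometry and thresholds are illustrative assumptions.
import math

CENTRE = (0.0, 0.0)       # wheel centre in screen coordinates
SELECT_THRESHOLD = 0.05   # radial movement needed to register a "pluck"/select
SCROLL_THRESHOLD = 0.02   # tangential movement needed to register a scroll step


def classify_wheel_motion(pos, delta):
    """Classify a displacement `delta` measured at position `pos` on the wheel."""
    rx, ry = pos[0] - CENTRE[0], pos[1] - CENTRE[1]
    r = math.hypot(rx, ry) or 1.0
    nx, ny = rx / r, ry / r          # outward normal (radial direction)
    tx, ty = -ny, nx                 # tangent, counter-clockwise
    radial = delta[0] * nx + delta[1] * ny
    tangential = delta[0] * tx + delta[1] * ty
    if abs(radial) >= SELECT_THRESHOLD and abs(radial) > abs(tangential):
        return "select_out" if radial > 0 else "select_in"
    if abs(tangential) >= SCROLL_THRESHOLD:
        return "scroll_ccw" if tangential > 0 else "scroll_cw"
    return "none"


print(classify_wheel_motion((1.0, 0.0), (0.08, 0.0)))    # select_out (radial)
print(classify_wheel_motion((1.0, 0.0), (0.0, 0.04)))    # scroll_ccw (tangential)
```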
Discovery With Gestures
An important aspect of gesture interaction design is to help the user understand which gestures are supported by the system in general or by specific applications. On-screen information such as help screens is one option. However, these require the user to take focus away from the application she is working with. Additional text, icons or objects indicating the potential for motion tracking would be another option, but this too would steal attention from more important tasks. The user may be overwhelmed by the amount of information to be taken in.
It is better to design a system which invites exploration and thereby helps the user to gradually learn and understand what her new and additional interaction options are, i.e. in addition to what is afforded by keyboards, mice or touch screens. One way of achieving this is to make a certain area or part of the screen light up in response to a movement over or close to it. There could simply be a colour change of the background, indicating that the specific area is sensitive to movement. At a more advanced level, the screen may simulate opening up to reveal levels beneath its original information layer, almost like a trap door. Equally well, movements in an area could invoke a top, transparent layer, such as a transparent icon superimposed on other, existing on-screen objects (which could still be invoked using a mouse and cursor, or a tap on the screen).
The information revealed to the user may be indicative of the kind of motion the system can detect during a next step. The information might be provided via a transparent or background-residing icon, such as an arrow pointing in a specific direction (or a double arrow or an arrow cross). The cues could also be more subtle, such as wave motions morphing the on-screen object along a specific direction. This would indicate that a motion in that direction could trigger further events. Glowing particles could be used instead of waves. Multiple options for directional motion sensing could be indicated, simultaneously or in turn.
The 'cues' may gradually fade out, either as a function of the time elapsed, or when the finger, hand or other object ceases to be present in the vicinity of the gesture/interaction zone. Such gradual 'discovery' of gesture options allows the user to spend some time taking the new alternatives in. In the meantime, the familiar desktop or application front end is available to the user without new and strange icons appearing or distracting.
Depth-defined gestures
A particular feature of touchless systems, not present in touch-based systems, is their ability to measure depth. Ultrasonic systems are able to sense movements relatively close to the screen. This allows the user to keep focus on what happens on the screen and on what she is doing with her hand at the same time, while thinking within the same coordinate system. This is different from using a mouse, where a relative motion mapping between the table and the cursor on the screen must exist somewhere in the user's mind. It is also different from touchless systems operating at longer ranges, where the sense of a common coordinate system is harder to establish due to the distance to the screen. One-to-one motion mapping (i.e. where the on-screen content is moved the exact same distance as the user's hand moves through space) with an added depth dimension allows for a certain level of depth-defined gestures. This means that a movement, such as a rightwards movement of the hand, could produce a totally different or a slightly different effect based on how far away from the screen the hand is during the motion.

As an example, consider browsing a series of images shown on the screen. The task at hand is to skip from one image to the next. Moving the hand rightwards close to the screen could be given a specific visual effect: as the hand moves in, the string of pictures shrinks in size and is pushed towards the horizon. More images are simultaneously visible than if the inwards push had not happened. When the hand is then moved to the right, the images are skipped at a faster rate because each image is physically smaller on the screen, i.e. browsing through the set happens faster.
The motion of the row of images could still match the motion of the hand to close to a one-to-one degree; the relatively fast change of the objects along the screen is merely a function of the z-displacement of the whole string of images. This makes it natural to move this string at a relatively high pace, i.e. moving a high number of pictures per inch of rightwards hand movement. Had the hand been held further out as it was moved to the right, the pictures would be moved inwards to a lesser degree, and the browsing pace would be lower. Such variable-speed browsing is enabled by actively using the z-dimension of the movement.
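Depth-defined, variable-speed browsing can be sketched like this (the depth range and the linear scaling law are assumptions): the closer the hand is to the screen, the more images are skipped per centimetre of sideways motion.

```python
# Sketch of depth-defined browsing: sideways hand motion is mapped to a number
# of images skipped, scaled by how close the hand is to the screen. The depth
# range and scaling are assumed for illustration.
NEAR_CM, FAR_CM = 2.0, 20.0    # assumed usable depth range of the sensing zone
MIN_RATE, MAX_RATE = 0.2, 2.0  # images per cm of sideways motion


def browse_rate(depth_cm: float) -> float:
    """Images skipped per cm of sideways motion, higher when the hand is closer."""
    clamped = max(NEAR_CM, min(FAR_CM, depth_cm))
    closeness = (FAR_CM - clamped) / (FAR_CM - NEAR_CM)   # 1.0 near, 0.0 far
    return MIN_RATE + closeness * (MAX_RATE - MIN_RATE)


def images_skipped(dx_cm: float, depth_cm: float) -> float:
    return dx_cm * browse_rate(depth_cm)


print(images_skipped(10.0, 3.0))    # fast browsing close to the screen
print(images_skipped(10.0, 18.0))   # slow, image-by-image browsing further out
```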
There are various other ways to exploit the z-dimension, such as triggering a specific effect or invoking a menu or a help screen if the hand moves sufficiently close to the surface. Operation would continue in other, more standard ways when gestures are detected further away.
It will be appreciated by those skilled in the art that the many examples given above are not exhaustive and the invention contemplates other possibilities. In particular any two or more of the features shown herein may be used together without any further features.

Claims

Claims:
1. An electronic device comprising at least one ultrasonic transmitter, at least one ultrasonic receiver and a display screen, the device being arranged to transmit ultrasonic signals from the transmitter and to receive at the receiver ultrasonic signals reflected from an input object, wherein the device is adapted to give a predetermined response only to a predetermined movement or set of
predetermined movements of the input object wherein said movement(s) is/are defined at least partially by at least one of:
a predetermined path through which the input object moves, said predetermined path being indicated by a graphical object on said display screen; and
a predetermined spatial zone in which the movement takes place, said predetermined spatial zone being indicated by a graphical object on said display screen.
2. An electronic device as claimed in claim 1 wherein said transmitter and/or said receiver is/are mounted adjacent to said display screen.
3. An electronic device as claimed in claim 1 or 2 wherein said transmitter and/or said receiver is/are mounted behind said display screen.
4. An electronic device as claimed in any preceding claim wherein the on-screen object is arranged to move in accordance with said movement of the input object.
5. An electronic device as claimed in any preceding claim wherein the predetermined path of movement comprises an angle of the movement relative to a reference plane or reference axis defined relative to the device.
6. An electronic device as claimed in any preceding claim wherein the predetermined path of movement comprises a direction of movement along a given trajectory.
7. An electronic device as claimed in any preceding claim comprising an on-screen object which is indicative of the interactions available to the user.
8. An electronic device as claimed in any preceding claim comprising an on-screen object which changes appearance based on said movement of the input object.
9. An electronic device as claimed in any preceding claim wherein the predetermined response comprises executing a function of the device.
10. An electronic device as claimed in claim 9 wherein the function comprises changing what is displayed on the display screen.
11. A method of operating an electronic device comprising an ultrasonic transmitter, an ultrasonic receiver and a display screen, the method comprising: transmitting ultrasonic signals from said transmitter;
receiving at said receiver ultrasonic signals reflected from an input object; giving a predetermined response only to a predetermined movement or set of predetermined movements of the input object wherein said movement(s) is/are defined at least partially by at least one of:
a predetermined path through which the input object moves, said predetermined path being indicated by a graphical object on said display screen; and
a predetermined spatial zone in which the movement takes place, said predetermined spatial zone being indicated by a graphical object on said display screen.
12. A method as claimed in claim 11 wherein said transmitter and/or said receiver is/are mounted adjacent to said display screen.
13. A method as claimed in claim 11 or 12 wherein said transmitter and/or said receiver is/are mounted behind said display screen.
14. A method as claimed in any of claims 11 to 13 wherein the on-screen object moves in accordance with said movement of the input object.
15. A method as claimed in any of claims 11 to 14 wherein the predetermined path of movement comprises an angle of the movement relative to a reference plane or reference axis defined relative to the device.
16. A method as claimed in any of claims 11 to 15 wherein the predetermined path of movement comprises a direction of movement along a given trajectory.
17. A method as claimed in any of claims 11 to 16 comprising displaying an on-screen object which is indicative of the interactions available to the user.
18. A method as claimed in any of claims 11 to 17 comprising displaying an on-screen object which changes appearance based on said movement of the input object.
19. A method as claimed in any of claims 11 to 13 wherein the predetermined response comprises executing a function of the device.
20. A method as claimed in claim 19 wherein the function comprises changing what is displayed on the display screen.
21. Computer software for operating an electronic device comprising an ultrasonic transmitter, an ultrasonic receiver and a display screen, the software comprising:
logic for transmitting ultrasonic signals from said transmitter;
logic for receiving at said receiver ultrasonic signals reflected from an input object;
logic for giving a predetermined response only to a predetermined movement or set of predetermined movements of the input object wherein said movement(s) is/are defined at least partially by at least one of:
a predetermined path through which the object moves, said predetermined path being indicated by a graphical object on said display screen, the software comprising logic for displaying said graphical object; and a predetermined spatial zone in which the movement takes place, said predetermined spatial zone being indicated by a graphical object on said display screen, the software comprising logic for displaying said graphical object.
PCT/GB2013/050536 2012-03-05 2013-03-05 Touchless user interfaces WO2013132242A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1203851.9 2012-03-05
GBGB1203851.9A GB201203851D0 (en) 2012-03-05 2012-03-05 Touchless user interfaces

Publications (1)

Publication Number Publication Date
WO2013132242A1 true WO2013132242A1 (en) 2013-09-12

Family

ID=46003146

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2013/050536 WO2013132242A1 (en) 2012-03-05 2013-03-05 Touchless user interfaces

Country Status (2)

Country Link
GB (1) GB201203851D0 (en)
WO (1) WO2013132242A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060182346A1 (en) * 2001-09-17 2006-08-17 National Inst. Of Adv. Industrial Science & Tech. Interface apparatus
WO2009147398A2 (en) 2008-06-04 2009-12-10 Elliptic Laboratories As Object location
WO2011042748A2 (en) * 2009-10-07 2011-04-14 Elliptic Laboratories As User interfaces
US20120001875A1 (en) * 2010-06-29 2012-01-05 Qualcomm Incorporated Touchless sensing and gesture recognition using continuous wave ultrasound signals

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103885590A (en) * 2014-03-10 2014-06-25 可牛网络技术(北京)有限公司 Method and user equipment for obtaining user instructions
US10459525B2 (en) 2014-07-10 2019-10-29 Elliptic Laboratories As Gesture control
US9501810B2 (en) 2014-09-12 2016-11-22 General Electric Company Creating a virtual environment for touchless interaction
US20160091308A1 (en) * 2014-09-30 2016-03-31 Invensense, Inc. Microelectromechanical systems (mems) acoustic sensor-based gesture recognition
US9733720B2 (en) 2014-12-02 2017-08-15 Elliptic Laboratories As Ultrasonic proximity and movement detection
WO2017137755A2 (en) 2016-02-09 2017-08-17 Elliptic Laboratories As Proximity detection
US10642370B2 (en) 2016-02-09 2020-05-05 Elliptic Laboratories As Proximity detection

Also Published As

Publication number Publication date
GB201203851D0 (en) 2012-04-18

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13710526

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13710526

Country of ref document: EP

Kind code of ref document: A1