GB2490479A - Use of a virtual sound source to enhance a user interface - Google Patents


Info

Publication number
GB2490479A
Authority
GB
United Kingdom
Prior art keywords
user
sound
geographical
virtual source
geographical position
Prior art date
Legal status
Withdrawn
Application number
GB1106663.6A
Other versions
GB201106663D0 (en)
Inventor
Matti J Nelimarkka
Current Assignee
Nokia Oyj
Original Assignee
Nokia Oyj
Priority date
Filing date
Publication date
Application filed by Nokia Oyj filed Critical Nokia Oyj
Priority to GB1106663.6A
Publication of GB201106663D0
Publication of GB2490479A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Stereophonic System (AREA)

Abstract

A method in which the position of a user input (e.g. cursor or controller position) 210 is detected, thereby enabling generation of a sound output having a corresponding user-perceived virtual source position 225 associated with the detected position. Also disclosed is a method of generating virtual sound source audio navigation signals between a first geographical position (obtained e.g. via GPS) and a second geographical position (obtained e.g. via user input), configured to enable a user to navigate from the first geographical position to the second geographical position. Stereo, surround sound, binaural, 3D sound or a time delay between real sound sources may be used.

Description

AUDIO-ENHANCED USER INTERFACE AND ASSOCIATED METHODS
Technical Field
The present disclosure relates to the field of audio-enhanced user interfaces, associated methods, computer programs and apparatus. Certain disclosed aspects/embodiments relate to portable electronic devices, and in particular, to so-called hand-portable electronic devices which may be hand-held in use (although they may be placed in a cradle in use). Such hand-portable electronic devices include so-called Personal Digital Assistants (PDAs).
The portable electronic devices/apparatus according to one or more disclosed aspects/embodiments may provide one or more audio/text/video communication functions (e.g. tele-communication, video-communication, and/or text transmission (Short Message Service (SMS)/ Multimedia Message Service (MMS)/emailing) functions), interactive/non-interactive viewing functions (e.g. web-browsing, navigation, TV/program viewing functions), music recording/playing functions (e.g. MP3 or other format and/or (FM/AM) radio broadcast recording/playing), downloading/sending of data functions, image capture function (e.g. using a (e.g. in-built) digital camera), and gaming functions.
Background
It is now not unusual for electronic devices to utilise a graphical user interface (GUI) for enabling a user to interact with the device. For example, a touch screen may display one or a number of keys, icons or menu items (or other user interface (UI) elements) which each occupy a subsection of the touch screen area and correspond to a selectable function. Other devices may provide tactile feedback to the user (e.g. a user may feel the keys of a conventional keyboard) or audio feedback (e.g. beeps corresponding to key presses of mobile phone).
The listing or discussion of a prior-published document or any background in this specification should not necessarily be taken as an acknowledgement that the document or background is part of the state of the art or is common general knowledge. One or more aspects/embodiments of the present disclosure may or may not address one or
more of the background issues.
Summary
In a first aspect, there is provided an apparatus comprising: a processor; and a memory including computer program code, the memory and the computer program code configured to, with the processor, cause the apparatus to perform the following: detect the position of a user input from a user interface of an electronic device; use the detected position of the user input to enable generation of a sound having a corresponding user-perceived virtual source position, associated with the detected position of the user input.
Using the detected position of the user input to enable generation of a sound having a corresponding user-perceived virtual source position may create an audio environment and enable the user to hear, for example, audio cues in 1D/2D/3D that may assist the user in interacting with the apparatus (e.g. to confirm the position of a cursor). By using virtual source positions, the user experience may be made more immersive and therefore better. The user-perceived virtual source may enable eyes-free usage of a user interface and/or apparatus.
The apparatus may be configured to detect the position of the input along two axes in the same plane and provide the user-perceived sound position of the sound according to the two dimensional planar axes defined position.
The user-perceived virtual source position may be considered to be the position from which the user perceives the sound to be coming. The user's perception may be dependent on the absolute volume of sound heard in each ear (e.g. the user may interpret a louder sound to be closer than a quieter sound), the relative volume of sound heard in each ear (e.g. if one ear hears a louder sound than the other, the user may interpret that the virtual source is closer to the ear hearing the louder sound), and the relative timing of sounds heard in each ear (e.g. if one ear hears a sound earlier than the other, the user may interpret that the virtual source is closer to the ear hearing the earlier sound).
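As a purely illustrative sketch, the level-difference cue described above might be modelled as follows; the function name, the 10 dB full-scale constant and the linear mapping are assumptions of the example rather than anything specified by the embodiments:

```python
def perceived_azimuth(level_left_db: float, level_right_db: float) -> float:
    """Toy interaural-level-difference model: map the dB difference between the
    ears to a perceived direction in degrees (negative = left of the listener,
    positive = right). The 10 dB full-scale constant is an illustrative
    assumption rather than a measured psychoacoustic value."""
    ild = level_right_db - level_left_db      # positive -> louder in the right ear
    ild = max(-10.0, min(10.0, ild))          # clamp to the assumed working range
    return 90.0 * ild / 10.0

print(perceived_azimuth(60.0, 66.0))          # 54.0 -> perceived well to the right
```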
The position of a user input may correspond to one or more of, for example: a cursor position when the cursor is moved; a cursor position when a button or key is pressed (e.g. clicking a mouse); selecting an icon; a position of a controller (e.g. remote controller such as a wand or mouse); and the position of a scriber (e.g. finger, stylus) on a touch-screen surface.
The apparatus may be configured to enable generation of a sound having a corresponding user-perceived virtual source position using one or more of: time differences between a plurality of real sources; average volume of a plurality of real sources; volume differences between a plurality of real sources; surround sound technology; stereo sound technology; binaural audio technology; and 3D sound technology.
3D sound technology may be provided by a plurality of real sound sources in, for example, a 2.0-Stereo configuration, a 2.1-Stereo configuration, a 4.1-Surround configuration, a 4.0-Quad configuration, a 4.1 configuration, a 5.1 configuration, a 5.1-Side configuration, a 6.1 configuration, a 7.1-Front configuration, a 7.1-Surround configuration or a 9.1-Surround configuration.
The virtual sound source and the corresponding user input may be configured to have the same position.
The apparatus may comprise a sound generator, the sound generator comprising one or more real sound sources configured to enable the generation of a plurality of sound outputs each having a different user-perceived virtual source.
The sound generator may comprise one or more of an earpiece (e.g. of headphones) and a speaker (e.g. subwoofer, woofer and/or tweeter).
The apparatus may be configured to detect the position of the input along one or more axes and provide the user-perceived sound position of the sound according to the axes defined position. That is the user-perceived sound position may be configured to be a function of the user input position.
The apparatus may comprise a user interface, the user interface comprising one or more of a keyboard, a wand, a touch screen interface, a mouse and a cursor.
The apparatus may be an electronic device, a portable electronic device, a phone, a computer, a home cinema, a TV, a movie player, a personal music player, video game console, a laptop, a personal digital assistant, portable e-book reader, a satellite navigation device or a mobile phone or a module for one or more of these devices.
In a second aspect, there is provided a method, the method comprising: detecting the position of a user input from a user interface of an electronic device; using the detected position of the user input to enable generation of a sound having a corresponding user-perceived virtual source position, associated with the detected position of the user input.
In a third aspect, there is provided a computer program, the computer program comprising computer code configured to: detect the position of a user input from a user interface of an electronic device; use the detected position of the user input to enable generation of a sound having a corresponding user-perceived virtual source position, associated with the detected position of the user input.
In a fourth aspect, there is provided an apparatus comprising: a processor; and a memory including computer program code, the memory and the computer program code configured to, with the processor, cause the apparatus to perform the following: determine a first geographical position; and receive input corresponding to a second geographical position; enable generation of audio navigation signalling between the first geographical position and the second geographical position via optionally one or more intermediate geographical positions, the audio navigation signalling having a user-perceived virtual source position associated with the relative position of the second, or intermediate geographical positions, with respect to the first geographical position or an earlier intermediate geographical position, the user-perceived virtual source position configured to enable the user to navigate from the first geographical position to the second geographical location.
The first position may be the current position of the apparatus.
The apparatus may be configured to update the first position when the first position changes.
At least one of the geographical positions may be determined using a combination of one or more of global positioning satellites, mobile phone signals and environment recognition.
The audio navigation signalling may comprise a combination of one or more of a musical note, a song, a tune, an audio guide (e.g. for use in a museum or art gallery), navigation instructions, an audio book, a phone call, a news feed and a radio broadcast.
In a fifth aspect, there is provided a method, the method comprising: determining a first geographical position; and receiving input corresponding to a second geographical position; enabling generation of audio navigation signalling between the first geographical position and the second geographical position via optionally one or more intermediate geographical positions, the audio navigation signalling having a user-perceived virtual source position associated with the relative position of the second, or intermediate geographical positions, with respect to the first geographical position or an earlier intermediate geographical position, the user-perceived virtual source position configured to enable the user to navigate from the first geographical position to the second geographical location.
In a sixth aspect, there is provided a computer program, the computer program comprising computer code configured to: determine a first geographical position; and receive input corresponding to a second geographical position; enable generation of audio navigation signalling between the first geographical position and the second geographical position via optionally one or more intermediate geographical positions, the audio navigation signalling having a user-perceived virtual source position associated with the relative position of the second, or intermediate geographical positions, with respect to the first geographical position or an
earlier intermediate geographical position, the user-perceived virtual source position configured to enable the user to navigate from the first geographical position to the second geographical location.
The present disclosure includes one or more corresponding aspects, embodiments or features in isolation or in various combinations whether or not specifically stated (including claimed) in that combination or in isolation. Corresponding means for performing one or more of the discussed functions are also within the present disclosure.
Corresponding computer programs for implementing one or more of the methods disclosed are also within the present disclosure and encompassed by one or more of the described embodiments.
The above summary is intended to be merely exemplary and non-limiting.
Brief Description of the Figures
A description is now given, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 depicts an embodiment comprising a number of electronic components, including memory, a processor and a communication unit.
Figure 2a illustrates an embodiment comprising a touch-screen.
Figure 2b depicts the user providing input to the touch-screen user interface.
Figure 2c depicts the user perceiving the sound originating from a virtual source position.
Figure 2d shows the sound signals provided by each of the real sound sources.
Figure 3a illustrates a further embodiment comprising a remote controller.
Figure 3b depicts the user perceiving the sound originating from a virtual source position.
Figure 3c shows the sound signals provided by each of the real sound sources.
Figure 4a illustrates a further embodiment comprising a touch-screen.
Figure 4b depicts the user input provided to the touch-screen user interface.
Figure 4c shows the sound signals provided by each of the real sound sources.
Figure 5a illustrates a further embodiment for providing audio navigation signals.
Figure 5b depicts an overhead view of the user in a real world environment.
Figure 5c shows the screen as the user is interacting with the device.
Figure 6 shows a flow diagram illustrating the use of a detected position of the user input to enable generation of a sound having a corresponding user-perceived virtual source position.
Figure 7 illustrates schematically a computer readable media providing a program according to an embodiment of the present invention.
Description of Example Aspects/Embodiments
Other embodiments depicted in the figures have been provided with reference numerals that correspond to similar features of earlier described embodiments. For example, feature number 1 can also correspond to numbers 101, 201, 301 etc. These numbered features may appear in the figures but may not have been directly referred to within the description of these particular embodiments. These have still been provided in the figures to aid understanding of the further embodiments, particularly in relation to the features of similar described embodiments.
In modern devices, various user interfaces have become popular to enable the user to interact with, receive information from and control an apparatus. Generally the user uses visual information provided by the user interface to interact with the device (e.g. using the position of a cursor as it appears on the screen). Other user interfaces use tactile feedback to provide information to the user (e.g. a physical keyboard or haptic display).
However, some interfaces (e.g. a touch-screen) may not be able to provide the same level of tactile information as, for example, a physical keyboard. Other user interfaces use sound to indicate an event (e.g. a click sound corresponding to clicking on an icon).
It may be advantageous to use other senses to enhance the user experience. Providing a sound with a virtual source position which is dependent on (e.g. is a function of) the user input position may enhance the user experience and allow better control of the device.
Figure 1 depicts an example embodiment (101) of an apparatus, such as a mobile phone or personal digital assistant, comprising a user interface (105) such as, for example, a Touch Screen. In other example embodiments, the apparatus (101) may comprise a module for a mobile phone (or PDA, audio/video player or other suitable device), and may just comprise a suitably configured memory 107 and processor 108 (see below).
The apparatus (101) of Figure 1 is configured such that it may receive, include, and/or otherwise access data. For example, this example embodiment (101) comprises a communications unit (103), such as a receiver, transmitter, and/or transceiver, in communication with an antenna (102) for connecting to a wireless network and/or a port (not shown) for accepting a physical connection to a network, such that data may be received via one or more types of networks. This example embodiment comprises a memory (107) that stores data, possibly after being received via an antenna (102) or port or after being generated at the user interface (105). The user interface (105) allows the user to interact with the device (101). The processor (108) may receive the data representing input from the user interface (105), from the memory (107), or from the communication unit (103). Regardless of the origin of the data, these data may be outputted to a user of apparatus (101) via a display device (104), a sound generator (106) and/or any other output devices provided with the apparatus. The processor (108) may also store the data for later use in the memory (107). The memory (107) may store computer program code and/or applications which may be used to instruct/enable the processor (108) to perform functions (e.g. generate, delete, read, write and/or process data). It will be appreciated that other example embodiments may comprise additional displays (e.g. CRT screen, LCD screen and/or plasma screen) and/or user interfaces (e.g. physical keys and/or buttons). It will be appreciated that references to a memory or a processor may encompass a plurality of memories or processors.
Figure 2a depicts the outward appearance of an embodiment of figure 1 comprising a portable electronic device (201), e.g. such as a mobile phone or portable media player, with a user interface comprising a touch-screen (205), a memory (not shown) and a processor (not shown). The portable electronic device is configured to allow the user to interact with the portable electronic device with his/her finger on the touch screen (205).
It will be appreciated that in other suitably adapted embodiments the user may interact with the touch screen using a stylus (or other scriber). In this case the portable electronic device (201) comprises a sound generator (206) which, in this case, is a set of headphones comprising two real sound sources (206a, 206b).
Figure 2b depicts the touch screen (205) of the user interface as the user is interacting with, by providing user input to, the device. In this case the screen is showing icons corresponding to three favourite webpages (211a, 211b, 211c) of the user, arranged along a screen x-axis. In this case, the screen x-axis runs parallel to an edge of the screen (204). A cinema listing website icon (211a) occupies a first range of x-positions on the screen x-axis (251), a news website icon (211b) occupies a second range of x-positions on the screen x-axis (251), and a search-engine website icon (211c) occupies a third range of x-positions on the screen x-axis (251). In this example, the user wishes to select the search-engine website icon (211c). To do this the user presses his finger on the screen within the search-engine website icon (211c) boundaries (and within the third range of x-positions on the screen x-axis (251) corresponding to the search-engine website icon (211c)). This embodiment is configured to detect the position of the finger user input from the touch-screen user interface (205). From this information the device is configured to determine which icon has been pressed.
In this case, the apparatus is configured to give audio feedback to the user in response to the selection. From the user input position, x0, in this case, the device is configured to determine the distance, x0, of the user input along the x-axis (251) of the apparatus. In this case the zero-point (or origin) of this axis (251) is at the left side of the screen (according to the user when the screen is in use). It will be appreciated that, for other embodiments, the zero-point of this axis may not be positioned at an edge of the screen (e.g. it may be positioned at the centre, or some other point, of the screen). Using the value of the screen width, screen_width, the device is configured to determine the distance between the user input position and the left edge of the screen, xL, and the distance between the user input position and the right edge of the screen, xR. In this case, as the origin is at the left edge of the screen, xL = x0, and xR = screen_width - x0.
In order to provide audio-enhanced feedback, the device is configured to supply different signallings to the left and to the right real sound sources of the sound generator. By supplying different signallings, the sound generator is configured to enable the generation of a sound having a corresponding user-perceived virtual source position.
This is depicted in figure 2c, which shows an overhead view of the user (290) wearing the headphones sound generator (206). In this case, the apparatus is configured such that each ear of the user may receive sound from one real sound source of the sound generator (206).
In this example, the detected position of the user input is more to the right of the screen and the device generates a sound having a corresponding user-perceived virtual source position (226) more to the right of the user. In this embodiment, the virtual source position, P(R,O), is defined with respect to the position of the real sound sources (206a, 206b) of the headphones sound generator. In this case, as the real sound sources are configured to be worn on the head of the user, the virtual source moves as the user moves their head (and the real sound sources).
In this case the user-perceived virtual source position, P(R,θ), is configured to be positioned by setting a time delay between the sound signalling provided by the right real sound source (206b) of the sound generator and that provided by the left real sound source (206a). That is, the sound signalling supplied to the user by the left sound source, AL, is the same form as the sound signalling supplied by the right sound source, AR, but can be delayed in time by a time delay, Δt. That is, for this embodiment, AR(t) = AL(t + Δt). It will be appreciated that for this embodiment Δt may be positive, zero or negative. That is, AR may be delayed with respect to AL. In this case Δt is a function of xL and xR. For example, the time delay may be proportional to the difference in distances of the user input from the right and left edges of the screen (e.g. Δt(xL, xR) = A(xL - xR)/screen_width), where A is a constant of proportionality. That is, the user-perceived virtual source position is related to the corresponding user input position using a mapping function, f. This may be expressed as: P(R,θ) = f(xL, xR). In this case, the user-perceived virtual source position, P(R,θ), is defined with respect to the position of the real sound sources of the sound generator. R gives the distance between the virtual source position and the midpoint between the two real source positions. θ gives the angle between the axis joining the virtual source position and the midpoint and the axis on which the two real sources lie. It will be appreciated that other embodiments may be configured to detect the position of the real sound sources and define the user-perceived virtual source position, P(R,θ), with respect to the user interface.
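A minimal sketch of the time-delay mapping just described; the function name, the 0.7 ms maximum delay used as the constant of proportionality and the example values are assumptions for illustration, not figures given in the embodiment:

```python
def channel_delays(x0: float, screen_width: float,
                   max_delay_s: float = 0.0007) -> tuple[float, float]:
    """Map a detected input position x0 (origin at the left screen edge) to
    (left_delay, right_delay) in seconds, so that AR(t) = AL(t + dt) with
    dt = A * (xL - xR) / screen_width. The 0.7 ms maximum delay and the
    linear constant A are illustrative assumptions."""
    x_left = x0                         # distance from the left edge, xL
    x_right = screen_width - x0         # distance from the right edge, xR
    dt = max_delay_s * (x_left - x_right) / screen_width
    if dt >= 0:
        # Input nearer the right edge: delay the left channel so the virtual
        # source is perceived towards the user's right.
        return dt, 0.0
    return 0.0, -dt                     # input nearer the left edge

print(channel_delays(x0=600.0, screen_width=800.0))   # (0.00035, 0.0)
```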
It will be appreciated that the origin may be positioned at the centre of the screen and the apparatus may be configured to determine the distance between the user input and the centre of the screen. For example, negative values of x0 may correspond to a virtual sound source positioned to the left of the screen centre and positive values of x0 may correspond to a virtual sound source positioned to the right of the screen centre.
It will be appreciated that the sound could be used to guide the user's positioning of the input. For example, the virtual sound source position could change in response to the user dragging his finger across the screen. The user could use the information provided by the virtual source position to position his finger before providing further input, such as a different type of input (e.g. clicking an icon). This may allow the user to interact with the device using limited or no visual information.
It will be appreciated that the real sound sources may increase or decrease the volume of each of the sound signallings in accordance with the determined user input position.
For example, if the cursor is closer to the right side of the screen, the volume of the right speaker may be louder than that of the left speaker. This volume difference gives the impression that the sound is coming from the right.
It will be appreciated that other embodiments may use other configurations of multichannel audio (e.g. 5.1 six channel surround sound) to provide the virtual source position.
It will be appreciated that in other embodiments the virtual source position may be dependent on the orientation of the display. For example, the virtual source may be positioned in a two dimensional virtual source plane, the virtual source plane being parallel to the user interface display.
Figure 3a depicts a user (392) interacting with a further embodiment (301) comprising a screen (304), and two real sound sources (306a, 306b) which in this case are speakers.
In this case the user interacts with the information display using a remote control (e.g. a wand). Unlike the previous embodiment, for this embodiment the positions of the real sound sources (306a, 306b) are fixed with respect to the display (304), rather than with respect to the user. In addition, unlike the previous embodiment where each ear of the user could only receive sound from one real sound source, this embodiment is configured such that both ears of the user may receive sound signalling emanating from each of the real sound sources.
In this case, the user interacts with the display by controlling the position of a cursor (393) on the screen using a wand remote controller (392). In this case the apparatus (301) determines the cursor (393) position when the user provides input corresponding to moving the cursor (393). The apparatus (301) is configured to provide a continuous note (of a certain frequency, such as 262 Hertz) from the speakers, as the cursor (393) is being moved. It will be appreciated that, for other embodiments, the frequency of the sound may change as a function of cursor position.
In this case, the real sound sources (306a, 306b) are configured to lie along a right-left x-axis (351) which is parallel to a screen edge. It will be appreciated that for other embodiments, the sound sources may lie along an arbitrary axis (e.g. up-down or some other axis).
In this case, the apparatus is configured to enable sound of a different volume to be provided by each of the sound sources (306a, 306b). When the first (left) real sound source (306a) is louder than the second (right) real sound source (306b), the user perceives the sound as coming from a virtual sound source (326) positioned closer to the first (left) real sound source (306a) than the second (right) real sound source (306b).
Correspondingly, when the second (right) real sound source (306b) is louder than the first (left) real sound source (306a), the user perceives the sound as coming from a virtual sound source positioned towards their right. In this example, the cursor is initially positioned on the screen, towards the first (left) real sound source (306a), at x0 (as depicted in figure 3a), and the user is moving the cursor towards the second (right) real sound source (306b) using the controller (292). In this example, the apparatus is configured to provide a sound output which corresponds to this movement input. That is, in this example, the virtual sound source will move in a direction (327) corresponding to the cursor movement.
The volumes of the speakers as this movement takes place are shown in figure 3c. AL corresponds to the sound produced by the first (left) real sound source speaker (306a).
AR corresponds to the sound produced by the second (right) real sound source speaker (306b). When the user starts to move the cursor at t1, the cursor is towards the first (left) real sound source (306a) side of the screen (on a first position, x1, on the (left-right) x-axis). This input position corresponds to a virtual source position which is towards the first (left) real sound source (306a), which is generated by producing a larger amplitude sound with the first (left) real sound source speaker (306a) than with the second (right) real sound source speaker (306b). As the cursor moves towards the right, the volume of the first (left) real sound source (306a) is reduced and the volume of the second (right) sound source (306b) is increased until the user stops moving the cursor at time, tf. This gives the user the perception that the source of the sound is moving along with the cursor, parallel to the (left-right) x-axis. For this embodiment, the virtual sound source is configured to move in concert with the user input (which in this case is the cursor movement). It will be appreciated that for other embodiments, the virtual sound source may be configured to have the same position as the corresponding user input.
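A short sketch of the volume cross-fade described above; the constant-power (cosine/sine) pan law and the function name are assumptions of the example, since the embodiment only requires that the relative volumes track the cursor:

```python
import math

def stereo_gains(cursor_x: float, screen_width: float) -> tuple[float, float]:
    """Constant-power pan between the left and right real sound sources: the
    nearer the cursor is to the right edge, the louder the right speaker, so
    the virtual source appears to move with the cursor."""
    p = min(max(cursor_x / screen_width, 0.0), 1.0)   # 0 = far left, 1 = far right
    return math.cos(p * math.pi / 2), math.sin(p * math.pi / 2)   # (AL, AR)

# As the cursor is dragged from x1 towards the right, AL falls and AR rises.
for x in (100.0, 400.0, 700.0):
    print(x, stereo_gains(x, 800.0))
```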
It will be appreciated that other embodiments may use one or more of relative timings and relative volumes of two or more real sound generators to give the user the perception of a sound source with a virtual source position.
Figure 4a depicts the outward appearance of a further embodiment comprising a portable electronic device (401), e.g. such as a mobile phone or tablet computing device, with a user interface comprising a touch-screen (405), a memory (not shown) and a processor (not shown). Like the embodiment of figure 2a, the portable electronic device is configured to allow the user to interact with the portable electronic device with his/her finger (and/or scriber) on the touch screen (405). It will be appreciated that in other suitably adapted embodiments the user may interact with the touch screen using a stylus. Like the embodiment of figure 3, the portable electronic device (401) comprises a sound generator (406), which comprises a number of speakers (406a-406d).
However in this case, the speakers are not aligned along a single axis, thereby allowing the virtual source to be positioned with respect to both an x-axis (451) and a y-axis (452).
In this embodiment, the x-axis (451) and y-axis (452) each lie along an edge of the display (404).
Figure 4b depicts a front view of the device as the user is interacting with a 2D array of icons displayed on the touch screen (404) of the device. The 2D array comprises icons arranged in two rows of four columns (411a-411h). In this case the user controls a cursor (410) by touching and dragging his finger across the screen. For this embodiment, the apparatus is configured to control the volume of each of the real sound sources (406a-406d) in order to generate a sound having a virtual sound source (not shown). In this case, the volume of the sounds emitted by each of the speakers is dependent on the position of the cursor (410).
This embodiment is configured to generate a series of short clicks (432) as the user is dragging the cursor and a long click (433) when the user selects an icon (e.g. by pressing a selection key or button). For this embodiment, when the cursor (410) is in the centre of the screen (404), the volumes of the four sound sources (406a-406d) are the same. This gives the user the perception that the sound is emanating from a point which is equidistant from all of the sound sources. When the cursor moves closer to any of the speakers, the apparatus is configured to increase the volume of that speaker. That is, for this embodiment, the relative volume of a speaker increases as the distance of the cursor to the speaker decreases. Correspondingly, when the cursor moves further away from any of the speakers, the apparatus is configured to decrease the relative volume of that speaker.
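A possible sketch of the four-speaker volume rule described above; the speaker coordinates, the 1/(1 + distance) weighting and the normalisation step are illustrative assumptions rather than details taken from the embodiment:

```python
import math

# Assumed speaker positions in screen coordinates (top-left, top-right,
# bottom-left, bottom-right); an 800 x 480 screen is used only as an example.
SPEAKERS = {"LT": (0.0, 0.0), "RT": (800.0, 0.0),
            "LB": (0.0, 480.0), "RB": (800.0, 480.0)}

def four_channel_gains(cursor_x: float, cursor_y: float) -> dict:
    """Give each real sound source a weight that grows as the cursor approaches
    it, then normalise so overall loudness stays roughly constant. With the
    cursor at the screen centre all four gains are equal, as described above."""
    weights = {}
    for name, (sx, sy) in SPEAKERS.items():
        distance = math.hypot(cursor_x - sx, cursor_y - sy)
        weights[name] = 1.0 / (1.0 + distance)     # nearer speaker -> larger weight
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

print(four_channel_gains(400.0, 240.0))   # centre of the screen: all gains equal
print(four_channel_gains(600.0, 100.0))   # near the top right: RT is loudest
```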
Figure 4c depicts the sound amplitude of each of the real sound sources, as the user drags his finger, and the cursor, along a path (431) across the screen (404) and then clicks on the photographs icon (411c). The sound amplitude ALT corresponds to the sound produced by the top left real sound source (406a). The sound amplitude ART corresponds to the sound produced by the top right real sound source (406b). The sound amplitude ABL corresponds to the sound produced by the bottom left real sound source (406c). The sound amplitude ARB corresponds to the sound produced by the bottom right real sound source (406d). In this case the cursor is initially positioned on the settings icon (411f) which is in the second row and second column of the two-dimensional icon array (411) displayed on the screen (404). In the situation depicted in figure 4b the user has dragged the cursor from this initial position, through the centre point, up and towards the right to the photographs icon (which is in the first row and third column of the two-dimensional array) and then clicks on the photo icon.
During the dragging motion, the cursor remains approximately the same distance from both the top left real sound source (406a) and the bottom right real sound source (406d).
In this embodiment, this results in the amplitude of the short clicks for the top left and bottom right real sound sources (406a, 406d) being the same throughout the dragging motion (as shown in the ALT and ARB graph). During the dragging motion, the cursor moves away from the bottom left sound source and towards the top right sound source.
This means that, throughout the dragging motion, the amplitude of the bottom left sound source (406c) is decreased (as shown in the ABL graph), whereas the amplitude of the top right sound source is increased (as shown in the ART graph).
By adjusting the volume, the user perceives a sound, the sound comprising equally spaced short clicks, emanating from a moving virtual source. In this way the user may use the position of the user-perceived virtual source to position the cursor more accurately. In this case, when the user has positioned the cursor in the desired position (using visual and/or audio information provided by the device) the user selects the corresponding photographs icon (411c). For this embodiment, the apparatus is configured to produce a long click sound in response to the selection of an icon. The long click sound is configured to have a corresponding virtual source position corresponding to the position of the cursor when the icon was selected. In this way, the user is provided with audio confirmation of which icon was selected.
We have described embodiments related to audio cues having a virtual source position in 1 and 2 dimensions according to user input detected in 1 and 2 dimensions. It will also be possible to provide audio cues in 3 dimensions based on, for example, detecting position in 3 dimensions and providing an appropriate 3 dimensional audio cue with a virtual position in 3 dimensions. In such an embodiment, the user interface may extend over more than one plane, e.g. not just a flat (2D) user interface, but a user interface that would detect input along a depth axis for example. In the case of detecting input along a depth axis, the virtual source position may be moved forwards/backwards with respect to a user.
Other embodiments may be configured to detect, for example, the real distance between a scriber (e.g. finger) used to provide input and a screen. For example, an embodiment may be configured to detect a scriber within a detection range of a touch screen. For example, the farther the finger is from the screen plane (and within the detection range), the farther the virtual source may be positioned along a z-axis, the z-axis configured to be perpendicular to the screen plane.
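A minimal sketch of such a depth mapping, assuming a 30 mm hover detection range and a linear relationship (both illustrative values, as is the function name):

```python
def hover_to_depth(finger_height_mm: float, detection_range_mm: float = 30.0,
                   max_source_depth: float = 1.0) -> float:
    """Map the detected scriber-to-screen distance (within the hover detection
    range) to a virtual-source offset along a z-axis perpendicular to the
    screen plane: the farther the finger, the farther the source."""
    h = min(max(finger_height_mm, 0.0), detection_range_mm)
    return max_source_depth * h / detection_range_mm

print(hover_to_depth(15.0))   # 0.5 -> source placed halfway along the z-axis
```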
Alternatively/in addition for embodiments configured to display a virtual 3D user interface, the device may be configured such that the virtual sound position may be configured to correspond to the user input within the virtual 3D environment. For example, a 3D user interface may comprise a hovering virtual plane wherein the user can use his finger (or other scriber) to interact with the user interface, such that the user can move the virtual screen plane forward or backward depending on the finger movement in the depth dimension (perpendicular to the screen surface). For example, the user could push the virtual plane of the user interface objects farther away/or select objects farther in the 3D user interface. In this example, the virtual sound source may be positioned to correspond to the position of the virtual plane.
Alternatively/in addition other embodiments may be configured to position the virtual source with respect to another parameter of the user input. For example, a high pressure press on a touch screen may correspond to a virtual source position which is closer, or farther away, than a virtual source position corresponding to a lower pressure press.
Other parameters affecting the virtual source position, and/or type of sound, may include a combination of one or more of: duration of input, type of input (e.g. double click, single click, drag), type of user interface object associated with input (e.g. a window, an icon, a text input field, a URL, a link etc.) and function associated with input (e.g. copy, paste, select, delete, open file, icon type). For example, if a user input is associated with the progress of a task (e.g. data transfer, installation, update) the user may click on (or hover over or otherwise provide input corresponding to) a graphical progress bar (or icon). In this case, the progress of the task may define the position of the virtual source, for example, along an axis. For example, when the task has just started the sound may emanate from the left, whereas if it is almost complete the virtual source may be positioned to the right. It will be appreciated that the virtual sound position may correspond to the information displayed graphically (for example, the virtual source position could be configured to correspond to the position of a progress indicator, such as the progress bar).
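As an illustration of the progress-to-position mapping suggested above; the linear mapping and the use of screen coordinates are assumptions of the sketch:

```python
def progress_to_source_x(progress: float, screen_width: float) -> float:
    """Place the virtual source along the left-right axis according to task
    progress: a just-started task (progress 0.0) sounds from the left edge,
    an almost-complete one from near the right edge."""
    progress = min(max(progress, 0.0), 1.0)
    return progress * screen_width

print(progress_to_source_x(0.1, 800.0))   # 80.0  -> well to the left
print(progress_to_source_x(0.9, 800.0))   # 720.0 -> well to the right
```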
In another example, user interaction with a user interface object may affect the virtual sound type and position. For example, when a user clicks on, selects (or hovers over or otherwise provides input corresponding to) a web link in a browser or other application, a virtual sound used for web links may have more bass in the sound and/or be provided with an echo effect. A virtual sound used when a user navigates from one web form field to another may help the user notice the type of field upon which the focus currently is. For example, if the user had browser bookmarks and navigates among them, the position (of the virtual sound source) and/or the type of virtual sound would indicate which bookmark the user is selecting. If the bookmarks were links to audio information the user often consumes, e.g. such as podcasts, the user would recognize the selected one without having to look at the display or listen to all the links before finding the one he wants to select (as a particular podcast would be associated with one or more of a particular sound type and a particular virtual source position).
In another embodiment, the user interface may be configured to detect user input on a front face as well as an opposing back/rear face of the user interface. In such a situation, the position of the virtual sound source may be varied to be originating from the rear when the user input is provided on the rear face of the user interface, and the virtual sound source may be varied to be originating from the front when the user input is provided on the front face of the user interface.
Figure 5a depicts the outward appearance of a further embodiment comprising a portable electronic device (501), such as a satellite navigation device, with a user interface comprising a touch-screen (505), a memory (not shown) which may store
computer program code, and a processor (not shown). The portable electronic device is configured to allow the user to interact with the portable electronic device using a display screen (504), and a physical keyboard (505). In this case the portable electronic device (501) comprises a sound generator (506) which, in this case, is a set of headphones comprising two real sound sources (506a, 506b). In this case, the apparatus is configured to determine a first geographical position (561) using a global navigation satellite system, such as the Global Positioning System (GPS). This embodiment is also configured to determine the orientation of the sound generator (506) using an in-built compass (509).
Figure 5b depicts an overhead view of the user (592) of the device in a real world environment (569), such as on a street in a town. In this case the user (592) is at a first geographical position (561). The apparatus is configured to enable the user to provide input corresponding to a second geographical position (564) by, for example, clicking on a position on a map or entering a set of coordinates. In this case the apparatus is configured to determine a route (567) between the first geographical position (561) and the destination second geographical position (564). In this case the route is defined by the first geographical position (561), a first intermediate geographical position (562), a second intermediate geographical position (563) and the destination second geographical position (564). It will be appreciated that other embodiments may define a route in terms of distances and directions, or with respect to landmarks, which may all be considered to be geographical positions.
The apparatus is configured to generate audio navigation signalling which is configured to guide the user from the first geographical position (561) to the second geographical position (564) along the determined route (567). For this route (567), when the user is at the first geographical position (561), the virtual source position (526) is configured to correspond to the first intermediate geographical position (562) (as depicted in figure 5b).
That is, for this embodiment, the virtual source position (526) and the next geographical position in the route (which, in the case depicted in figure 5b, is the first intermediate geographical position (562)) are configured to be the same. For this embodiment, the user can use the virtual source position to direct his movement along the determined route. In this case the audio navigation signalling comprises a radio broadcast. In this way, the user can listen to the content of the radio broadcast and use the user-perceived virtual source of the radio broadcast audio navigation signalling to navigate to the second geographical position. It will be appreciated that for other example embodiments the audio navigation signalling may comprise, for example, a song, navigation commands (e.g. 'proceed forwards' or 'turn right') or an audio guide.
Like the embodiment of figure 2c, for this embodiment, the user-perceived virtual source position is configured to be positioned by adjusting a time delay between the sound signalling provided by the right real sound source (506b) of the sound generator and that provided by the left real sound source (506a). That is, the sound signalling supplied to the user by the left sound source is the same form as the sound signalling supplied by the right sound source but can be delayed in time by a time delay. It will be appreciated that for this embodiment the time delay may be positive, negative or zero (in the case where the virtual sound source is directly ahead the delay is zero, as is the case depicted in figure 5b when the user is orientated towards the first intermediate geographical position (562)). However, unlike the embodiment of figure 2 where the virtual source position is orientated with respect to the user's head, the virtual source position (526), in this case, is orientated with respect to the real world environment (569). That is, when the user rotates their head (and the sound generator (506)), the user perceives the virtual sound source (526) as emanating from a fixed position. That is, for this embodiment, the time delay between the first and second real sound sources is adjusted as a function of the orientation of the sound generator (506), so for example if the user was pointing away from the next geographical position they would perceive that the virtual sound source was positioned behind them.
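A simplified sketch of this world-anchored behaviour: the bearing to the next geographical position is taken relative to the compass heading of the sound generator and converted into left/right channel delays. The flat-earth bearing calculation, the sine mapping, the 0.7 ms maximum delay and the function name are assumptions of the example:

```python
import math

def nav_channel_delays(user_lat: float, user_lon: float,
                       target_lat: float, target_lon: float,
                       head_bearing_deg: float,
                       max_delay_s: float = 0.0007) -> tuple:
    """World-anchored virtual source: compute the bearing from the user to the
    next geographical position, take it relative to the compass heading of the
    headphones, and turn it into (left_delay, right_delay) in seconds."""
    d_north = target_lat - user_lat
    d_east = (target_lon - user_lon) * math.cos(math.radians(user_lat))
    bearing = math.degrees(math.atan2(d_east, d_north)) % 360.0   # 0 = north
    relative = (bearing - head_bearing_deg + 180.0) % 360.0 - 180.0
    dt = max_delay_s * math.sin(math.radians(relative))
    # A target to the user's right (relative > 0) reaches the left ear later.
    return (dt, 0.0) if dt >= 0 else (0.0, -dt)

# Facing the next waypoint: both delays are zero, the source is straight ahead.
print(nav_channel_delays(60.0, 24.0, 60.001, 24.0, head_bearing_deg=0.0))
```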
It will be appreciated that, for other embodiments, the real sound sources may increase or decrease the volume of each of the sound signallings to generate a user-perceived virtual source position.
When the user has reached an intermediate geographical position, the device is configured to recognize that the intermediate geographical position has been reached and move the virtual source of the sound to correspond to the next geographical position in the route. In this way the user can navigate to the destination second geographical location by following the sound of the virtual source. That is, when the user has reached (or is approaching) the first intermediate geographical position (562), the time delay is configured to change such that the virtual source position moves to the right (to correspond to the second intermediate geographical position (563)). The user then follows the updated virtual source position to reach the second intermediate geographical position (563). Likewise, when the user has reached (or is approaching) the second intermediate geographical position (563), the time delay is configured to change such that the virtual source position moves to the left (to correspond to the second geographical position (564)).
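A sketch of the waypoint-advance step described above, under the assumptions of a flat-earth distance approximation and an arbitrary 15 m "reached" radius; the route is modelled as a list of (latitude, longitude) pairs ending with the second geographical position:

```python
import math

def advance_waypoint(route: list, index: int, user_lat: float, user_lon: float,
                     reach_radius_m: float = 15.0) -> int:
    """Once the current intermediate geographical position has been reached
    (or closely approached), move on so the virtual source corresponds to the
    next geographical position in the route."""
    target_lat, target_lon = route[index]
    metres_per_deg = 111_320.0
    d_north = (target_lat - user_lat) * metres_per_deg
    d_east = (target_lon - user_lon) * metres_per_deg * math.cos(math.radians(user_lat))
    if math.hypot(d_north, d_east) <= reach_radius_m and index + 1 < len(route):
        return index + 1        # virtual source now tracks the next position
    return index
```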
It will be appreciated that for other example embodiments, the virtual source position may be in the direction of the next geographical position (for example, at a predetermined constant distance from the user).
Figure 5c depicts the screen when the user is moving towards the first intermediate geographical position. The screen, in this case, depicts a visual perspective representation of the real world environment (e.g. generated by the device, or captured by image capture apparatus such as a video or photo camera (e.g. in advance or in real time)). The route indicator (568) provides visual confirmation of the route. It will be appreciated that other example embodiments may not have a visual display or graphical user interface. For example the second geographical position could be entered using voice commands.
It will be appreciated that the position and orientation of the user and/or apparatus may be determined using mobile phone signals, direction of motion and/or environment recognition.
Using the virtual source position of audio navigation signalling for navigation may be more intuitive than using voice commands. The audio navigation signalling may not need to be language specific. For example, the audio navigation signalling may comprise a musical note. It will be appreciated that audio navigation signalling (with a virtual source position) comprising voice commands in a language not understood by the user may still be used by the user for navigation, as the user could follow the virtual source of the voice rather than understand and interpret the content of the commands.
Using the virtual source position of audio navigation signalling may also allow the user to use other senses to perform other tasks simultaneously with receiving navigation signals.
For example, when driving a car, it may reduce the need to visually check navigation instructions provided by, for example, a satellite navigation console.
Using the virtual source position of audio navigation signalling may also allow the user to use other aspects of the audio navigation signalling to perform other tasks simultaneously with receiving navigation signals. For example, the user could use the content of the audio navigation signalling to make a phone call whilst using the virtual source position to navigate through the environment. For example, the second geographical position could be selected to correspond to the location of the person whom the user was phoning. In this way, an embodiment may be configured such that the user could navigate towards a person with whom they are having a phone call using the virtual source position of the person's voice whilst continuing the conversation.
It will be appreciated that for other embodiments, the virtual source may be positioned such that the user goes away from the virtual source to reach the second geographical position. In this case, when the user is going in the correct direction the virtual sound source would be perceived to be positioned behind the user (such that the user would be directed away from the sound source towards the next geographical position).
It will be appreciated that other example embodiments may enable a user to provide input (e.g. by touching and/or hovering over) corresponding with a geographical position represented by a displayed map, wherein the device is configured to generate a virtual sound having a virtual source position corresponding to the geographical position. For example, if the user provided input corresponding to a location which was north east of the user's geographical position, the apparatus may be configured such that the user would perceive the sound coming from the north east.
It will be appreciated that other example embodiments may be configured for use in 3D environments (e.g. in buildings, such as museums or art galleries). For example, if the second geographical position corresponded to going down stairs, the virtual source may be positioned below the user.
Figure 6 shows a flow diagram illustrating the use of a detected position of the user input to enable generation of a sound having a corresponding user-perceived virtual source position.
Figure 7 illustrates schematically a computer/processor readable media 700 providing a program according to an embodiment of the present invention. In this example, the computer/processor readable media is a disc such as a digital versatile disc (DVD) or a compact disc (CD). In other embodiments, the computer readable media may be any media that has been programmed in such a way as to carry out an inventive function.
It will be appreciated by the skilled reader that any mentioned apparatus/device/server and/or other features of particular mentioned apparatus/device/server may be provided by apparatus arranged such that they become configured to carry out the desired operations only when enabled, e.g. switched on, or the like. In such cases, they may not necessarily have the appropriate software loaded into the active memory in the non-enabled (e.g. switched-off) state and may only load the appropriate software in the enabled (e.g. switched-on) state. The apparatus may comprise hardware circuitry and/or firmware. The apparatus may comprise software loaded onto memory. Such software/computer programs may be recorded on the same memory/processor/functional units and/or on one or more memories/processors/functional units.
In some embodiments, a particular mentioned apparatus/device/server may be pre-programmed with the appropriate software to carry out desired operations, and wherein the appropriate software can be enabled for use by a user downloading a "key", for example, to unlock/enable the software and its associated functionality. Advantages associated with such embodiments can include a reduced requirement to download data when further functionality is required for a device, and this can be useful in examples where a device is perceived to have sufficient capacity to store such pre-programmed software for functionality that may not be enabled by a user.
It will be appreciated that any mentioned apparatus/circuitry/elements/processor may have other functions in addition to the mentioned functions, and that these functions may be performed by the same apparatus/circuitry/elements/processor. One or more disclosed aspects may encompass the electronic distribution of associated computer programs and computer programs (which may be source/transport encoded) recorded on an appropriate carrier (e.g. memory, signal).
It will be appreciated that any "computer" described herein can comprise a collection of one or more individual processors/processing elements that may or may not be located on the same circuit board, or the same region/position of a circuit board or even the same device. In some embodiments one or more of any mentioned processors may be distributed over a plurality of devices. The same or different processor/processing elements may perform one or more functions described herein.
It will be appreciated that the term "signalling" may refer to one or more signals transmitted as a series of transmitted and/or received signals. The series of signals may comprise one, two, three, four or even more individual signal components or distinct signals to make up said signalling. Some or all of these individual signals may be transmitted/received simultaneously, in sequence, and/or such that they temporally overlap one another.
With reference to any discussion of any mentioned computer and/or processor and memory (e.g. including ROM, CD-ROM, etc.), these may comprise a computer processor, Application Specific Integrated Circuit (ASIC), field-programmable gate array (FPGA), and/or other hardware components that have been programmed in such a way as to carry out the inventive function.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole, in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that the disclosed aspects/embodiments may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the disclosure.
While there have been shown and described examples of embodiments of the invention, it will be understood that various omissions, substitutions and changes in the form and details of the devices and methods described may be made by those skilled in the art without departing from the spirit of the invention. For example, it is expressly intended that all combinations of those elements and/or method steps which perform substantially the same function in substantially the same way to achieve the same results are within the scope of the invention. Moreover, it should be recognized that structures and/or elements and/or method steps shown and/or described in connection with any disclosed form or embodiment of the invention may be incorporated in any other disclosed or described or suggested form or embodiment as a general matter of design choice.
Furthermore, in the claims means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures.

Claims (19)

1. An apparatus comprising: a processor; and a memory including computer program code, the memory and the computer program code configured to, with the processor, cause the apparatus to perform the following: detect the position of a user input from a user interface of an electronic device; use the detected position of the user input to enable generation of a sound having a corresponding user-perceived virtual source position, associated with the detected position of the user input.
2. The apparatus of claim 1, wherein the apparatus is configured to detect the position of the input along two axes in the same plane and provide the user-perceived sound position of the sound according to the two dimensional planar axes defined position.
3. The apparatus of claim 1, wherein the apparatus is configured to enable generation of a sound having a corresponding user-perceived virtual source position using one or more of: time differences between a plurality of real sources; volume differences between a plurality of real sources; average volume of a plurality of real sources; surround sound technology; stereo sound technology; binaural audio technology; and sound technology.
4. The apparatus of claim 1, wherein the apparatus comprises a sound generator, the sound generator comprising a plurality of real sound sources configured to enable the generation of a plurality of sound outputs each having a different user-perceived virtual source.
5. The apparatus of claim 4, wherein the sound generator comprises one or more of headphones and speakers.
6. The apparatus of claim 1, wherein the apparatus is configured to detect the position of the input along one or more axes and provide the user-perceived sound position of the sound according to the axes defined position.
7. The apparatus of claim 1, wherein the apparatus comprises a user interface, the user interface comprising one or more of a graphical user interface, a 2D graphical user interface, a keyboard, a wand, a touch screen interface, a mouse and a cursor.
8. The apparatus of claim 1, wherein the position of a user input corresponds to one or more of, for example: a cursor position when the cursor is moved; a cursor position when a button or key is pressed; selecting an icon; a position of a controller; and the position of a scriber on a touch-screen surface.
9. The apparatus of claim 1, wherein the virtual sound source and the corresponding user input are configured to have the same position.
10. A method, the method comprising: detecting the position of a user input from a user interface of an electronic device; using the detected position of the user input to enable generation of a sound having a corresponding user-perceived virtual source position, associated with the detected position of the user input.
11. A computer program, the computer program comprising computer code configured to: detect the position of a user input from a user interface of an electronic device; use the detected position of the user input to enable generation of a sound having a corresponding user-perceived virtual source position, associated with the detected position of the user input.
12. An apparatus comprising: a processor; and a memory including computer program code, the memory and the computer program code configured to, with the processor, cause the apparatus to perform the following: determine a first geographical position; and receive input corresponding to a second geographical position; enable generation of audio navigation signalling between the first geographical position and the second geographical position via optionally one or more intermediate geographical positions, the audio navigation signalling having a user-perceived virtual source position associated with the relative position of the second, or intermediate geographical positions, with respect to the first geographical position or an earlier intermediate geographical position, the user-perceived virtual source position configured to enable the user to navigate from the first geographical position to the second geographical location.
13. The apparatus of claim 12, wherein the first position is the current position of the apparatus.
14. The apparatus of claim 12, wherein the apparatus is configured to update the first position when the first position changes.
15. The apparatus of claim 12, wherein at least one of the geographical positions is determined using a combination of one or more of global positioning satellites, mobile phone signals and environment recognition.
16. The apparatus of claim 12, wherein the audio navigation signalling comprises a combination of one or more of a musical note, a song, a tune, an audio guide, navigation instructions, a phone call, an audio book, a news feed and a radio broadcast.
17. The apparatus of claim 1 or claim 12, wherein the apparatus is an electronic device, a portable electronic device, a phone, a computer, a home cinema, a TV, a personal music player, a laptop, a personal digital assistant, a portable e-book reader, a satellite navigation device or a mobile phone or a module for one or more of these devices.
18. A method, the method comprising: determining a first geographical position; and receiving input corresponding to a second geographical position; enabling generation of audio navigation signalling between the first geographical position and the second geographical position via optionally one or more intermediate geographical positions, the audio navigation signalling having a user-perceived virtual source position associated with the relative position of the second, or intermediate geographical positions, with respect to the first geographical position or an earlier intermediate geographical position, the user-perceived virtual source position configured to enable the user to navigate from the first geographical position to the second geographical location.
19. A computer program, the computer program comprising computer code configured to: determine a first geographical position; and receive input corresponding to a second geographical position; enable generation of audio navigation signalling between the first geographical position and the second geographical position via optionally one or more intermediate geographical positions, the audio navigation signalling having a user-perceived virtual source position associated with the relative position of the second, or intermediate geographical positions, with respect to the first geographical position or an earlier intermediate geographical position, the user-perceived virtual source position configured to enable the user to navigate from the first geographical position to the second geographical location.
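The audio navigation signalling recited in claims 12, 18 and 19 above can be pictured with the following minimal sketch, which computes the bearing from the first geographical position to the next (intermediate or second) geographical position and maps it to a lateral pan; the function names, the pan law and the use of a compass heading are illustrative assumptions rather than the claimed method:

```python
import math


def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0


def lateral_pan(first_pos, heading_deg, next_pos):
    """Return a pan value in [-1.0, 1.0] (-1 = fully left, +1 = fully right)
    so the navigation audio appears to come from the direction of the next
    (intermediate or second) geographical position relative to the user's heading."""
    rel = (bearing_deg(*first_pos, *next_pos) - heading_deg + 540.0) % 360.0 - 180.0
    return max(-1.0, min(1.0, rel / 90.0))


# Example: user at (60.17, 24.94) facing north; a destination to the north-east
# is rendered towards the right-hand side of the stereo image.
print(lateral_pan((60.17, 24.94), 0.0, (60.18, 24.96)))
```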
GB1106663.6A 2011-04-20 2011-04-20 Use of a virtual sound source to enhance a user interface Withdrawn GB2490479A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1106663.6A GB2490479A (en) 2011-04-20 2011-04-20 Use of a virtual sound source to enhance a user interface

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1106663.6A GB2490479A (en) 2011-04-20 2011-04-20 Use of a virtual sound source to enhance a user interface

Publications (2)

Publication Number Publication Date
GB201106663D0 GB201106663D0 (en) 2011-06-01
GB2490479A true GB2490479A (en) 2012-11-07

Family

ID=44147270

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1106663.6A Withdrawn GB2490479A (en) 2011-04-20 2011-04-20 Use of a virtual sound source to enhance a user interface

Country Status (1)

Country Link
GB (1) GB2490479A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0528743A1 (en) * 1991-08-19 1993-02-24 International Business Machines Corporation Audio user interface with stereo and filtered sound effects for visually impaired users
US5287102A (en) * 1991-12-20 1994-02-15 International Business Machines Corporation Method and system for enabling a blind computer user to locate icons in a graphical user interface
US6297818B1 (en) * 1998-05-08 2001-10-02 Apple Computer, Inc. Graphical user interface having sound effects for operating control elements and dragging objects
US6647119B1 (en) * 1998-06-29 2003-11-11 Microsoft Corporation Spacialization of audio with visual cues
US6469712B1 (en) * 1999-03-25 2002-10-22 International Business Machines Corporation Projected audio for computer displays
WO2003027822A2 (en) * 2001-09-24 2003-04-03 Koninklijke Philips Electronics N.V. Interactive system and method of interaction
EP1903432A2 (en) * 2006-09-14 2008-03-26 Avaya Technology Llc Audible computer user interface method and apparatus
WO2008112704A1 (en) * 2007-03-14 2008-09-18 Apple Inc. Audibly announcing user interface elements

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9335905B1 (en) 2013-12-09 2016-05-10 Google Inc. Content selection feedback
WO2015101413A3 (en) * 2014-01-05 2015-08-20 Kronoton Gmbh Method for audio reproduction in a multi-channel sound system
US11153702B2 (en) 2014-01-05 2021-10-19 Kronoton Gmbh Method for audio reproduction in a multi-channel sound system
US9612722B2 (en) 2014-10-31 2017-04-04 Microsoft Technology Licensing, Llc Facilitating interaction between users and their environments using sounds
US9652124B2 (en) 2014-10-31 2017-05-16 Microsoft Technology Licensing, Llc Use of beacons for assistance to users in interacting with their environments
US9977573B2 (en) 2014-10-31 2018-05-22 Microsoft Technology Licensing, Llc Facilitating interaction between users and their environments using a headset having input mechanisms
US10048835B2 (en) 2014-10-31 2018-08-14 Microsoft Technology Licensing, Llc User interface functionality for facilitating interaction between users and their environments

Also Published As

Publication number Publication date
GB201106663D0 (en) 2011-06-01

Similar Documents

Publication Publication Date Title
CN103914222B (en) Image display device and its control method
KR101748668B1 (en) Mobile twrminal and 3d image controlling method thereof
KR101841121B1 (en) Mobile terminal and control method for mobile terminal
EP2799972B1 (en) Mobile terminal capable of dividing a screen and a method of controlling the mobile terminal
US10185456B2 (en) Display device and control method thereof
KR101850821B1 (en) Mobile terminal and message display method for mobile terminal
KR101758164B1 (en) Mobile twrminal and 3d multi-angle view controlling method thereof
US20130009890A1 (en) Method for operating touch navigation function and mobile terminal supporting the same
EP2372539A2 (en) Method and apparatus for editing list in portable terminal
US20140096084A1 (en) Apparatus and method for controlling user interface to select object within image and image input device
US11429342B2 (en) Spatial management of audio
US9715339B2 (en) Display apparatus displaying user interface and method of providing the user interface
KR20150019165A (en) Mobile terminal and method for controlling the same
KR20150004123A (en) Electronic device and method for controlling multi- window in the electronic device
KR20150146296A (en) Mobile terminal and method for controlling the same
KR20140074141A (en) Method for display application excution window on a terminal and therminal
KR20110055088A (en) Operation method for display of portable device and apparatus using the same
KR101968131B1 (en) Mobile apparatus for processing multiple applications and method thereof
CN112230914B (en) Method, device, terminal and storage medium for producing small program
US10628017B2 (en) Hovering field
KR20150069184A (en) Method for controlling screen of portable electronic device
KR20140118338A (en) Display apparatus for executing plurality of applications and method for controlling thereof
CN103135888A (en) Method and mobile device for displaying supplementary window
KR20120080774A (en) Displaying method for displaying information of electro-field intensity and system thereof, and portable device supporting the same
KR20130064514A (en) Method and apparatus for providing 3d ui in electric device

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)