EP4057645A1 - Apparatus, methods and computer programs for providing an audio user interface - Google Patents

Apparatus, methods and computer programs for providing an audio user interface

Info

Publication number
EP4057645A1
Authority
EP
European Patent Office
Prior art keywords
user
spatial audio
body parts
items
audio items
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21161158.7A
Other languages
German (de)
English (en)
Inventor
Christopher Wright
Harry CRONIN
Phil CATTON
William Schnabel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Technologies Oy
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Priority to EP21161158.7A
Publication of EP4057645A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303 Tracking of listener position or orientation
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/11 Positioning of individual sound objects, e.g. moving airplane, within a sound field

Definitions

  • Examples of the present disclosure relate to apparatus, methods and computer programs for providing an audio user interface. Some relate to apparatus, methods and computer programs for providing a spatial audio user interface.
  • User interfaces can be provided to enable a user to interact with electronic devices such as mobile telephones.
  • Spatial audio user interfaces can provide audio outputs that are rendered so that the user can perceive the audio outputs to be originating from a particular location or direction.
  • an apparatus comprising means for: estimating a position of one or more body parts of a user relative to a spatial audio output system; and enabling one or more spatial audio items to be provided via the spatial audio output system so that the one or more spatial audio items are provided at positions determined, at least in part, by the position of the one or more body parts of the user and one or more distraction criteria associated with the one or more spatial audio items.
  • the means may be for using one or more wireless signals to estimate the position of the one or more body parts.
  • the wireless signals may comprise any one or more of mmWaves, ultra wide band signals, WiFi signals, acoustic signals.
  • the means may be for determining a location of the spatial audio output system and using the determined location of the spatial audio output system to estimate positions of one or more body parts of a user.
  • the one or more spatial audio items may comprise one or more interactive user interface items that enable the user to interact with one or more applications of the apparatus.
  • Spatial audio items with a higher one or more distraction criteria may be positioned at more prominent positions relative to the one or more body parts of the user compared to spatial audio items with a lower one or more distraction criteria so that the spatial audio items with a higher one or more distraction criteria are more audibly perceptible by the user.
  • the one or more distraction criteria may be determined by one or more of: importance of spatial audio item, application associated with spatial audio item, assigned user preferences.
  • One or more of the spatial audio items may be provided within the user's peripersonal space.
  • the means may be for tracking the position of the one or more body parts of the user and if the one or more body parts of the user has changed adjusting the rendering of the one or more spatial audio items so that the position of the one or more spatial audio items relative to the one or more body parts of the user is maintained.
  • the one or more body parts of the user may comprise any one or more of: user's arms, legs, torso.
  • the spatial audio output system may comprise at least one of: ear pieces, head set, surround sound speaker system.
  • an apparatus comprising at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: estimating a position of one or more body parts of a user relative to a spatial audio output system; and enabling one or more spatial audio items to be provided via the spatial audio output system so that the one or more spatial audio items are provided at positions determined, at least in part, by the position of the one or more body parts of the user and one or more distraction criteria associated with the one or more spatial audio items.
  • a method comprising: estimating a position of one or more body parts of a user relative to a spatial audio output system; and enabling one or more spatial audio items to be provided via the spatial audio output system so that the one or more spatial audio items are provided at positions determined, at least in part, by the position of the one or more body parts of the user and one or more distraction criteria associated with the one or more spatial audio items.
  • a computer program comprising computer program instructions that, when executed by processing circuitry, cause: estimating a position of one or more body parts of a user relative to a spatial audio output system; and enabling one or more spatial audio items to be provided via the spatial audio output system so that the one or more spatial audio items are provided at positions determined, at least in part, by the position of the one or more body parts of the user and one or more distraction criteria associated with the one or more spatial audio items.
  • a user device comprising an apparatus as described herein.
  • the spatial audio user interface can comprise spatial audio items that can enable a user to interact with a user device.
  • the position of a limb or other part of a user's body can be determined.
  • the spatial audio items can then be positioned based on the positions of the user's limbs or other parts of their body. This enables the spatial audio items to be provided in positions that are intuitive and convenient for a user to interact with.
  • Fig. 1 shows an example system 101 that can be used to implement examples of the disclosure.
  • the system 101 comprises a user device 111 and a spatial audio output system 113.
  • the spatial audio output system 113 can be configured to provide spatial audio for a user 103. It is to be appreciated that only components of the system 101 that are referred to in this description are shown in Fig. 1 and that the system 101 could comprise additional components in other examples of the disclosure.
  • Fig. 1 also shows a user 103.
  • the user 103 can be the user 103 of the user device 111.
  • the spatial audio output system 113 can be configured to provide spatial audio to the user 103.
  • the user 103 has one or more body parts 105.
  • the body parts 105 are the user's arms.
  • the body parts 105 could be the user's legs, torso or any other suitable part of their body.
  • the user device 111 could be a mobile phone, a smart speaker or any other suitable electronic device.
  • the user device 111 could be a portable electronic device that the user 103 could carry in their pocket, handbag, or other place such that there might be no direct line of sight between the user device 111 and the one or more parts 105 of the user's body.
  • the user device 111 can be configured to determine the position of the user's body parts 105.
  • the user device 111 could be configured to control the spatial audio system 113 to enable a spatial audio output to be rendered for the user 103.
  • the user device 111 can be positioned in proximity to the user 103 so that the user device 111 can be used to estimate positions of one or more parts of the user's body.
  • the user device 111 can be positioned close enough to the user 103 to enable wireless signals to be used to estimate positions of one or more parts of the user's body.
  • the user device 111 comprises an apparatus 107 and a transceiver 109. Only the components of the user device 111 referred to in the following description have been shown in Fig. 1. It is to be appreciated that in implementations of the disclosure the user device 111 can comprise additional components that have not been shown in Fig. 1.
  • the user device 111 can comprise a power source, a user interface and other suitable components.
  • the apparatus 107 can be a controller 203 comprising a processor 205 and memory 207 that can be as shown in Fig. 2.
  • the apparatus 107 can be configured to enable control of the user device 111.
  • the apparatus 107 can be configured to control the radiofrequency beams that are transmitted by the transceiver 109 or any other suitable functions of the user device 111.
  • the user device 111 also comprises at least one transceiver 109.
  • the transceiver 109 can comprise any means that can be configured to enable radio frequency signals to be transmitted and received by the user device 111.
  • the transceiver 109 can be configured to enable wireless communications.
  • the transceiver 109 can be configured to provide one or more wireless signals that can be used to estimate the position of one or more body parts of the user.
  • the transceiver 109 can provide wireless signals such as mmWaves, ultra wide band signals, WiFi signals, acoustic signals (e.g. active sonar ranging).
  • the transceiver 109 can be configured to transmit wireless signals and then detect the signals that are reflected back from the user's body parts 105.
  • the apparatus 107 can then use the information in the reflected signals to estimate the positions of the parts of the user's body.
  • the positions of the body parts 105 can be detected by detecting shadowing or blocking of the wireless signals.
  • shadowing or blocking of wireless signals of a communication channel between the user device 111 and the spatial audio system 113 can be used.
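  • As an illustration of the reflection-based ranging described above, the sketch below estimates the range of a reflecting body part from the round-trip delay of a transmitted pulse. This is a minimal sketch under assumed names and values, not the patent's specific method; combining ranges from several beams or antennas would be needed to estimate a position rather than a single distance.

```python
# Minimal time-of-flight ranging sketch (illustrative assumptions only):
# a pulse is transmitted at tx_time_s and its reflection from a body part
# 105 is detected at rx_time_s; halving the round trip gives the range.

SPEED_OF_LIGHT_M_S = 3.0e8  # propagation speed of the radio signal

def estimate_range_m(tx_time_s: float, rx_time_s: float) -> float:
    """Return the one-way distance to the reflector in metres."""
    round_trip_s = rx_time_s - tx_time_s
    return round_trip_s * SPEED_OF_LIGHT_M_S / 2.0  # halve: out and back

# A reflection arriving 4 ns after transmission puts the reflector
# roughly 0.6 m from the transceiver 109.
print(estimate_range_m(0.0, 4e-9))  # -> 0.6
```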
  • the user device 111 can comprise additional components that are not shown in Fig. 1 .
  • an acoustic transducer could be provided that can be configured to enable acoustic signals to be used to detect the position of the user's body parts 105.
  • a speaker and a microphone may be used in combination to transmit acoustic signals and then detect the acoustic signals that are reflected back from the one or more body parts 105 of the user 103.
  • the same user device 111 can transmit the wireless signal and detect reflected wireless signals.
  • a plurality of different devices could be provided that can transmit and/or detect the wireless signals. These can enable the wireless signals to be transmitted from a first device and reflected from the user's body and then detected by a different device.
  • where the transceiver 109 is configured to enable wireless communication using mmWaves, the transceiver 109 can be configured to enable wireless communication using a wavelength below approximately 10 mm. Wavelengths below approximately 10 mm can be considered to be short wavelengths.
  • the transceiver 109 can be configured to enable wireless communication using a high frequency.
  • the high frequency can be above 24 GHz. In some examples the frequency may be between 24 to 39 GHz.
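  • The wavelength and frequency figures above are related by λ = c/f, as the short check below shows: the 24 to 39 GHz band corresponds to wavelengths of roughly 12.5 mm down to about 7.7 mm, around the approximately 10 mm boundary mentioned earlier.

```python
# lambda = c / f: quick check of the mmWave figures quoted above.

SPEED_OF_LIGHT_M_S = 3.0e8

def wavelength_mm(frequency_hz: float) -> float:
    return SPEED_OF_LIGHT_M_S / frequency_hz * 1000.0

print(wavelength_mm(24e9))  # -> 12.5
print(wavelength_mm(39e9))  # -> ~7.7
```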
  • the transceiver 109 can be configured to enable 5G communication.
  • the transceiver 109 can be configured to enable communication within New Radio networks.
  • New Radio is the 3GPP (3rd Generation Partnership Project) name for 5G technology.
  • the use of the wireless signals to determine the position of the one or more parts 105 of the user's body can enable the positions of the parts 105 of the user's body to be determined even when there is no direct line of sight between the user device 111 and the parts 105 of the user's body, and therefore also no direct line of sight between the apparatus 107 within the user device 111 and the parts of the user's body.
  • the user device 111 could be in the user's pocket or handbag.
  • the spatial audio output system 113 can comprise any means that can be configured to provide a spatial audio output to the user 103.
  • the spatial audio output system 113 is configured to convert an electrical input signal to an output sound signal that can be heard by the user 103.
  • the spatial audio output system 113 can comprise earphones, a head set, an arrangement of loudspeakers or any other suitable system.
  • the spatial audio that is played back by the spatial audio output system 113 can be configured so that spatial audio items can be perceived by the user 103 to be located at particular positions.
  • the audio that is provided by the spatial audio output system 113 can comprise one or more settings to control the spatial aspects of the audio. For example, head related transfer functions (HRTFs), or other processes can be used to create spatial characteristics that can be reproduced by the spatial audio output system 113 when the audio is played back.
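  • As a sketch of how HRTF-based processing is commonly implemented, a mono audio item can be convolved with a left/right head-related impulse response (HRIR) pair selected for the target direction. The code below is an assumed, conventional implementation rather than the patent's specific renderer; obtaining the HRIR pair (for example from measured HRTF data) is left abstract.

```python
# Illustrative binaural rendering via HRIR convolution (assumed approach).
# The caller supplies an HRIR pair already selected, or interpolated, for
# the desired source direction; both impulse responses have equal length.

import numpy as np

def render_binaural(mono: np.ndarray,
                    hrir_left: np.ndarray,
                    hrir_right: np.ndarray) -> np.ndarray:
    """Convolve a mono item with an HRIR pair; returns (n, 2) stereo."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)
```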
  • the spatial processing of the audio content can be performed by any suitable device in the system 101.
  • the audio content can be spatially processed by the user device 111 and can then be transmitted to the spatial audio output system 113 for playback.
  • the spatial audio output system 113 could comprise one or more processing modules that could be configured to process the audio signal to provide spatial characteristics.
  • the user device 111 can be configured to communicate with the spatial audio output system 113. This can enable the user device 111 to provide spatial audio content for playback to the spatial audio output system 113. In some examples this can enable the user device 111 to provide information to the spatial audio output system 113 that can then be used by the spatial audio output system 113 when rendering the spatial audio content and/or spatial audio items. For example, the user device 111 can determine the position of the user 103 or part 105 of the user's body. This information could then be provided to the spatial audio output system 113 and used by the spatial audio output system 113 when rendering one or more spatial audio items.
  • the rendering can comprise the processing of a digital signal before the digital signal is played back by a loudspeaker.
  • Fig. 2 shows an example apparatus 107.
  • the apparatus 107 illustrated in Fig. 2 can be a chip or a chip-set.
  • the apparatus 107 can be provided within user devices 111 such as a mobile phone, personal electronics device or any other suitable type of user device 111.
  • the apparatus 107 could be provided within user devices 111 as shown in Fig. 1.
  • the apparatus 107 comprises a controller 203.
  • the implementation of the controller 203 can be as controller circuitry.
  • the controller 203 can be implemented in hardware alone, have certain aspects in software including firmware alone or can be a combination of hardware and software (including firmware).
  • the controller 203 can be implemented using instructions that enable hardware functionality, for example, by using executable instructions of a computer program 209 in a general-purpose or special-purpose processor 205 that can be stored on a computer readable storage medium (disk, memory etc.) to be executed by such a processor 205.
  • the processor 205 is configured to read from and write to the memory 207.
  • the processor 205 can also comprise an output interface via which data and/or commands are output by the processor 205 and an input interface via which data and/or commands are input to the processor 205.
  • the memory 207 is configured to store a computer program 209 comprising computer program instructions (computer program code 211) that controls the operation of the apparatus 107 when loaded into the processor 205.
  • the computer program instructions of the computer program 209 provide the logic and routines that enable the apparatus 107 to perform the methods illustrated in Figs. 3 and 5.
  • the processor 205 by reading the memory 207 is able to load and execute the computer program 209.
  • the apparatus 107 therefore comprises: at least one processor 205; and at least one memory 207 including computer program code 211, the at least one memory 207 and the computer program code 211 configured to, with the at least one processor 205, cause the apparatus 107 at least to perform: estimating a position of one or more body parts 105 of a user 103 relative to a spatial audio output system 113; and enabling one or more spatial audio items to be provided via the spatial audio output system 113 so that the one or more spatial audio items are provided at positions determined, at least in part, by the position of the one or more body parts 105 of the user 103 and one or more distraction criteria associated with the one or more spatial audio items.
  • the delivery mechanism 201 can be, for example, a machine readable medium, a computer-readable medium, a non-transitory computer-readable storage medium, a computer program product, a memory device, a record medium such as a Compact Disc Read-Only Memory (CD-ROM) or a Digital Versatile Disc (DVD) or a solid-state memory, an article of manufacture that comprises or tangibly embodies the computer program 209.
  • the delivery mechanism can be a signal configured to reliably transfer the computer program 209.
  • the apparatus 107 can propagate or transmit the computer program 209 as a computer data signal.
  • the computer program 209 can be transmitted to the apparatus 107 using a wireless protocol such as Bluetooth, Bluetooth Low Energy, Bluetooth Smart, 6LoWPAN (IPv6 over low-power wireless personal area networks), ZigBee, ANT+, near field communication (NFC), radio frequency identification (RFID), wireless local area network (wireless LAN) or any other suitable protocol.
  • the computer program 209 comprises computer program instructions for causing an apparatus 107 to perform at least the following: estimating a position of one or more body parts 105 of a user 103 relative to a spatial audio output system 113; and enabling one or more spatial audio items to be provided via the spatial audio output system 113 so that the one or more spatial audio items are provided at positions determined, at least in part, by the position of the one or more body parts 105 of the user 103 and one or more distraction criteria associated with the one or more spatial audio items.
  • the computer program instructions can be comprised in a computer program 209, a non-transitory computer readable medium, a computer program product, a machine readable medium. In some but not necessarily all examples, the computer program instructions can be distributed over more than one computer program 209.
  • although the memory 207 is illustrated as a single component/circuitry, it can be implemented as one or more separate components/circuitry, some or all of which can be integrated/removable and/or can provide permanent/semi-permanent/dynamic/cached storage.
  • although the processor 205 is illustrated as a single component/circuitry, it can be implemented as one or more separate components/circuitry, some or all of which can be integrated/removable.
  • the processor 205 can be a single core or multi-core processor.
  • references to "computer-readable storage medium", "computer program product", "tangibly embodied computer program" etc. or a "controller", "computer", "processor" etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), signal processing devices and other processing circuitry.
  • References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc.
  • circuitry can refer to one or more or all of the following: (a) hardware-only circuitry implementations (such as implementations in only analog and/or digital circuitry); (b) combinations of hardware circuits and software, such as a combination of analog and/or digital hardware circuit(s) with software/firmware, and any portions of hardware processor(s) with software (including digital signal processor(s)), software and memory(ies) that work together to cause an apparatus, such as a mobile phone, to perform various functions; and (c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g. firmware) for operation, but the software may not be present when it is not needed for operation.
  • circuitry also covers an implementation of merely a hardware circuit or processor and its (or their) accompanying software and/or firmware.
  • circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
  • the blocks illustrated in the Figs. 3 and 5 can represent steps in a method and/or sections of code in the computer program 209.
  • the illustration of a particular order to the blocks does not necessarily imply that there is a required or preferred order for the blocks and the order and arrangement of the block can be varied. Furthermore, it can be possible for some blocks to be omitted.
  • Fig. 3 shows an example method according to examples of the disclosure. The method could be implemented using an apparatus 107 and system 101 as described above.
  • the method comprises estimating a position of one or more body parts 105 of a user 103 relative to a spatial audio output system 113.
  • any suitable process can be used to estimate the positions of the body parts 105.
  • one or more wireless signals can be used to estimate the position of the one or more body parts 105.
  • the wireless signals could comprise mmWaves, ultra wide band signals, WiFi signals, acoustic signals or any other suitable type of signal.
  • the position of the user's body parts 105 can be estimated based on detected reflections of the wireless signals. This can enable the position of the body parts 105 of the user 103 relative to the user device 111 to be determined.
  • the position of the spatial audio system 113 relative to the user device 111 can also be determined.
  • where the spatial audio system 113 comprises earbuds or a head set, the properties of a datalink between the spatial audio system 113 and the user device 111 can be examined to determine the position of the earbuds or headset.
  • the position of the earbuds or head set can be determined by estimating a likely position of the user's head based on normal human motions.
  • where the spatial audio system 113 comprises loudspeakers, the positions of these loudspeakers can be determined and provided to the user device 111.
  • once the position of the spatial audio system 113 relative to the user device 111 and the position of the user's body parts relative to the user device 111 are known, these can be used to estimate a position of the user's body parts relative to the spatial audio system 113. This position can be estimated using an algorithm or any other suitable means.
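  • A minimal sketch of that composition step, assuming every position is expressed as a 3-vector in the user device's coordinate frame and that the frames are aligned:

```python
# If the body part and the spatial audio output system 113 are both
# located relative to the user device 111, a vector subtraction gives
# the body part's position relative to the audio system.

import numpy as np

def body_relative_to_audio(body_rel_device: np.ndarray,
                           audio_rel_device: np.ndarray) -> np.ndarray:
    return body_rel_device - audio_rel_device

# Assumed example: hand 0.4 m in front of the device, headset 0.5 m above.
hand = body_relative_to_audio(np.array([0.4, 0.0, 0.0]),
                              np.array([0.0, 0.0, 0.5]))
print(hand)  # -> [ 0.4  0.  -0.5]
```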
  • the positions of the body parts 105 of the user 103 could be estimated based on a context of the user 103. For example, the position and/or activity of the user 103 could be determined and this could enable the position of parts of the user's body to be estimated. For instance, if it is determined that the user 103 is seated at a desk then it is likely that the user's arms will be positioned in front of them to enable typing or writing and that the legs of the user 103 will be positioned in front of them in a seated position.
  • the apparatus 107 could first determine the context of the user 103 to determine a likely position of the user's limbs or other body parts 105.
  • the wireless signals, or any other suitable means could then be used to obtain a more accurate estimate of the positions or could be used to detect movement or changes in position of the user's limbs or other body parts 105.
  • the method comprises enabling one or more spatial audio items to be provided via the spatial audio output system 113.
  • the spatial audio items could comprise one or more interactive user interface items that enable the user 103 to interact with one or more applications of the apparatus 107 and/or user device 111.
  • the spatial audio items could comprise a notification that an event associated with one or more applications of the apparatus 107 has occurred.
  • the spatial audio items could comprise items that a user 103 could select or otherwise interact with.
  • the spatial audio items can be provided by the spatial audio output system 113 so that the spatial audio items are perceived by the user 103 to originate from a determined location or direction. Any suitable means can be used to enable the rendering of the spatial audio items. For example HRTFs can be used where the spatial audio output system 113 comprises a head set or earbuds. Other types of filtering or processing could be used in other examples of the disclosure.
  • the spatial audio items can be provided within the user's peripersonal space.
  • the peripersonal space is the region of space immediately surrounding the user's body.
  • the location of the peripersonal space can be determined from the estimated positions of the user's body parts 105.
  • the position at which the spatial audio items are to be provided can be determined, at least in part, by the position of the one or more body parts 105 of the user 103 and one or more distraction criteria associated with the one or more spatial audio items.
  • Sounds that occur at some locations within the peripersonal space of a user 103 are naturally more distracting than sounds provided at other locations. For example, a sound that occurs near a user's hands will be naturally more distracting than a similar sound that occurs near a user's elbow or a sound that occurs further away from the user 103. This naturally occurring variation in how distracting a sound will be can be used to determine where the audio items should be provided. This can enable different levels of distraction and noticeability to be associated with different spatial audio items.
  • the spatial audio can be provided so that items with a higher one or more distraction criteria are positioned at more prominent positions relative to the body parts 105 of the user 103 compared to spatial audio items with a lower one or more distraction criteria.
  • a spatial audio item with a higher distraction criteria could be provided at a location close to a user's hand while a spatial audio item that has a lower distraction criteria could be provided at a location that is close to the user's elbow.
  • Other locations for the spatial audio items could be used in other examples of the disclosure.
  • the one or more distraction criteria could be determined, at least in part, by an importance of a spatial audio item.
  • the importance could be an indication of the significance of an event associated with a spatial audio item. For instance, a notification that the user device 111 is running low on power could have a higher importance than an incoming chat message.
  • the one or more distraction criteria could be determined, at least in part, by an application associated with a spatial audio item. For instance, items associated with an email application could be provided with a higher distraction criteria than spatial audio items associated with a chat or gaming application or other applications.
  • the one or more distraction criteria could be determined, at least in part, by assigned user preferences.
  • the user 103 of the user device 111 could indicate the items, functions and/or applications to which they would like to assign higher distraction criteria.
  • a user 103 could indicate the applications of the user device 111 that they wish to associate with a higher distraction criteria or conversely the applications that they wish to associate with a lower distraction criteria.
  • a user 103 could associate specific events of the user device 111 with a higher distraction criteria.
  • the user 103 could assign messages or incoming communications from specific people to a higher distraction criteria. As an example, this could enable the user 103 to assign a higher distraction criteria to their manager or to family members than to friends.
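  • The sketch below combines the three factors just described (importance, associated application, assigned user preferences) into a single distraction score and maps it to a rendering position. The weights, thresholds and names are assumptions made for illustration; the patent does not prescribe a particular scoring scheme.

```python
# Illustrative distraction scoring; all weights and thresholds are assumed.

APP_WEIGHT = {"email": 0.8, "chat": 0.4, "game": 0.2}  # per-application weight
HIGH_PRIORITY_SENDERS = {"manager", "family"}          # assigned user preference

def distraction_score(importance: float, app: str, sender: str = "") -> float:
    """Combine importance, application and user preference into [0, 1]."""
    score = importance * APP_WEIGHT.get(app, 0.5)
    if sender in HIGH_PRIORITY_SENDERS:
        score = min(1.0, score + 0.3)  # user-assigned boost
    return score

def placement(score: float) -> str:
    """Higher scores are given more prominent, more distracting positions."""
    return "near hand" if score > 0.6 else "near elbow"

print(placement(distraction_score(0.9, "email", sender="manager")))  # near hand
print(placement(distraction_score(0.5, "chat")))                     # near elbow
```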
  • the method can comprise additional blocks that are not shown in Fig. 3.
  • the method can also comprise tracking the position of the one or more body parts 105 of the user 103. This can enable changes in the positions of the user 103 or body parts of the user 103 to be monitored over time. This enables the rendering of the one or more spatial audio items to be adjusted so that the position of the one or more spatial audio items relative to the one or more body parts of the user 103 is maintained. For instance, if the spatial audio item is to be rendered close to the user's hand, then as the user 103 moves their hand relative to the apparatus 107 or the spatial audio system 113, the position of the spatial audio item relative to the apparatus 107 and/or spatial audio system 113 also changes.
  • the apparatus 107 can be configured to update the rendering of the spatial audio to take into account this change.
  • the tracking of the position of the one or more body parts 105 of the user 103 can enable the position of the peripersonal space and the locations within the peripersonal space to be adjusted relative to the position of the apparatus 107 and/or the spatial audio system 113. For instance, if the user's arm is in a first position at a first time then the spatial audio output can be provided at a first position corresponding to the position of the arm. If the arm is detected to be in a second position at a second time then the spatial audio output can be provided in a second position corresponding to the second position of the arm.
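  • A sketch of this tracking behaviour is shown below; `estimate_hand_position` and `rerender_item` are hypothetical stand-ins for the wireless position estimation and the spatial rendering update.

```python
# Tracking sketch: keep the item's offset from the tracked body part 105
# constant by re-rendering whenever the body part is seen to have moved.

import time
import numpy as np

def track_and_maintain(item_offset: np.ndarray,
                       estimate_hand_position,   # hypothetical estimator
                       rerender_item,            # hypothetical renderer hook
                       threshold_m: float = 0.02,
                       poll_s: float = 0.05) -> None:
    last = estimate_hand_position()
    rerender_item(last + item_offset)
    while True:
        time.sleep(poll_s)
        hand = estimate_hand_position()
        if np.linalg.norm(hand - last) > threshold_m:  # body part moved
            rerender_item(hand + item_offset)  # maintain relative position
            last = hand
```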
  • Figs. 4A and 4B show example implementations of the disclosure.
  • the apparatus 107 has determined the position of parts 105 of the user's body.
  • the body parts 105 are the arms 405A, 405B of the user 103.
  • the positions of other parts 105 of the user's body could be determined in other examples of the disclosure.
  • the position of the user's legs or torso could be determined.
  • the positions of the user's body parts 105 can be determined using any suitable means. In some examples the positions of the user's body parts can be determined using wireless signals and/or from determining a context of the user 103.
  • both of the user's arms 405A, 405B are positioned in front of the user 103.
  • the user 103 could be seated at a desk and the two arms 405A, 405B could be positioned in front of the user 103 as the user types.
  • the user's peripersonal space has been divided into four different zones 401A, 401B, 401C, 401D.
  • the positions of the four different zones 401A, 401B, 401C, 401D are determined by the positions of the user's arms 405A, 405B.
  • a first zone 401A is provided to the left of the user's left arm 405A.
  • a second zone 401B is provided to the right of the user's left arm 405A.
  • a third zone 401C is provided to the left of the user's right arm 405B and a fourth zone 401D is provided to the right of the user's right arm 405B. It is to be appreciated that other numbers and arrangements of the zones could be used in other examples of the disclosure.
  • the zones 401A, 401B, 401C, 401D are positioned so that they are easy and intuitive for a user 103 to interact with.
  • the zones 401A, 401B, 401C, 401D are positioned so that spatial audio items 403 that are provided in these zones 401A, 401B, 401C, 401D are highly perceptible to the user 103.
  • the spatial audio items 403 in the zones 401A, 401B, 401C, 401D are likely to be more perceptible than spatial audio items provided elsewhere. This makes the spatial audio items 403 provided in these zones more distracting than spatial audio items provided outside of the user's peripersonal space, or provided in an inconvenient location within the peripersonal space such as close to the user's elbow.
  • the different zones 401A, 401B, 401C, 401D can be associated with different functions and/or applications of the apparatus 107 or the user device 111.
  • each zone 401A, 401B, 401C, 401D is associated with a different function or application. In other examples some applications and functions could be associated with more than one zone.
  • the first zone 401A is associated with a first email account
  • the second zone 401B is associated with a second email account
  • the third zone 401C is associated with a first application
  • the fourth zone 401D is associated with a second application.
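  • One possible data representation of the four zone bindings just listed treats each zone as a lateral band either side of an arm's estimated position; the coordinates, band width and bindings below are assumptions made for the sketch.

```python
# Illustrative encoding of zones 401A-401D as lateral bands either side
# of each arm's estimated x-position (widths and bindings are assumed).

def build_zones(left_arm_x: float, right_arm_x: float, width: float = 0.15):
    return {
        "401A": ((left_arm_x - width, left_arm_x), "first email account"),
        "401B": ((left_arm_x, left_arm_x + width), "second email account"),
        "401C": ((right_arm_x - width, right_arm_x), "first application"),
        "401D": ((right_arm_x, right_arm_x + width), "second application"),
    }

# Arms roughly 0.4 m apart in front of a seated user (assumed geometry).
zones = build_zones(left_arm_x=-0.2, right_arm_x=0.2)
```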
  • the user device 111 receives an email associated with the second email account. This causes a notification to be provided to the user 103.
  • the notification is a spatial audio item 403.
  • the apparatus 107 determines that a spatial audio item 403 is to be provided and then determines the location of the zone 401B associated with the spatial audio item 403 that is to be provided. In this example the apparatus 107 will determine that the spatial audio item 403 is to be provided in a zone 401B that is located to the right of the user's left arm 405A as shown in Fig. 4A.
  • the apparatus 107 can then determine the location of the user's arm so that the apparatus 107 can determine the location of the user's arm relative to the spatial audio system 113 and/or the apparatus 107.
  • the apparatus 107 can then enable the spatial audio item 403 to be processed so that the spatial audio item 403 sounds as though it is located within the second zone 401B. This therefore enables an alert to be provided to the user 103.
  • the user 103 can perceive the alert without having to directly interact with the user device 111.
  • the position of the spatial audio item 403 provides information to the user 103.
  • the position of the spatial audio item 403 provides information indicative of the application or function that is associated with the spatial audio item 403. This can enable the user 103 to distinguish between different notifications or other types of spatial audio items 403 without having to look at the user device 111.
  • the apparatus 107 can be configured to detect a user input or user interaction associated with the spatial audio item 403. For instance, the user 103 could make a gesture or other user input in response to the spatial audio item 403 that could cause the spatial audio item 403 to be cancelled or could enable a function associated with the spatial audio item 403 to be performed.
  • Fig. 4B shows an example in which the user 103 is interacting with the spatial audio item 403 and so is enabling control of functions of the user device 111 and/or apparatus 107.
  • the interactions comprise the user 103 making a gesture within the second zone 401B, or at least partly within the second zone 401B that is associated with the spatial audio item 403.
  • the user 103 touches the right side of their left arm 405A.
  • the user 103 can move their right arm 405B to touch the right side of their left arm 405A as shown in Fig. 4B.
  • the apparatus 107 can detect that the gesture user input has occurred. The apparatus 107 can detect this by determining the positions of the user's arms 405A, 405B and recognizing this as a gesture user input. The positions of the user's arms 405A, 405B and/or movement of the user's arms 405A, 405B can be detected using wireless signals and/or any other suitable means.
  • the apparatus 107 can then identify that this user input is associated with the spatial audio item 403 because the user input has been detected in the second zone 401B to the right of the left arm 405A.
  • the apparatus 107 therefore determines that the interaction is associated with the second email account and so can enable a function associated with the second email account to be performed.
  • the function could be providing an update relating to the emails that have been received. For example, it could provide an indication of the number of emails that have been received and the sender of the emails. The indication could be provided as an audio output via the spatial audio system 113.
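  • Continuing the sketch above, this dispatch step can be expressed as a lookup of the detected hand position against the zone bands, invoking whatever handler is bound to the zone that was hit; `summarise_inbox` is a hypothetical handler for the second email account.

```python
# Illustrative gesture dispatch using the zones built earlier: find the
# zone containing the detected hand position and call its bound handler.

def dispatch_gesture(hand_x: float, zones: dict, handlers: dict):
    for zone_id, ((lo, hi), binding) in zones.items():
        if lo <= hand_x <= hi:
            handler = handlers.get(binding)
            return handler() if handler else None
    return None  # gesture fell outside every zone

def summarise_inbox():  # hypothetical handler
    return "2 new emails in the second account"

handlers = {"second email account": summarise_inbox}
print(dispatch_gesture(-0.1, zones, handlers))  # lands in zone 401B
```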
  • Other functions could be performed in response to the interactions from the user 103 in other examples of the disclosure.
  • the spatial audio items 403 therefore enable the user 103 to interact with functions of the apparatus 107 and/or user device 111 without having to touch or look at or otherwise directly interact with the apparatus 107 and/or user device 111.
  • This can enable the user 103 to control functions of the apparatus 107 and/or user device 111 while the user device 111 remains in their pocket and/or handbag or otherwise when there is no line of sight between the apparatus 107 and the one or more parts of the user's body. This can be more convenient for the user 103.
  • Figs. 4A and 4B show an example of how a spatial audio item 403 could be provided to a user 103 and how the user 103 can interact with the spatial audio item 403.
  • the apparatus 107 can be configured to enable a spatial audio item 403 to be provided in response to an input from the user 103.
  • the input could be a gesture user input that comprises the user 103 positioning their arms 405, or other parts of their body, in a specific configuration.
  • the user 103 could hold their arms 405 out in front of them or could use one arm 405 to tap another part of their body.
  • the positions of the user's arms 405 could be detected and recognized as a gesture input.
  • the spatial audio items 403 could be provided.
  • a spatial audio item 403 comprising information relating to the second email account could be provided.
  • the information that is provided could be an indication of the number of emails that have been received and the sender of the emails or any other suitable information.
  • the spatial audio item 403 could be provided in the zone 401B associated with the second email account or in any other suitable location within the user's peripersonal space.
  • Fig. 5 shows another example method that could be implemented using the apparatus 107 and systems 101 described herein.
  • the method comprises requesting a spatial audio item 403.
  • the spatial audio item 403 could be a user interface item that can enable a user 103 to interact with the user device 111 and/or apparatus 107. For instance, it could be a notification that informs the user 103 of an event or an item that can be selected by a user 103 or any other suitable user interaction.
  • the request for the spatial audio item 403 can be received or otherwise obtained by the apparatus 107.
  • the request for the spatial audio item 403 could be triggered by the occurrence of an event.
  • the trigger event could be the receiving of a message by an email or messaging application or any other suitable event.
  • the trigger event could be detected by the user device 111 or a module within the user device 111.
  • the method can also comprise receiving one or more distraction criteria for the spatial audio item 403.
  • the distraction criteria can provide an indication of the level of distraction that should be provided by the spatial audio item 403.
  • the distraction criteria can be determined by any one or more of: importance of spatial audio item, application associated with spatial audio item, assigned user preferences or any other suitable factors or combination of such factors.
  • a position for the spatial audio item 403 can be determined.
  • the position can be a location relative to one or more body parts 105 of the user 103.
  • the location could be a position within one or more zones that are defined relative to one or more body parts 105 of the user 103.
  • the position for the spatial audio item 403 can be determined based on the functions associated with the spatial audio item 403, the positions of the body parts 105 of the user 103, the zones available within the user's peripersonal space and/or any other suitable factor.
  • the position for the spatial audio item 403 can be determined, at least in part, based on one or more distraction criteria associated with the spatial audio item.
  • the distraction criteria can ensure that spatial audio items 403 with a higher designated importance value can be provided in positions that correspond to the level of importance. This can enable spatial audio items 403 with a higher one or more distraction criteria to be positioned at more prominent positions relative to the body parts 105 of the user 103 compared to spatial audio items with a lower one or more distraction criteria.
  • the most prominent positions for the spatial audio items 403 comprise the positions at which the user 103 would find the spatial audio items 403 most distracting.
  • the most prominent positions could be close to the user's hands or a position that appears to be within the user's head. Positions with lower prominence could comprise positions close to the user's elbow or behind the user 103.
  • the distraction criteria can be used so that spatial audio items 403 with one or more higher distraction criteria are provided at more prominent positions. This causes the spatial audio items with a higher one or more distraction criteria to be more audibly perceptible by the user 103.
  • the position determined at block 503 can be an ideal position. This could be an optimal, or substantially optimal position, for the spatial audio item.
  • the actual position that can be achieved for the spatial audio item 403 can be limited by factors such as the accuracy with which the position of the user's body parts 105 can be determined, the accuracy at which the spatial audio system 113 can render audio items and any other suitable factors.
  • the position of one or more of the user's body parts 105 can be estimated. This could comprise estimating the position of the user's legs, arms or torso or any other suitable part of the user's body.
  • the position of the user's body parts 105 can be determined using wireless signals or any other suitable means.
  • the settings for the spatial audio system 113 are calculated.
  • the settings for the spatial audio system 113 can comprise the filters, or other processes, that are to be used to enable the spatial audio item 403 to be rendered so that the spatial audio item 403 is perceived to be at the position determined at block 503.
  • the user's limbs or other body parts 105 do not have a fixed position relative to the apparatus 107 and/or the spatial audio system 113. This means that the position at which the spatial audio item 403 is to be rendered can change over time, even if the user device 111 and the spatial audio system 113 do not move. Therefore, the estimated position of the body parts 105 of the user 103 is used together with the ideal position for the spatial audio item 403 in order to calculate the settings for the spatial audio system 113.
  • the spatial audio item 403 is provided.
  • the spatial audio item 403 can be provided in a digital signal.
  • the spatial audio item 403 is played back to a user 103.
  • the spatial audio system 113 converts the digital signal comprising the spatial audio item 403 into an acoustic signal that can be heard by the user 103.
  • the settings that are applied to the spatial audio item 403 ensure that the user 103 perceives the spatial audio item 403 at the appropriate position within their peripersonal space.
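  • Pulling these blocks together, a minimal end-to-end sketch of the flow is given below. It reuses the hypothetical helpers from the earlier sketches; `lookup_hrirs` and `elbow_position` are further assumed names standing in for the settings calculation and a lower-prominence position.

```python
# End-to-end sketch composing the earlier hypothetical helpers: the ideal
# position chosen from the distraction score is combined with the
# estimated body pose (block 505) before the renderer settings are
# calculated and the rendered item is handed to the playback system.

def provide_spatial_audio_item(item, importance, app, sender=""):
    score = distraction_score(importance, app, sender)
    target = (estimate_hand_position() if placement(score) == "near hand"
              else elbow_position())              # position relative to body
    hrir_l, hrir_r = lookup_hrirs(target)         # renderer settings
    return render_binaural(item, hrir_l, hrir_r)  # stereo signal to play
```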
  • the example method shown in Fig. 5 also shows how spatial audio items 403 can be provided in response to a user interaction or input.
  • the method comprises estimating a new position of the body parts 105 of the user 103.
  • the new position can be estimated using wireless signals or any other suitable means.
  • the new position can be estimated in the same way that the position of the body parts 105 is estimated at block 505.
  • a user interaction can be detected.
  • the user interaction can be detected if the new estimated position of the body parts 105 corresponds to a known input gesture. For instance, in the example shown in Fig. 4B the user interaction is the user 103 moving one of their hands into one of the predefined zones within their peripersonal space. In the example of Fig. 4B the user moves their hand into the zone in which the spatial audio item 403 was provided.
  • Other gestures and user inputs could be used in other examples of the disclosure.
  • a function associated with the detected user interaction is determined.
  • the function and/or application associated with the zone in which the gesture has been detected could be identified. This could be used in examples such as those shown in Figs. 4A and 4B.
  • the user interaction could comprise any recognisable gesture so that other user interactions could be used in other examples of the disclosure.
  • the user interaction could be a user waving their hand or other part of their body. The user interaction need not be a movement that takes place within a particular zone.
  • the function associated with the interaction can be performed.
  • Examples of the disclosure provide the advantage that the position of the parts of the user's body 105 can be determined even when there is no direct line of sight between the apparatus 107 and the parts of the user's body. For example, if the user device 111 is within the user's pocket or handbag.
  • the spatial user interface and the spatial audio items 403 that are provided can therefore enable a user to interact with the user device 111 without having to remove the user device 111 from their pocket or handbag. This can be more convenient for the user 103.
  • the position of the spatial audio items 403 can also enable additional information to be conveyed to a user 103.
  • the position of the spatial audio item can provide an indication of the application or function associated with a notification.
  • the position of the spatial audio item can be used to provide an indication of importance of the spatial audio item 403 so that spatial audio items 403 with a higher designated importance value can be provided in positions that correspond to the level of importance.
  • a property of the instance can be a property of only that instance or a property of the class or a property of a sub-class of the class that includes some but not all of the instances in the class. It is therefore implicitly disclosed that a feature described with reference to one example but not with reference to another example, can where possible be used in that other example as part of a working combination but does not necessarily have to be used in that other example.
  • 'a' or 'the' is used in this document with an inclusive not an exclusive meaning. That is, any reference to X comprising a/the Y indicates that X may comprise only one Y or may comprise more than one Y unless the context clearly indicates the contrary. If it is intended to use 'a' or 'the' with an exclusive meaning then it will be made clear in the context. In some circumstances the use of 'at least one' or 'one or more' may be used to emphasise an inclusive meaning but the absence of these terms should not be taken to imply any exclusive meaning.
  • the presence of a feature (or combination of features) in a claim is a reference to that feature or (combination of features) itself and also to features that achieve substantially the same technical effect (equivalent features).
  • the equivalent features include, for example, features that are variants and achieve substantially the same result in substantially the same way.
  • the equivalent features include, for example, features that perform substantially the same function, in substantially the same way to achieve substantially the same result.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)
EP21161158.7A 2021-03-08 2021-03-08 Apparatus, methods and computer programs for providing an audio user interface Pending EP4057645A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21161158.7A EP4057645A1 (fr) 2021-03-08 2021-03-08 Apparatus, methods and computer programs for providing an audio user interface

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP21161158.7A EP4057645A1 (fr) 2021-03-08 2021-03-08 Apparatus, methods and computer programs for providing an audio user interface

Publications (1)

Publication Number Publication Date
EP4057645A1 2022-09-14

Family

ID=74859743

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21161158.7A Pending EP4057645A1 (fr) Apparatus, methods and computer programs for providing an audio user interface

Country Status (1)

Country Link
EP (1) EP4057645A1 (fr)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130259238A1 (en) * 2012-04-02 2013-10-03 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for gestural manipulation of a sound field
EP2891955A1 (fr) * 2014-01-03 2015-07-08 Harman International Industries, Incorporated Système audio spatial interactif gestuel embarqué

Similar Documents

Publication Publication Date Title
US9832582B2 (en) Sound effect control method and apparatus
US11032675B2 (en) Electronic accessory incorporating dynamic user-controlled audio muting capabilities, related methods and communications terminal
US10878833B2 (en) Speech processing method and terminal
US20170318374A1 (en) Headset, an apparatus and a method with automatic selective voice pass-through
US10805708B2 (en) Headset sound channel control method and system, and related device
US10475434B2 (en) Electronic device and control method of earphone device
CN108777827B (zh) Wireless earphone, volume adjustment method and related product
WO2015020889A1 (fr) Gesture-controlled earphones
EP3599777B1 (fr) Apparatus, system, method and computer program for providing spatial audio
WO2016003955A1 (fr) Variable step size echo cancellation with accounting for instantaneous interference
KR20170065630A (ko) Facial expression image matching method, apparatus and terminal
US9601128B2 (en) Communication apparatus and voice processing method therefor
CN106384597B (zh) Audio data processing method and device
US10204639B2 (en) Method and device for processing sound signal for communications device
US20210144495A1 (en) Microphone Hole Blockage Detection Method, Microphone Hole Blockage Detection Device, and First Wireless Earphone
CN103647868B (zh) Method and apparatus for reducing ringtone disturbance
US11227617B2 (en) Noise-dependent audio signal selection system
WO2020118496A1 (fr) Audio path switching method and device, readable storage medium and electronic device
EP4057645A1 (fr) Apparatus, methods and computer programs for providing an audio user interface
CN108391208B (zh) Signal switching method, apparatus, terminal, earphone and computer-readable storage medium
CN113115179B (zh) Working state adjustment method and apparatus
CN105898033A (zh) Method and apparatus for adjusting the earpiece volume of a mobile communication terminal
CN113596662A (zh) Howling suppression method, howling suppression apparatus, earphone and storage medium
CN112954524A (zh) Noise reduction method and system, vehicle-mounted terminal and computer storage medium
EP4120692A1 (fr) Apparatus, method and computer program for enabling audio zoom

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230314

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20230706