WO2020157035A1 - An apparatus, method or computer program for enabling real-time audio communication between users experiencing immersive audio


Info

Publication number
WO2020157035A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
audio content
audio
sound source
rendering
Application number
PCT/EP2020/051985
Other languages
English (en)
French (fr)
Inventor
Sujeet Shyamsundar Mate
Miikka Tapani Vilermo
Arto Juhani Lehtiniemi
Jussi Artturi LEPPÄNEN
Original Assignee
Nokia Technologies Oy
Application filed by Nokia Technologies Oy
Publication of WO2020157035A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303 Tracking of listener position or orientation
    • H04S7/304 For headphones

Definitions

  • Embodiments of the present invention relate to apparatuses, methods and computer programs for enabling real-time audio communication between users experiencing immersive audio.
  • Immersive audio describes the rendering to a user of audio content selected by a current point-of-view of the user.
  • the user therefore has the experience that they are immersed within a three-dimensional audio field that changes as their point-of-view changes.
  • the apparatus comprises means for causing rendering of a portion of an audio content to a first user; causing real-time communication between the first user and a second user; causing adaptation of the portion of the audio content to create an adapted portion of the audio content; and causing rendering of the adapted portion of the audio content to the first user instead of the portion of the audio content.
  • the portion of the audio content is selected based at least in part on a current point-of-view of the first user
  • Causing adaptation of the portion of the audio content to create an adapted portion of the audio content comprises replacing a sound source of the portion of the audio content with a different sound source.
  • the different sound source is rendered instead of the sound source.
  • an apparatus comprising means for:
  • causing real-time communication between the first user and a second user by causing transmission, for rendering to the second user, of audio generated by the first user and by causing rendering, to the first user, of audio generated by the second user for rendering to the first user;
  • the different sound source originates from a second portion of the audio content, different to the portion of the audio content, wherein the second portion of the audio content is selected by a current point-of-view of the second user.
  • causing adaptation of the portion of the audio content to create an adapted portion of the audio content comprises replacing multiple sound sources with different sound sources.
  • the multiple sound sources are replaced by the different sound sources one-at-a-time and wherein the adapted portion of the audio content is rendered to the user while the one-at-a-time adaptation is on-going, wherein progressively more of the different sound sources are rendered instead of the multiple sound sources.
  • the portion of the audio content rendered to the user depends on the point-of-view of the user in the first zone and includes sound sources associated with the first zone and does not include any sound source associated with the second zone
  • the content rendered to the second user depends on the point-of-view of the second user in the second zone and includes sound sources associated with the second zone
  • the adapted portion of the audio content rendered to the first user depends on the point-of-view of the first user in the first zone and includes at least one sound source associated with the second zone
  • the apparatus comprises means for causing an undoing of some or all of the adaptation performed on the portion of the audio content to create the adapted portion of the audio content and/or causing rendering, to the first user, of the portion of the audio content instead of the adapted portion of the audio content, wherein the sound source is rendered instead of the different sound source.
  • the undoing of some or all of the adaptation performed on the portion of the audio content to create the adapted portion of the audio content is performed as a consequence of a change in point-of-view of the first user.
  • the portion of audio content rendered to the first user is defined by a point-of-view of a virtual user in a virtual space, which is determined by a point-of-view of the first user in a real space.
  • the point-of-view of the first user is determined by an orientation of the first user or wherein the point-of-view of the first user is determined by an orientation and a location of the user.
  • the apparatus is configured as a head mounted apparatus.
  • the apparatus comprises means for causing adaptation of the portion of the audio content to create adapted content as a consequence of an initiation of the real-time communication and an additional criterion or criteria.
  • the criterion or criteria include a condition based upon determining who will be the target of the adaptation.
  • the criterion or criteria are based upon an assessment of a difference between the content portions rendered to the first user and the second user.
  • a computer program that, when run on a computer, performs:
  • causing real-time communication between the first user and a second user by causing transmission, for rendering to the second user, of audio generated by the first user and by causing rendering, to the first user, of audio generated by the second user for rendering to the first user;
  • an apparatus comprising:
  • At least one memory including computer program code
  • the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform:
  • causing real-time communication between the first user and a second user by causing transmission, for rendering to the second user, of audio generated by the first user and by causing rendering, to the first user, of audio generated by the second user for rendering to the first user;
  • FIGS. 1A, 1B, 1C, 1D show example embodiments of the subject matter described herein;
  • FIG. 2 shows another example embodiment of the subject matter described herein;
  • FIG. 3 shows another example embodiment of the subject matter described herein;
  • FIG. 4 shows another example embodiment of the subject matter described herein;
  • FIGS. 5A, 5B, 6A, 6B show other example embodiments of the subject matter described herein;
  • FIGS. 7A, 7B, 7C show other example embodiments of the subject matter described herein;
  • FIG. 8 shows another example embodiment of the subject matter described herein;
  • FIG. 9 shows another example embodiment of the subject matter described herein;
  • FIG. 10 shows another example embodiment of the subject matter described herein;
  • FIG. 11 shows another example embodiment of the subject matter described herein.
  • "Artificial environment" may be something that has been recorded or generated.
  • "Virtual visual space" refers to a fully or partially artificial environment that may be viewed, which may be three-dimensional.
  • virtual visual scene refers to a representation of the virtual visual space viewed from a particular point of view (position) within the virtual visual space.
  • "Virtual visual object" is a visible virtual object within a virtual visual scene.
  • "Sound space" refers to an arrangement of sound sources in a three-dimensional space.
  • a sound space may be defined in relation to recording sounds (a recorded sound space) and in relation to rendering sounds (a rendered sound space).
  • sound scene refers to a representation of the sound space listened to from a particular point of view (position) within the sound space.
  • "Sound object" refers to a sound source that may be located within the sound space.
  • a source sound object represents a sound source within the sound space, in contrast to a sound source associated with an object in the virtual visual space.
  • a recorded sound object represents sounds recorded at a particular microphone or location.
  • a rendered sound object represents sounds rendered from a particular location.
  • “virtual space” may mean a virtual visual space, mean a sound space or mean a combination of a virtual visual space and corresponding sound space. In some examples, the virtual space may extend horizontally up to 360° and may extend vertically up to 180°.
  • virtual scene may mean a virtual visual scene, mean a sound scene or mean a combination of a virtual visual scene and corresponding sound scene.
  • "Virtual object" is an object within a virtual scene; it may be an augmented virtual object (e.g. a computer-generated virtual object) or it may be an image of a real object in a real space that is live or recorded. It may be a sound object and/or a virtual visual object.
  • "Virtual position" is a position within a virtual space. It may be defined using a virtual location and/or a virtual orientation. It may be considered to be a movable "point of view".
  • "Correspondence" or "corresponding" when used in relation to a sound space and a virtual visual space means that the sound space and virtual visual space are time and space aligned, that is, they are the same space at the same time.
  • "Correspondence" or "corresponding" when used in relation to a sound scene and a virtual visual scene (or visual scene) means that the sound space and virtual visual space (or visual scene) are corresponding and a notional (virtual) listener whose point of view defines the sound scene and a notional (virtual) viewer whose point of view defines the virtual visual scene (or visual scene) are at the same location and orientation, that is, they have the same point of view (same virtual position).
  • real space (or “physical space”) refers to a real environment, which may be three dimensional.
  • real scene refers to a representation of the real space from a particular point of view (position) within the real space.
  • real visual scene refers to a visual representation of the real space viewed from a particular real point of view (position) within the real space.
  • mediated reality in this document refers to a user experiencing, for example visually and/or aurally, a fully or partially artificial environment (a virtual space) as a virtual scene at least partially rendered by an apparatus to a user.
  • the virtual scene is determined by a point of view (virtual position) within the virtual space.
  • Displaying the virtual scene means providing a virtual visual scene in a form that can be perceived by the user.
  • augmented reality in this document refers to a form of mediated reality in which a user experiences a partially artificial environment (a virtual space) as a virtual scene comprising a real scene, for example a real visual scene, of a physical real environment (real space) supplemented by one or more visual or audio elements rendered by an apparatus to a user.
  • augmented reality implies a mixed reality or hybrid reality and does not necessarily imply the degree of virtuality (vs reality) or the degree of mediality;
  • virtual reality in this document refers to a form of mediated reality in which a user experiences a fully artificial environment (a virtual visual space) as a virtual scene displayed by an apparatus to a user;
  • virtual content is content, additional to real content from a real scene, if any, that enables mediated reality by, for example, providing one or more augmented virtual objects.
  • mediated reality content is virtual content which enables a user to experience, for example visually and/or aurally, a fully or partially artificial environment (a virtual space) as a virtual scene.
  • Mediated reality content could include interactive content such as a video game or non-interactive content such as motion video.
  • "Augmented reality content" is a form of mediated reality content which enables a user to experience, for example visually and/or aurally, a partially artificial environment (a virtual space) as a virtual scene.
  • Augmented reality content could include interactive content such as a video game or non-interactive content such as motion video.
  • "Virtual reality content" is a form of mediated reality content which enables a user to experience, for example visually and/or aurally, a fully artificial environment (a virtual space) as a virtual scene.
  • Virtual reality content could include interactive content such as a video game or non-interactive content such as motion video.
  • "Perspective-mediated" as applied to mediated reality, augmented reality or virtual reality means that user actions determine the point of view (virtual position) within the virtual space, changing the virtual scene;
  • first person perspective-mediated as applied to mediated reality, augmented reality or virtual reality means perspective mediated with the additional constraint that the user's real point of view (location and/or orientation) determines the point of view (virtual position) within the virtual space of a virtual user;
  • third person perspective-mediated as applied to mediated reality, augmented reality or virtual reality means perspective mediated with the additional constraint that the user's real point of view does not determine the point of view (virtual position) within the virtual space;
  • user interactive as applied to mediated reality, augmented reality or virtual reality means that user actions at least partially determine what happens within the virtual space;
  • "Displaying" means providing in a form that is perceived visually (viewed) by the user.
  • rendering means providing in a form that is perceived by the user.
  • virtual user defines the point of view (virtual position: location and/or orientation) in virtual space used to generate a perspective-mediated sound scene and/or visual scene.
  • a virtual user may be a notional listener and/or a notional viewer.
  • "notional listener" defines the point of view (virtual position: location and/or orientation) in virtual space used to generate a perspective-mediated sound scene, irrespective of whether or not a user is actually listening.
  • "notional viewer" defines the point of view (virtual position: location and/or orientation) in virtual space used to generate a perspective-mediated visual scene, irrespective of whether or not a user is actually viewing.
  • Three degrees of freedom describes mediated reality where the virtual position is determined by orientation only (e.g. the three degrees of three-dimensional orientation).
  • An example of three degrees of three-dimensional orientation is pitch, roll and yaw.
  • orientation determines the virtual position.
  • Six degrees of freedom describes mediated reality where the virtual position is determined by both orientation (e.g. the three degrees of three-dimensional orientation) and location (e.g. the three degrees of three-dimensional location).
  • An example of three degrees of three-dimensional orientation is pitch, roll and yaw.
  • An example of three degrees of three-dimensional location is a three-dimensional coordinate in a Euclidean space spanned by orthogonal axes such as left-to-right (x), front-to-back (y) and down-to-up (z) axes.
  • In first person perspective-mediated reality 6DoF, both the user's orientation and the user's location in the real space determine the virtual position.
  • In third person perspective-mediated reality 6DoF, the user's location in the real space does not determine the virtual position.
  • the user’s orientation in the real space may or may not determine the virtual position.
  • Three degrees of freedom 'plus' (3DoF+) describes an example of six degrees of freedom where a change in location (e.g. the three degrees of three-dimensional location) is a change in location relative to the user that can arise from a postural change of a user's head and/or body and does not involve a translation of the user through real space by, for example, walking.
  • spatial audio is the rendering of a sound scene.
  • "First person perspective spatial audio" or "immersive audio" is spatial audio where the user's point of view determines the sound scene so that audio content selected by a current point-of-view of the user is rendered to the user.
  • FIGS. 1A, 1B, 1C, 1D illustrate first person perspective mediated reality.
  • mediated reality means the rendering of mediated reality for the purposes of achieving mediated reality for a remote user, for example augmented reality or virtual reality. It may or may not be user interactive.
  • the mediated reality may support one or more of: 3DoF, 3DoF+ or 6DoF.
  • FIGS. 1A, 1C illustrate at a first time a real space 50 and a sound space 60.
  • a user 40 in the real space 50 has a point of view (a position) 42 defined by a location 46 and an orientation 44.
  • the location is a three-dimensional location and the orientation is a three-dimensional orientation.
  • the user’s real point-of-view 42 determines the point-of-view 72 (virtual position) within the virtual space (e.g. sound space 60) of a virtual user 70.
  • An orientation 44 of the user 40 controls a virtual orientation 74 of a virtual user 70.
  • There is a correspondence between the orientation 44 and the virtual orientation 74 such that a change in the orientation 44 produces the same change in the virtual orientation 74.
  • the virtual orientation 74 of the virtual user 70 in combination with a virtual field of hearing defines a virtual sound scene 78.
  • a virtual sound scene 78 is that part of the sound space 60 that is rendered to a user.
  • a change in the location 46 of the user 40 does not change the virtual location 76 or virtual orientation 74 of the virtual user 70.
  • the user’s real point-of-view 42 determines the point-of-view 72 (virtual position) within the virtual space (e.g. sound space 60) of a virtual user 70.
  • the situation is as described for 3DoF and in addition it is possible to change the rendered virtual sound scene 78 by movement of a location 46 of the user 40.
  • a change in the location 46 of the user 40 produces a corresponding change in the virtual location 76 of the virtual user 70.
  • a change in the virtual location 76 of the virtual user 70 changes the rendered virtual sound scene 78.
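  • As an illustration of the mapping just described, the following sketch (not part of the application; the pose fields and function name are illustrative assumptions) carries the user's real orientation into the virtual point-of-view in 3DoF, and both orientation and location in 6DoF or 3DoF+.

```python
from dataclasses import dataclass

@dataclass
class PointOfView:
    # three degrees of orientation (e.g. radians) and three of location (e.g. metres)
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def virtual_point_of_view(real: PointOfView, mode: str = "6DoF") -> PointOfView:
    """Map the user's real point-of-view 42 to the virtual user's point-of-view 72.

    In 3DoF only the orientation is carried over; in 6DoF (and, for postural
    changes, 3DoF+) the location is carried over as well.
    """
    virtual = PointOfView(yaw=real.yaw, pitch=real.pitch, roll=real.roll)
    if mode in ("6DoF", "3DoF+"):
        virtual.x, virtual.y, virtual.z = real.x, real.y, real.z
    return virtual
```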
  • FIGS. 1B, 1D illustrate the consequences of a change in location 46 and orientation 44 of the user 40 on the rendered virtual sound scene 78 (FIG. 1D).
  • the change in location may arise from a postural change of the user and/or a translation of the user by walking or otherwise.
  • First person perspective mediated reality may control only a virtual sound scene 78, only a virtual visual scene, or both a virtual sound scene 78 and a virtual visual scene, depending upon implementation.
  • it may be desirable for the rendered sound space 60 to remain fixed in real space when the listener turns their head in space. This means that the rendered sound space 60 needs to be rotated relative to the audio output device by the same amount in the opposite sense to the head rotation.
  • the orientation of the portion of the rendered sound space tracks with the rotation of the listener’s head so that the orientation of the rendered sound space remains fixed in space and does not move with the listener’s head.
  • a sound 'locked' to the real world may be referred to as a diegetic sound.
  • a sound 'locked' to the user's head may be referred to as a non-diegetic sound.
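  • A minimal sketch of this head-tracking compensation, assuming azimuth-only rotation and a yaw convention where positive values are to the listener's left (both assumptions, not taken from the application): the rendered direction of a diegetic source is the world direction counter-rotated by the head yaw.

```python
def world_locked_azimuth(source_azimuth_deg: float, head_yaw_deg: float) -> float:
    """Azimuth at which a diegetic source should be rendered relative to the
    listener's head so that it stays fixed in the real world: the rendered
    sound space is rotated by the head rotation, in the opposite sense."""
    return (source_azimuth_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

# Example: a source straight ahead in the world (0 deg), head turned 30 deg to
# the left (+30 deg yaw), is rendered 30 deg to the right of the head (-30 deg).
print(world_locked_azimuth(0.0, 30.0))  # -30.0
```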
  • the rendering of a virtual sound scene 78 may also be described as providing spatial audio or providing immersive audio.
  • the sound space 60 defined by audio content 10 comprises one or more sound sources 20 at different positions in the sound space 60.
  • the audio rendered to the user depends upon the relative position of the virtual user 70 from the positions of the sound sources 20.
  • Perspective mediated virtual reality for example first person perspective mediated reality enables the user 40 to change the position of the virtual user 70 within the sound space 60 thereby changing the positions of the sound sources 20 relative to the virtual user which changes the virtual sound scene 78 rendered to the user 40.
  • Channel-based audio, for example n.m surround sound (e.g. 5.1, 7.1 or 22.2 surround sound) or binaural audio, can be used; or scene-based audio, including spatial information about a sound field and sound sources, can be used.
  • Audio content may encode spatial audio as audio objects. Examples include but are not limited to MPEG-4 and MPEG SAOC. MPEG SAOC is an example of metadata-assisted spatial audio.
  • Audio content may encode spatial audio as audio objects in the form of moving virtual loudspeakers.
  • Audio content may encode spatial audio as audio signals with parametric side information or metadata.
  • the audio signals can be, for example, First Order Ambisonics (FOA) or its special case B-format, Higher Order Ambisonics (HOA) signals or mid-side stereo.
  • synthesis which utilizes the audio signals and the parametric metadata is used to synthesize the audio scene so that a desired spatial perception is created.
  • the parametric metadata may be produced by different techniques. For example, Nokia’s spatial audio capture (OZO Audio) or Directional Audio Coding (DirAC) can be used. Both capture a sound field and represent it using parametric metadata.
  • the parametric metadata may for example comprise: direction parameters that indicate direction per frequency band; distance parameters that indicate distance per frequency band; energy-split parameters that indicate diffuse-to-total energy ratio per frequency band.
  • Each time-frequency tile may be treated as a sound source with the direction parameter controlling vector-based amplitude panning for a direct version and the energy-split parameter controlling differential gain for an indirect (decorrelated) version.
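  • The following sketch illustrates that per-tile processing under stated assumptions (the panning gains and decorrelators are supplied by the caller, and square-root gains are used so the direct/diffuse energy split sums to the total); it is illustrative only, not the application's own implementation.

```python
import numpy as np

def synthesize_tile(tile: np.ndarray, pan_gains: np.ndarray,
                    diffuse_ratio: float, decorrelators) -> np.ndarray:
    """Render one time-frequency tile to N output channels.

    tile          : mono samples of the tile
    pan_gains     : amplitude-panning gains (length N) for the tile's direction
    diffuse_ratio : energy-split parameter, diffuse-to-total energy ratio in [0, 1]
    decorrelators : N callables, each returning a decorrelated copy of the tile
    """
    direct = np.sqrt(1.0 - diffuse_ratio) * np.outer(pan_gains, tile)
    diffuse_gain = np.sqrt(diffuse_ratio / len(decorrelators))
    diffuse = np.stack([diffuse_gain * d(tile) for d in decorrelators])
    return direct + diffuse  # shape (N, len(tile))
```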
  • the audio content encoded may be speech and/or music and/or generic audio.
  • 3GPP IVAS (3GPP Immersive Voice and Audio Services), which is currently under development, is one example of a codec that may be used for the audio content.
  • amplitude panning techniques may be used to create or position a sound object.
  • the known method of vector-based amplitude panning can be used to position a sound source.
  • a sound object may be re-positioned by mixing a direct form of the object (an attenuated and directionally-filtered direct sound) with an indirect form of the object (e.g. positioned directional early reflections and/or diffuse reverberant).
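  • A minimal sketch of 2D vector-based amplitude panning between a pair of loudspeakers, assuming azimuths given in radians and unit-energy normalisation (an illustrative sketch, not the application's method):

```python
import numpy as np

def vbap_gains_2d(source_az: float, spk1_az: float, spk2_az: float) -> np.ndarray:
    """Pair-wise 2D vector-based amplitude panning: express the source
    direction in the basis of the two loudspeaker directions and normalise
    the gains to unit energy."""
    p = np.array([np.cos(source_az), np.sin(source_az)])
    L = np.column_stack(([np.cos(spk1_az), np.sin(spk1_az)],
                         [np.cos(spk2_az), np.sin(spk2_az)]))
    g = np.linalg.solve(L, p)        # p = L @ g
    return g / np.linalg.norm(g)     # unit-energy normalisation

# A source at +15 degrees between loudspeakers at +30 and -30 degrees:
print(vbap_gains_2d(np.radians(15), np.radians(30), np.radians(-30)))
```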
  • FIG. 2 illustrates an example of a sound space 60 comprising a plurality of sound sources 20 at different locations within the sound space 60.
  • Each sound source 20 has associated with it a sound field 22, which may be a bearing, an area or a volume.
  • when the virtual user 70 is within the sound field 22, the user 40 has a different experience of the sound source 20 than if they are outside the sound field 22.
  • the user 40 may only hear the sound source 20 when the virtual user 70 is within the sound field 22 and cannot hear the sound source 20 outside the sound field 22.
  • the sound source 20 can be best heard within the sound field 22 and the sound source 20 is attenuated outside of the sound field 22 and in some examples it is more attenuated the greater the deviation or distance from the sound field 22.
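  • One way such behaviour could be realised is sketched below; the roll-off of 6 dB per metre and the floor of -60 dB are illustrative assumptions, not values from the application.

```python
def sound_field_gain(distance_outside_field_m: float,
                     rolloff_db_per_m: float = 6.0,
                     floor_db: float = -60.0) -> float:
    """Linear gain applied to a sound source 20 depending on how far outside
    its sound field 22 the virtual user 70 is: full level inside the field,
    progressively attenuated (down to a floor) with distance outside it."""
    if distance_outside_field_m <= 0.0:      # inside the sound field
        return 1.0
    attenuation_db = max(-rolloff_db_per_m * distance_outside_field_m, floor_db)
    return 10.0 ** (attenuation_db / 20.0)
```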
  • the sound sources 20 and their locations and other characteristics of the sound space 60 are defined by the audio content 10 which is spatial audio content because sound sources 20 are controllably located within the sound space 60 by the audio content 10.
  • a reference to 'audio content' in this document can also therefore be a reference to 'spatial audio content'. It will therefore be understood that the user 40, who is represented by the virtual user 70 in the sound space 60, experiences immersive audio.
  • a portion of the audio content 10 is selected by a current point-of-view 42 of the user 40 (point-of-view 72 of the virtual user 70). That portion of the audio content 10 is rendered to the user 40.
  • the user 40 by changing their own point-of-view 42, can change the point-of-view 72 of the virtual user 70 to appreciate different aspects of the sound space 60.
  • the change in the point-of-view 42 of the user 40 is achieved by varying only the user’s orientation 44 and in other examples it is achieved by changing the user’s orientation 44 and/or the user’s location 46.
  • the audio content 10 can therefore support 3DoF, 3DoF+, and 6DoF.
  • the sound space 60 comprises a number of distinct zones 30.
  • Each of the zones 30 is fully or partially isolated from the other zones or at least some of the other zones. Isolation in this context means that if the user is located within a particular zone 30, then the immersive audio that they experience is dominated by the sound sources of that zone. In some examples they may only hear the sound sources of that zone. In other examples they may not hear the sound sources of some or all of the other zones. Even in the circumstances where the virtual user 70 is within a zone 30 it is likely that the sound sources of that zone will be dominant compared to the sound sources of any other zone 30.
  • the user 40 can change their point-of-view 42, to cause a consequent change in the point-of- view 72 of the virtual user 70 within a zone 30.
  • This allows the user 40 to appreciate different aspects of the composition formed by the different sound sources 20 within the zone 30.
  • the change of point-of-view 72 within a zone may be achieved by 3DoF, 3DoF+, 6DoF.
  • a sweet spot is a particular point-of-view 72 for a virtual user 70 at which a better composition of the sound sources 20 in the zone 30 is rendered.
  • the composition is a mixed balance of the sound sources 20 of the zone 30.
  • the virtual user 70 can emphasize a sound source 20 in the rendering of the sound scene by, for example:
  • the virtual user 70 can de-emphasize a sound source 20 in the rendering of the sound scene by, for example:
  • It is also possible for the virtual user 70 to move between the different zones 30.
  • the user 40 is able to control the location of the virtual user 70 within the sound space 60.
  • the virtual user 70 by changing their location and/or orientation with respect to the sound source 20 can control how the sound source 20 is rendered to the user 40.
  • the point-of-view of the user 40 controls the point-of-view of the virtual user 70.
  • the sound sources 20 of the sound space 60 are musical instruments.
  • Each of the zones 30 has a main instrument and, optionally, one or more complementing instruments.
  • the main instrument is represented by a sound source 20.
  • Each of the complementing instruments, if present, is represented by a distinct sound source 20.
  • the secondary instruments of a zone 30 complement the primary instrument of the zone 30.
  • the instruments of one zone do not necessarily complement the instruments of another zone. It may therefore be desirable to prevent a user 40 hearing a mix of instruments from different zones. It may, for example, be desirable to prevent the simultaneous rendering to a user 40 of particular combinations of sound sources 20 from different zones.
  • FIG. 3 is an example of zonal spatial audio content 10 similar to the audio content 10 illustrated in FIG. 2 but at a higher abstraction level, highlighting the delineation of the different zones 30.
  • zone 1 is isolated from zones 2, 3 and 4 but not from the zone associated with the baseline instruments.
  • zone 2 is isolated from zones 1, 3 and 4 but not from the zone associated with the baseline instruments.
  • zone 4 is isolated from zones 1, 2 and 3 but not from the zone associated with the baseline instruments.
  • zone 3 is isolated from zones 1, 2 and 4 but not from the zone associated with the baseline instruments.
  • the sound scene rendered to the user 40 is primarily dependent upon the sound sources 20 of zone 1 and the point-of-view of the virtual user 70 within zone 1 but may also include at a secondary level sound sources from the zone associated with the baseline instruments.
  • the sound scene rendered to the user 40 is primarily dependent upon the sound sources 20 of zone 2 and the point-of-view of the virtual user 70 within zone 2 but may also include at a secondary level sound sources from the zone associated with the baseline instruments.
  • the baseline instruments may be heard in all zones.
  • the sound sources of the other zones can only be heard if the virtual user 70 is within that particular zone 30.
  • each of the users 40 is associated with a different virtual user 70.
  • the point-of-view of each of the virtual users 70 can be independently controlled by each of the respective users 40.
  • each of the users 40 can independently experience the immersive spatial audio defined by the audio content 10.
  • a first portion of the audio content 10 is selected by a current point-of-view 42 of a first user 40. This is rendered to the first user 40.
  • the first user can change the current point-of-view 42 and change the portion of the audio content 10 that is selected and rendered to the first user 40.
  • a second portion of the audio content 10 is selected by a current point-of-view 42 of a second user 40, different to the first user 40.
  • This second portion of the audio content 10 is rendered to the second user 40.
  • the second user 40 can change the current point-of-view 42 and change the portion of the audio content 10 that is selected and rendered to the second user 40.
  • audio from the first user is transmitted to the second user and is then rendered to the second user.
  • audio from the second user, for example the voice of the second user, is transmitted to the first user and is then rendered to the first user.
  • the audio from the first user may be audio that is generated by the first user, that originates from the first user and/or that is uploaded by the first user. In some examples, it can be a real time recorded voice of the first user or a real-time recorded environment of the first user. In other examples it can be or include uploaded audio content.
  • the audio from the second user may be audio that is generated by the second user, that originates from the second user and/or that is uploaded by the second user. In some examples, it can be a real-time recorded voice of the second user or a real-time recorded environment of the second user. In other examples it can be or include uploaded audio content.
  • the first user would consequently hear the audio generated by the second user and the first portion of the audio content selected by the current point-of-view 42 of the first user 40.
  • the second user would consequently hear the audio generated by the first user and the second portion of the audio content 10 selected by a current point-of-view 42 of the second user 40.
  • the audio context (the rendered sound scene) in which the second user generates the audio transmitted to the first user (the second portion of the audio content 10) and the audio context (the rendered sound scene) in which the first user hears that audio (the first portion of the audio content 10) are different.
  • the audio context in which the first user generates the audio transmitted to the second user (the first portion of the audio content 10) and the audio context in which the second user hears that audio (the second portion of the audio content 10) are different.
  • this contextual difference may be undesirable.
  • the second portion of the audio content 10 could additionally be rendered to the first user and/or the first portion of the audio content 10 could additionally be rendered to the second user.
  • a single user may simultaneously hear both the first portion of the audio content 10 and the second portion of the audio content 10. In some circumstances this may be undesirable because the first portion of the audio content 10 and the second portion of the audio content 10 should be excluded from simultaneous rendering.
  • FIG. 4 illustrates an example of a method 100 that is capable of addressing some or all of these problems and other problems.
  • a portion of audio content is rendered to a user 40.
  • the portion of audio content is selected by a current point-of-view 42 of the first user 40A.
  • real-time communication between the first user 40A and a second user 40B is enabled. Audio generated by the first user 40A for rendering to the second user 40B is transmitted to the second user 40B. Audio generated by the second user 40B for rendering to the first user 40A is rendered to the first user 40A.
  • the method 100 enables adaptation of the portion of the audio content 10 to create an adapted portion of the audio content 10, by replacing a sound source 20 of the portion of the audio content with a different sound source.
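  • A minimal sketch of that replacement step, assuming a simple SoundSource record (the data structure and function names are illustrative, not the application's): the different sound source takes over the position of the sound source it replaces.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SoundSource:
    name: str                              # e.g. "ukulele", "electric guitar"
    position: Tuple[float, float, float]   # position in the sound space 60
    signal: object                         # audio payload, opaque here

def adapt_portion(portion: List[SoundSource], replace_name: str,
                  different_source: SoundSource) -> List[SoundSource]:
    """Create an adapted portion of the audio content in which the named sound
    source is replaced by a different sound source rendered at the same
    position within the sound space."""
    adapted = []
    for source in portion:
        if source.name == replace_name:
            adapted.append(SoundSource(different_source.name,
                                       source.position,          # keep position
                                       different_source.signal))
        else:
            adapted.append(source)
    return adapted
```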
  • FIG. 5A illustrates an example of a first zone 30A of the sound space 60 defined by the common audio content 10 at a first time.
  • the first zone 30A comprises sound sources 20* at different positions within the first zone 30A.
  • the first zone 30A overlies the real space occupied by a first user 40A.
  • the first user 40A has a point-of-view 42A defined by an orientation 44A and/or a location 46A as previously described.
  • the point-of-view 42A of the first user 40A defines the point-of-view 72A of a first virtual user in the first zone 30A.
  • the virtual user is co-located with the first user 40A and is not illustrated in FIG. 5A for clarity.
  • the first user 40A is experiencing immersive audio defined by the first zone 30A as previously described.
  • FIG. 5B illustrates an example of a second zone 30B of the sound space 60 defined by the common audio content 10 at the first time.
  • the second zone 30B comprises sound sources 20# at different positions within the second zone 30B.
  • the second zone 30B overlies the real space occupied by a second user 40B.
  • the second user 40B has a point-of-view 42B defined by an orientation 44B and/or a location 46B as previously described.
  • the point-of-view 42B of the second user 40B defines the point-of-view 72B of a second virtual user in the second zone 30B.
  • the virtual user is co-located with the second user 40B and is not illustrated in FIG. 5B for clarity.
  • the second user 40B is experiencing immersive audio defined by the second zone 30B as previously described.
  • FIG. 6A illustrates an example of a first zone 30A of the sound space 60 defined by the common audio content 10 at a second time and is similar to FIG. 5A.
  • FIG. 6B illustrates an example of a second zone 30B of the sound space 60 defined by the common audio content 10 at a second time and is similar to FIG. 5B.
  • the example illustrated in FIG. 6B is different to the example illustrated in FIG. 5B in that there is real-time communication 150 between the first user 40A and the second user 40B and the method 100 has been applied for the second user 40B.
  • transmission of second audio 150B generated by the second user 40B to the first user 40A and rendering of the second audio 150B to the first user 40A.
  • the first audio 150A may be a sound source rendered at a particular location relative to the second user 40B.
  • the second audio 150B may be a sound source rendered at a particular location relative to the first user 40A.
  • the second portion 10B of the audio content 10 illustrated in FIG. 5B has been adapted to create an adapted second portion 10B* of the audio content 10.
  • a sound source 20# of the second portion 10B of the audio content 10 is replaced with a different sound source 20*.
  • the different sound source 20* is a sound source defined by the first portion 10A of the common audio content 10.
  • the adapted second portion 10B* of the common audio content 10 is rendered to the second user 40B instead of the second portion 10B of the common audio content 10.
  • the different sound source 20* is rendered instead of the original sound source 20#.
  • the different sound source 20* originates from the first portion 10A of the same audio content 10.
  • the first portion 10A of the audio content 10 is selected by a current point-of-view 42A of the first user 40A.
  • the original sound source 20# has a position in the second portion 10B of the common audio content 10 relative to the second user 40B (and the virtual user 70). That is, it has a particular position in the sound space 60.
  • the different replacement sound source 20* has the same position in the adapted second portion 10B* of the content relative to the second user 40B (and the virtual user 70). That is, the different sound source 20* has replaced the original sound source 20# at the same position within the sound space 60.
  • FIGS. 7A, 7B and 7C illustrate an extension of the example illustrated in FIGS. 5A, 5B, 6A and 6B.
  • FIG. 7A is equivalent to FIG. 5B.
  • FIG. 7B is equivalent to FIG. 6B.
  • FIG. 7C illustrates that the second portion 10B of the common audio content 10 that is adapted to create the adapted second portion 10B* of the audio content 10 is adapted by replacing multiple sound sources by different sound sources.
  • all of the sound sources 20# associated with the second portion 10B of the common audio content 10 have been replaced with sound sources 20* associated with the first portion 10A of the common audio content 10.
  • the second portion 10B of the common audio content 10 is adapted one sound source at a time.
  • the multiple sound sources are replaced by the different sound sources one-at-a-time.
  • the shaker sound source is replaced by the electric guitar sound source in FIG. 7B.
  • the ukulele sound source is replaced by the electric guitar sound source in FIG. 7C.
  • the adapted second portion 10B* of the common audio content 10 is rendered to the second user 40B instead of the second portion 10B of the common content while the adaptation is on-going. Consequently, progressively more of the different sound sources 20* are rendered instead of the original sound sources 20#.
  • the adapted portion of the audio content is rendered to the user while the one-at-a-time adaptation is on-going, wherein progressively more of the different sound sources are rendered instead of the multiple sound sources.
  • the multiple replacement sound sources 20* have the same positions in the sound space 60 as the original sound sources 20#.
  • the position of the multiple original sound sources 20# relative to the second user 40B is the same as the positions of the multiple different replacement sound sources 20* relative to the second user 40B.
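  • Building on the illustrative SoundSource sketch above, the following sketch shows one possible scheduling of the one-at-a-time replacement, with the partially adapted portion re-rendered after each swap; the pacing interval and render callback are assumptions, not features described by the application.

```python
import time
from typing import Callable, Dict, List

def replace_one_at_a_time(portion: List["SoundSource"],
                          replacements: Dict[str, "SoundSource"],
                          render: Callable[[List["SoundSource"]], None],
                          interval_s: float = 2.0) -> List["SoundSource"]:
    """Replace multiple sound sources with different sound sources one at a
    time, re-rendering the partially adapted portion after each swap so that
    progressively more of the different sound sources are heard."""
    adapted = list(portion)
    for index, source in enumerate(adapted):
        if source.name in replacements:
            replacement = replacements[source.name]
            adapted[index] = type(source)(replacement.name,
                                          source.position,   # same position
                                          replacement.signal)
            render(adapted)              # rendered while adaptation is on-going
            time.sleep(interval_s)       # pacing between swaps (illustrative)
    return adapted
```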
  • the first user 40A (and the corresponding first virtual user) is in a first zone 30A of a plurality of zones and the second user 40B (and the corresponding second virtual user) is in a second, different zone 30B.
  • the first portion 10A of the audio content 10 rendered to the first user 40A depends on the point-of-view 42A of the first user 40A in the first zone 30A and only includes sound sources 20 associated with the first zone 30A (FIGS. 5A, 6A).
  • the second portion 10B of the audio content 10 rendered to the second user 40B depends on the point-of-view 42B of the second user 40B in the second zone 30B and only includes sound sources 20 associated with the second zone 30B (FIGS. 5B, 7A).
  • the adapted second portion 10B* of the common audio content 10 rendered to the second user 40B depends on the point-of-view 42B of the second user 40B in the second zone 30B and includes at least one sound source 20* associated with the first zone 30A.
  • the sound sources 20# that were originally rendered to the second user 40B have been replaced with one or more sound sources 20* that are rendered to the first user 40A.
  • the sound source swap may occur in the reverse direction either additionally to or as an alternative to the above-described sound source swap; thus, in some examples, the sound sources 20* rendered to the first user 40A may be replaced by one or more different sound sources 20# that are being rendered to the second user 40B.
  • this undoing may be as a consequence of a change in a point-of-view 42 of a user 40 (or changing point-of-view 72 of a virtual user 70). In other examples, or in the same examples, the undoing may be as a consequence of a user 40 (and virtual user 70) changing from one zone to another zone.
  • a sound source 150B that is transmitted to the second user 40B for rendering to the second user 40B is incompatible with one or more of the original sound sources 20# currently being rendered to the second user 40B.
  • the mutual exclusion is resolved by replacing the one or more of the original sound sources 20# with a different sound source 20*.
  • the replacement sound source 20* is a sound source from the first zone 30A from which the new sound source 150B originates.
  • the replacement of the one or more sound sources 20# with different sound sources 20* removes the mutual exclusion between the newly received sound source 150B and the sound sources 20# rendered in the second zone 30B.
  • Rules may be defined that specify how such mutual exclusions are resolved, which sound sources are to be replaced, with what timing and in what order. These rules and decisions may be based on one or more different criteria.
  • the replacement of the original sound sources 20# with different sound sources 20* may be done in a manner that smoothens the replacement for example by performing a fade-in and fade-out or by performing the replacement sequentially or otherwise controlling replacement of one sound source by another.
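  • A minimal sketch of one such smoothing, assuming block-based processing with NumPy (an illustrative sketch, not the application's method): the original source is faded out while the different source is faded in over one audio block.

```python
import numpy as np

def crossfade_replace(old_block: np.ndarray, new_block: np.ndarray) -> np.ndarray:
    """Smooth the replacement of one sound source by another over one audio
    block: the original source fades out while the different source fades in,
    so the swap is not heard as an abrupt change."""
    n = min(len(old_block), len(new_block))
    fade_in = np.linspace(0.0, 1.0, n)
    return (1.0 - fade_in) * old_block[:n] + fade_in * new_block[:n]
```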
  • whether or not the sound sources of the first zone 30 A are replaced by sound sources of the second zone 30B or the sound sources of the second zone 30B are replaced by sound sources of the first zone 30 A is dependent upon whether the first user 40A or the second user 40B initiated the real-time communication between the first user 40A and the second user 40B. For example, it may be determined that the first user that initiated the call has precedence and therefore the sound sources for that user should not be replaced whereas the sound sources of the second user should be replaced. The reverse is also a possibility.
  • the criterion or criteria include a condition based upon determining who will be the target of the adaptation.
  • the criterion or criteria can in some examples be based upon an assessment of a difference between the content portions rendered to the first user and the second user, for example, to determine if they should be mutually exclusive.
  • the criterion or criteria can in some examples, additionally or alternatively, be based on a user request. For example, a first user may be able to make a request as to whether or not to share their context with a second user and therefore replace that second user’s sound sources with the sound sources that are being rendered to them or to request that they share the context of the second user and have the sound sources that are currently rendered to them replaced by one or more sound sources being rendered to the second user.
  • Metadata associated with the common audio content 10 may define exclusive subsets of sound sources that should not be rendered simultaneously.
  • the metadata may, in addition, define which sound sources should be dominant and should not be replaced and which sound sources should be replaced.
  • the metadata may even define different conflicts between different sound sources and how each of those conflicts should be resolved, that is which sound source should be replaced and which sound source should not be replaced when there is mutual exclusion.
  • the metadata may be defined by a content creator and associated with the common audio content 10.
  • the metadata may be generated using artificial intelligence or machine learning using a learning algorithm.
  • the sound sources may, for example, be parameterized in relation to key analysis, instrumentation or other audio aspects or characteristics and these may be provided as input parameters to a machine learning algorithm such as a neural network.
  • the neural network may be trained in advance or by the user so that it is able to detect the simultaneous rendering of sound sources that should be mutually exclusive before they are rendered. It is therefore possible to use this method not only to detect the possibility that mutually exclusive sound sources will be rendered but also to determine which sound source should be replaced.
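  • One possible shape for such metadata, and a lookup that resolves a conflict, is sketched below; the dictionary layout, source labels and function are illustrative assumptions, not a format defined by the application.

```python
# Illustrative metadata: exclusive subsets of sound sources and, for each
# conflict, which sources are dominant and therefore must not be replaced.
EXCLUSION_METADATA = {
    "exclusive_subsets": [{"zone1:electric guitar", "zone2:shaker"},
                          {"zone1:electric guitar", "zone2:ukulele"}],
    "dominant": {"zone1:electric guitar"},
}

def sources_to_replace(rendered: set, incoming: str,
                       metadata: dict = EXCLUSION_METADATA) -> set:
    """Return the currently rendered sound sources that must be replaced
    before 'incoming' can be rendered simultaneously with them."""
    to_replace = set()
    for subset in metadata["exclusive_subsets"]:
        if incoming in subset:
            conflicting = (subset - {incoming}) & rendered
            to_replace |= conflicting - metadata["dominant"]  # keep dominant ones
    return to_replace

print(sources_to_replace({"zone2:shaker", "zone2:ukulele"}, "zone1:electric guitar"))
# {'zone2:shaker', 'zone2:ukulele'}
```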
  • the sound sources 150A, 150B that are communicated in real time between the first user 40A and the second user 40B are voice communications that are recorded at microphones.
  • the voice recordings captured by the microphones may be augmented with metadata that defines the context of the user whose voice is recorded at that time or which provides the context of the user at that time.
  • context refers to the audio scene that is rendered to the user at that time and includes, for example, the point-of-view of the user (which then defines the sound scene) or otherwise defines the arrangement of sound sources relative to the user. It would of course be possible to transmit the sound scene as rendered to the user in the transmitted sound source 150; however, as the audio content 10 is shared content, it is more efficient to instead transmit the point-of-view 42 of the user 40, which defines the audio context.
  • the real-time duplex communication between the first user 40A and the second user 40B therefore enables a conversation between the first user and the second user which has voice only or which also includes the ambient context of the user, that is, the sound scene currently being rendered to the user while their voice is recorded.
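  • A sketch of what a local user content (LUC) message carrying voice plus point-of-view context might look like; the field names and the serialisation are assumptions for illustration only, not a format specified by the application.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class LocalUserContent:
    """Local user content (LUC): the recorded voice plus metadata conveying
    the sender's audio context by their point-of-view, rather than by
    transmitting the whole rendered sound scene."""
    user_id: str
    voice_frame: bytes      # encoded microphone audio
    yaw: float              # point-of-view 42: orientation ...
    pitch: float
    roll: float
    location: tuple         # ... and location, where 6DoF is supported
    zone_id: str            # zone 30 the virtual user currently occupies

def serialize(luc: LocalUserContent) -> bytes:
    header = asdict(luc)
    voice = header.pop("voice_frame")             # keep binary audio out of JSON
    return json.dumps(header).encode("utf-8") + b"\0" + voice
```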
  • FIG. 8 illustrates an example of the method 100 that illustrates a number of the concepts described above.
  • the FIG illustrates the processes that occur at the first apparatus 200A used by the first user 40A and the processes that occur at a second apparatus 200B used by the second user 40B. It also illustrates the exchange of information between the first apparatus 200A and the second apparatus 200B.
  • the method 100 is performed with respect to the first user 40A and the first apparatus 200A.
  • a first portion 10A of a common audio content 10 is rendered to a first user 40A by the first apparatus 200A.
  • the first portion 10A of the common audio content 10 is selected by a current point-of-view 42A of the first user 40A.
  • a second portion 10B of the common audio content 10 is rendered to a second user 40B by the second apparatus 200B.
  • the second portion 10B of the audio content 10 is selected by a current point-of-view 42B of the second user 40B.
  • the first apparatus 200A informs 120 the second apparatus 200B of the current point-of-view 42A of the first user 40A and the second apparatus 200B informs 122 the first apparatus 200A of the point-of-view 42B of the second user 40B.
  • the first apparatus 200A therefore has knowledge of the current point-of-view 42B of the second user 40B and, as a consequence, of what second portion 10B of the common audio content 10 is being rendered to the second user 40B.
  • the second apparatus 200B has knowledge of the current point-of-view 42A of the first user 40A and therefore also of what first portion 10A of the common audio content 10 is being rendered to the first user 40A.
  • the communications 120, 122 may also establish whether or not there is mutual exclusivity between the first portion 10A and the second portion 10B of the common audio content 10, and if there is, how this should be handled.
  • the first apparatus 200A determines that there is mutual exclusivity between the first portion 10A of the common audio content 10 and the second portion 10B of the common audio content 10.
  • the first apparatus 200A determines that it should adapt the first portion 10A of the common audio content 10 to produce, at block 106A(2), an adapted first portion 10A* of the common audio content 10 by replacing one or more sound sources of the first portion 10A of the common audio content 10 with one or more different sound sources.
  • the different sound sources may, as described above, originate from the second portion 10B of the common audio content 10.
  • the second apparatus 200B determines that it is not necessary to adapt the second portion 10B of the common audio content 10.
  • real-time duplex communication between the first user 40A and the second user 40B is established.
  • transmission 102, for rendering to the second user 40B, of first audio 150A generated by the first user 40A.
  • transmission 122, for rendering to the first user 40A, of second audio 150B generated by the second user 40B.
  • the audio 150A, 150B that is exchanged may be a selectable common audio element. For example, it may be vocals that are recorded by microphones.
  • the first audio 150A may be audio that has been recorded by a microphone at the first apparatus 200A and likewise the second audio 150B may be audio that has been recorded at a microphone of the second apparatus 200B.
  • the audio 150A, 150B is therefore in this example referred to as local user content (LUC).
  • whole or part of the immersive audio scene that is rendered to the first user 40A can be provided to the second user 40B and/or whole or part of the immersive audio scene rendered to the second user 40B can be delivered to the first user 40A for rendering to the first user 40A.
  • the immersive audio scene of the first user 40A and the immersive audio scene of the second user 40B are incompatible and mutually exclusive.
  • the immersive audio scene of the second user is sent to the first user 40A and in whole or in part replaces the immersive audio scene rendered to the first user 40A.
  • the audio 150B generated by the second user 40B is rendered to the first user 40A.
  • the adapted first portion 10A* of the common audio content 10 is also rendered 108 to the first user 40A instead of the first portion 10A of the common audio content 10.
  • the one or more different sound sources of the adapted first portion 10A* of the common audio content 10 are rendered instead of one or more of the sound sources of the first portion 10A of the common audio content 10.
  • the audio 150A generated by the first user 40A is rendered to the second user 40B and the second portion 10B of the common audio content 10 is also rendered to the second user 40B.
  • although the communication has been described between two users, a first user 40A and a second user 40B, more than two users may simultaneously communicate and the examples described and illustrated can be extended to this scenario.
  • FIG. 9 illustrates an example of a controller 210.
  • Implementation of a controller 210 may be as controller circuitry.
  • the controller 210 may be implemented in hardware alone, have certain aspects in software including firmware alone or can be a combination of hardware and software (including firmware).
  • controller 210 may be implemented using instructions that enable hardware functionality, for example, by using executable instructions of a computer program 206 in a general-purpose or special-purpose processor 202 that may be stored on a computer readable storage medium (disk, memory etc) to be executed by such a processor 202.
  • the processor 202 is configured to read from and write to the memory 204.
  • the processor 202 may also comprise an output interface via which data and/or commands are output by the processor 202 and an input interface via which data and/or commands are input to the processor 202.
  • the memory 204 stores a computer program 206 comprising computer program instructions (computer program code) that controls the operation of the apparatus 200 when loaded into the processor 202.
  • the computer program instructions of the computer program 206 provide the logic and routines that enable the apparatus to perform the methods illustrated in FIGS. 1 to 8.
  • the processor 202 by reading the memory 204 is able to load and execute the computer program 206.
  • the apparatus 200 therefore comprises:
  • At least one memory 204 including computer program code
  • the at least one memory 204 and the computer program code configured to, with the at least one processor 202, cause the apparatus 200 at least to perform:
  • causing real-time communication between the first user and a second user by causing transmission, for rendering to the second user, of audio generated by the first user and by causing rendering, to the first user, of audio generated by the second user for rendering to the first user;
  • the computer program 206 may arrive at the apparatus 200 via any suitable delivery mechanism 220.
  • the delivery mechanism 220 may be, for example, a machine readable medium, a computer-readable medium, a non-transitory computer-readable storage medium, a computer program product, a memory device, a record medium such as a Compact Disc Read-Only Memory (CD-ROM) or a Digital Versatile Disc (DVD) or a solid state memory, an article of manufacture that comprises or tangibly embodies the computer program 206.
  • the delivery mechanism may be a signal configured to reliably transfer the computer program 206.
  • the apparatus 200 may propagate or transmit the computer program 206 as a computer data signal.
  • Computer program instructions for causing an apparatus to perform at least the following or for performing at least the following:
  • causing real-time communication between the first user and a second user by causing transmission, for rendering to the second user, of audio generated by the first user and by causing rendering, to the first user, of audio generated by the second user for rendering to the first user;
  • the computer program instructions may be comprised in a computer program, a non-transitory computer readable medium, a computer program product, a machine readable medium. In some but not necessarily all examples, the computer program instructions may be distributed over more than one computer program.
  • although the memory 204 is illustrated as a single component/circuitry, it may be implemented as one or more separate components/circuitry.
  • although the processor 202 is illustrated as a single component/circuitry, it may be implemented as one or more separate components/circuitry.
  • the processor 202 may be a single core or multi-core processor.
  • references to‘computer-readable storage medium’,‘computer program product’,‘tangibly embodied computer program’ etc. or a‘controller’,‘computer’,‘processor’ etc. should be understood to encompass not only computers having different architectures such as single /multi- processor architectures and sequential (Von Neumann)/parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGA), application specific circuits (ASIC), signal processing devices and other processing circuitry.
  • References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device, whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc.
  • the term ‘circuitry’ may refer to hardware-only circuit implementations, to combinations of hardware circuits and software (including firmware), or to hardware circuits or processors that require software for their operation.
  • circuitry also covers an implementation of merely a hardware circuit or processor and its (or their) accompanying software and/or firmware.
  • circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
  • the blocks illustrated in FIGS 1 to 8 may represent steps in a method and/or sections of code in the computer program 206.
  • the illustration of a particular order to the blocks does not necessarily imply that there is a required or preferred order for the blocks, and the order and arrangement of the blocks may be varied. Furthermore, it may be possible for some blocks to be omitted.
  • Fig 11 illustrates an example of an apparatus 200.
  • the apparatus 200 is configured to enable first person perspective mediated reality.
  • the apparatus may include circuitry 250 that is capable of tracking a user’s point-of-view 42, for example by tracking movement of the user’s head while they are wearing the apparatus 200 as a head-mounted apparatus, or while they are wearing a head-mounted tracking device that communicates with the apparatus 200.
  • the head mounted device or apparatus may, in some but not necessarily all examples, include a head-mounted display for one or both eyes of the user 40.
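The head-tracking bullets above imply that the rendered portion of the audio content follows the user's point-of-view 42. A hedged sketch of that selection step is shown below; the source layout, the angular width and the function name are invented for the illustration and are not taken from the patent.

```python
# Select the sound sources that fall inside the sector centred on the user's
# tracked point-of-view (yaw in degrees); sources outside it are not rendered.
sources = {"vocals": 0.0, "guitar": 90.0, "crowd": 180.0, "announcer": 270.0}

def portion_for_point_of_view(yaw_degrees, width_degrees=120.0):
    half = width_degrees / 2.0
    portion = {}
    for name, azimuth in sources.items():
        # smallest signed angular difference between the source and the point-of-view
        diff = (azimuth - yaw_degrees + 180.0) % 360.0 - 180.0
        if abs(diff) <= half:
            portion[name] = azimuth
    return portion

print(portion_for_point_of_view(0.0))    # -> {'vocals': 0.0}
print(portion_for_point_of_view(100.0))  # -> {'guitar': 90.0}
```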
  • the apparatus 200 comprises a decoder 252 for decoding the audio content 10.
  • the decoding produces the audio content 10 in a format that can be used to identify and separately process sound sources 20.
  • the decoded audio content 10 (spatial audio content) is provided to rendering control block 254 that performs the method 100.
  • the rendering control block 254 determines the portion or adapted portion of the audio content 10 that will be rendered.
  • the rendering control block 254 is configured to enable first person perspective mediated reality with respect to the audio content 10 and takes into account the point-of-view 42 of the user 40.
  • the rendering control block 254 is configured to identify and control each sound source 20 separately if required. It is capable of removing one or more sound sources from, and adding one or more sound sources to, a rendered sound scene.
  • in this example the rendering control block 254 and the renderer 256 are housed within the same apparatus 200; in other examples, the rendering control block 254 and the renderer 256 may be housed in separate devices.
  • the rendering control block 254 provides a control output to the renderer 256, which may be one or more loudspeakers, for example.
  • the loudspeakers may be arranged around a user or be part of a headset worn by the user.
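The preceding bullets outline a decode → rendering-control → renderer pipeline operating on separately addressable sound sources. The following sketch is illustrative only: the class names, the dictionary representation of the decoded content and the replacement mechanism are assumptions made for the example, not the patent's implementation.

```python
class Decoder:
    """Stands in for decoder 252: produces content as separately processable sound sources."""
    def decode(self, bitstream):
        # assumed output format: {source name: audio data}
        return {"vocals": b"...", "guitar": b"...", "crowd": b"..."}

class RenderingControl:
    """Stands in for rendering control block 254: selects and adapts the rendered portion."""
    def select(self, sources, point_of_view, replacements=None):
        portion = dict(sources)             # portion selected by the point-of-view (simplified)
        for old, (new_name, new_audio) in (replacements or {}).items():
            portion.pop(old, None)          # remove the conflicting sound source ...
            portion[new_name] = new_audio   # ... and render a different sound source instead
        return portion

class Renderer:
    """Stands in for renderer 256: one or more loudspeakers or a headset."""
    def render(self, portion):
        print("rendering:", ", ".join(sorted(portion)))

decoder, control, renderer = Decoder(), RenderingControl(), Renderer()
sources = decoder.decode(b"spatial-audio-bitstream")
adapted = control.select(sources, point_of_view=0.0,
                         replacements={"vocals": ("vocals_alternate", b"...")})
renderer.render(adapted)  # -> rendering: crowd, guitar, vocals_alternate
```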
  • in the preceding examples, the audio content 10 and the sound sources 20 have been music-based. However, this is not always the case; other content is possible.
  • the method 100 is particularly suitable when there is a conflict or potential conflict between the current sound scene rendered to a user and a sound source received by that user for rendering. The removal or replacement of sound sources 20 in the sound scene obviates the conflict.
  • each of the different zones 30 represents a different language.
  • each of the different zones 30 represents audio content with a different age restriction.
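Purely as an illustration of how the zones 30 might be used, the sketch below swaps a conflicting sound source for its counterpart from the zone matching another user's language; the zone data, file names and function name are invented for the example.

```python
# Hypothetical zones 30, each offering the same sound source in a different language.
zones = {
    "en": {"commentary": "commentary_en.wav"},
    "fi": {"commentary": "commentary_fi.wav"},
    "fr": {"commentary": "commentary_fr.wav"},
}

def replace_for_zone(rendered_portion, source_name, zone_id):
    """Replace a sound source with the equivalent source from another zone, if available."""
    replacement = zones.get(zone_id, {}).get(source_name)
    if replacement is not None:
        rendered_portion = dict(rendered_portion)
        rendered_portion[source_name] = replacement
    return rendered_portion

portion = {"commentary": "commentary_en.wav", "crowd": "crowd.wav"}
print(replace_for_zone(portion, "commentary", "fi"))
# -> {'commentary': 'commentary_fi.wav', 'crowd': 'crowd.wav'}
```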
  • the apparatus 200 is configured to communicate data from the apparatus 200 with or without local storage of the data in a memory 204 at the apparatus 200 and with or without local processing of the data by circuitry or processors at the apparatus 200.
  • the data may be stored in processed or unprocessed format remotely at one or more devices.
  • the data may be stored in the Cloud.
  • the data may be processed remotely at one or more devices.
  • the data may be partially processed locally and partially processed remotely at one or more devices.
  • the data may be communicated to the remote devices wirelessly via short range radio communications such as Wi-Fi or Bluetooth, for example, or over long range cellular radio links.
  • the apparatus may comprise a communications interface such as, for example, a radio transceiver for communication of data.
  • the apparatus 200 may be part of the Internet of Things forming part of a larger, distributed network.
  • the processing of the data may be for the purpose of health monitoring, data aggregation, patient monitoring, vital signs monitoring or other purposes.
  • the processing of the data may involve artificial intelligence or machine learning algorithms.
  • the data may, for example, be used as learning input to train a machine learning network or may be used as a query input to a machine learning network, which provides a response.
  • the machine learning network may, for example, use linear regression, logistic regression, support vector machines or an acyclic machine learning network such as a single- or multi-hidden-layer neural network.
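To make the training-versus-query distinction above concrete, here is a small hedged example using logistic regression, one of the techniques mentioned; the use of scikit-learn and the synthetic data are assumptions made for the illustration only.

```python
# The model is first trained on labelled examples (learning input),
# then presented with a query input and asked for a response.
from sklearn.linear_model import LogisticRegression

X_train = [[0.1], [0.4], [0.6], [0.9]]   # learning input (features)
y_train = [0, 0, 1, 1]                   # learning input (labels)

model = LogisticRegression()
model.fit(X_train, y_train)              # training on prior data

query = [[0.75]]                         # query input
print(model.predict(query))              # response to the query, e.g. [1]
```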
  • the processing of the data may produce an output.
  • the output may be communicated to the apparatus 200 where it may produce an output sensible to the subject such as an audio output, visual output or haptic output.
  • the systems, apparatus, methods and computer programs may use machine learning which can include statistical learning.
  • Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed.
  • the computer learns from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.
  • the computer can often learn from prior training data to make predictions on future data.
  • Machine learning includes wholly or partially supervised learning and wholly or partially unsupervised learning. It may enable discrete outputs (for example classification, clustering) and continuous outputs (for example regression).
  • Machine learning may, for example, be implemented using different approaches such as cost function minimization, artificial neural networks, support vector machines and Bayesian networks.
  • Cost function minimization may, for example, be used in linear and polynomial regression and K-means clustering.
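As an illustrative sketch of cost function minimization applied to linear regression, the example below fits a single-variable model by gradient descent on a mean-squared-error cost; the data and learning rate are arbitrary choices made for the illustration.

```python
# Fit y ≈ w * x + b by minimizing the mean squared error cost with gradient descent.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # generated from y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w, b = w - lr * dw, b - lr * db

print(round(w, 2), round(b, 2))    # approaches 2.0 and 1.0
```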
  • Artificial neural networks, for example with one or more hidden layers, model complex relationships between input vectors and output vectors.
  • Support vector machines may be used for supervised learning.
  • a Bayesian network is a directed acyclic graph that represents the conditional independence of a number of random variables.
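For illustration only, a small Bayesian network can be encoded directly as conditional probability tables; with the chain Cloudy → Rain → WetGrass the joint distribution factorises as P(C)·P(R|C)·P(W|R), so WetGrass is conditionally independent of Cloudy given Rain. The probabilities below are invented for the example.

```python
# Chain-structured Bayesian network: Cloudy -> Rain -> WetGrass.
# The joint distribution factorises as P(C) * P(R | C) * P(W | R).
p_cloudy = {True: 0.5, False: 0.5}
p_rain_given_cloudy = {True: {True: 0.8, False: 0.2}, False: {True: 0.1, False: 0.9}}
p_wet_given_rain = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}

def joint(cloudy, rain, wet):
    return p_cloudy[cloudy] * p_rain_given_cloudy[cloudy][rain] * p_wet_given_rain[rain][wet]

# P(WetGrass) marginalised over the other variables.
p_wet = sum(joint(c, r, True) for c in (True, False) for r in (True, False))
print(round(p_wet, 3))  # 0.515
```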
  • the above-described examples find application as enabling components of: automotive systems; telecommunication systems; electronic systems including consumer electronic products; distributed computing systems; media systems for generating or rendering media content including audio, visual and audiovisual content and mixed, mediated, virtual and/or augmented reality; personal systems including personal health systems or personal fitness systems; navigation systems; user interfaces, also known as human-machine interfaces; networks including cellular, non-cellular, and optical networks; ad-hoc networks; the internet; the internet of things; virtualized networks; and related software and services.
  • any reference to X comprising Y indicates that X may comprise only one Y or may comprise more than one Y. If it is intended to use ‘comprise’ with an exclusive meaning, then it will be made clear in the context by referring to “comprising only one …” or by using “consisting”.
  • the use of ‘example’ or ‘for example’ or ‘can’ or ‘may’ in the text denotes, whether explicitly stated or not, that such features or functions are present in at least the described example, whether described as an example or not, and that they can be, but are not necessarily, present in some of or all other examples.
  • thus ‘example’, ‘for example’, ‘can’ or ‘may’ refers to a particular instance in a class of examples.
  • a property of the instance can be a property of only that instance or a property of the class or a property of a sub-class of the class that includes some but not all of the instances in the class. It is therefore implicitly disclosed that a feature described with reference to one example but not with reference to another example, can where possible be used in that other example as part of a working combination but does not necessarily have to be used in that other example.
  • the presence of a feature (or combination of features) in a claim is a reference to that feature or (combination of features) itself and also to features that achieve substantially the same technical effect (equivalent features).
  • the equivalent features include, for example, features that are variants and achieve substantially the same result in substantially the same way.
  • the equivalent features include, for example, features that perform substantially the same function, in substantially the same way to achieve substantially the same result.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
PCT/EP2020/051985 2019-02-01 2020-01-28 An apparatus, method or computer program for enabling real-time audio communication between users experiencing immersive audio WO2020157035A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP19155151.4A EP3691298A1 (de) 2019-02-01 2019-02-01 Apparatus, method and computer program for real-time audio communication between users experiencing immersive audio rendering
EP19155151.4 2019-02-01

Publications (1)

Publication Number Publication Date
WO2020157035A1 (en) 2020-08-06

Family

ID=65278276

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/051985 WO2020157035A1 (en) 2019-02-01 2020-01-28 An apparatus, method or computer program for enabling real-time audio communication between users experiencing immersive audio

Country Status (2)

Country Link
EP (1) EP3691298A1 (de)
WO (1) WO2020157035A1 (de)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2599359A (en) * 2020-09-23 2022-04-06 Nokia Technologies Oy Spatial audio rendering

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180270601A1 (en) * 2017-03-17 2018-09-20 Nokia Technologies Oy Preferential Rendering of Multi-User Free-Viewpoint Audio for Improved Coverage of Interest
WO2018174500A1 (ko) * 2017-03-20 2018-09-27 주식회사 라이커스게임 Augmented reality three-dimensional sound implementation system and program reflecting real-world sound

Also Published As

Publication number Publication date
EP3691298A1 (de) 2020-08-05

Similar Documents

Publication Publication Date Title
US11089426B2 (en) Apparatus, method or computer program for rendering sound scenes defined by spatial audio content to a user
CN110121695B (zh) Apparatus in the field of virtual reality and associated methods
CN112602053B (zh) Audio apparatus and method of audio processing
US11721355B2 (en) Audio bandwidth reduction
US11140503B2 (en) Timer-based access for audio streaming and rendering
US20220059123A1 (en) Separating and rendering voice and ambience signals
EP3422744B1 (de) Vorrichtung und zugehörige verfahren
CN114041113A (zh) 用于音频渲染的隐私分区和授权
US12010490B1 (en) Audio renderer based on audiovisual information
US11099802B2 (en) Virtual reality
WO2020157035A1 (en) An apparatus, method or computer program for enabling real-time audio communication between users experiencing immersive audio
US11102604B2 (en) Apparatus, method, computer program or system for use in rendering audio
US20220171593A1 (en) An apparatus, method, computer program or system for indicating audibility of audio content rendered in a virtual space
EP3720149A1 (de) Apparatus, method, computer program or system for rendering audio data
Cohen et al. Spatial soundscape superposition and multimodal interaction
US11368807B2 (en) Previewing spatial audio scenes comprising multiple sound sources
US11570565B2 (en) Apparatus, method, computer program for enabling access to mediated reality content by a remote user
US11997463B1 (en) Method and system for generating spatial procedural audio
EP4240026A1 (de) Audio rendering
EP3734966A1 (de) Apparatus and associated method for presentation of audio

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20701617

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20701617

Country of ref document: EP

Kind code of ref document: A1