GB2558281A - Audio processing

Audio processing

Info

Publication number
GB2558281A
GB2558281A
Authority
GB
United Kingdom
Prior art keywords
audio
candidate
sound source
directions
propagation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1622178.0A
Other versions
GB201622178D0 (en)
Inventor
Jelle Van Mourik
Pedro Mauricio Carvalho Corvo
Nicholas Ward-Foxton
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment Inc
Original Assignee
Sony Interactive Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Interactive Entertainment Inc filed Critical Sony Interactive Entertainment Inc
Priority to GB1622178.0A priority Critical patent/GB2558281A/en
Publication of GB201622178D0 publication Critical patent/GB201622178D0/en
Publication of GB2558281A publication Critical patent/GB2558281A/en
Legal status: Withdrawn


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303 Tracking of listener position or orientation
    • H04S7/304 For headphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S3/00 Systems employing more than two channels, e.g. quadraphonic
    • H04S3/002 Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H04S3/004 For headphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/305 Electronic adaptation of stereophonic audio signals to reverberation of the listening space
    • H04S7/306 For headphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00 Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10 General applications
    • H04R2499/15 Transducers incorporated in visual displaying devices, e.g. televisions, computer displays, laptops
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01 Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/05 Application of the precedence or Haas effect, i.e. the effect of first wavefront, in order to improve sound-source localisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/11 Application of ambisonics in stereophonic audio systems

Abstract

Apparatus and method for generating candidate audio propagation directions 260-266 from a sound source position in a virtual environment. Audio propagation properties of a current candidate direction are modelled, and it is detected whether an audio propagation path originating in the current candidate direction from the sound source arrives at the listening position 200. A predetermined number of candidate directions for which an audio propagation path arrives at the listening position are then selected, and the audio propagation properties of the selected candidate directions are combined. Preferably a filter applies the resulting audio filtering parameters to an audio signal. Preferably the candidate paths are selected in the order in which they are first generated. Preferably the order of path generation is random. Preferably the position of a listener is detected and mapped.

Description

(71) Applicant(s): Sony Interactive Entertainment Inc., 1-7-1 Konan, Minato-Ku 108-8270, Tokyo, Japan
(72) Inventor(s): Jelle Van Mourik; Pedro Mauricio Carvalho Corvo; Nicholas Ward-Foxton
(56) Documents Cited: WO 2016/130834 A1; US 20120093320 A1; US 20080273708 A1; US 20150378019 A1; US 20120070005 A1
(58) Field of Search: INT CL H04S; Other: WPI, EPODOC
(74) Agent and/or Address for Service: D Young & Co LLP, 120 Holborn, LONDON, EC1N 2DY, United Kingdom
(54) Title of the Invention: Audio processing
(57) Abstract Title: Modelling audio in a virtual environment
(Drawings: Figures GB2558281A_D0001 to GB2558281A_D0010 of the published application; see the Brief Description of the Drawings below.)
AUDIO PROCESSING
BACKGROUND
Field
This disclosure relates to audio processing.
Description of Related Art
In some data processing operations such as computer game processing, there is a need to generate sounds for a user to listen to. In some examples, the sounds may be generated as though they emanate from a location in a virtual environment.
Sound propagation can be modelled in the virtual environment. However, there is often a processing limitation on the number of paths which can be modelled. If the selection of modelled paths changes over time, the audio reflections can appear (to the listener) to move around. This can be subjectively disturbing for the user.
SUMMARY
The present disclosure is defined by the appended claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary, but are not restrictive, of the present technology.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
Figure 1 schematically illustrates a virtual reality system comprising a head mountable display worn by a user;
Figure 2 schematically illustrates audio reflections in a virtual environment;
Figure 3 schematically illustrates a propagation processing apparatus;
Figure 4 is a schematic flowchart illustrating a method;
Figure 5 schematically illustrates a candidate direction;
Figure 6 schematically illustrates a data processing apparatus; and
Figure 7 is a schematic flowchart illustrating a method.
DESCRIPTION OF THE EMBODIMENTS
Referring now to Figure 1, a user 10 is wearing an HMD 20 (as an example of a generic head-mountable apparatus or virtual reality apparatus). The HMD comprises a frame 40, in this example formed of a rear strap and a top strap, and a display portion 50.
Note that the HMD of Figure 1 may comprise further features, to be described below in connection with other drawings, but which are not shown in Figure 1 for clarity of this initial explanation.
The HMD of Figure 1 completely (or at least substantially completely) obscures the user's view of the surrounding environment. All that the user can see is the pair of images displayed within the HMD.
The HMD has associated headphone audio transducers or earpieces 60 which fit into the user's left and right ears 70. The earpieces 60 replay an audio signal provided from an external source, which may be the same as the video signal source which provides the video signal for display to the user's eyes. A boom microphone 75 is mounted on the HMD so as to extend towards the user’s mouth.
The combination of the fact that the user can see only what is displayed by the HMD and, subject to the limitations of the noise blocking or active cancellation properties of the earpieces and associated electronics, can hear only what is provided via the earpieces, means that this HMD may be considered as a so-called “full immersion” HMD. Note however that in some embodiments the HMD is not a full immersion HMD, and may provide at least some facility for the user to see and/or hear the user’s surroundings. This could be by providing some degree of transparency or partial transparency in the display arrangements, and/or by projecting a view of the outside (captured using a camera, for example a camera mounted on the HMD) via the HMD’s displays, and/or by allowing the transmission of ambient sound past the earpieces and/or by providing a microphone to generate an input sound signal (for transmission to the earpieces) dependent upon the ambient sound.
A front-facing camera 122 may capture images to the front of the HMD, in use. A Bluetooth® antenna 124 may provide communication facilities or may simply be arranged as a directional antenna to allow a detection of the direction of a nearby Bluetooth transmitter.
In operation, a video signal is provided for display by the HMD. This could be provided by an external video signal source 80 such as a video games machine or data processing apparatus (such as a personal computer), in which case the signals could be transmitted to the HMD by a wired or a wireless connection 82. Examples of suitable wireless connections include Bluetooth® connections. Audio signals for the earpieces 60 can be carried by the same connection. Similarly, any control signals passed from the HMD to the video (audio) signal source may be carried by the same connection. Furthermore, a power supply 83 (including one or more batteries and/or being connectable to a mains power outlet) may be linked by a cable 84 to the HMD. Note that the power supply 83 and the video signal source 80 may be separate units or may be embodied as the same physical unit. There may be separate cables for power and video (and indeed for audio) signal supply, or these may be combined for carriage on a single cable (for example, using separate conductors, as in a USB cable, or in a similar way to a “power over Ethernet” arrangement in which data is carried as a balanced signal and power as direct current, over the same collection of physical wires). The video and/or audio signal may be carried by, for example, an optical fibre cable. In other embodiments, at least part of the functionality associated with generating image and/or audio signals for presentation to the user may be carried out by circuitry and/or processing forming part of the HMD itself. A power supply may be provided as part of the HMD itself.
Some embodiments of the disclosure are applicable to an HMD having at least one electrical and/or optical cable linking the HMD to another device, such as a power supply and/or a video (and/or audio) signal source. So, embodiments of the disclosure can include, for example:
(a) an HMD having its own power supply (as part of the HMD arrangement) but a cabled connection to a video and/or audio signal source;
(b) an HMD having a cabled connection to a power supply and to a video and/or audio signal source, embodied as a single physical cable or more than one physical cable;
(c) an HMD having its own video and/or audio signal source (as part of the HMD arrangement) and a cabled connection to a power supply; or
(d) an HMD having a wireless connection to a video and/or audio signal source and a cabled connection to a power supply.
If one or more cables are used, the physical position at which the cable 82 and/or 84 enters or joins the HMD is not particularly important from a technical point of view. Aesthetically, and to avoid the cable(s) brushing the user’s face in operation, it would normally be the case that the cable(s) would enter or join the HMD at the side or back of the HMD (relative to the orientation of the user’s head when worn in normal operation). Accordingly, the position of the cables 82, 84 relative to the HMD in Figure 1 should be treated merely as a schematic representation.
Accordingly, the arrangement of Figure 1 provides an example of a head-mountable display system comprising a frame to be mounted onto an observer’s head, the frame defining one or two eye display positions which, in use, are positioned in front of a respective eye of the observer and a display element mounted with respect to each of the eye display positions, the display element providing a virtual image of a video display of a video signal from a video signal source to that eye of the observer.
Figure 1 shows just one example of an HMD. Other formats are possible: for example an HMD could use a frame more similar to that associated with conventional eyeglasses, namely a substantially horizontal leg extending back from the display portion to the top rear of the user's ear, possibly curling down behind the ear. In other (not full immersion) examples, the user's view of the external environment may not in fact be entirely obscured; the displayed images could be arranged so as to be superposed (from the user's point of view) over the external environment.
Examples of the present technique relate to the handling of audio signals for a listener wearing an HMD of the type discussed above or simply a headphone or a pair of headphones.
Figure 2 schematically illustrates audio reflections in a virtual environment.
In the example of Figure 2, a listening position 200 representing a user position in a virtual environment 210 has a pair of ear positions 202, 204 either side of a (deemed solid) skull 206. Therefore, it will be apparent that (in the virtual environment) sound will tend to arrive at the pair of ear positions 202, 204 from different sides of the skull 206. In examples to be discussed below, a detector is configured to model the audio propagation properties of the current candidate direction for each of a pair of ears located at the listening position. In examples, the detector is configured to model the audio propagation properties by detecting whether the current candidate direction is reflected within the virtual environment.
A virtual sound source position 220 is illustrated. In the present example, the sound source is an omnidirectional sound source, which is to say that it emits sound equally in all directions. However, in other examples, a sound source emitting sound in only certain directions, or having a preference such that sounds are preferentially emitted in certain directions even if a small amount of sound is emitted in other directions, can be used.
Various objects 230, 240 are located in the virtual environment, along with the walls 250 of this example virtual environment. Sound is considered or modelled to reflect from the objects 230, 240 and the walls 250. The reflection depends upon the angle of incidence, the normal to the surface at which the reflection takes place, and the material of that surface. The angular aspects of this are known from the three-dimensional modelling by which the virtual environment is generated. Similarly, audio reflective properties can be associated with surfaces within the virtual environment.
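By way of a worked illustration only (not taken from this disclosure), a single specular reflection can be computed by mirroring the incident direction about the surface normal, with a per-material absorption coefficient scaling the remaining energy; the coefficient and the simple energy model below are assumptions:

```python
# Sketch of a single specular audio reflection, assuming a simple
# energy-based attenuation model. The absorption coefficient is a
# hypothetical per-material parameter, not defined by this disclosure.
import numpy as np

def reflect(direction: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Mirror an incident direction about a unit surface normal."""
    return direction - 2.0 * np.dot(direction, normal) * normal

def attenuate(energy: float, absorption: float) -> float:
    """Scale the remaining energy by the surface's absorption (0..1)."""
    return energy * (1.0 - absorption)

# Example: a ray travelling along +x meets a wall whose normal faces -x.
d = np.array([1.0, 0.0, 0.0])
n = np.array([-1.0, 0.0, 0.0])
print(reflect(d, n))        # [-1.  0.  0.]
print(attenuate(1.0, 0.3))  # 0.7
```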
Figure 2 shows an example of a direct path 260 from the position 220 to the user’s ear 202 in the virtual environment, paths 262, 264 involving one reflection either from a wall or from one of the objects, and an example path 266 involving two reflections.
In a real environment, the sound as experienced by a real listener would be represented by a linear combination of various propagation paths, from a direct path through to paths involving multiple reflections. The direct path is likely to be the most significant one for a real listener because (a) it potentially suffers less attenuation, and (b) the sound arrives the soonest and will therefore be treated by the human brain as the dominant representation of that sound.
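To make the idea of a linear combination concrete, the following sketch renders a set of paths as a sparse tapped delay line, each path contributing a delayed, attenuated copy of the dry signal. The sample rate, speed of sound and the particular path lengths and gains are illustrative assumptions, not values taken from this disclosure:

```python
# Sketch: render propagation paths as a sparse tapped delay line, one
# delayed and attenuated copy of the dry signal per path. The constants
# and the example path lengths/gains are illustrative assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, assumed
SAMPLE_RATE = 48000     # Hz, assumed

def render_paths(dry, paths):
    """paths: list of (length_in_metres, gain); returns the wet signal."""
    delays = [int(round(length / SPEED_OF_SOUND * SAMPLE_RATE))
              for length, _ in paths]
    out = np.zeros(len(dry) + max(delays))
    for (_, gain), delay in zip(paths, delays):
        out[delay:delay + len(dry)] += gain * dry
    return out

# The 5 m direct path dominates; reflected paths arrive later and quieter.
dry = np.random.default_rng(0).standard_normal(SAMPLE_RATE)  # 1 s of noise
wet = render_paths(dry, [(5.0, 0.8), (9.0, 0.3), (14.0, 0.15)])
```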
Techniques will be described now to model the propagation of sound from the virtual position 220 to the user’s virtual ear positions 202, 204 using a propagation processing apparatus shown by way of example in Figure 3.
Referring to Figure 3, a processing apparatus to generate audio filtering parameters to simulate the propagation of a sound from a sound source position 220 in a virtual environment 210 to a listening position 200, 202, 204 in the virtual environment comprises the following:
• A generator 300 to generate candidate audio propagation directions from the sound source position in the virtual environment. In the case of an omnidirectional sound source, the candidate audio propagation directions can be in any direction within the entire spherical angular range. If the sound source is a directional sound source then the candidate propagation directions may be restricted to the propagation directions applicable to that sound source. In examples to be discussed below, the generator generates candidate audio propagation directions on a random or pseudo-random basis (a sketch of one possible generator appears after this list).
• A detector 310 to model audio propagation properties of a current candidate direction and detect whether an audio propagation path originating in the current candidate direction from the sound source position arrives at the listening position. Here, it will be appreciated that the example paths shown in Figure 2 all arrive at the listening position (or, at least, all arrive within a maximum number of reflections). Bearing in mind that each reflection will tend to attenuate and delay the sound, a maximum number of reflections can be set in consideration of the candidate audio propagation paths. An example of such a maximum number of allowable reflections is five. If a path is detected not to arrive at the listening position (for example, within the maximum allowable number of reflections) then that path is not carried forward and is in fact discarded.
• A selector 320 to select a predetermined number of candidate directions for which an audio propagation path originating in the candidate direction from the sound source position arrives at the listening position. In example arrangements to be discussed below, the generator generates the candidate propagation directions successively, and the selector 320 is arranged to select the first n of the propagation directions which do arrive at the listening position.
• A filter 330 acting as a combiner to combine the audio propagation properties of the selected candidate directions. The filter 330 can then apply the combination of the selected propagation paths to an audio input signal 340 to generate an audio output signal 350.
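As a purely illustrative reading of the generator 300, the sketch below draws pseudo-random candidate directions uniformly over the sphere (the omnidirectional case) by normalising 3-D Gaussian samples, and restricts candidates to an emission cone for a directional source. The cone test and the particular sampling scheme are assumptions; the disclosure does not prescribe either.

```python
# Sketch of a candidate-direction generator (cf. generator 300): uniform
# pseudo-random unit vectors over the sphere, obtained by normalising
# 3-D Gaussian samples. The directional-source cone test is an assumption.
import numpy as np

rng = np.random.default_rng()

def random_direction():
    """Uniformly distributed unit vector over the full sphere."""
    v = rng.standard_normal(3)
    return v / np.linalg.norm(v)

def candidate_directions(axis=None, half_angle_deg=180.0):
    """Yield candidates; restrict to a cone for a directional source."""
    cos_limit = np.cos(np.radians(half_angle_deg))
    while True:
        d = random_direction()
        if axis is None or np.dot(d, axis) >= cos_limit:
            yield d

# Omnidirectional source: every direction is a valid candidate.
gen = candidate_directions()
first_three = [next(gen) for _ in range(3)]
```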
Therefore, in examples, the generator 300 is configured to generate successive candidate directions; and the selector 320 is configured to select the predetermined number of candidate directions as those candidate directions first generated by the generator for which the audio propagation path originating in that candidate direction from the sound source position arrives at the listening position. The generator may be configured to generate the candidate directions according to a random or pseudo-random selection.
Figure 4 is a schematic flowchart illustrating a method applicable to the apparatus described with reference to Figure 3.
At a step 400, the generator generates a next candidate propagation direction. At a step 410, the reflection detector 310 models the propagation according to the candidate propagation direction, including reflections, and in doing so detects whether the candidate propagation direction arrives at the listening position. Here, the process of Figure 4 can be repeated separately in respect of each ear position 202, 204, or a propagation path can be treated at the step 410 as arriving at the listening position if it arrives at either or both of the ear positions 202, 204.
At a step 420, the selector 320 detects whether the current candidate path is a valid path, which is to say that it arrives at the listening position. If the answer is no, control returns to the step 400 to select a next candidate propagation path. If the answer is yes, however, then at a step 430 the selector 320 adds the current candidate path to a current set of propagation paths, for example a set of n propagation paths where n is, say, six.
At a step 440, if the set of n propagation paths is complete then control passes to a step 450 at which the filter 330 applies the current set of propagation paths and their model properties as an audio filter to the audio input 340 to generate the audio output 350. If not (at the step 440) control returns to the step 400.
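The loop of Figure 4 might be sketched as follows. Here trace_path is a stub standing in for the reflection modelling of the detector 310 (it simply accepts a fraction of candidates at random and invents a plausible length and gain), and the values n = 6 and a five-reflection cap follow the examples given above:

```python
# Sketch of the Figure 4 loop: keep drawing candidate directions and keep
# the first n whose traced path reaches the listening position. trace_path
# is a stub standing in for the detector's reflection modelling.
import numpy as np

rng = np.random.default_rng(0)
MAX_REFLECTIONS = 5  # example cap from the description
N_PATHS = 6          # example set size from the description

def trace_path(direction, source, listener):
    """Stub: pretend ~25% of candidates reach the listener, returning a
    plausible (length, gain) pair; real code would intersect the scene."""
    if rng.random() < 0.25:
        length = float(np.linalg.norm(listener - source)) + rng.uniform(0.0, 10.0)
        return length, 0.5 ** int(rng.integers(0, MAX_REFLECTIONS + 1))
    return None  # discarded: no arrival within the reflection cap

def select_paths(source, listener):
    selected = []
    while len(selected) < N_PATHS:                      # step 440
        v = rng.standard_normal(3)                      # step 400: next
        direction = v / np.linalg.norm(v)               # candidate direction
        path = trace_path(direction, source, listener)  # step 410: model
        if path is not None:                            # step 420: valid?
            selected.append(path)                       # step 430: add to set
    return selected  # handed to the filter 330 at step 450

paths = select_paths(np.zeros(3), np.array([3.0, 0.0, 1.0]))
```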
Figure 5 schematically illustrates a candidate propagation direction 500 with respect to a sound source position 510 in the virtual environment. The candidate direction is defined by longitude and latitude 520 and, in the case of an omnidirectional sound source at the sound source position 510, the longitude and latitude values can be within the entire range of spherical angles.
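For reference, a longitude/latitude pair converts to a unit direction vector as below; the axis convention (latitude measured from the equatorial plane, y taken as up) is an assumption:

```python
# Longitude/latitude (radians) to a unit direction vector. The convention
# (latitude measured from the equatorial plane, y taken as up) is assumed.
import math

def direction_from_long_lat(longitude, latitude):
    x = math.cos(latitude) * math.cos(longitude)
    z = math.cos(latitude) * math.sin(longitude)
    y = math.sin(latitude)
    return (x, y, z)

print(direction_from_long_lat(0.0, 0.0))          # equator: (1.0, 0.0, 0.0)
print(direction_from_long_lat(0.0, math.pi / 2))  # pole: approx (0, 1, 0)
```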
Figure 6 schematically illustrates a data processing apparatus. Here, an HMD is represented schematically by a pair of headphones 600. Of course, video display facilities may also be included, but for the present purposes the audio signal is under consideration. A position detector 610 cooperates with the headphones 600 to detect their current position in the real world environment. This could be in response to telemetry transmitted from the headphones 600 to the position detector, based upon a position detected by the headphones 600 themselves, and/or by a remote position detector such as a camera associated with the position detector 610 which optically detects the position of, or at least changes in the position of, the headphones 600 in the real world.
The position detector 610 optionally applies a filtering or smoothing to the detected position before providing it as a representation of the virtual listening position 200 to the propagation processor of Figure 3 (620). In this way, changes in the real world position of the user are represented by corresponding changes in the virtual listening position 200 in the virtual environment. This can provide a strong sense of realism to a user enjoying a virtual environment.
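One common choice of smoothing (an assumption here; the disclosure says only that filtering or smoothing is optionally applied) is a first-order exponential filter on the tracked position:

```python
# Sketch of the optional position smoothing: a first-order exponential
# filter on the tracked head position. The smoothing factor alpha is a
# hypothetical tuning parameter.
import numpy as np

class SmoothedListenerPosition:
    def __init__(self, alpha=0.2):
        self.alpha = alpha   # 0..1; larger follows the raw data faster
        self.position = None

    def update(self, raw):
        """Fold one raw tracker sample into the smoothed position."""
        if self.position is None:
            self.position = np.asarray(raw, dtype=float)
        else:
            self.position += self.alpha * (np.asarray(raw) - self.position)
        return self.position  # mapped to the virtual listening position 200

tracker = SmoothedListenerPosition()
for sample in ([0.0, 1.6, 0.0], [0.1, 1.6, 0.0]):
    listening_position = tracker.update(sample)
```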
Therefore, example arrangements can provide a position detector to detect a position of a listener in a real environment; and a position mapper to map the detected position of the listener to a listening position in the virtual environment.
An audio generator 630 generates audio sounds, for example as part of a video game in the case that the system is used as part of a games machine, and provides these to the filter 330 which, using parameters provided by the propagation processor 620, generates an audio output 350 for transmission by a wired or wireless connection to the headphones 600.
Figure 7 is a schematic flowchart illustrating a method comprising:
generating (at a step 700) successive candidate audio propagation directions from a sound source position in a virtual environment;
modelling (at a step 710) audio propagation properties of a current candidate direction;
detecting (at a step 720) whether an audio propagation path originating in the current candidate direction from the sound source position arrives at a listening position in the virtual environment;
selecting (at a step 730) a predetermined number of candidate directions for which an audio propagation path originating in the candidate direction from the sound source position arrives at the listening position; and
combining (at a step 740) the audio propagation properties of the selected candidate directions to generate audio filtering parameters to simulate the propagation of a sound from the sound source position to the listening position.
A filtering operation can be applied to apply the audio filtering parameters to an audio signal.
It will be appreciated that example embodiments can be implemented by computer software operating on a general purpose computing system such as a games machine. In these examples, computer software, which when executed by a computer, causes the computer to carry out any of the methods discussed above is considered as an embodiment of the present disclosure. Similarly, embodiments of the disclosure are provided by a non-transitory, machine-readable storage medium which stores such computer software.
It will also be apparent that numerous modifications and variations of the present disclosure are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the disclosure may be practised otherwise than as specifically described herein.

Claims (10)

1. Apparatus comprising:
a processor to generate audio filtering parameters to simulate the propagation of a sound from a sound source position in a virtual environment to a listening position in the virtual environment, the processor comprising:
a generator to generate candidate audio propagation directions from the sound source position in the virtual environment;
a detector to model audio propagation properties of a current candidate direction and to detect whether an audio propagation path originating in the current candidate direction from the sound source position arrives at the listening position;
a selector to select a predetermined number of candidate directions for which an audio propagation path originating in the candidate direction from the sound source position arrives at the listening position; and a combiner to combine the audio propagation properties of the selected candidate directions.
2. Apparatus according to claim 1, comprising a filter to apply the audio filtering parameters to an audio signal.
3. Apparatus according to claim 1 or claim 2, in which:
the generator is configured to generate successive candidate directions; and the selector is configured to select the predetermined number of candidate directions as those candidate directions first generated by the generator for which the audio propagation path originating in that candidate direction from the sound source position arrives at the listening position.
4. Apparatus according to claim 3, in which the generator is configured to generate the candidate directions according to a random or pseudo-random selection.
5. Apparatus according to any one of the preceding claims, comprising:
a position detector to detect a position of a listener in a real environment; and a position mapper to map the detected position of the listener to a listening position in the virtual environment.
6. Apparatus according to any one of the preceding claims, in which the detector is configured to model the audio propagation properties of the current candidate direction for each of a pair of ears located at the listening position.
7. Apparatus according to any one of the preceding claims, in which the detector is configured to model the audio propagation properties by detecting whether the current candidate direction is reflected within the virtual environment.
8. A method comprising:
generating successive candidate audio propagation directions from a sound source position in a virtual environment;
modelling audio propagation properties of a current candidate direction;
detecting whether an audio propagation path originating in the current candidate direction from the sound source position arrives at a listening position in the virtual environment;
selecting a predetermined number of candidate directions for which an audio propagation path originating in the candidate direction from the sound source position arrives at the listening position; and combining the audio propagation properties of the selected candidate directions to generate audio filtering parameters to simulate the propagation of a sound from the sound source position to the listening position.
9. Computer software which, when executed by a computer, causes the computer to perform the method of claim 8.
10. A non-transitory, machine-readable storage medium which stores computer software according to claim 9.
Intellectual Property Office
Application No: GB1622178.0
GB1622178.0A 2016-12-23 2016-12-23 Audio processing Withdrawn GB2558281A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1622178.0A GB2558281A (en) 2016-12-23 2016-12-23 Audio processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1622178.0A GB2558281A (en) 2016-12-23 2016-12-23 Audio processing

Publications (2)

Publication Number Publication Date
GB201622178D0 (en) 2017-02-08
GB2558281A (en) 2018-07-11

Family

ID=58360735

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1622178.0A Withdrawn GB2558281A (en) 2016-12-23 2016-12-23 Audio processing

Country Status (1)

Country Link
GB (1) GB2558281A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080273708A1 (en) * 2007-05-03 2008-11-06 Telefonaktiebolaget L M Ericsson (Publ) Early Reflection Method for Enhanced Externalization
US20120070005A1 (en) * 2010-09-17 2012-03-22 Denso Corporation Stereophonic sound reproduction system
US20120093320A1 (en) * 2010-10-13 2012-04-19 Microsoft Corporation System and method for high-precision 3-dimensional audio for augmented reality
US20150378019A1 (en) * 2014-06-27 2015-12-31 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for modeling interactive diffuse reflections and higher-order diffraction in virtual environment scenes
WO2016130834A1 (en) * 2015-02-12 2016-08-18 Dolby Laboratories Licensing Corporation Reverberation generation for headphone virtualization

Also Published As

Publication number Publication date
GB201622178D0 (en) 2017-02-08

Similar Documents

Publication Publication Date Title
KR102616220B1 (en) Virtual and real object recording in mixed reality device
US11617050B2 (en) Systems and methods for sound source virtualization
US10536783B2 (en) Technique for directing audio in augmented reality system
CN107810646B (en) Filtered sound for conferencing applications
JP2023158059A (en) Spatial audio for interactive audio environments
US10979845B1 (en) Audio augmentation using environmental data
CN109691141B (en) Spatialization audio system and method for rendering spatialization audio
US11096006B1 (en) Dynamic speech directivity reproduction
JP2022504283A (en) Proximity audio rendering
US11812222B2 (en) Technique for directing audio in augmented reality system
JP2022177305A (en) Emphasis for audio spatialization
US20220095123A1 (en) Connection assessment system
GB2558279A (en) Head mountable display system
GB2558281A (en) Audio processing
GB2566758A (en) Motion signal generation
US10841727B2 (en) Low-frequency interchannel coherence control

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)