WO2012025580A1 - Method and device for enhanced sound field reproduction of spatially encoded audio input signals - Google Patents

Method and device for enhanced sound field reproduction of spatially encoded audio input signals

Info

Publication number
WO2012025580A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio input
input signals
subspace
sound field
reproducible
Prior art date
Application number
PCT/EP2011/064592
Other languages
French (fr)
Inventor
Etienne Corteel
Matthias Rosenthal
Original Assignee
Sonicemotion Ag
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sonicemotion Ag filed Critical Sonicemotion Ag
Priority to US13/818,014 priority Critical patent/US9271081B2/en
Priority to EP11752172.4A priority patent/EP2609759B1/en
Priority to ES11752172T priority patent/ES2922639T3/en
Publication of WO2012025580A1 publication Critical patent/WO2012025580A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/04 Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/03 Aspects of down-mixing multi-channel audio to configurations with lower numbers of playback channels, e.g. 7.1 -> 5.1
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/11 Application of ambisonics in stereophonic audio systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/13 Application of wave-field synthesis in stereophonic audio systems

Definitions

  • In spherical coordinates, the sound field can be expanded onto spherical harmonics as $p(r,\theta,\varphi,\omega) = \sum_{n=0}^{\infty} j_n(kr) \sum_{m=-n}^{n} B_{mn}(\omega)\, Y_{mn}(\theta,\varphi)$, where $j_n(kr)$ is the spherical Bessel function of the first kind of order $n$ and $P_n^m(\sin\theta)$ are the associated Legendre functions defined as $P_n^m(\sin\theta) = \cos^m\theta \left.\frac{d^m P_n(x)}{dx^m}\right|_{x=\sin\theta}$, where $P_n(\sin\theta)$ is the Legendre polynomial of the first kind of degree $n$. $B_{mn}(\omega)$ are referred to as the spherical harmonic decomposition coefficients of the sound field.
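  • As an illustration (not part of the original disclosure), the terms of this expansion can be evaluated numerically. The following minimal Python sketch uses SciPy, whose sph_harm returns complex-valued harmonics rather than the real-valued ambisonic convention; all numeric values are arbitrary examples.

```python
import numpy as np
from scipy.special import spherical_jn, lpmv, sph_harm

c = 343.0                       # assumed speed of sound in m/s
k = 2 * np.pi * 1000.0 / c      # wavenumber at 1 kHz
r = 0.1                         # evaluation radius in metres
elev = np.pi / 6                # elevation angle theta
azim = np.pi / 4                # azimuth angle

n, m = 2, 1                     # order n, degree m
radial = spherical_jn(n, k * r)              # j_n(kr)
legendre = lpmv(m, n, np.sin(elev))          # P_n^m(sin(theta))
Y = sph_harm(m, n, azim, np.pi / 2 - elev)   # complex Y_n^m (SciPy convention)
print(radial, legendre, Y)
```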
  • The spherical harmonics $Y_{mn}$ describe more and more complex patterns of radiation around the origin of the coordinate system as order and degree increase.
  • For a point source, the decomposition coefficients involve $h_n^{(1)}(kr_s)$, the spherical Hankel function of the first kind, evaluated at the source distance $r_s$. The spherical harmonic decomposition coefficients of a point source therefore depend on frequency.
  • These coefficients form the basis of HOA encoding from an object-based description format, where the order is limited to a maximum value $N$, providing $(N+1)^2$ signals.
  • The encoded signals form the $(N+1)^2 \times 1$ vector $B(\omega)$ comprising the encoded signals at frequency $\omega$.
  • Decoding consists in finding the inverse (or pseudo-inverse) matrix $D$ of the $(N+1)^2 \times N_L$ matrix $L$ that contains the $L_{lmn}(\omega)$ coefficients describing the radiation of each loudspeaker in spherical harmonics up to order $N$, such that $v_{ls} = D\,B(\omega)$, where $v_{ls}$ is the $N_L \times 1$ vector containing the alimentation signals of the loudspeakers.
  • Decoding can thus be considered as a beamforming operation in which the HOA encoded signals are combined in a specific way for each output channel so as to form a directive beam in the direction of the target loudspeaker.
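  • A minimal Python sketch of such a mode-matching decoder (an editorial illustration, assuming plane-wave loudspeakers, a hypothetical Fibonacci-spiral layout, and SciPy's complex spherical harmonics rather than the real-valued ambisonic convention):

```python
import numpy as np
from scipy.special import sph_harm

N = 3                                   # ambisonic order
M = (N + 1) ** 2                        # number of HOA components
n_ls = 25                               # hypothetical number of loudspeakers

# Hypothetical loudspeaker directions: Fibonacci spiral over the sphere.
i = np.arange(n_ls)
colat = np.arccos(1 - 2 * (i + 0.5) / n_ls)        # polar angles
azim = (np.pi * (1 + 5 ** 0.5) * i) % (2 * np.pi)  # azimuth angles

def sh_vector(az, co):
    """Stack the (N+1)^2 spherical harmonics for one plane-wave direction."""
    return np.array([sph_harm(m, n, az, co)
                     for n in range(N + 1) for m in range(-n, n + 1)])

# L: M x n_ls matrix of loudspeaker radiation coefficients (plane waves).
L = np.column_stack([sh_vector(az_, co_) for az_, co_ in zip(azim, colat)])
D = np.linalg.pinv(L)                  # decoding matrix (pseudo-inverse of L)

B = np.zeros(M, dtype=complex)
B[0] = 1.0                             # e.g. an omnidirectional component only
v_ls = D @ B                           # loudspeaker alimentation signals
```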
  • The spatially encoded signals are available as spherical harmonics in the vector $B(\omega,\kappa)$, obtained using a Short Time Fourier Transform (STFT) at time frame $\kappa$.
  • A useful quantity for the direction of arrival estimation is the cross-correlation matrix $S_{BB}(\omega,\kappa)$, which can be written as $S_{BB}(\omega,\kappa) = E\{B(\omega,\kappa)\,B^H(\omega,\kappa)\}$, where $E\{\cdot\}$ denotes the expectation operator and $^H$ is the Hermitian transpose operator.
  • In practice, the expectation is replaced by a recursive estimate, $\hat{S}_{BB}(\omega,\kappa) = (1-\lambda)\,\hat{S}_{BB}(\omega,\kappa-1) + \lambda\,B(\omega,\kappa)\,B^H(\omega,\kappa)$, where $\lambda \in [0,1]$ is the forgetting factor as disclosed by Allen J., Berkeley D., and Blauert, J. in "Multi-microphone signal-processing technique to remove room reverberation from speech signals", Journal of the Acoustical Society of America, vol. 62, pp. 912-915, October 1977.
  • A low forgetting factor provides a very accurate estimate of the correlation matrix but cannot properly adapt to changes in the position of the sources.
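  • A minimal sketch of this recursive estimate (an editorial illustration; the convention below is chosen so that a low forgetting factor means long memory, matching the statement above, while other references use the opposite convention):

```python
import numpy as np

def update_correlation(S_prev, B_frame, lam=0.1):
    """One recursion step: S(k) = (1 - lam) * S(k-1) + lam * B B^H."""
    outer = np.outer(B_frame, B_frame.conj())
    return (1.0 - lam) * S_prev + lam * outer

n_comp = 16                                  # (N+1)^2 components for N = 3
S = np.zeros((n_comp, n_comp), dtype=complex)
for _ in range(100):                         # dummy STFT frames for illustration
    B_frame = np.random.randn(n_comp) + 1j * np.random.randn(n_comp)
    S = update_correlation(S, B_frame)
```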
  • The eigenvalue decomposition of $S_{BB}(\omega,\kappa)$ is the basis of the so-called subspace-based direction of arrival estimation methods as disclosed by Teutsch, H. in "Modal Array Signal Processing: Principles and Applications of Acoustic Wavefield Decomposition", Springer, 2007.
  • The eigenvectors are separated into two subspaces: the signal subspace and the noise subspace.
  • The signal subspace is composed of the I eigenvectors corresponding to the I largest eigenvalues.
  • The noise subspace is composed of the remaining eigenvectors.
  • Localizable sources are then found by scanning candidate directions and locating the maxima of a pseudo-spectrum that measures the orthogonality between the candidate steering vectors and the noise subspace. This algorithm is commonly referred to as spectral MUSIC.
  • Several variants exist (root-MUSIC, unitary root-MUSIC, ...) that are detailed in the literature (see Krim H. and Viberg M., "Two decades of array signal processing research - the parametric approach", IEEE Signal Processing Mag., 13(4):67-94, July 1996) and are not reproduced here.
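  • A minimal sketch of spectral MUSIC operating on the harmonic-domain correlation matrix (an editorial illustration; the steering vectors and the assumed source count are inputs, and the candidate directions can be restricted to the reproducible subspace as the invention proposes):

```python
import numpy as np

def spectral_music(S_BB, steering, n_sources):
    """MUSIC pseudo-spectrum over candidate directions.

    S_BB: (M, M) correlation matrix of the harmonic components;
    steering: (M, Q) steering vectors for Q candidate directions;
    n_sources: assumed number of localizable sources I."""
    eigvals, eigvecs = np.linalg.eigh(S_BB)     # eigenvalues in ascending order
    noise = eigvecs[:, :-n_sources]             # noise-subspace eigenvectors
    # Projection of each steering vector onto the noise subspace:
    proj = np.sum(np.abs(noise.conj().T @ steering) ** 2, axis=0)
    return 1.0 / np.maximum(proj, 1e-12)        # peaks indicate source directions
```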
  • The other class of source localization algorithms is commonly referred to as ESPRIT. It is based on the rotational invariance characteristics of the microphone array or, in this context, of the spherical harmonics. The complete formulation of the ESPRIT algorithm for spherical harmonics is disclosed by Teutsch, H. in "Modal Array Signal Processing: Principles and Applications of Acoustic Wavefield Decomposition", Springer, 2007. It is very complex in its formulation and is therefore not reproduced here.
  • In a first embodiment, a linear array of physical loudspeakers 3 is used for the reproduction of a 5.1 input signal.
  • This embodiment is shown in Fig. 5.
  • The target listening area 5 is relatively large and is used for computing the reproducible subspace together with loudspeaker positioning data, considering the loudspeaker array as a window as disclosed by Corteel E. in "Equalization in extended area using multichannel inversion and wave field synthesis", Journal of the Audio Engineering Society, 54(12), December 2006.
  • The second audio input signals 10 are thus composed of the frontal channels of the 5.1 input (L/R/C).
  • The third audio input channels 12 are formed by the rear components of the 5.1 input (Ls and Rs channels).
  • The spatial analysis makes it possible to extract virtual sources 21, which are then reproduced using WFS on the physical loudspeakers at their intended location.
  • The remaining components of the second audio input signals are decoded on 3 frontal virtual loudspeakers 22 located at the intended positions of the L/R/C channels (-30, 0, 30 degrees) as plane waves.
  • The third audio input signals are reproduced using virtual loudspeakers located at the boundaries of the reproducible subspace using WFS.
  • In a second embodiment, shown in Fig. 6, a circular horizontal array of physical loudspeakers 3 is used for the reproduction of a 10.2 input signal.
  • 10.2 is a channel-based reproduction format which comprises 10 broadband loudspeaker channels, among which 8 channels are located in the horizontal plane and 2 are located at 45 degrees elevation and +/- 45 degrees azimuth, as disclosed by Martin G. in "Introduction to Surround Sound Recording", available at http://www.tonmeister.ca/main/textbook/.
  • The second audio input signals 10 are thus composed of the horizontal channels of the 10.2 input.
  • The third audio input channels 12 are formed by the elevated components of the 10.2 input.
  • The spatial analysis makes it possible to extract virtual sources 21, which are then reproduced using WFS on the physical loudspeakers at their intended location.
  • The remaining components of the second audio input signals are decoded on 5 regularly spaced surrounding virtual loudspeakers 22 located at (0, 72, 144, 216, 288 degrees) as plane waves.
  • This configuration enables improved decoding of the HOA encoded signals using a regular channel layout and a frequency independent decoding matrix.
  • The remaining components can be rendered using a lower number of virtual loudspeakers.
  • The third audio input signals are reproduced using virtual loudspeakers located at +/- 45 degrees using WFS.
  • In a third embodiment, an upper half-spherical array of physical loudspeakers 3 is used for the reproduction of an HOA encoded signal up to order 3.
  • This embodiment is shown in Fig. 7.
  • The HOA encoded signals are first decoded on a set of L ≥ (N+1)² virtual loudspeakers considered as plane waves, whose directions regularly sample the sphere.
  • Such sampling techniques are disclosed by Zotter F. in "Analysis and Synthesis of Sound-Radiation with Spherical Arrays", PhD thesis, Institute of Electronic Music and Acoustics, University of Music and Performing Arts, 2009.
  • The second audio input channels 10 are thus simply extracted by selecting the virtual loudspeakers located in the upper half space.
  • The sound field description data 11 associated to the second audio input channels thus simply correspond to the directions of the selected virtual loudspeaker setup.
  • The remaining decoded channels therefore form the third audio input signals 12, and their directions give the associated sound field description data 13.
  • The spatial analysis is performed in the spherical harmonics domain by first re-encoding the second audio input signals 10.
  • The extracted sources 21 are then reproduced on the physical loudspeakers 3 using WFS.
  • The remaining components of the second audio input signals 10 are then combined with the third audio input signals 12 to form fifth audio input signals 17, which are reproduced as virtual loudspeakers 22 on the physical loudspeakers 3 using WFS.
  • The mapping of the third audio input signals 12 onto the virtual loudspeakers 22 can be achieved by assigning each channel to the closest available virtual loudspeaker 22 or by spreading the energy using stereophonic based panning techniques.
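  • A minimal sketch of the nearest-direction mapping variant (an editorial illustration with hypothetical names; the energy-spreading variant would replace the hard assignment with panning gains):

```python
import numpy as np

def map_to_nearest(channel_dirs, vls_dirs, signals):
    """Assign each input channel to the closest virtual loudspeaker.

    channel_dirs: (C, 3) unit vectors of the input channel directions;
    vls_dirs: (V, 3) unit vectors of the virtual loudspeakers;
    signals: (C, T) audio; returns (V, T) virtual loudspeaker feeds."""
    out = np.zeros((vls_dirs.shape[0], signals.shape[1]))
    for c, d in enumerate(channel_dirs):
        nearest = np.argmax(vls_dirs @ d)   # largest dot product = smallest angle
        out[nearest] += signals[c]
    return out
```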

Abstract

The invention relates to a method and a device for sound field reproduction into a listening area (5) of spatially encoded first audio input signals (1) according to sound field description data (2) using an ensemble of physical loudspeakers (3). The method comprises the steps of computing reproduction subspace description data (8) from loudspeaker positioning data (4) describing the subspace in which virtual sources can be reproduced with the physically available setup. Then, second (10) and third (12) audio input signals with associated sound field description data (11, 13) are extracted, wherein second audio input signals (10) comprise spatial components of the first audio input signals (1) located within the reproducible subspace (6) and third audio input signals (12) comprise spatial components of the first audio input signals (1) located outside of the reproducible subspace (6). A spatial analysis is performed on second audio input signals (10) so as to extract fourth audio input signals (15) corresponding to localizable sources within the reproducible subspace (6) with associated source positioning data (16). Remaining components of second audio input signals (10) after spatial analysis are merged with third audio input signals (12) into fifth audio input signals (17) with associated sound field description data (18) for reproduction within the reproducible subspace (6). Finally, loudspeaker alimentation signals (20) are computed from fourth (15) and fifth (17) audio input signals according to loudspeaker positioning data (4), localizable sources positioning data (16) and sound field description data (18).

Description

Method and device for enhanced sound field reproduction of spatially encoded audio input signals
The invention relates to a method and a device for efficient 3D sound field reproduction using loudspeakers. Sound field reproduction relates to the reproduction of the spatial characteristics of a sound scene within an extended listening area. First, the sound scene should be encoded into a set of audio signals with associated sound field description data. Then, it should be reproduced/decoded on the available loudspeaker setup.
There exists an increasing variety of so-called audio formats (stereo, 5.1, 7.1, 9.1, 10.2, 22.2, HOA, MPEG-4, ...) which need to be reproduced on the available rendering system using loudspeakers or headphones. However, the available loudspeaker setup usually does not conform to the standard of the audio format, for both economical and practical reasons. The audio format may indeed require too large a number of loudspeakers, which would need to be positioned at impractical positions in most environments. The required loudspeaker system might also be too expensive for a large number of installations. Therefore, there is a need for advanced rendering methods and devices for optimizing reproduction on the available loudspeaker setup.
Description of state of the art
In the description of the state of the art, the spatial encoding methods are described first, highlighting their limitations. In a second part, state of the art audio spatial reproduction techniques are presented.
Encoding of spatial sound scene
There exist two types of sound field description:
- the object-based description,
- the physical description.
The object-based description provides a spatial description of the causes (the acoustic sources), their acoustic radiation characteristics (directivity) and their interaction with the environment (room effect). This format is very generic but it suffers from two major drawbacks. First, the number of audio channels increases linearly with the number of sources. Therefore, a very high number of channels needs to be transmitted to describe complex scenes, together with associated description data, making it unsuitable for low bandwidth applications (mobile devices, conferencing, ...). Second, the mixing parameters are completely revealed to the users and may be altered. This limits the intellectual property protection of the sound engineers, therefore reducing the acceptance of such a format.
The physical description intends to provide a physically correct description of the sound field within an extended area. It provides a global description of the consequences, i.e. the sound field, as opposed to the object-based description that describes the causes, i.e. the sources. There again exist two types of physical description:
• the boundary description,
• the spatial Eigen function decomposition.
The boundary description consists in describing the pressure and the normal velocity of the target sound field at the boundaries of a fixed size reproduction subspace. According to the so-called Kirchhoff-Helmholtz integral, this description provides a unique representation of the sound field within the inner listening subspace. In theory, a continuous distribution of recording points is required, leading to an infinite number of audio channels. Performing a spatial sampling of the description surface can reduce the number of audio channels. This however introduces so-called spatial aliasing, which causes audible artefacts. Moreover, the sound field is only described within a defined reproduction subspace that is not easily scalable. Therefore, the boundary description cannot be used in practice.
The Eigen function description corresponds to a decomposition of the sound field into Eigen solutions of the wave equation in a given coordinate system (plane waves in Cartesian coordinates, spherical harmonics in spherical coordinates, cylindrical harmonics in cylindrical coordinates, ...). Such functions form a basis of infinite dimension for sound field description in 3D space.
The High Order Ambisonics (HOA) format describes the sound field using spherical harmonics up to a so-called order N. $(N+1)^2$ components are required for a description up to order N; they are indexed by so-called order and degree. This format is disclosed by J. Daniel in "Spatial sound encoding including near field effect: Introducing distance coding filters and a viable, new ambisonic format", 23rd International Conference of the Audio Engineering Society, Helsingør, Denmark, June 2003. Fig. 1 describes the equivalent radiation characteristics of spherical harmonics for N=3. It can be seen that higher orders correspond to more complex radiation patterns in elevation, whereas higher absolute degrees induce more complex radiation patterns in the azimuthal dimension.
As any other sound field description, the HOA description is independent of the reproduction setup. This description additionally keeps mixing parameters hidden from the end users.
HOA provides however a physically accurate description only in a limited area around the origin of the spherical coordinate system. This area has the shape of a sphere with radius $r_{max} = N\lambda/(2\pi)$, where $\lambda$ is the wavelength. Therefore, a physically correct description for a typical head size over the entire audio bandwidth (20-20000 Hz) would require an order of about 20 (i.e. 441 components). Practical use of HOA usually considers maximum orders comprised between 1 (4 channels, the so-called B-format) and 4 (i.e. 25 audio channels). HOA thus introduces localization errors and localization blur of sound events of the sound scene even at the ideal centered listening position, artefacts which become less disturbing at higher orders as disclosed by S. Bertet, J. Daniel, E. Parizet, and O. Warusfel in "Investigation on the restitution system influence over perceived higher order Ambisonics sound field: a subjective evaluation involving from first to fourth order systems", in Proc. Acoustics-08, Joint ASA/EAA meeting, Paris, 2008.
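As a quick sanity check of this rule (an editorial illustration assuming c = 343 m/s and a head-sized radius of about 5.5 cm), the smallest order N satisfying N ≥ kr at 20 kHz can be computed as follows:

```python
import numpy as np

c, f = 343.0, 20000.0      # speed of sound (m/s), upper audio frequency (Hz)
r = 0.055                  # radius of the accurate area, roughly head sized (m)
N = int(np.ceil(2 * np.pi * f * r / c))   # smallest order with N >= k * r
print(N)                   # -> 21, consistent with the order ~20 quoted above
```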
The plane wave based physical description also requires an infinite number of components in order to provide an accurate description of the sound field in 3D space. A plane wave can be described as resulting from a source at an infinite distance from the reference point, describing a fixed direction independently of the listening point. Nowadays, stereophonic based formats (stereo, 5.1, 7.1, 22.2, ...) can be related to a plane wave description using a reduced number of components. They indeed carry audio information that should be reproduced using loudspeakers located at specific directions in reference to an optimum listening point (origin of the Cartesian system).
The audio channels of stereophonic or channel-based formats are obtained by positioning virtual sources using so-called panning laws. Panning laws typically spread the energy of the audio input channel of the source over two or more output audio channels to simulate a virtual position in between loudspeaker directions. These techniques are based on stereophonic principles that are essentially used in the horizontal plane but can be extended to 3D using VBAP as disclosed by V. Pulkki in "Virtual sound source positioning using vector base amplitude panning", Journal of the Audio Engineering Society, 45(6), June 1997. Stereophonic principles create an illusion that is only valid at the reference listening point (the so-called sweet spot). Outside of the sweet spot, the illusion vanishes and sources are localized on the closest loudspeaker. Localization in height using stereophonic principles is also limited, as disclosed by W. de Bruijn in "Application of Wave Field Synthesis in Videoconferencing", PhD thesis, TU Delft, Delft, the Netherlands, 2004. Localization is shown to be very imprecise and blurred.
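For illustration (not part of the original disclosure), a minimal constant-power pairwise panning law, of which the tangent law and VBAP are refinements, can be sketched as follows:

```python
import numpy as np

def constant_power_pan(x):
    """x in [0, 1]: 0 = fully left, 1 = fully right.

    Returns gains (g_left, g_right) with g_left**2 + g_right**2 == 1."""
    angle = x * np.pi / 2
    return np.cos(angle), np.sin(angle)

g_left, g_right = constant_power_pan(0.5)   # centred source: both gains ~0.707
```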
The encoding of sound sources into spherical harmonics can also be described as equivalent panning functions using loudspeakers located on a sphere, as disclosed by M. Poletti in "Three-dimensional surround sound systems based on spherical harmonics", Journal of the Audio Engineering Society, 53(11):1004-1025, November 2005. Therefore, it can be understood that HOA suffers from artefacts similar to those of channel-based description formats.
Sound field reproduction techniques
Sound reproduction techniques can be classified into two groups:
- passive reproduction techniques that directly reproduce the spatially encoded signals,
- active reproduction techniques that first perform a spatial analysis of the content in order to typically increase the precision of the spatial description before reproduction.
Passive reproduction techniques
The first passive sound field reproduction technique described here is referred to as Wave Field Synthesis (WFS). WFS relies on the recreation of the curvature of the wave front of an acoustic field emitted by a virtual source (object-based description) using a plurality of loudspeakers within an extended listening area which typically spans the entire reproduction space. This method has been disclosed by A.J. Berkhout in "A holographic approach to acoustic control", Journal of the Audio Engineering Society, Vol. 36, pp. 977-995, 1988. In its original description, WFS is limited to horizontal sound field reproduction using horizontal loudspeaker arrays. However, WFS can readily be derived for 3D reproduction as disclosed by Munenori N., Kimura T., Yamakata, Y. and Katsumoto, M. in "Performance Evaluation of 3D Sound Field Reproduction System Using a Few Loudspeakers and Wave Field Synthesis", Second International Symposium on Universal Communication, 2008. WFS is a very flexible sound reproduction method that can easily adapt to any convex loudspeaker array shape.
The main drawback of WFS is known as spatial aliasing. Spatial aliasing results from the use of individual loudspeakers instead of a continuous line or surface. However, it is possible to reduce spatial aliasing artefacts by considering the size of the listening area as disclosed in WO2009056508.
Channel-based formats can easily be reproduced with WFS using virtual loudspeakers. Virtual loudspeakers are virtual sources that are positioned at the intended positions of the loudspeakers according to the channel-based format (+/- 30 degrees for stereo, ...). These virtual loudspeakers are preferably reproduced as plane waves as disclosed by Boone, M. and Verheijen E. in "Sound Reproduction Applications with Wave-Field Synthesis", 104th Convention of the Audio Engineering Society, 1998. This ensures that they are perceived at the intended angular position throughout the listening area, which tends to extend the size of the sweet spot (the area where the stereophonic illusion works). However, there remains a modification of the relative delays between channels with respect to listening position, due to travel time differences from the physical loudspeaker layout, that limits the size of the sweet listening area.
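A minimal sketch of the delay part of such a plane-wave driving for a horizontal array (an editorial illustration; complete WFS driving functions additionally include filtering and windowing not shown here):

```python
import numpy as np

def plane_wave_delays(speaker_pos, azimuth, c=343.0):
    """Delays (s) making a horizontal loudspeaker array emit a plane wave.

    speaker_pos: (L, 2) loudspeaker coordinates in metres;
    azimuth: direction the plane wave arrives from, in radians."""
    incoming = np.array([np.cos(azimuth), np.sin(azimuth)])
    proj = speaker_pos @ incoming      # projection towards the source direction
    return (proj.max() - proj) / c     # loudspeakers nearest the source fire first

# Example: 8 loudspeakers spaced 0.2 m apart, plane wave from 30 degrees.
positions = np.column_stack([np.arange(8) * 0.2, np.zeros(8)])
delays = plane_wave_delays(positions, np.deg2rad(30.0))
```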
HOA rendering
The reproduction of HOA encoded material is usually realized by synthesizing spherical harmonics over a given set of at least $(N+1)^2$ loudspeakers, where N is the order of the HOA format. This "decoding" technique is commonly referred to as the mode matching solution. The main operation consists in inverting a matrix L that contains the spherical harmonic decomposition of the radiation characteristics of each loudspeaker, as disclosed by R. Nicol in "Sound spatialization by higher order ambisonics: Encoding and decoding a sound scene in practice from a theoretical point of view", in Proceedings of the 2nd International Symposium on Ambisonics and Spherical Acoustics, 2010. The matrix L can easily be ill-conditioned, especially for arbitrary loudspeaker layouts, and depends on frequency. The decoding performs best for a fully regular loudspeaker layout on a sphere with exactly $(N+1)^2$ loudspeakers in 3D. In this case, the inverse of matrix L is simply the transpose of L. Moreover, the decoding might be made independent of frequency if the loudspeakers can be considered as plane waves, which is often not the case in practice.
Another solution for HOA rendering over loudspeakers is disclosed by Corteel E., Roux S. and Warusfel O. in "Creation of Virtual Sound Scenes Using Wave Field Synthesis", in Proceedings of the 22nd Tonmeistertagung VDT International Audio Convention, Hannover, Germany, 2002. The reproduction of HOA encoded material is described by first decoding the HOA encoded scene into audio channels that are later reproduced through virtual loudspeakers on a real loudspeaker setup using WFS. It is recommended to reproduce virtual loudspeakers as plane waves to increase the listening area with HOA or stereophonic encoded material. The use of plane waves additionally simplifies the decoding of HOA encoded signals, since the decoding matrix is then independent of frequency.
A similar technique is later described in US2010/0092014 A1. However, very few details are given regarding the positioning of the virtual loudspeakers. This patent application is more directed towards reduction of reproduction cost by realizing all movements of virtual sources in the spatially encoded format using either multichannel panning, VBAP or HOA.
Other methods: sound field optimization methods within restricted subspace
The main limitation for sound field reproduction is the required number of loudspeakers and their placement within the room. Full 3D reproduction would require placing loudspeakers on a surface surrounding the listening area. In practice, reproduction systems are thus limited to simpler loudspeaker layouts that can be horizontal, as for the majority of WFS systems, or even frontal only. At best, loudspeakers are positioned on the upper half sphere as described by Zotter F., Pomberger H., and Noisternig M. in "Ambisonic decoding with and without mode-matching: a case study using the hemisphere", 2nd International Symposium on Ambisonics and Spherical Acoustics, 2010.
Active rendering: upmixing
Active rendering of spatially encoded input signals has been mostly applied in the field of upmixing systems. Upmix consists in performing a spatial analysis to separate localizable sounds from diffuse sounds, and typically creates more audio output signals than audio input signals. Classical applications of upmix consider enhanced playback of stereo signals on a 5.1 rendering system.
Methods in the prior art first decompose the audio input signals into frequency bands. The spatial analysis is then performed in each frequency band independently using different techniques:
• method 1: comparing directional channels by pairs using for example real valued correlation metrics as disclosed in WO2007026025 or complex valued correlation metrics as disclosed in US20090198356;
• method 2: obtaining direction and diffuseness from "Gerzon vectors", i.e. velocity and intensity vectors for channel-based formats as disclosed in US20070269063;
• method 3: using principal component analysis of the correlation matrix to extract the main direction from channel-based formats as disclosed in US20080175394;
• method 4: computing an intensity vector out of 1st order Ambisonics by combining the omnidirectional component and the dipoles to evaluate diffuseness and direction of incidence as disclosed in US20080232616.
The first three methods are mostly based on channel-based formats, whereas the last one considers only first order Ambisonics inputs. However, the related patents describe techniques to either translate the Ambisonics format into a channel-based format by performing decoding on a given virtual loudspeaker setup, or alternatively to consider the directions of the channel-based format as plane waves and decompose them into spherical harmonics to create an equivalent Ambisonics format.
These spatial analysis techniques all suffer from the same type of problems. They only allow for a limited precision since only one source direction can typically be estimated per frequency band. The analysis is usually performed on the full space. Strong interferers located at positions that cannot be reproduced by the available loudspeaker setup can easily disturb the analysis. Therefore, important sources located in the reproducible subspace may be missed.
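As an illustration of method 4 above (an editorial sketch; scalings and the time/frequency averaging needed for a meaningful diffuseness estimate vary between the cited patents and are omitted here):

```python
import numpy as np

def dir_and_diffuseness(W, X, Y, Z):
    """W, X, Y, Z: complex STFT bins of the four B-format channels."""
    I = np.real(np.conj(W) * np.array([X, Y, Z]))   # active intensity, up to scale
    E = 0.5 * (np.abs(W) ** 2 + np.mean(np.abs(np.array([X, Y, Z])) ** 2))
    azimuth = np.arctan2(I[1], I[0])
    elevation = np.arctan2(I[2], np.hypot(I[0], I[1]))
    diffuseness = 1.0 - np.linalg.norm(I) / np.maximum(E, 1e-12)
    return azimuth, elevation, diffuseness
```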
Drawbacks of state of the art
Sound field reproduction systems according to the state of the art suffer from several drawbacks. First, the encoding of the sound field into a limited set of components (channel-based encoding or HOA) reduces the quality of the spatial description of the sound scene and the size of the listening area. Second, the spatial analysis procedures used in active reproduction systems to improve the spatial encoding resolution are limited in their capabilities, since they can only extract one source per considered frequency band. Moreover, the spatial analysis procedures do not account for the limited reproducible subspace imposed by the reproduction setup, which would make it possible to limit the influence of strong interferers located outside of the reproducible subspace and to focus the analysis on the reproducible subspace only.
Aim of the invention
The aim of the invention is to increase the spatial performance of sound field reproduction with spatially encoded audio signals in an extended listening area by properly accounting for the capabilities of the rendering system. It is another aim of the invention to propose advanced spatial analysis techniques for improving the sound field description before reproduction. It is another aim of the invention to account for the capabilities of the reproduction setup so as to focus the spatial analysis of the audio input signals on the reproducible subspace and limit the influence of strong interferers that cannot be reproduced with the available loudspeaker setup.
Summary of the invention
The invention consists in a method and a device in which a reproducible subspace is defined based on the capabilities of the reproduction setup. Based on this reproducible subspace description, audio signals located within the reproducible subspace are extracted from the spatially encoded audio input signals. A spatial analysis is performed on the extracted audio input signals to extract the main localizable sources within the reproducible subspace. The remaining signals and the portion of the audio input signals located outside of the reproducible subspace are then mapped within the reproducible subspace. The latter and the extracted sources are then reproduced as virtual sources/loudspeakers on the physically available loudspeaker setup.
The spatial analysis is preferably performed in the spherical harmonics domain. It is proposed to adapt direction of arrival estimation techniques developed in the field of microphone array processing, as disclosed by Teutsch, H. in "Modal Array Signal Processing: Principles and Applications of Acoustic Wavefield Decomposition", Springer, 2007. These methods make it possible to estimate multiple sources simultaneously in the presence of spatially distributed noise. They were described for direction of arrival estimation of sources and beamforming using circular (2D) or spherical (3D) distributions of microphones in the cylindrical (2D) or spherical (3D) harmonics domain.
In other words, there is presented here a method for sound field reproduction into a listening area of spatially encoded first audio input signals according to sound field description data using an ensemble of physical loudspeakers. The method comprises the steps of computing reproduction subspace description data from loudspeaker positioning data describing the subspace in which virtual sources can be reproduced with the physically available setup. Second and third audio input signals with associated sound field description data are extracted from the first audio input signals such that the second audio input signals comprise spatial components of the first audio input signals located within the reproducible subspace and the third audio input signals comprise spatial components of the first audio input signals located outside of the reproducible subspace. Then, a spatial analysis is performed on the second audio input signals so as to extract fourth audio input signals corresponding to localizable sources within the reproducible subspace with associated source positioning data. Remaining components of the second audio input signals after spatial analysis are merged with the third audio input signals, forming fifth audio input signals with associated sound field description data for reproduction within the reproducible subspace. Finally, loudspeaker alimentation signals are computed from the fourth and fifth audio input signals according to loudspeaker positioning data, localizable sources positioning data and sound field description data.
Furthermore, the method may comprise steps wherein the sound field description data correspond to eigen solutions of the wave equation (plane waves, spherical harmonics, cylindrical harmonics, ...) or incoming directions (channel-based formats: stereo, 5.1, 7.1, 10.2, 12.2, 22.2). And the method may comprise steps:
• wherein the spatial analysis is performed by first converting, if necessary, the second audio input signals into spherical (3D) or cylindrical (2D) harmonic components; second, identifying the direction of arrival/sound field description data of the main localizable sources within the reproducible subspace; and forming beam patterns by combination of spherical harmonics having their main lobe in the estimated direction of arrival in order to extract the fourth audio input signals from the second audio input signals.
• wherein the sound field description data of the fourth audio input signals are estimated using a subspace direction of arrival estimation method, derived for example from a MUSIC or ESPRIT based algorithm, operating in the spherical (3D) or cylindrical (2D) harmonics domain.
• wherein the reproducible subspace description data are computed according to the loudspeaker positioning data (4) and the listening area description data (23).
Moreover, the invention comprises a device for sound field reproduction into a listening area of spatially encoded first audio input signals according to sound field description data using an ensemble of physical loudspeakers. Said device comprises a reproducible subspace computation device for computing reproduction subspace description data from loudspeaker positioning data describing the subspace in which virtual sources can be reproduced with the physically available setup. Said device further comprises a reproducible subspace audio selection device for extracting second and third audio input signals with associated sound field description data, wherein second audio input signals comprise spatial components of the first audio input signals located within the reproducible subspace and third audio input signals comprise spatial components of the first audio input signals located outside of the reproducible subspace. Said device also comprises a sound field transformation device operating on the second audio input signals so as to extract fourth audio input signals corresponding to localizable sources within the reproducible subspace with associated source positioning data, and merging the remaining components of the second audio input signals after spatial analysis and the third audio input signals into fifth audio input signals with associated sound field description data for reproduction within the reproducible subspace. Said device finally comprises a spatial sound rendering device in order to compute loudspeaker alimentation signals from the fourth and fifth audio input signals according to loudspeaker positioning data, localizable sources positioning data and sound field description data of the fifth audio input signals. Furthermore, said device may preferably comprise elements:
• wherein the reproducible subspace computation device computes the reproducible subspace description data according to the loudspeaker positioning data and the listening area description data.
• wherein the spatial sound rendering device computes loudspeaker alimentation signals according to loudspeaker positioning data, the listening area description data, localizable sources positioning data and sound field description data of the fifth audio input signals.
The invention will be described in more detail hereinafter with the aid of examples and with reference to the attached drawings, in which
Fig. 1 describes the radiation patterns of spherical harmonics.
Fig. 2 describes a sound reproduction system according to prior art.
Fig. 3 describes a sound reproduction system according to the invention.
Fig. 4 describes beamforming by combination of spherical harmonics of maximum order 3.
Fig. 5 describes a first embodiment according to the invention.
Fig. 6 describes a second embodiment according to the invention.
Fig. 7 describes a third embodiment according to the invention.
Detailed description of figures
Fig. 1 was discussed in the introductory part of the specification and represents the state of the art. Therefore this figure is not further discussed at this stage.
Fig. 2 represents a soundfield rendering device according to the state of the art. In this device, a decoding/spatial analysis device 24 calculates a plurality of decoded audio signals 25 and their associated sound field positioning data 26 from first audio input signals 1 and their associated sound field description data 2. Depending on the implementation, the decoding/spatial analysis device 24 may realize either the decoding of HOA encoded signals or the spatial analysis of first audio input signals 1. The positioning data 26 describe the position of target virtual loudspeakers 21 to be synthesized on the physical loudspeakers 3.
A spatial sound rendering device 19 computes alimentation signals 20 for physical loudspeakers 3 from decoded audio signals 25, their associated sound field description data 26 and loudspeakers positioning data 4. The alimentation signals for physical loudspeakers 20 drive a plurality of loudspeakers 3.
Fig. 3 represents a soundfield rendering device according to the invention. In this device, a reproducible subspace computation device 7 is computing reproducible subspace description data 8 from loudspeaker positioning data 4. A reproducible subspace audio selection device 9 extracts second audio input signals 10 and their associated sound field description data 11, and third audio input signals 12 and their associated sound field description data 13 from first audio input signals 1, their associated sound field description data 2 and reproducible subspace description data 8 such that second audio input signals 10 comprise elements of first audio input signals 1 that are located within the reproducible subspace 6 and third audio input signals 12 comprise elements of first audio input signals 1 that are located outside the reproducible subspace 6. A sound field transformation device 14 computes fourth audio input signals 15 and their associated positioning data 16 by extracting localizable sources from second audio input signals 10 within the reproducible subspace 6. The sound field transformation device 14 additionally computes fifth audio input signals 17 and their associated positioning data 18 from remaining components of second audio input signals 10 and their associated sound field description data 11 after localizable sources extraction and third audio input signals 12 and their associated sound field description data 13. The positioning data 18 of fifth audio input signals 17 correspond to fixed virtual loudspeakers 21 located within the reproducible subspace 6. A spatial sound rendering device 19 computes alimentation signals 20 for physical loudspeakers 3 from the fourth audio input signals 15 and their associated positioning data 16, fifth audio input signals 17 and their associated positioning data 18, and loudspeakers positioning data 4. The alimentation signals for physical loudspeakers 20 drive a plurality of loudspeakers 3 so as to reproduce the target sound field within the listening area 5.
Mathematical foundations:
The derivations presented here are only given in the spherical harmonics domain that is adapted for describing sound fields in 3 dimensions (3D). For 2 dimensional sound fields (2D), the same derivations can be done using a limited subset of cylindrical harmonics that are independent of the vertical coordinate (z axis).
For the interior problem, where no sources are located within the listening area, the sound field at a point $\mathbf{r}$ ($r$: radius, $\varphi$: azimuth angle, $\theta$: elevation angle) can be uniquely expressed as a weighted sum of so-called spherical harmonics $Y_{mn}(\varphi,\theta)$ as:

$$p(\mathbf{r},\omega)=\sum_{n=0}^{\infty} j_n(kr)\sum_{m=-n}^{n} B_{mn}(\omega)\,Y_{mn}(\varphi,\theta)$$

The spherical harmonics $Y_{mn}(\varphi,\theta)$ of degree $m$ and order $n$ are given by

$$Y_{mn}(\varphi,\theta)=\sqrt{(2n+1)\,\frac{(n-|m|)!}{(n+|m|)!}}\;P_{n}^{|m|}(\sin\theta)\times\begin{cases}\cos(m\varphi) & \text{if } m\geq 0\\ \sin(-m\varphi) & \text{if } m<0\end{cases}$$

where $j_n(kr)$ is the spherical Bessel function of the first kind of order $n$ and the $P_n^{|m|}(\sin\theta)$ are the associated Legendre functions defined as

$$P_n^{m}(\sin\theta)=\cos^{m}\theta\;\frac{\mathrm{d}^{m}P_n(\sin\theta)}{\mathrm{d}(\sin\theta)^{m}}$$

where $P_n(\sin\theta)$ is the Legendre polynomial of the first kind of degree $n$. The $B_{mn}(\omega)$ are referred to as the spherical harmonic decomposition coefficients of the sound field.
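To make the formula concrete, here is a minimal Python sketch of a real-valued spherical harmonic evaluator. It assumes the $(2n+1)(n-|m|)!/(n+|m|)!$ normalization recovered from the text above (common HOA conventions such as N3D or SN3D differ only by constant factors per component), and the function name is ours, not part of the specification.

```python
import numpy as np
from scipy.special import lpmv, factorial

def sph_harm_real(m, n, azimuth, elevation):
    """Real-valued spherical harmonic Y_mn(azimuth, elevation).

    Normalization: sqrt((2n+1) (n-|m|)!/(n+|m|)!), as recovered from
    the text; angles are in radians.
    """
    am = abs(m)
    norm = np.sqrt((2 * n + 1) * factorial(n - am) / factorial(n + am))
    # scipy's lpmv includes the Condon-Shortley phase (-1)^m; undo it so
    # the result matches the unsigned associated Legendre function used here.
    legendre = (-1.0) ** am * lpmv(am, n, np.sin(elevation))
    trig = np.cos(m * azimuth) if m >= 0 else np.sin(-m * azimuth)
    return norm * legendre * trig
```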
The spherical harmonics $Y_{mn}(\varphi,\theta)$ are displayed in figure 1 for orders $n$ ranging from 0 to 3 and all possible degrees. With increasing order, the spherical harmonics therefore describe more and more complex patterns of radiation around the origin of the coordinate system.
For a plane wave of magnitude $\bar{S}$ originating from $(\varphi_{pw},\theta_{pw})$, the spherical harmonic decomposition coefficients $B_{mn}(\omega)$ are given by:

$$B_{mn}(\omega)=\bar{S}\,Y_{mn}(\varphi_{pw},\theta_{pw})$$

which are independent of frequency.
For a point source of magnitude $\bar{S}$ originating from $(r_{sw},\varphi_{sw},\theta_{sw})$, the spherical harmonic decomposition coefficients $B_{mn}(\omega)$ are given by:

$$B_{mn}(\omega)=\bar{S}\,\mathrm{i}k\,h_n^{-}(k r_{sw})\,Y_{mn}(\varphi_{sw},\theta_{sw})$$

where $h_n^{-}$ is the spherical Hankel function of the first kind. The spherical harmonic decomposition coefficients for a point source therefore depend on frequency.
These coefficients form the basis of HOA encoding from an object-based description format, where the order is limited to a maximum value $N$, providing $(N+1)^2$ signals. The encoded signals form the $(N+1)^2 \times 1$ sized matrix $\mathbf{B}$ comprising the encoded signals at frequency $\omega$.
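As an illustration, a plane-wave HOA encoder can be sketched in a few lines of Python, reusing the `sph_harm_real` helper from the sketch above; the function name and signal layout are ours.

```python
import numpy as np

def hoa_encode_plane_waves(source_signals, directions, order):
    """Encode I plane-wave sources into (order+1)**2 HOA signals.

    source_signals: (I, T) array of time-domain samples.
    directions: I (azimuth, elevation) pairs in radians.
    Returns the (order+1)**2 x T matrix B of encoded signals.
    """
    # Channel enumeration: n = 0..order, m = -n..n.
    channels = [(m, n) for n in range(order + 1) for m in range(-n, n + 1)]
    # Array manifold: one column of Y_mn coefficients per source direction.
    V = np.array([[sph_harm_real(m, n, az, el) for (az, el) in directions]
                  for (m, n) in channels])
    return V @ np.asarray(source_signals)
```

For a single frontal source, `hoa_encode_plane_waves(sig[None, :], [(0.0, 0.0)], 4)` would yield the 25 encoded signals of an order-4 scene.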
Moreover, they are also used to describe the radiation of the $N_L$ loudspeakers during the decoding process. Decoding consists in finding the inverse (or pseudo-inverse) matrix $\mathbf{D}$ of the $N_L \times (N+1)^2$ matrix $\mathbf{L}$ that contains the $L_{lmn}(\omega)$ coefficients describing the radiation of each loudspeaker in spherical harmonics up to order $N$, such that:

$$\mathbf{v}_{ls}=\mathbf{D}\,\mathbf{B}$$

where $\mathbf{v}_{ls}$ is the $N_L \times 1$ matrix containing the alimentation signals of the loudspeakers.
Decoding can thus be considered as a beamforming operation in which the HOA encoded signals are combined in a specific, different way for each channel so as to form a directive beam in the direction of the target loudspeaker.
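A minimal mode-matching decoder consistent with this description can be sketched as follows, assuming plane-wave loudspeakers and reusing `sph_harm_real` from the earlier sketch; the helper name and the use of an unregularized pseudo-inverse are ours.

```python
import numpy as np

def hoa_decode_matrix(speaker_directions, order):
    """Pseudo-inverse decoding matrix D such that v_ls = D @ B.

    speaker_directions: N_L (azimuth, elevation) pairs of the (virtual
    or physical) loudspeakers, modeled as plane waves.
    """
    channels = [(m, n) for n in range(order + 1) for m in range(-n, n + 1)]
    # Row l of L contains the Y_mn coefficients of loudspeaker l, so that
    # re-encoding the loudspeaker signals gives B = L.T @ v_ls.
    L = np.array([[sph_harm_real(m, n, az, el) for (m, n) in channels]
                  for (az, el) in speaker_directions])
    # The decoder inverts that relation: shape N_L x (order+1)**2.
    return np.linalg.pinv(L.T)
```

Each row of `D` acts as one set of beamforming weights aimed at its loudspeaker.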
Such an operation is described in figure 4, in which the combination of spherical harmonics is achieved using weights corresponding to the $B_{mn}(\omega)$ coefficients obtained for a plane wave originating from $(\varphi_{pw},\theta_{pw})$. It shows a beam with maximum energy in the incoming direction of the plane wave and reduced levels in other directions.
For the direction of arrival estimation, we consider that the spatially encoded signals are available as spherical harmonics in the matrix $\mathbf{B}(\omega,\kappa)$, obtained using a Short Time Fourier Transform (STFT) at instant $\kappa$. We assume here that the matrix $\mathbf{B}(\omega,\kappa)$ is given by the following equation:

$$\mathbf{B}(\omega,\kappa)=\mathbf{V}(\omega,\Theta,\kappa)\,\mathbf{S}(\omega,\kappa)+\mathbf{N}(\omega,\kappa)$$

where:

$\mathbf{B}(\omega,\kappa)=[B_1(\omega,\kappa)\;B_2(\omega,\kappa)\;\ldots\;B_M(\omega,\kappa)]^T$ contains the STFT transform of the $M=(N+1)^2$ signals of the HOA encoded scene;

$\mathbf{S}(\omega,\kappa)=[S_1(\omega,\kappa)\;S_2(\omega,\kappa)\;\ldots\;S_I(\omega,\kappa)]^T$ contains the STFT transform of the $I$ source signals at instant $\kappa$ and frequency $\omega$;

$\mathbf{N}(\omega,\kappa)=[N_1(\omega,\kappa)\;N_2(\omega,\kappa)\;\ldots\;N_M(\omega,\kappa)]^T$ contains the STFT transform of the $M$ noise signals or diffuse field components, which are assumed to be decorrelated from the source signals.
In the microphone array literature, the matrix $\mathbf{V}(\omega,\Theta,\kappa)$ is commonly referred to as the "array manifold matrix". It describes how each source is captured on the microphone array depending on the array geometry and the directions of incidence of the desired sources $\Theta(\kappa)=[\Theta_1(\kappa)\;\Theta_2(\kappa)\;\ldots\;\Theta_I(\kappa)]^T$.

Assuming that the virtual sources are plane waves, the array manifold vector contains the $B_{mn}(\omega)$ coefficients obtained from the spherical harmonic decomposition of a plane wave of incidence $\Theta_i=(\varphi_i,\theta_i)$ up to order $N$.

The target of direction of arrival algorithms is thus to find the directions $\Theta_i=(\varphi_i,\theta_i)$, $i=1,\ldots,I$, for all sources of the sound scene.
A useful quantity for the direction of arrival estimation is the cross correlation matrix $\mathbf{S}_{BB}(\omega,\kappa)$ that can be written as

$$\mathbf{S}_{BB}(\omega,\kappa)=E\{\mathbf{B}(\omega,\kappa)\mathbf{B}^{H}(\omega,\kappa)\}=\mathbf{V}(\omega,\kappa)\,\mathbf{S}_{SS}(\omega,\kappa)\,\mathbf{V}^{H}(\omega,\kappa)+\mathbf{S}_{NN}(\omega,\kappa)$$

where $E\{\cdot\}$ denotes the expectation operator and $H$ is the Hermitian transpose operator. The noise spectral matrix is assumed to be $\mathbf{S}_{NN}(\omega,\kappa)=\sigma_w^2\,\mathbf{I}$, where $\sigma_w^2$ is the variance of the noise and $\mathbf{I}$ is the identity matrix of size $M \times M$.
An estimate of the spatio-spectral correlation matrix is commonly obtained recursively as:

$$\hat{\mathbf{S}}_{BB}(\omega,\kappa)=\lambda\,\mathbf{B}(\omega,\kappa)\mathbf{B}^{H}(\omega,\kappa)+(1-\lambda)\,\hat{\mathbf{S}}_{BB}(\omega,\kappa-1)$$

where $\lambda\in[0,1]$ is the forgetting factor as disclosed by Allen J., Berkeley D., and Blauert, J. in "Multi-microphone signal-processing technique to remove room reverberation from speech signals", Journal of the Acoustical Society of America, vol. 62, pp. 912-915, October 1977.
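In code, one recursion step of this estimate is nearly a one-liner; the sketch below considers a single frequency bin, with `B_frame` holding the $M$ complex STFT coefficients at instant $\kappa$ (the names are ours).

```python
import numpy as np

def update_correlation(S_prev, B_frame, forgetting):
    """Recursive spatio-spectral correlation estimate for one bin.

    S_prev: previous M x M estimate S_BB(omega, kappa - 1).
    B_frame: length-M complex vector B(omega, kappa).
    forgetting: the factor lambda in [0, 1].
    """
    instantaneous = np.outer(B_frame, B_frame.conj())  # B B^H
    return forgetting * instantaneous + (1.0 - forgetting) * S_prev
```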
A low forgetting factor provides a very accurate estimate of the correlation matrix but is not capable of properly adapting to changes in the position of the sources. In contrast, a high forgetting factor adapts quickly to changes in the sound scene but provides a noisier, less reliable estimate of the correlation matrix. It is then beneficial to decompose the estimate of the spatio-spectral correlation matrix into its eigenvalues $\zeta_i$ and its eigenvectors $\boldsymbol{\xi}_i$, $i=1,\ldots,M$, such that

$$\hat{\mathbf{S}}_{BB}(\omega,\kappa)=\sum_{i=1}^{M}\zeta_i\,\boldsymbol{\xi}_i\boldsymbol{\xi}_i^{H}$$
This eigenvalue decomposition of $\hat{\mathbf{S}}_{BB}(\omega,\kappa)$ is the basis of the so-called subspace-based direction of arrival methods as disclosed by Teutsch, H. in "Modal Array Signal Processing: Principles and Applications of Acoustic Wavefield Decomposition", Springer, 2007. The eigenvectors are separated into two subspaces, the signal subspace and the noise subspace. The signal subspace is composed of the $I$ eigenvectors corresponding to the $I$ largest eigenvalues. The noise subspace is composed of the remaining eigenvectors.
It is now useful to note that, by definition, these subspaces are orthogonal. This observation is the basis of the so-called MUSIC direction of arrival estimation algorithm. The MUSIC algorithm looks for the $I$ array manifold vectors $\mathbf{v}(\Theta)$ that best describe the signal subspace or are, in other words, "most orthogonal" to the noise subspace. We therefore define the so-called pseudo-spectrum $\zeta(\Theta)$ by projecting the array manifold vector onto the noise subspace while varying the direction of arrival $\Theta=(\varphi,\theta)$:

$$\zeta(\Theta)=\mathbf{v}^{H}(\Theta)\,\mathbf{Q}_N\mathbf{Q}_N^{H}\,\mathbf{v}(\Theta)$$

where $\mathbf{Q}_N=[\boldsymbol{\xi}_{I+1}\;\ldots\;\boldsymbol{\xi}_M]$ gathers the noise subspace eigenvectors. The $\Theta_i=(\varphi_i,\theta_i)$, $i=1,\ldots,I$, can thus be obtained as the $I$ minima of $\zeta(\Theta)$.
This algorithm is commonly referred to as spectral MUSIC. There exist many variations of this algorithm (root-MUSIC, unitary root-MUSIC, ...) that are detailed in the literature (see Krim H. and Viberg M., "Two decades of array signal processing research - the parametric approach", IEEE Signal Processing Mag., 13(4):67-94, July 1996) and are not reproduced here. The other class of source localization algorithms is commonly referred to as ESPRIT algorithms. It is based on the rotational invariance characteristics of the microphone array or, in this context, of the spherical harmonics. The complete formulation of the ESPRIT algorithm for spherical harmonics is disclosed by Teutsch, H. in "Modal Array Signal Processing: Principles and Applications of Acoustic Wavefield Decomposition", Springer, 2007. It is rather complex in its formulation and is therefore not reproduced here.
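A grid-search version of spectral MUSIC along these lines can be sketched as follows; the candidate grid, the names and the orthogonality measure are ours, and the manifold vectors are the plane-wave $Y_{mn}$ coefficient vectors introduced above.

```python
import numpy as np

def music_doa(S_bb, manifold, n_sources):
    """Spectral MUSIC: return the n_sources most likely directions.

    S_bb: M x M estimated correlation matrix (Hermitian).
    manifold: dict mapping a candidate direction (azimuth, elevation)
    to its length-M array manifold vector v(theta).
    """
    # eigh returns eigenvalues in ascending order: the noise subspace
    # spans the M - I eigenvectors with the smallest eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(S_bb)
    Qn = eigvecs[:, : S_bb.shape[0] - n_sources]
    # Pseudo-spectrum zeta(theta) = v^H Qn Qn^H v, which is close to
    # zero when v(theta) is orthogonal to the noise subspace, i.e. at
    # a true direction of arrival.
    zeta = {theta: np.linalg.norm(Qn.conj().T @ v) ** 2
            for theta, v in manifold.items()}
    return sorted(zeta, key=zeta.get)[:n_sources]  # the I minima
```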
Description of embodiments
In a first embodiment of the invention, a linear array of physical loudspeakers 3 is used for the reproduction of a 5.1 input signal. This embodiment is shown in Fig. 5. The target listening area 5 is relatively large and it is used for computing the reproducible subspace together with loudspeaker positioning data, considering the loudspeaker array as a window as disclosed by Corteel E. in "Equalization in extended area using multichannel inversion and wave field synthesis", Journal of the Audio Engineering Society, 54(12), December 2006. The second audio input signals 10 are thus composed of the frontal channels of the 5.1 input (L/R/C). The third audio input channels 12 are formed by the rear components of the 5.1 input (Ls and Rs channels). The spatial analysis is achieved in the cylindrical harmonic domain by encoding the second audio input channels into HOA with, for example, N=4. The spatial analysis makes it possible to extract virtual sources 21 which are then reproduced using WFS on the physical loudspeakers at their intended location. The remaining components of the second audio input signals are decoded on 3 frontal virtual loudspeakers 22 located at the intended positions of the LRC channels (-30, 0, 30 degrees) as plane waves. The third audio input signals are reproduced using virtual loudspeakers located at the boundaries of the reproducible subspace using WFS.

In a second embodiment of the invention, a circular horizontal array of physical loudspeakers 3 is used for the reproduction of a 10.2 input signal. This embodiment is shown in Fig. 6. 10.2 is a channel-based reproduction format which comprises 10 broadband loudspeaker channels among which 8 channels are located in the horizontal plane and 2 are located at 45 degrees elevation and +/- 45 degrees azimuth, as disclosed by Martin G. in "Introduction to Surround sound recording", available at http://www.tonmeister.ca/main/textbook/. The second audio input signals 10 are thus composed of the horizontal channels of the 10.2 input. The third audio input channels 12 are formed by the elevated components of the 10.2 input. The spatial analysis is achieved in the cylindrical harmonic domain by encoding the second audio input channels into HOA with, for example, N=4. The spatial analysis makes it possible to extract virtual sources 21 which are then reproduced using WFS on the physical loudspeakers at their intended location. The remaining components of the second audio input signals are decoded on 5 regularly spaced surrounding virtual loudspeakers 22 located at (0, 72, 144, 216, 288 degrees) as plane waves. This configuration enables improved decoding of the HOA encoded signals using a regular channel layout and a frequency independent decoding matrix. Moreover, since strong localizable sources have been extracted by the spatial analysis, the remaining components can be rendered using a lower number of virtual loudspeakers. The third audio input signals are reproduced using virtual loudspeakers located at +/- 45 degrees using WFS.
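To fix ideas on the first embodiment, the channel split can be sketched as below; the 45-degree half-angle standing in for the reproducible subspace is a made-up value, since the actual subspace follows from the loudspeaker positioning data.

```python
# 5.1 channel azimuths in degrees (ITU-R BS.775 nominal positions).
five_one = {"L": 30.0, "C": 0.0, "R": -30.0, "Ls": 110.0, "Rs": -110.0}
half_angle = 45.0  # hypothetical frontal reproducible subspace

# Second signals: within the subspace, fed to the spatial analysis (HOA, N=4).
second = {ch: az for ch, az in five_one.items() if abs(az) <= half_angle}
# Third signals: outside, mapped to virtual loudspeakers at the
# boundaries of the reproducible subspace and rendered with WFS.
third = {ch: az for ch, az in five_one.items() if abs(az) > half_angle}

assert set(second) == {"L", "C", "R"} and set(third) == {"Ls", "Rs"}
```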
In a third embodiment of the invention, an upper half-spherical array of physical loudspeakers 3 is used for the reproduction of a HOA encoded signal up to order 3. This embodiment is shown in Fig. 7. The extraction of the second audio input signals 10 and the third audio input signals 12 is realized by applying a decoding and reencoding scheme. This consists in decoding the first audio input signals 1 onto a virtual loudspeaker setup that performs a regular sampling of the full sphere with $L=(N+1)^2$ loudspeakers considered as plane waves. Such sampling techniques are disclosed by Zotter F. in "Analysis and Synthesis of Sound-Radiation with Spherical Arrays", PhD thesis, Institute of Electronic Music and Acoustics, University of Music and Performing Arts, 2009.
The second audio input channels 10 are thus simply extracted by selecting the virtual loudspeakers located in the upper half space. The sound field description data 11 associated to the second audio input channels thus simply correspond to the directions of the selected virtual loudspeakers. The remaining decoded channels therefore form the third audio input signals 12 and their directions give the associated sound field description data 13.
The spatial analysis is performed in the spherical harmonics domain by first reencoding the second audio input signals 10. The extracted sources 21 are then reproduced on the physical loudspeakers 3 using WFS. The remaining components of the second audio input signals 10 are then combined with the third audio input signals 12 to form fifth audio input signals 17 that are reproduced as virtual loudspeakers 22 on the physical loudspeakers 3 using WFS. The mapping of the third audio input signals 12 onto the virtual loudspeakers 22 can be achieved by assigning each channel to the closest available virtual loudspeaker 22 or by spreading the energy using stereophonic based panning techniques.
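The decode-and-reencode splitting of this third embodiment can be sketched by combining the helpers introduced earlier (`hoa_decode_matrix`, `hoa_encode_plane_waves`); the regular full-sphere sampling is taken as given, and the simple elevation test standing in for the reproducible subspace is ours.

```python
import numpy as np

def split_upper_hemisphere(B, sphere_directions, order):
    """Decode B onto a regular full-sphere plane-wave grid, keep the
    upper-hemisphere channels as second signals, re-encode them for the
    spherical-harmonics analysis, and return the lower-hemisphere
    channels as third signals.
    """
    D = hoa_decode_matrix(sphere_directions, order)
    decoded = D @ B                 # one signal per virtual loudspeaker
    upper = [i for i, (_, el) in enumerate(sphere_directions) if el >= 0.0]
    lower = [i for i, (_, el) in enumerate(sphere_directions) if el < 0.0]
    second = decoded[upper]         # within the reproducible subspace
    third = decoded[lower]          # outside: mapped to virtual speakers
    # Re-encode the second signals for the spherical-harmonics analysis.
    reencoded = hoa_encode_plane_waves(
        second, [sphere_directions[i] for i in upper], order)
    return reencoded, second, third
```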
Applications of the invention include, but are not limited to, the following domains: hifi sound reproduction, home theatre, cinema, concerts, shows, interior noise simulation for aircraft, sound reproduction for Virtual Reality, and sound reproduction in the context of perceptual unimodal/crossmodal experiments. Although the foregoing invention has been described in some detail for the purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.

Claims
1. A method for sound field reproduction into a listening area (5) of spatially encoded first audio input signals (1) according to sound field description data (2) using an ensemble of physical loudspeakers (3) comprising the steps of:
• computing reproduction subspace description data (8) from loudspeaker positioning data (4) describing the subspace in which virtual sources can be reproduced with the physically available setup;
• extracting second (10) and third (12) audio input signals with associated sound field description data (11) (13) wherein second audio input signals (10) comprise spatial components of the first audio input signals (1) located within the reproducible subspace (6) and third audio input signals (12) comprise spatial components of the first audio input signals (1) located outside of the reproducible subspace (6),
• performing a spatial analysis on second audio input signals (10) so as to extract fourth audio input signals (15) corresponding to localizable sources within the reproducible subspace (6) with associated source positioning data (16),
• merging remaining components of second audio input signals (10) after spatial analysis and third audio input signals (12) into fifth audio input signals (17) with associated sound field description data (18) for reproduction within the reproducible subspace (6),
• computing loudspeaker alimentation signals (20) from fourth (15) and fifth (17) audio input signals according to loudspeaker positioning data (4), localizable sources positioning data (16) and sound field description data (18).
2. The method of claim 1 wherein the sound field description data correspond to eigen solutions of the wave equation: plane waves, spherical harmonics, cylindrical harmonics.
3. The method of claim 1 wherein the sound field description data correspond to incoming directions (channel-based format: stereo, 5.1, 7.1, 10.2, 12.2, 22.2).
4. The method of claim 1 wherein the spatial analysis comprises the steps of:
• converting, if necessary, second audio input signals (10) into spherical (3D) or cylindrical (2D) harmonic components;
• identifying direction of arrival/sound field description data (16) of main localizable sources within the reproducible subspace (6);
• forming beam patterns by combination of spherical harmonics having their main lobe in the direction of the estimated direction of arrival in order to extract fourth audio input signals (15) from second audio input signals (10).
5. The method of claim 4 wherein the sound field description data (16) are estimated using a subspace direction of arrival estimation method, derived for example from a MUSIC or ESPRIT based algorithm, operating in the spherical (3D) or cylindrical (2D) harmonics domain.
6. The method of claim 1 wherein the reproducible subspace description data (8) are computed according to the loudspeaker positioning data (4) and the listening area description data (23).
7. The method of claim 1 wherein the computation of loudspeaker alimentation signals (20) is performed according to loudspeaker positioning data (4), the listening area description data (23), localizable sources positioning data (16) and sound field description data (18).
8. A device for sound field reproduction into a listening area (5) of spatially encoded first audio input signals (1) according to sound field description data (2) using an ensemble of physical loudspeakers (3), comprising a reproducible subspace computation device (7) for computing reproduction subspace description data (8) from loudspeaker positioning data (4) describing the subspace in which virtual sources can be reproduced with the physically available setup; said device further comprising a reproducible subspace audio selection device (9) for extracting second (10) and third (12) audio input signals with associated sound field description data (11) (13) wherein second audio input signals (10) comprise spatial components of the first audio input signals (1) located within the reproducible subspace (6) and third audio input signals (12) comprise spatial components of the first audio input signals (1) located outside of the reproducible subspace (6); said device comprising a sound field transformation device (14) operating on second audio input signals (10) so as to extract fourth audio input signals (15) corresponding to localizable sources within the reproducible subspace (6) with associated source positioning data (16) and merging remaining components of second audio input signals (10) after spatial analysis and third audio input signals (12) into fifth audio input signals (17) with associated sound field description data (18) for reproduction within the reproducible subspace (6); said device also comprising a spatial sound rendering device (19) in order to compute loudspeaker alimentation signals (20) from fourth (15) and fifth (17) audio input signals according to loudspeaker positioning data (4), localizable sources positioning data (16) and sound field description data (18).
9. The device of claim 8 wherein the reproducible subspace computation device (7) computes the reproducible subspace description data (8) according to the loudspeaker positioning data (4) and the listening area description data (23).
10. The device of claim 8 wherein the spatial sound rendering device (19) computes loudspeaker alimentation signals (20) according to loudspeaker positioning data (4), the listening area description data (23), localizable sources positioning data (16) and sound field description data (18).
PCT/EP2011/064592 2010-08-27 2011-08-25 Method and device for enhanced sound field reproduction of spatially encoded audio input signals WO2012025580A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US13/818,014 US9271081B2 (en) 2010-08-27 2011-08-25 Method and device for enhanced sound field reproduction of spatially encoded audio input signals
EP11752172.4A EP2609759B1 (en) 2010-08-27 2011-08-25 Method and device for enhanced sound field reproduction of spatially encoded audio input signals
ES11752172T ES2922639T3 (en) 2010-08-27 2011-08-25 Method and device for sound field enhanced reproduction of spatially encoded audio input signals

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP10174407.6 2010-08-27
EP10174407 2010-08-27

Publications (1)

Publication Number Publication Date
WO2012025580A1 true WO2012025580A1 (en) 2012-03-01

Family

ID=44582979

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2011/064592 WO2012025580A1 (en) 2010-08-27 2011-08-25 Method and device for enhanced sound field reproduction of spatially encoded audio input signals

Country Status (4)

Country Link
US (1) US9271081B2 (en)
EP (1) EP2609759B1 (en)
ES (1) ES2922639T3 (en)
WO (1) WO2012025580A1 (en)


Families Citing this family (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9288603B2 (en) 2012-07-15 2016-03-15 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for backward-compatible audio coding
US9473870B2 (en) 2012-07-16 2016-10-18 Qualcomm Incorporated Loudspeaker position compensation with 3D-audio hierarchical coding
KR102597573B1 (en) 2012-07-16 2023-11-02 돌비 인터네셔널 에이비 Method and device for rendering an audio soundfield representation for audio playback
EP2688066A1 (en) 2012-07-16 2014-01-22 Thomson Licensing Method and apparatus for encoding multi-channel HOA audio signals for noise reduction, and method and apparatus for decoding multi-channel HOA audio signals for noise reduction
JP6279569B2 (en) 2012-07-19 2018-02-14 ドルビー・インターナショナル・アーベー Method and apparatus for improving rendering of multi-channel audio signals
WO2014052429A1 (en) * 2012-09-27 2014-04-03 Dolby Laboratories Licensing Corporation Spatial multiplexing in a soundfield teleconferencing system
US9913064B2 (en) 2013-02-07 2018-03-06 Qualcomm Incorporated Mapping virtual speakers to physical speakers
EP2765791A1 (en) * 2013-02-08 2014-08-13 Thomson Licensing Method and apparatus for determining directions of uncorrelated sound sources in a higher order ambisonics representation of a sound field
EP2782094A1 (en) * 2013-03-22 2014-09-24 Thomson Licensing Method and apparatus for enhancing directivity of a 1st order Ambisonics signal
US9466305B2 (en) 2013-05-29 2016-10-11 Qualcomm Incorporated Performing positional analysis to code spherical harmonic coefficients
US20150127354A1 (en) * 2013-10-03 2015-05-07 Qualcomm Incorporated Near field compensation for decomposed representations of a sound field
EP3056025B1 (en) 2013-10-07 2018-04-25 Dolby Laboratories Licensing Corporation Spatial audio processing system and method
EP2866475A1 (en) 2013-10-23 2015-04-29 Thomson Licensing Method for and apparatus for decoding an audio soundfield representation for audio playback using 2D setups
DE102013223201B3 (en) * 2013-11-14 2015-05-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method and device for compressing and decompressing sound field data of a region
US10015615B2 (en) * 2013-11-19 2018-07-03 Sony Corporation Sound field reproduction apparatus and method, and program
US9502045B2 (en) 2014-01-30 2016-11-22 Qualcomm Incorporated Coding independent frames of ambient higher-order ambisonic coefficients
US9922656B2 (en) 2014-01-30 2018-03-20 Qualcomm Incorporated Transitioning of ambient higher-order ambisonic coefficients
US20150264483A1 (en) * 2014-03-14 2015-09-17 Qualcomm Incorporated Low frequency rendering of higher-order ambisonic audio data
US10412522B2 (en) * 2014-03-21 2019-09-10 Qualcomm Incorporated Inserting audio channels into descriptions of soundfields
AU2015244473B2 (en) 2014-04-11 2018-05-10 Samsung Electronics Co., Ltd. Method and apparatus for rendering sound signal, and computer-readable recording medium
US9620137B2 (en) 2014-05-16 2017-04-11 Qualcomm Incorporated Determining between scalar and vector quantization in higher order ambisonic coefficients
US10770087B2 (en) 2014-05-16 2020-09-08 Qualcomm Incorporated Selecting codebooks for coding vectors decomposed from higher-order ambisonic audio signals
US9852737B2 (en) 2014-05-16 2017-12-26 Qualcomm Incorporated Coding vectors decomposed from higher-order ambisonics audio signals
US20150332682A1 (en) * 2014-05-16 2015-11-19 Qualcomm Incorporated Spatial relation coding for higher order ambisonic coefficients
US9949033B2 (en) * 2014-07-23 2018-04-17 The Australian National University Planar sensor array
US9536531B2 (en) * 2014-08-01 2017-01-03 Qualcomm Incorporated Editing of higher-order ambisonic audio data
US9774974B2 (en) * 2014-09-24 2017-09-26 Electronics And Telecommunications Research Institute Audio metadata providing apparatus and method, and multichannel audio data playback apparatus and method to support dynamic format conversion
US9747910B2 (en) 2014-09-26 2017-08-29 Qualcomm Incorporated Switching between predictive and non-predictive quantization techniques in a higher order ambisonics (HOA) framework
EP3024253A1 (en) * 2014-11-21 2016-05-25 Harman Becker Automotive Systems GmbH Audio system and method
US10932078B2 (en) 2015-07-29 2021-02-23 Dolby Laboratories Licensing Corporation System and method for spatial processing of soundfield signals
MX2018005090A (en) 2016-03-15 2018-08-15 Fraunhofer Ges Forschung Apparatus, method or computer program for generating a sound field description.
US20170372697A1 (en) * 2016-06-22 2017-12-28 Elwha Llc Systems and methods for rule-based user control of audio rendering
US11096004B2 (en) 2017-01-23 2021-08-17 Nokia Technologies Oy Spatial audio rendering point extension
US10531219B2 (en) 2017-03-20 2020-01-07 Nokia Technologies Oy Smooth rendering of overlapping audio-object interactions
US11074036B2 (en) 2017-05-05 2021-07-27 Nokia Technologies Oy Metadata-free audio-object interactions
US10165386B2 (en) 2017-05-16 2018-12-25 Nokia Technologies Oy VR audio superzoom
GB2563635A (en) 2017-06-21 2018-12-26 Nokia Technologies Oy Recording and rendering audio signals
US11395087B2 (en) 2017-09-29 2022-07-19 Nokia Technologies Oy Level-based audio-object interactions
US10542368B2 (en) 2018-03-27 2020-01-21 Nokia Technologies Oy Audio content modification for playback audio
WO2020037280A1 (en) 2018-08-17 2020-02-20 Dts, Inc. Spatial audio signal decoder
WO2020037282A1 (en) 2018-08-17 2020-02-20 Dts, Inc. Spatial audio signal encoder
EP3618464A1 (en) 2018-08-30 2020-03-04 Nokia Technologies Oy Reproduction of parametric spatial audio using a soundbar
CN110751956B (en) * 2019-09-17 2022-04-26 北京时代拓灵科技有限公司 Immersive audio rendering method and system
GB2590906A (en) * 2019-12-19 2021-07-14 Nomono As Wireless microphone with local storage
US11937070B2 (en) * 2021-07-01 2024-03-19 Tencent America LLC Layered description of space of interest


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060109992A1 (en) * 2003-05-15 2006-05-25 Thomas Roeder Device for level correction in a wave field synthesis system
WO2007026025A2 (en) 2005-09-02 2007-03-08 Lg Electronics Inc. Method to generate multi-channel audio signals from stereo signals
US20070269063A1 (en) 2006-05-17 2007-11-22 Creative Technology Ltd Spatial audio coding based on universal spatial cues
US20080175394A1 (en) 2006-05-17 2008-07-24 Creative Technology Ltd. Vector-space methods for primary-ambient decomposition of stereo audio signals
US20100092014A1 (en) 2006-10-11 2010-04-15 Fraunhofer-Geselischhaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for generating a number of loudspeaker signals for a loudspeaker array which defines a reproduction space
US20080232616A1 (en) 2007-03-21 2008-09-25 Ville Pulkki Method and apparatus for conversion between multi-channel audio formats
WO2008113428A1 (en) * 2007-03-21 2008-09-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method and apparatus for conversion between multi-channel audio formats
WO2008113427A1 (en) * 2007-03-21 2008-09-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method and apparatus for enhancement of audio reconstruction
EP2056627A1 (en) * 2007-10-30 2009-05-06 SonicEmotion AG Method and device for improved sound field rendering accuracy within a preferred listening area
WO2009056508A1 (en) 2007-10-30 2009-05-07 Sonicemotion Ag Method and device for improved sound field rendering accuracy within a preferred listening area
US20090198356A1 (en) 2008-02-04 2009-08-06 Creative Technology Ltd Primary-Ambient Decomposition of Stereo Audio Signals Using a Complex Similarity Index
EP2154911A1 (en) * 2008-08-13 2010-02-17 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. An apparatus for determining a spatial output multi-channel audio signal

Non-Patent Citations (21)

* Cited by examiner, † Cited by third party
Title
A.J. BERKHOUT: "A holographic approach to acoustic control", JOURNAL OF THE AUDIO ENG. SOC., vol. 36, 1988, pages 977 - 995
ALLEN J., BERKELEY D., BLAUERT J.: "Multi-microphone signal-processing technique to remove room reverberation from speech signals", JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 62, October 1977 (1977-10-01), pages 912 - 915
BOONE, M., VERHEIJEN E.: "Sound Reproduction Applications with Wave-Field Synthesis", 104 TH CONVENTION OF THE AUDIO ENGINEERING SOCIETY, 1998
CORTEEL E, ROUX S., WARUSFEL O.: "Creation of Virtual Sound Scenes Using Wave Field Synthesis", 22ND TONMEISTERTAGUNG VDT INTERNATIONAL AUDIO CONVENTION, HANNOVER, GERMANY, 2002
CORTEEL E: "Equalization in extended area using multichannel inversion and wave field synthesis", JOURNAL OF THE AUDIO ENGINEERING SOCIETY, vol. 54, no. 12, December 2006 (2006-12-01)
CORTEEL ET AL: "Equalization in an Extended Area Using Multichannel Inversion and Wave Field Synthesis", JAES, AES, 60 EAST 42ND STREET, ROOM 2520 NEW YORK 10165-2520, USA, vol. 54, no. 12, 1 December 2006 (2006-12-01), pages 1140 - 1161, XP040507980 *
EDWIN VERHEIJEN: "Sound Reproduction by Wave Field Synthesis", INTERNET CITATION, 1 January 1997 (1997-01-01), pages COMPLETE, XP007914421, Retrieved from the Internet <URL:http://www.dbvision.nl/publicaties/ouder/Thesis_Edwin_Verheijen.pdf> [retrieved on 20100813] *
ETIENNE CORTEEL: "Caractérisation et Extensions de la Wave Field Synthesis en conditions réelles", 9 December 2004 (2004-12-09), XP055013158, Retrieved from the Internet <URL:http://articles.ircam.fr/textes/Corteel04a/> [retrieved on 20111125] *
EVERT WALTER START: "Direct sound enhancement by wave field synthesis", 24 July 1997 (1997-07-24), XP055013192, Retrieved from the Internet <URL:http://www.tnw.tudelft.nl/fileadmin/Faculteit/TNW/Over_de_faculteit/Afdelingen/Imaging_Science_and_Technology/Research/Research_Groups/Acoustical_Imaging_and_Sound_Control/Publications/Ph.D._thesis/doc/Evert_Start_19970624.pdf> [retrieved on 20111125] *
J. DANIEL: "Spatial sound encoding including near field effect: Introducing distance coding filters and a viable, new ambisonic format", 23TH INTERNATIONAL CONFERENCE OF THE AUDIO ENGINEERING SOCIETY, HELSINGOR, DANEMARK, June 2003 (2003-06-01)
KRIM H., VIBERG M.: "Two decades of array signal processing research - the parametric approach", IEEE SIGNAL PROCESSING MAG, vol. 13, no. 4, July 1996 (1996-07-01), pages 67 - 94, XP002176649, DOI: doi:10.1109/79.526899
M. POLETTI: "Three-dimensional surround sound systems based on spherical harmonics", JOURNAL OF THE AUDIO ENGINEERING SOCIETY, vol. 11, no. 53, November 2005 (2005-11-01), pages 1004 - 1025
MUNENORI N., KIMURA T., YAMAKATA, Y., KATSUMOTO, M.: "Performance Evaluation of 3D Sound Field Reproduction System Using a Few Loudspeakers and Wave Field Synthesis", SECOND INTERNATIONAL SYMPOSIUM ON UNIVERSAL COMMUNICATION, 2008
R. NICOL: "Sound spatialization by higher order ambisonics: Encoding and decoding a sound scene in practice from a theoretical point of view", PROCEEDINGS OF THE 2ND INTERNATIONAL SYMPOSIUM ON AMBISONICS AND SPHERICAL ACOUSTICS, 2010
ROZENN NICOL: "Restitution sonore spatialisée sur une zone étendue: application à la téléprésence", THÈSE PRÉSENTÉE EN VUE D'OBTENIR LE TITRE DE DOCTEUR DE L'UNIVERSITÉ DU MAINE ÈS ACOUSTIQUE, XX, XX, 14 December 1999 (1999-12-14), pages 1 - 518, XP008136326 *
S. BERTET, J. DANIEL, E. PARIZET, O. WARUSFEL: "Investigation on the restitution system influence over perceived higher order Ambisonics sound field: a subjective evaluation involving from first to fourth order systems", PROC. ACOUSTICS-08, JOINT ASA/EAA MEETING, PARIS, 2008
TEUTSCH, H.: "Modal Array Signal Processing: Principles and Applications of Acoustic Wavefield Decomposition", 2007, SPRINGER
V. PULKKI: "Virtual sound source positioning using vector based amplitude panning", JOURNAL OF THE AUDIO ENGINEERING SOCIETY, vol. 45, no. 6, June 1997 (1997-06-01), XP055303802
W. DE BRUIJN: "PhD thesis", 2004, TU DELFT, article "Application of Wave Field Synthesis in Videoconferencing"
ZOTTER F., POMBERGER H., NOISTERNIG M.: "Ambisonic decoding with and without mode-matching: a case study using the hemisphere", IN 2ND INTERNATIONAL SYMPOSIUM ON AMBISONICS AND SPHERICAL ACOUSTICS, 2010
ZOTTER F.: "Analysis and Synthesis of Sound-Radiation with Spherical Arrays'' PhD thesis, Institute of Electronic Music and Acoustics", UNIVERSITY OF MUSIC AND PERFORMING ARTS, 2009

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9119011B2 (en) 2011-07-01 2015-08-25 Dolby Laboratories Licensing Corporation Upmixing object based audio
US9622014B2 (en) 2012-06-19 2017-04-11 Dolby Laboratories Licensing Corporation Rendering and playback of spatial audio using channel-based audio systems
CN102857852B (en) * 2012-09-12 2014-10-22 清华大学 Method for processing playback array control signal of loudspeaker of sound-field quantitative regeneration control system
CN102857852A (en) * 2012-09-12 2013-01-02 清华大学 Sound-field quantitative regeneration control system and method thereof
CN104919821A (en) * 2012-09-27 2015-09-16 声摩逊实验室 Method and system for playing back an audio signal
FR2996094A1 (en) * 2012-09-27 2014-03-28 Sonic Emotion Labs METHOD AND SYSTEM FOR RECOVERING AN AUDIO SIGNAL
WO2014049267A1 (en) 2012-09-27 2014-04-03 Sonic Emotion Labs Method and system for playing back an audio signal
WO2014049268A1 (en) 2012-09-27 2014-04-03 Sonic Emotion Labs Method and device for generating audio signals to be delivered to a sound reproduction system
CN104919821B (en) * 2012-09-27 2017-04-05 声摩逊实验室 For the method and system of playback audio signal
US9426597B2 (en) 2012-09-27 2016-08-23 Sonic Emotion Labs Method and system for playing back an audio signal
US20150356975A1 (en) * 2013-01-15 2015-12-10 Electronics And Telecommunications Research Institute Apparatus for processing audio signal for sound bar and method therefor
KR20200112774A (en) * 2013-01-15 2020-10-05 한국전자통신연구원 Audio signal procsessing apparatus and method for sound bar
KR102458956B1 (en) 2013-01-15 2022-10-26 한국전자통신연구원 Audio signal procsessing apparatus and method for sound bar
KR20210134279A (en) * 2013-01-15 2021-11-09 한국전자통신연구원 Audio signal procsessing apparatus and method for sound bar
KR20140093578A (en) * 2013-01-15 2014-07-28 한국전자통신연구원 Audio signal procsessing apparatus and method for sound bar
KR102322104B1 (en) 2013-01-15 2021-11-05 한국전자통신연구원 Audio signal procsessing apparatus and method for sound bar
KR102160218B1 (en) * 2013-01-15 2020-09-28 한국전자통신연구원 Audio signal procsessing apparatus and method for sound bar
WO2014125232A1 (en) 2013-02-18 2014-08-21 Sonic Emotion Labs Method and device for generating feed signals intended for a sound restitution system
US9854378B2 (en) 2013-02-22 2017-12-26 Dolby Laboratories Licensing Corporation Audio spatial rendering apparatus and method
CN110767242A (en) * 2013-05-29 2020-02-07 高通股份有限公司 Compression of decomposed representations of sound fields
JP2015080188A (en) * 2013-09-12 2015-04-23 ヤマハ株式会社 User interface device and acoustic control device
FR3018026A1 (en) * 2014-02-21 2015-08-28 Sonic Emotion Labs METHOD AND DEVICE FOR RETURNING A MULTICANAL AUDIO SIGNAL IN A LISTENING AREA
WO2015124880A1 (en) 2014-02-21 2015-08-27 Sonic Emotion Labs Method and device for restoring a multichannel audio signal in a listening zone
RU2741763C2 (en) * 2014-07-02 2021-01-28 Квэлкомм Инкорпорейтед Reduced correlation between background channels of high-order ambiophony (hoa)

Also Published As

Publication number Publication date
US20130148812A1 (en) 2013-06-13
US9271081B2 (en) 2016-02-23
EP2609759A1 (en) 2013-07-03
EP2609759B1 (en) 2022-05-18
ES2922639T3 (en) 2022-09-19

Similar Documents

Publication Publication Date Title
EP2609759B1 (en) Method and device for enhanced sound field reproduction of spatially encoded audio input signals
JP7119060B2 (en) A Concept for Generating Extended or Modified Soundfield Descriptions Using Multipoint Soundfield Descriptions
KR102468780B1 (en) Devices, methods, and computer programs for encoding, decoding, scene processing, and other procedures related to DirAC-based spatial audio coding
US9767813B2 (en) Method and device for decoding an audio soundfield representation for audio playback
US10313815B2 (en) Apparatus and method for generating a plurality of parametric audio streams and apparatus and method for generating a plurality of loudspeaker signals
TWI808298B (en) Apparatus and method for encoding a spatial audio representation or apparatus and method for decoding an encoded audio signal using transport metadata and related computer programs
US11863962B2 (en) Concept for generating an enhanced sound-field description or a modified sound field description using a multi-layer description
EP2920982A1 (en) Segment-wise adjustment of spatial audio signal to different playback loudspeaker setup
CN116671132A (en) Audio rendering using spatial metadata interpolation and source location information
TWI834760B (en) Apparatus, method and computer program for encoding, decoding, scene processing and other procedures related to dirac based spatial audio coding
KR102654507B1 (en) Concept for generating an enhanced sound field description or a modified sound field description using a multi-point sound field description
AU2020201419A1 (en) Method and device for decoding an audio soundfield representation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11752172

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2011752172

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 13818014

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE