EP1563485A1 - Verfahren zur verarbeitung von audiodateien und erfassungsvorrichtung zur anwendung davon - Google Patents

Verfahren zur verarbeitung von audiodateien und erfassungsvorrichtung zur anwendung davon

Info

Publication number
EP1563485A1
EP1563485A1 (application EP03782553A; granted as EP1563485B1)
Authority
EP
European Patent Office
Prior art keywords
distance
sound
components
point
signals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP03782553A
Other languages
English (en)
French (fr)
Other versions
EP1563485B1 (de)
Inventor
Jérôme DANIEL
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
France Telecom SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed litigation Critical https://patents.darts-ip.com/?family=32187712&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=EP1563485(A1) "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by France Telecom SA filed Critical France Telecom SA
Publication of EP1563485A1 publication Critical patent/EP1563485A1/de
Application granted granted Critical
Publication of EP1563485B1 publication Critical patent/EP1563485B1/de
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/0091 Means for obtaining special acoustic effects
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/15 Aspects of sound capture and related signal processing for recording or reproduction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/11 Application of ambisonics in stereophonic audio systems

Definitions

  • the present invention relates to the processing of sound data.
  • Techniques relating to the propagation of a sound wave in three-dimensional space, involving in particular simulation and/or spatialized sound reproduction, implement audio signal processing methods applied to the simulation of acoustic and psychoacoustic phenomena.
  • Such processing methods provide for spatial encoding of the acoustic field, its transmission and its spatial reproduction on a set of loudspeakers or on headphones of a stereophonic headset.
  • A first category of processing relates to methods for synthesizing room effects or, more generally, environmental effects. From a description of one or more sound sources (emitted signal, position, orientation, directivity, etc.) and based on a room-effect model (involving the room geometry, or even a desired acoustic perception), a set of elementary acoustic phenomena (direct, reflected or diffracted waves), or a macroscopic acoustic phenomenon (reverberant and diffuse field), is computed and described, making it possible to convey the spatial effect for a listener located at a chosen point of auditory perception in three-dimensional space. A set of signals is then computed, typically associated with:
  • reflections: secondary sources that become active by re-emitting a received main wave and that carry a spatial position attribute;
  • late reverberation: decorrelated signals for a diffuse field.
  • A second category of processing concerns the positional or directional rendering of sound sources. These methods are applied to the signals determined by methods of the first category described above.
  • Methods according to this second category make it possible to obtain signals to be broadcast over loudspeakers or headphones, in order finally to give a listener the auditory impression of sound sources placed at respective predetermined positions around the listener.
  • Methods of this second category are described as "creators of three-dimensional sound images", because of the way the perceived positions of the sources are distributed in three-dimensional space for a listener.
  • Methods according to the second category generally include a first step of spatial encoding of elementary acoustic events which produces a representation of the sound field in three-dimensional space. In a second step, this representation is transmitted or stored for deferred use. In a third decoding step, the decoded signals are delivered to speakers or headphones of a playback device.
  • The present invention falls rather into the second category mentioned above. It concerns in particular the spatial encoding of sound sources and a specification of the three-dimensional sound representation of these sources. It applies both to the encoding of "virtual" sound sources (applications where sound sources are simulated, such as games, spatialized conferencing, or the like) and to the "acoustic" encoding of a natural sound field, when sound is picked up by one or more three-dimensional arrays of microphones.
  • Ambisonic encoding, which will be described in detail later, consists in representing signals relating to one or more sound waves in a basis of spherical harmonics (in spherical coordinates, involving in particular an elevation angle and an azimuth angle that characterize the direction of the sound(s)).
  • The components representing these signals and expressed in this basis of spherical harmonics are also a function, for waves emitted in the near field, of the distance between the sound source emitting this field and the point corresponding to the origin of the spherical harmonic basis. More specifically, this distance dependence is itself expressed as a function of the sound frequency, as will be seen below.
  • This ambisonic approach offers a large number of possible functionalities, in particular in terms of simulation of virtual sources, and, in general, has the following advantages:
  • the representation of acoustic phenomena is scalable: it offers a spatial resolution which can be adapted to different situations. Indeed, this representation can be transmitted and used as a function of bit-rate constraints during the transmission of the encoded signals and/or of limitations of the reproduction device;
  • the ambisonic representation is flexible: it is possible to simulate a rotation of the sound field or, at reproduction, to adapt the decoding of the ambisonic signals to any reproduction device, whatever its geometry.
  • the encoding of virtual sources is essentially directional.
  • the encoding functions amount to calculating gains given by the spherical harmonic functions, which depend on the incidence of the sound wave, i.e. on the elevation and azimuth angles in spherical coordinates.
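As an illustration of such purely directional encoding gains, here is a minimal sketch restricted to order 1 (B-format). The normalization convention chosen for the harmonics is an assumption, since several coexist in practice; this is not code from the patent.

```python
import numpy as np

def encode_first_order(signal, azimuth, elevation):
    """Directional (order-1) ambisonic encoding of a mono signal.

    The gains are the real spherical harmonics of orders 0 and 1 evaluated at
    the source direction. Normalization conventions vary (e.g. the 1/sqrt(2)
    factor on W in classic B-format), so this is only one possible choice.
    """
    s = np.asarray(signal, dtype=float)
    w = s                                          # order 0 (omnidirectional)
    x = s * np.cos(azimuth) * np.cos(elevation)    # order 1, front/back
    y = s * np.sin(azimuth) * np.cos(elevation)    # order 1, left/right
    z = s * np.sin(elevation)                      # order 1, up/down
    return np.stack([w, x, y, z])

# A 1 kHz tone arriving from 45 degrees to the left in the horizontal plane.
t = np.arange(0, 0.01, 1.0 / 48000.0)
bformat = encode_first_order(np.sin(2 * np.pi * 1000 * t), np.radians(45), 0.0)
```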
  • the loudspeakers, at reproduction, are far away. This results in a distortion (or curvature) of the shape of the reconstructed wavefronts.
  • For a near field, the components of the sound signal in the basis of spherical harmonics in fact also depend on the distance of the source and on the sound frequency.
  • These components can be expressed mathematically in the form of a polynomial whose variable is inversely proportional to the aforementioned distance and to the sound frequency.
  • The ambisonic components, in the sense of their theoretical expression, diverge at low frequencies and, in particular, tend towards infinity as the sound frequency decreases towards zero, when they represent a near-field sound emitted by a source located at a finite distance.
  • This mathematical phenomenon is already known, in the field of ambisonic representation, for order 1, under the term "bass boost", in particular from: M. A. Gerzon, "General Metatheory of Auditory Localization", 92nd AES Convention, 1992.
  • This phenomenon becomes particularly critical for high spherical harmonic orders involving high power polynomials.
  • spatial aliasing, a processing artefact that appears because of the spacing between the microphones, unless a virtual microphone mesh that is tight in space is chosen, which makes the processing heavier;
  • this technique is difficult to transpose to the real case of sensors placed on an array, in the presence of a real source, at acquisition;
  • the three-dimensional sound representation is implicitly tied to a fixed radius of the rendering device, because the ambisonic decoding must be done, here, on an array of loudspeakers of the same dimensions as the initial microphone array; this document offers no way of adapting the encoding or decoding to rendering devices of other sizes;
  • this document presents a horizontal array of sensors, which assumes that the acoustic phenomena taken into account propagate only in horizontal directions; this excludes any other direction of propagation and therefore does not represent the physical reality of an ordinary sound field.
  • An object of the present invention is to provide a method for processing, by encoding, transmission and reproduction, any type of sound field, in particular the effect of a sound source in the near field.
  • Another object of the present invention is to provide a method allowing the encoding of virtual sources, not only in direction, but also in distance, and to define a decoding adaptable to any rendering device.
  • Another object of the present invention is to provide a robust processing method for sounds of all sound frequencies (including low frequencies), especially for taking sound from natural acoustic fields using three-dimensional networks of microphones.
  • The present invention provides a method for processing sound data, in which: a) signals representative of at least one sound propagating in three-dimensional space and originating from a source located at a first distance from a reference point are coded, to obtain a representation of the sound by components expressed in a basis of spherical harmonics whose origin corresponds to said reference point, and b) a compensation for a near-field effect is applied to said components by a filtering which is a function of a second distance defining substantially, for a sound reproduction by a reproduction device, a distance between a reproduction point and a point of auditory perception.
  • said source being far from the reference point,
  • a filter is applied whose coefficients, each applied to a component of order m, are expressed analytically as the inverse of a polynomial of power m whose variable is inversely proportional to the sound frequency and to said second distance, so as to compensate for a near-field effect at the level of the reproduction device.
  • said source being a virtual source provided at said first distance, a filter is applied whose coefficients, each applied to a component of order m, are expressed analytically as a fraction in which:
  • the numerator is a polynomial of power m, whose variable is inversely proportional to the sound frequency and to said first distance, to simulate a near-field effect of the virtual source, and
  • the denominator is a polynomial of power m, whose variable is inversely proportional to the sound frequency and to said second distance, to compensate for the near-field effect of the virtual source at low sound frequencies.
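To make the structure of these filters concrete, the sketch below evaluates the frequency response of such an order-m fraction, using the reverse Bessel polynomial coefficients (consistent with the Bessel polynomials mentioned further below). The exact coefficient set is a reconstruction and is not quoted from the patent, so treat it as an illustrative assumption.

```python
import numpy as np
from math import factorial

def poly_coeffs(m):
    """a_k = (m+k)! / ((m-k)! k! 2^k): reverse Bessel polynomial coefficients,
    used here as an assumed coefficient set for the order-m polynomials."""
    return [factorial(m + k) / (factorial(m - k) * factorial(k) * 2**k)
            for k in range(m + 1)]

def F(m, w, tau):
    """Assumed near-field term of order m: sum_k a_k / (j*w*tau)^k."""
    x = 1.0 / (1j * w * tau)
    return sum(a_k * x**k for k, a_k in enumerate(poly_coeffs(m)))

def nfc_filter_response(m, w, first_distance, second_distance, c=340.0):
    """H_m(w) = F_m(rho/c) / F_m(R/c): near-field simulation at the first
    distance combined with compensation for the second distance.
    Use first_distance=np.inf for the purely compensating (far-source) case."""
    num = 1.0 if np.isinf(first_distance) else F(m, w, first_distance / c)
    return num / F(m, w, second_distance / c)

# The magnitude tends to 1 at high frequencies and to (R/rho)^m at low ones,
# in agreement with the limits stated later in the text.
w = 2 * np.pi * np.logspace(1, 4, 200)          # 10 Hz to 10 kHz
h2 = nfc_filter_response(2, w, first_distance=1.0, second_distance=1.5)
```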
  • the coded and filtered data in steps a) and b) are transmitted to the restitution device with a parameter representative of said second distance.
  • the reproduction device comprising means for reading a storage medium, the data coded and filtered in steps a) and b) are stored, together with a parameter representative of said second distance, on a storage medium intended to be read by the reproduction device.
  • an adaptation filter is applied to the coded and filtered data, the coefficients of which are a function of said second and third distances.
  • The coefficients of this adaptation filter, each applied to a component of order m, are expressed analytically in the form of a fraction, of which:
  • the numerator is a polynomial of power m, the variable of which is inversely proportional to the sound frequency and to said second distance,
  • the denominator is a polynomial of power m, the variable of which is inversely proportional to the sound frequency and to said third distance.
  • For the implementation of step b), provision is made for: - components of even order m: digital audio filters in the form of a cascade of cells of order two; and
  • - components of odd order m: digital audio filters in the form of a cascade of cells of order two and an additional cell of order one.
  • the coefficients of a digital audio filter, for a component of order m, are defined from the numerical values of the roots of said polynomials of power m.
  • the above-mentioned polynomials are Bessel polynomials.
  • Provision is also made for a microphone comprising an array of acoustic transducers arranged substantially on the surface of a sphere whose center corresponds substantially to said reference point, in order to obtain said signals representative of at least one sound propagating in three-dimensional space.
  • A global filter is applied in step b), on the one hand, to compensate for a near-field effect as a function of said second distance and, on the other hand, to equalize the signals coming from the transducers so as to compensate for a directivity weighting of said transducers.
  • a number of transducers is provided as a function of a chosen total number of components to represent the sound in said base of spherical harmonics.
  • In step a), a total number of components is chosen in the basis of spherical harmonics so as to obtain, at reproduction, a region of space around the perception point in which the reproduction of the sound is faithful and whose dimensions increase with the total number of components.
  • Provision is made for a reproduction device comprising a number of loudspeakers at least equal to said total number of components.
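For reference, the number of spherical harmonic components up to order M is (M+1)² in full 3D and 2M+1 when only horizontal harmonics are kept; the small helper below (not part of the patent) merely checks a loudspeaker count against that figure.

```python
def num_components(order, horizontal_only=False):
    """Number of ambisonic components up to the given order:
    (order + 1)**2 in full 3D, 2*order + 1 for horizontal-only systems."""
    return 2 * order + 1 if horizontal_only else (order + 1) ** 2

def layout_supports_order(num_loudspeakers, order, horizontal_only=False):
    """True when there are at least as many loudspeakers as components."""
    return num_loudspeakers >= num_components(order, horizontal_only)

assert num_components(1) == 4                              # B-format: W, X, Y, Z
assert layout_supports_order(8, 2, horizontal_only=True)   # 8 speakers, 2D order 2
```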
  • For a reproduction device comprising at least a first and a second loudspeaker arranged at a chosen distance from a listener, information is obtained, for this listener, on the expected perception of the position in space of sound sources located at a predetermined reference distance from the listener, for the application of a so-called "binaural synthesis" or "transaural" technique, and
  • the compensation of step b) is applied with said reference distance substantially as the second distance.
  • For a reproduction device comprising at least a first and a second loudspeaker arranged at a chosen distance from a listener: - information is obtained, for this listener, on the perception of the position in space of sound sources located at a predetermined reference distance from the listener, and
  • an adaptation filter is applied to the coded and filtered data in steps a) and b), the coefficients of which are a function of the second distance and substantially of the reference distance.
  • the reproduction device comprises a headset with two earpieces for the respective ears of the listener, and
  • steps a) and b) are applied to the respective signals intended to supply each earpiece, taking as first distance, respectively, the distance separating each ear from the position of a source to be rendered in the reproduction space.
  • a matrix system is formed in steps a) and b) comprising at least:
  • upon reproduction, the reproduction device comprises a plurality of loudspeakers arranged substantially at the same distance from the point of auditory perception, and
  • a matrix system is formed comprising said resulting matrix of compensated components and a predetermined decoding matrix specific to the reproduction device, and
  • the present invention also relates to a sound acquisition device, comprising a microphone provided with a network of acoustic transducers arranged substantially on the surface of a sphere.
  • the device further comprises a processing unit arranged for:
  • a filtering which is a function, on the one hand, of a distance corresponding to the radius of the sphere and, on the other hand, of a reference distance.
  • the filtering carried out by the processing unit consists, on the one hand, in equalizing, as a function of the radius of the sphere, the signals coming from the transducers so as to compensate for a directivity weighting of said transducers and, on the other hand, in compensating for a near-field effect as a function of said reference distance.
  • FIG. 1 schematically illustrates a system of acquisition and creation, by simulation of virtual sources, of sound signals, with encoding, transmission, decoding and restitution by a spatialized restitution device
  • FIG. 4 illustrates a representation, by a three-dimensional metric in a spherical coordinate system, of spherical harmonics Y_mn^σ of different orders;
  • - Figure 5 is a diagram of the variations of the modulus of the radial functions j_m(kr), which are spherical Bessel functions, for successive values of the order m, these radial functions intervening in the ambisonic representation of a sound pressure field;
  • - Figure 6 shows the amplification due to the near field effect for different successive orders m, in particular in the low frequencies;
  • FIG. 7 schematically shows a playback device comprising a plurality of speakers
  • FIG. 8 shows schematically the parameters involved in the ambisonic encoding, with a directional encoding as well as a distance encoding according to the invention;
  • FIG. 11A shows a reconstruction of the near field with compensation, within the meaning of the present invention, for a spherical wave in the horizontal plane;
  • FIG. 11B represents the initial wavefront, coming from a source S;
  • FIG. 12 schematically represents a filtering module for adapting ambisonic components, received and pre-compensated at encoding for a reference distance R1 as second distance, to a reproduction device comprising a plurality of loudspeakers arranged at a third distance R2 from a point of auditory perception;
  • - Figure 13A schematically shows the arrangement of a sound source M, during playback, for a listener using a playback device applying binaural synthesis, with a source emitting in the near field;
  • - Figure 13B schematically shows the encoding and decoding steps with near-field effect, in the binaural synthesis context of Figure 13A, combined with ambisonic encoding/decoding;
  • - Figure 14 schematically shows the processing, by ambisonic encoding, equalization and near-field compensation within the meaning of the invention, of the signals from a microphone comprising a plurality of pressure sensors arranged on a sphere, given by way of illustration.
  • FIG. 1 shows by way of illustration a global sound spatialization system.
  • A module 1a simulating a virtual scene defines a sound object as a virtual source emitting a signal, for example monophonic, with a position chosen in three-dimensional space that defines a sound direction. Specifications of the geometry of a virtual room may also be provided, to simulate a reverberation of the sound.
  • A processing module 11 manages one or more of these sources with respect to a listener (definition of a virtual position of the sources relative to this listener). It implements a room-effect processor to simulate reverberation or the like, by applying delays and/or filtering.
  • the signals thus constructed are transmitted to a module 2a for spatial encoding of the elementary contributions of the sources.
  • A natural sound recording can also be carried out, as part of a sound capture, by one or more microphones arranged in a chosen manner with respect to the real sources (module 1b).
  • the signals picked up by the microphones are encoded by a module 2b.
  • the acquired and encoded signals can be transformed according to an intermediate representation format (module 3b), before being mixed by the module 3 with the signals generated by the module 1a and encoded by the module 2a (from virtual sources).
  • the mixed signals are then transmitted, or even memorized on a medium, for later playback (arrow TR). They are then applied to a decoding module 5, for playback on a playback device 6 comprising speakers.
  • the decoding step 5 can be preceded by a step for manipulating the sound field, for example by rotation, by means of a processing module 4 provided upstream of the decoding module 5.
  • the rendering device can be in the form of a multiplicity of loudspeakers arranged, for example, on the surface of a sphere in a three-dimensional ("periphonic") configuration to ensure, during rendering, in particular a sensation of sound direction in three-dimensional space.
  • A listener is generally positioned at the center of the sphere formed by the array of loudspeakers, this center corresponding to the point of auditory perception mentioned above.
  • the loudspeakers of the reproduction device can be arranged in a plane (two-dimensional panoramic configuration), the loudspeakers being arranged in particular on a circle and the listener usually placing themselves in the center of this circle.
  • the restitution device can be in the form of a "surround" type device (5.1).
  • the reproduction device can be in the form of a headset with two earpieces for a binaural synthesis of the restored sound, which allows the listener to perceive a direction of the sources in three-dimensional space, as will be seen in detail below.
  • A reproduction device with two loudspeakers, offering a perception of direction in three-dimensional space, can also take the form of a transaural reproduction device, with two loudspeakers arranged at a chosen distance from a listener.
  • We now refer to FIG. 2 to describe the spatial encoding and decoding of elementary sound sources for a three-dimensional sound reproduction.
  • The signal of each source 1 to N is transmitted to a spatial encoding module 2, together with its (real or virtual) position. The position can be defined both in terms of incidence (direction of the source as seen by the listener) and in terms of distance between this source and the listener.
  • The plurality of signals thus encoded makes it possible to obtain a multi-channel representation of a global sound field.
  • the encoded signals are transmitted (arrow TR) to a sound reproduction device 6, for sound reproduction in three-dimensional space, as indicated above with reference to FIG. 1.
  • The set of weighting factors B_mn^σ, which are implicitly a function of frequency, thus describes the pressure field in the zone considered. For this reason, these factors are called "spherical harmonic components" and represent a frequency-domain expression of the sound (or of the pressure field) in the basis of the spherical harmonics Y_mn^σ.
  • The angular functions are called "spherical harmonics" and are defined by:
  • The spherical harmonics form an orthonormal basis in which the scalar products between harmonic components and, in general, between two functions F and G are respectively defined by:
  • Spherical harmonics are real bounded functions, as shown in Figure 4, as a function of the order m and the indices n and σ.
  • the dark and light parts correspond respectively to the positive and negative values of the spherical harmonic functions.
  • the radial functions j_m(kr) are spherical Bessel functions, the modulus of which is illustrated for some values of the order m in Figure 5.
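The expansion tying the components B_mn^σ, the radial functions j_m(kr) and the spherical harmonics together is not reproduced in this extract; in the usual ambisonic (Fourier-Bessel) formalism it reads as follows, a reconstruction whose conventions (for instance the j^m factor) may differ slightly from the patent's:

$$
p(\vec r,\omega)\;=\;\sum_{m=0}^{\infty} j^{m}\, j_m(kr)\sum_{0\le n\le m,\;\sigma=\pm 1} B_{mn}^{\sigma}(\omega)\,Y_{mn}^{\sigma}(\theta,\delta),\qquad k=\frac{\omega}{c}.
$$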
  • The ambisonic representation of the sound is, however, less satisfactory as one moves away from the origin O. This effect becomes critical in particular for high sound frequencies (short wavelengths). It is therefore advantageous to obtain a number of ambisonic components that is as large as possible, which makes it possible to create a region of space around the point of perception in which the reproduction of the sound is faithful and whose dimensions increase with the total number of components.
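A rule of thumb from the ambisonics literature (not stated in the patent) is that an order-M reconstruction remains accurate roughly while kr ≲ M; the sketch below estimates the corresponding radius under that assumption.

```python
import numpy as np

def faithful_radius(order, frequency_hz, c=340.0):
    """Approximate radius (in meters) of the accurately reconstructed region,
    using the heuristic k * r <= order (an assumption, not a patent formula)."""
    k = 2 * np.pi * frequency_hz / c
    return order / k

r = faithful_radius(3, 1000.0)   # roughly 0.16 m at 1 kHz for an order-3 system
```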
  • an ambisonic system takes into account a subset of spherical harmonic components, as described above.
  • A system is said to be of order M when it takes into account ambisonic components of index m ≤ M.
  • When the reproduction device comprises loudspeakers arranged on the surface of a sphere ("periphony"), it is in principle possible to use as many harmonics as there are loudspeakers.
  • S designates the pressure signal carried by a plane wave and picked up at the point O corresponding to the center of the sphere in FIG. 3 (origin of the basis in spherical coordinates).
  • The incidence of the wave is described by the azimuth θ and the elevation δ.
  • the expression of the components of the field associated with this plane wave is given by the relation:
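The relation itself is cut off in this extract; consistent with the purely directional encoding described in the next paragraph, it presumably has the form below (a reconstruction, up to normalization conventions):

$$
B_{mn}^{\sigma}(\omega)\;=\;S(\omega)\;Y_{mn}^{\sigma}(\theta,\delta).
$$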
  • the encoding produces signals which differ from the original signal only by a real, finite gain, which corresponds to a purely directional encoding (relation [A3]);
  • the additional filter F_m^{(ρ/c)}(ω) encodes the distance information by introducing, in the expression of the ambisonic components, complex amplitude ratios which depend on frequency, as expressed in relation [A5].
  • this additional filter is of the "integrator" type, with an increasing and diverging amplification effect (unbounded) as the sound frequencies decrease towards zero.
  • a reproduction device comprises a plurality of loudspeakers HPi, arranged at the same distance R, in the example described, from an auditory perception point P.
  • the point M corresponds to the position of a (real or virtual) source located at the first distance ρ, mentioned above, from the reference point O.
  • a near field pre-compensation is introduced at the very stage of encoding, this compensation involving filters of the form
  • The coefficients of this compensation filter increase with the sound frequency and, in particular, tend towards zero at low frequencies.
  • This pre-compensation, carried out at the encoding stage, ensures that the transmitted data do not diverge at low frequencies.
  • the distance ρ then represents a distance between a near virtual source M and the point O representing the origin of the spherical basis of FIG. 3.
  • a first near field simulation filter is thus applied to simulate the presence of a virtual source at the distance p described above.
  • On the one hand, the coefficients of this filter diverge at low frequencies (Figure 6) and, on the other hand, the distance ρ above will not necessarily represent the distance between the loudspeakers of a reproduction device and a perception point P (Figure 7).
  • A pre-compensation is therefore applied at the encoding stage, bringing into play such compensation filters.
  • The near-field pre-compensation of the loudspeakers (placed at the distance R) can, at the encoding stage, be combined with a simulated near-field effect of a virtual source placed at a distance ρ.
  • In the end, a total filter comes into play, resulting on the one hand from the near-field simulation and on the other hand from the near-field compensation, the coefficients of this filter being expressible analytically by the relation:
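The analytic expression is cut off in this extract. Based on the limits stated a few paragraphs below (value 1 at high frequencies, (R/ρ)^m at low frequencies) and on the Bessel-polynomial structure described later, it presumably has the following form, given here as a reconstruction rather than a quotation from the patent:

$$
H_{m}^{NFC(\rho/c,\,R/c)}(\omega)\;=\;\frac{F_{m}^{(\rho/c)}(\omega)}{F_{m}^{(R/c)}(\omega)},
\qquad
F_{m}^{(\tau)}(\omega)\;=\;\sum_{k=0}^{m}\frac{(m+k)!}{(m-k)!\,k!}\left(\frac{1}{2\,j\omega\tau}\right)^{k}.
$$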
  • the total filter given by the relation [A11] is stable and constitutes the "distance encoding" part of the spatial ambisonic encoding according to the invention, as represented in FIG. 8.
  • the coefficients of these filters correspond to monotonic transfer functions of frequency, which tend towards the value 1 at high frequencies and towards the value (R/ρ)^m at low frequencies.
  • In practice, the distance R between an auditory perception point and the loudspeakers HPi is of the order of one or a few meters.
  • Total filters H_m^{NFC(ρ/c, R/c)}(ω) (near-field compensation and, if necessary, simulation of a near field) are thus provided and applied to the ambisonic components, according to their order m, to carry out the distance encoding, as represented in FIG. 8.
  • In FIG. 11B, the propagation of the initial sound wave is represented, from a near-field source situated at a distance ρ from a point of the acquisition space which corresponds, in the reproduction space, to the point P of auditory perception of Figure 7.
  • The listeners are symbolized by schematic heads.
  • An advantageous method for defining a digital filter from the analytical expression of this filter in the continuous-time analog domain consists of a bilinear transform, applied here with the time constant ρ/c (c being the speed of sound in the medium, typically 340 m/s in air).
  • The bilinear transform consists in expressing, for a sampling frequency f_s, the relation [A11] in the form:
  • X_{m,q} are the successive roots of the Bessel polynomial of order m;
  • they are given in Table 1 below, for different orders m, in the form of their real part and their modulus (separated by a comma), together with their (real) value when m is odd.
  • Table 1: values Re(X_{m,q}), |X_{m,q}| (and the real root X_{m,m} when m is odd) of the Bessel polynomials, calculated using the MATLAB® software.
  • Digital filters are thus produced in the form of infinite impulse response filters, easily parameterized as shown above. It should be noted that an implementation in the form of a finite impulse response can also be envisaged: it consists in calculating the complex spectrum of the transfer function from the analytical formula, then in deducing a finite impulse response by inverse Fourier transform. A convolution operation is then applied for the filtering.
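As a sketch of such an implementation, assuming the F_m structure reconstructed above and using SciPy's generic bilinear transform and second-order-section conversion instead of the closed-form cell coefficients that the patent derives from the tabulated Bessel roots:

```python
import numpy as np
from math import factorial
from scipy import signal

def analog_poly(m, tau):
    """Coefficients (highest power of s first) of s^m * F_m^{(tau)}(s), i.e.
    sum_k (m+k)! / ((m-k)! k! 2^k) * tau**(-k) * s^(m-k)   (assumed form)."""
    return [factorial(m + k) / (factorial(m - k) * factorial(k) * 2**k) * tau**(-k)
            for k in range(m + 1)]

def nfc_sos(m, rho, R, fs, c=340.0):
    """Digital distance-coding filter of order m, returned as second-order sections.

    rho : distance of the (virtual) source whose near field is simulated;
          pass np.inf to obtain the purely compensating filter 1 / F_m^{(R/c)}.
    R   : reference distance for which the near field is compensated.
    """
    den = analog_poly(m, R / c)
    if np.isinf(rho):
        num = [1.0] + [0.0] * m          # s^m * 1: no near-field simulation
    else:
        num = analog_poly(m, rho / c)
    bz, az = signal.bilinear(num, den, fs=fs)
    # Second-order sections; for odd m one section is effectively first order.
    return signal.tf2sos(bz, az)

# Example: order-2 filter simulating a source at 1 m, compensated for a 1.5 m rig.
sos = nfc_sos(2, rho=1.0, R=1.5, fs=48000.0)
component = signal.sosfilt(sos, np.random.randn(1024))   # one order-2 ambisonic component
```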
  • A modified ambisonic representation is thus defined (FIG. 8), adopting as the transmissible representation the signals expressed in the frequency domain in the form:
  • R is a reference distance with which a compensated near field effect is associated and c is the speed of sound (typically 340 m / s in air).
  • This modified ambisonic representation has the same scalability properties (schematically represented by the transmitted data circled near the arrow TR in figure 1) and obeys the same field rotation transformations (module 4 in figure 1) as the usual ambisonic representation.
  • the decoding operation is adaptable to any reproduction device of radius R2 different from the reference distance R above.
  • Filters of the type H_m^{NFC(R/c, R2/c)}(ω), as described above, are applied, but with the distance parameters R and R2 instead of ρ and R.
  • The parameter R/c is therefore to be stored (and/or transmitted) between the encoding and the decoding.
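Under the assumptions of the earlier sketches, this adaptation is simply the same filter family evaluated with the pair (R, R2) in place of (ρ, R); a hypothetical usage of the nfc_sos sketch above could look like this:

```python
# Hypothetical reuse of the nfc_sos sketch above: ambisonic material that was
# pre-compensated for a reference distance R is re-filtered, at decoding, for a
# reproduction device of radius R2 (one filter per order m).
R, R2, fs, max_order = 1.0, 2.5, 48000.0, 3
adaptation_filters = {m: nfc_sos(m, rho=R, R=R2, fs=fs) for m in range(1, max_order + 1)}
```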
  • the filter module shown there is provided for example in a processing unit of a rendering device.
  • the ambisonic components received have been pre-compensated at encoding for a reference distance R1 as the second distance.
  • The reproduction device comprises a plurality of loudspeakers arranged at a third distance R2 from an auditory perception point P, this third distance R2 being different from the aforementioned second distance R1.
  • the invention also makes it possible to mix several ambisonic representations of sound fields (real and/or virtual sources) whose reference distances R are different (where appropriate, with infinite reference distances corresponding to distant sources).
  • Advantageously, a pre-compensation of all these sources is filtered at the smallest reference distance before the ambisonic signals are mixed, which makes it possible to obtain a correct definition of the sound relief at reproduction.
  • the distance encoding with near field pre-compensation is advantageously applied in combination with the focusing processing.
  • the wave emitted by each loudspeaker is defined by a prior processing of "re-encoding" of the ambisonic field at the center of the restitution device, as follows.
  • decoding consists in comparing the original ambisonic signals received by the restitution device, in the form:
  • the number of loudspeakers is greater than or equal to the number of ambisonic components to be decoded and the decoding matrix D is expressed, as a function of the re-encoding matrix C, in the form:
  • the matrixing operation is preceded by a filtering operation which compensates for the near field on each component B_mn^σ and which can be implemented in digital form, as described above with reference to relation [A14].
  • the "re-encoding" matrix C is specific to the restitution device. Its coefficients can be determined initially by configuration and sound characterization of the reproduction device reacting to a predetermined excitation.
  • the decoding matrix D is also specific to the restitution device. Its coefficients can be determined by the relation [B8]. Using the previous notation where B is the matrix of precompensated ambisonic components, these can be transmitted to the restitution device in matrix form B with:
  • the restitution device then decodes the data received in matrix form B (column vector of the transmitted components) by applying the decoding matrix D to the pre-compensated ambisonic components, to form the signals Si intended to supply the loudspeakers HPi, with:
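A minimal sketch of this re-encoding/decoding step, assuming ideal plane-wave re-encoding at order 1 (C holds the spherical harmonics sampled in the loudspeaker directions) and reading relation [B8] as the standard pseudo-inverse; the layout and conventions are illustrative assumptions, not the patent's exact matrices:

```python
import numpy as np

def sh_order1(azimuth, elevation):
    """Real spherical harmonics up to order 1 for one direction (one convention)."""
    return np.array([1.0,
                     np.cos(azimuth) * np.cos(elevation),
                     np.sin(azimuth) * np.cos(elevation),
                     np.sin(elevation)])

# Re-encoding matrix C: one row per ambisonic component, one column per
# loudspeaker (here four loudspeakers in a tetrahedron-like arrangement).
speaker_az = np.radians([45.0, 135.0, 225.0, 315.0])
speaker_el = np.radians([35.26, -35.26, 35.26, -35.26])
C = np.stack([sh_order1(az, el) for az, el in zip(speaker_az, speaker_el)], axis=1)

# Decoding matrix D read as the pseudo-inverse of C, i.e. C^T (C C^T)^-1 when C
# has full row rank; this is one common reading of relation [B8].
D = np.linalg.pinv(C)

B_tilde = np.zeros((4, 1024))          # pre-compensated ambisonic components
speaker_signals = D @ B_tilde          # one row per loudspeaker HPi
```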
  • In FIG. 13A, a listener wearing the two-earpiece headset of a binaural synthesis device is represented.
  • the two ears of the listener are located at the respective points O_L (left ear) and O_R (right ear) in space.
  • The center of the listener's head is located at the point O and the radius of the listener's head has the value a.
  • A sound source is to be perceived at a point M in space, located at a distance r from the center of the listener's head (and respectively at distances r_R from the right ear and r_L from the left ear).
  • The direction of the source placed at the point M is defined by the vectors r, r_R and r_L.
  • binaural synthesis is defined as follows.
  • Each listener has his own ear shape.
  • the perception of a sound in space by this listener is acquired by learning, from birth, according to the shape of the ears (in particular the shape of the pinnae and the dimensions of the head) specific to this listener.
  • The perception of a sound in space is manifested, inter alia, by the fact that the sound reaches one ear before the other ear, which results in a delay τ between the signals to be emitted by each earpiece of the reproduction device.
  • restitution applying binaural synthesis.
  • the restitution device is initially configured, for the same listener, by scanning a sound source around its head, at the same distance R from the center of its head. It will thus be understood that this distance R can be considered as a distance between a "restitution point" as stated above and a hearing perception point (here the center O of the listener's head).
  • the index L is associated with the signal to be restored by the earpiece attached to the left ear and the index R is associated with the signal to be restored by the earpiece attached to the right ear.
  • A delay is applied to the initial signal S on each channel intended to produce the signal for a distinct ear.
  • These delays τ_L and τ_R are a function of a maximum delay τ_MAX, which here corresponds to the ratio a/c, where a, as indicated previously, is the radius of the listener's head and c the speed of sound.
  • These delays are defined as a function of the difference between the distance from the point O (center of the head) to the point M (position of the source whose sound is to be reproduced, in FIG. 13A) and the distance from each ear to this point M.
  • Respective gains, one for each channel, are also applied, as a function of the ratio of the distances from the point O to the point M and from each ear to the point M.
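A small sketch of this per-ear geometry; the placement of the ears on the ±y axis and the exact delay/gain formulas are illustrative assumptions consistent with the description above, not formulas quoted from the patent:

```python
import numpy as np

def per_ear_delay_and_gain(source_xyz, head_radius=0.0875, c=340.0):
    """Per-ear delay (relative to the head centre O) and distance-ratio gain for
    a source at point M. The ear positions on the +/- y axis and the exact
    formulas are illustrative assumptions consistent with the text above."""
    M = np.asarray(source_xyz, dtype=float)
    r = np.linalg.norm(M)                              # distance O -> M
    ears = {"L": np.array([0.0, +head_radius, 0.0]),
            "R": np.array([0.0, -head_radius, 0.0])}
    out = {}
    for side, ear in ears.items():
        r_ear = np.linalg.norm(M - ear)                # r_L or r_R
        out[side] = {"delay_s": (r_ear - r) / c,       # negative for the nearer ear
                     "gain": r / r_ear}
    return out

print(per_ear_delay_and_gain([1.0, 0.3, 0.0]))         # source slightly to the left
```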
  • Respective modules 2_L and 2_R applied to each channel encode the signals of each channel in an ambisonic representation, with NFC near-field pre-compensation (for "Near Field Compensation") within the meaning of the present invention.
  • the signals originating from the source M are transmitted to the restitution device comprising ambisonic decoding modules, for each channel, 5 L and 5 R .
  • an ambisonic encoding / decoding is applied, with near field compensation, for each channel (left earpiece, right earpiece) in the reproduction with binaural synthesis (here of "B-FORMAT" type), in split form.
  • the near field compensation is carried out, for each channel, with a distance r L and r R as the first distance between each ear and the position M of the sound source to be restored.
  • A microphone 141 comprises a plurality of transducer capsules capable of picking up acoustic pressures and delivering electrical signals S_1, ..., S_N.
  • The capsules CAPi are arranged on a sphere of predetermined radius r (here a rigid sphere, such as a ping-pong ball, for example). The capsules are evenly spaced over the sphere. In practice, the number N of capsules is chosen as a function of the order M desired for the ambisonic representation.
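As an illustration of the matrixing step performed by module 143 described below (the per-order equalization and near-field compensation of module 144, relation [C7], is deliberately left out), here is a sketch assuming order-1 harmonics and a least-squares projection of the capsule signals; the capsule layout is an assumption:

```python
import numpy as np

def sh_order1(azimuth, elevation):
    """Real spherical harmonics up to order 1 (same convention as above)."""
    return np.array([1.0,
                     np.cos(azimuth) * np.cos(elevation),
                     np.sin(azimuth) * np.cos(elevation),
                     np.sin(elevation)])

def project_capsule_signals(capsule_signals, capsule_az, capsule_el):
    """Least-squares projection of N capsule signals onto the order-1 harmonics.

    capsule_signals: array of shape (N, n_samples). Returns raw (unequalized)
    ambisonic components of shape (4, n_samples); the frequency-dependent
    equalization / near-field compensation still has to be applied per order,
    as described in the text (module 144, relation [C7])."""
    Y = np.stack([sh_order1(az, el) for az, el in zip(capsule_az, capsule_el)])
    return np.linalg.pinv(Y) @ np.asarray(capsule_signals)

# Example with four capsules in a tetrahedron-like layout on the sphere.
az = np.radians([45.0, 135.0, 225.0, 315.0])
el = np.radians([35.26, -35.26, 35.26, -35.26])
components = project_capsule_signals(np.random.randn(4, 1024), az, el)
```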
  • The near-field pre-compensation can thus be applied not only to the simulation of virtual sources, as indicated above, but also to acquisition and, more generally, by combining the near-field pre-compensation with all types of processing involving the ambisonic representation.
  • EQ_m is an equalization filter which compensates for a weighting term of order m linked to the directivity of the capsules, and which also includes the diffraction by the rigid sphere.
  • this equalization filter is not stable and an infinite gain is obtained at very low frequencies.
  • the spherical harmonic components themselves are not of finite amplitude when the sound field is not limited to plane-wave propagation, i.e. to waves coming from distant sources, as was seen earlier.
  • The signals S_1 to S_N are recovered from the microphone 141. If necessary, a pre-equalization of these signals is applied by a processing module 142.
  • The module 143 makes it possible to express these signals in the ambisonic context, in matrix form.
  • The module 144 applies the filter of relation [C7] to the ambisonic components, expressed as a function of the radius r of the sphere of the microphone 141.
  • the near field compensation is performed for a reference distance R as the second distance.
  • the signals encoded and thus filtered by the module 144 can be transmitted, if necessary, with the parameter representative of the reference distance R / c.
  • the near field compensation within the meaning of the present invention can be applied to all types of processing involving an ambisonic representation.
  • This near field compensation makes it possible to apply the ambisonic representation to a multiplicity of sound contexts where the direction of a source and advantageously its distance must be taken into account.
  • the possibility of representing sound phenomena of all types (near or far fields) in the ambisonic context is ensured by this pre-compensation, due to the limitation to finite real values of the ambisonic components.
  • the present invention is not limited to the embodiment described above by way of example; it extends to other variants.
  • the near field pre-compensation can be integrated, in the encoding, as much for a near source as for a far source.
  • In that case, the distance ρ expressed above is considered to be infinite, without substantially modifying the expression of the filters H_m given above.
  • processing using room effect processors which generally provide decorrelated signals usable for modeling the late diffuse field (late reverberation) can be combined with near-field pre-compensation.
  • the encoding principle within the meaning of the present invention can be generalized to radiation models other than monopolar sources (real or virtual) and / or loudspeakers.
  • Any form of radiation (in particular a source spread out in space) can be expressed by integration of a continuous distribution of elementary point sources.
  • the present invention applies to all types of sound spatialization systems, in particular for "virtual reality” type applications (navigation in virtual scenes in three-dimensional space, games with three-dimensional sound spatialization, "chat” type conversations with sound on the Internet), interface sonifications, audio editing software for recording, mixing and playing music, but also for acquisition, from the use of three-dimensional microphones, for taking musical or cinematographic sound, or for transmitting sound environment on the Internet, for example for sound-activated "WebCam”.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Stereophonic System (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
EP03782553A 2002-11-19 2003-11-13 Verfahren zur verarbeitung von audiodateien und erfassungsvorrichtung zur anwendung davon Expired - Lifetime EP1563485B1 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
FR0214444 2002-11-19
FR0214444A FR2847376B1 (fr) 2002-11-19 2002-11-19 Procede de traitement de donnees sonores et dispositif d'acquisition sonore mettant en oeuvre ce procede
PCT/FR2003/003367 WO2004049299A1 (fr) 2002-11-19 2003-11-13 Procede de traitement de donnees sonores et dispositif d'acquisition sonore mettant en oeuvre ce procede

Publications (2)

Publication Number Publication Date
EP1563485A1 true EP1563485A1 (de) 2005-08-17
EP1563485B1 EP1563485B1 (de) 2006-03-29

Family

ID=32187712

Family Applications (1)

Application Number Title Priority Date Filing Date
EP03782553A Expired - Lifetime EP1563485B1 (de) 2002-11-19 2003-11-13 Verfahren zur verarbeitung von audiodateien und erfassungsvorrichtung zur anwendung davon

Country Status (13)

Country Link
US (1) US7706543B2 (de)
EP (1) EP1563485B1 (de)
JP (1) JP4343845B2 (de)
KR (1) KR100964353B1 (de)
CN (1) CN1735922B (de)
AT (1) ATE322065T1 (de)
AU (1) AU2003290190A1 (de)
BR (1) BRPI0316718B1 (de)
DE (1) DE60304358T2 (de)
ES (1) ES2261994T3 (de)
FR (1) FR2847376B1 (de)
WO (1) WO2004049299A1 (de)
ZA (1) ZA200503969B (de)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9460726B2 (en) 2010-03-26 2016-10-04 Dolby Laboratories Licensing Corporation Method and device for decoding an audio soundfield representation for audio playback
US10582329B2 (en) 2016-01-08 2020-03-03 Sony Corporation Audio processing device and method

Families Citing this family (76)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10328335B4 (de) * 2003-06-24 2005-07-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Wellenfeldsyntesevorrichtung und Verfahren zum Treiben eines Arrays von Lautsprechern
US20050271216A1 (en) * 2004-06-04 2005-12-08 Khosrow Lashkari Method and apparatus for loudspeaker equalization
EP1938661B1 (de) * 2005-09-13 2014-04-02 Dts Llc System und verfahren zur audio-verarbeitung
WO2007104877A1 (fr) * 2006-03-13 2007-09-20 France Telecom Synthese et spatialisation sonores conjointes
FR2899424A1 (fr) * 2006-03-28 2007-10-05 France Telecom Procede de synthese binaurale prenant en compte un effet de salle
US8180067B2 (en) * 2006-04-28 2012-05-15 Harman International Industries, Incorporated System for selectively extracting components of an audio input signal
US7876903B2 (en) * 2006-07-07 2011-01-25 Harris Corporation Method and apparatus for creating a multi-dimensional communication space for use in a binaural audio system
US8036767B2 (en) 2006-09-20 2011-10-11 Harman International Industries, Incorporated System for extracting and changing the reverberant content of an audio input signal
ES2359752T3 (es) * 2006-09-25 2011-05-26 Dolby Laboratories Licensing Corporation Resolución espacial mejorada del campo sonoro para sistemas de reproducción de audio multicanal mediante derivación de señales con términos angulares de orden superior.
DE102006053919A1 (de) * 2006-10-11 2008-04-17 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Erzeugen einer Anzahl von Lautsprechersignalen für ein Lautsprecher-Array, das einen Wiedergaberaum definiert
JP2008118559A (ja) * 2006-11-07 2008-05-22 Advanced Telecommunication Research Institute International 3次元音場再生装置
JP4873316B2 (ja) * 2007-03-09 2012-02-08 株式会社国際電気通信基礎技術研究所 音響空間共有装置
EP2094032A1 (de) * 2008-02-19 2009-08-26 Deutsche Thomson OHG Audiosignal, Verfahren und Vorrichtung zu dessen Kodierung oder Übertragung sowie Verfahren und Vorrichtung zu dessen Verarbeitung
US20110002469A1 (en) * 2008-03-03 2011-01-06 Nokia Corporation Apparatus for Capturing and Rendering a Plurality of Audio Channels
EP2154910A1 (de) * 2008-08-13 2010-02-17 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung zum Mischen von Raumtonströmen
ES2425814T3 (es) 2008-08-13 2013-10-17 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Aparato para determinar una señal de audio espacial convertida
GB0815362D0 (en) 2008-08-22 2008-10-01 Queen Mary & Westfield College Music collection navigation
US8819554B2 (en) * 2008-12-23 2014-08-26 At&T Intellectual Property I, L.P. System and method for playing media
EP2205007B1 (de) 2008-12-30 2019-01-09 Dolby International AB Verfahren und Vorrichtung zur Kodierung dreidimensionaler Hörbereiche und zur optimalen Rekonstruktion
GB2478834B (en) 2009-02-04 2012-03-07 Richard Furse Sound system
CN102318373B (zh) * 2009-03-26 2014-09-10 松下电器产业株式会社 解码装置、编解码装置及解码方法
CN102687536B (zh) * 2009-10-05 2017-03-08 哈曼国际工业有限公司 用于音频信号的空间提取的系统
JP5672741B2 (ja) * 2010-03-31 2015-02-18 ソニー株式会社 信号処理装置および方法、並びにプログラム
US20110317522A1 (en) * 2010-06-28 2011-12-29 Microsoft Corporation Sound source localization based on reflections and room estimation
US9456289B2 (en) 2010-11-19 2016-09-27 Nokia Technologies Oy Converting multi-microphone captured signals to shifted signals useful for binaural signal processing and use thereof
US9313599B2 (en) 2010-11-19 2016-04-12 Nokia Technologies Oy Apparatus and method for multi-channel signal playback
US9055371B2 (en) 2010-11-19 2015-06-09 Nokia Technologies Oy Controllable playback system offering hierarchical playback options
EP2541547A1 (de) * 2011-06-30 2013-01-02 Thomson Licensing Verfahren und Vorrichtung zum Ändern der relativen Standorte von Schallobjekten innerhalb einer Higher-Order-Ambisonics-Wiedergabe
WO2013068402A1 (en) * 2011-11-10 2013-05-16 Sonicemotion Ag Method for practical implementations of sound field reproduction based on surface integrals in three dimensions
KR101282673B1 (ko) 2011-12-09 2013-07-05 현대자동차주식회사 음원 위치 추정 방법
US8996296B2 (en) * 2011-12-15 2015-03-31 Qualcomm Incorporated Navigational soundscaping
KR102068186B1 (ko) 2012-02-29 2020-02-11 어플라이드 머티어리얼스, 인코포레이티드 로드 록 구성의 저감 및 스트립 프로세스 챔버
EP2645748A1 (de) 2012-03-28 2013-10-02 Thomson Licensing Verfahren und Vorrichtung zum Decodieren von Stereolautsprechersignalen aus einem Ambisonics-Audiosignal höherer Ordnung
WO2013150341A1 (en) 2012-04-05 2013-10-10 Nokia Corporation Flexible spatial audio capture apparatus
US9288603B2 (en) 2012-07-15 2016-03-15 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for backward-compatible audio coding
US9473870B2 (en) 2012-07-16 2016-10-18 Qualcomm Incorporated Loudspeaker position compensation with 3D-audio hierarchical coding
EP2688066A1 (de) * 2012-07-16 2014-01-22 Thomson Licensing Verfahren und Vorrichtung zur Codierung von Mehrkanal-HOA-Audiosignalen zur Rauschreduzierung sowie Verfahren und Vorrichtung zur Decodierung von Mehrkanal-HOA-Audiosignalen zur Rauschreduzierung
US9479886B2 (en) 2012-07-20 2016-10-25 Qualcomm Incorporated Scalable downmix design with feedback for object-based surround codec
US9761229B2 (en) * 2012-07-20 2017-09-12 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for audio object clustering
CN107454511B (zh) 2012-08-31 2024-04-05 杜比实验室特许公司 用于使声音从观看屏幕或显示表面反射的扬声器
US9838824B2 (en) 2012-12-27 2017-12-05 Avaya Inc. Social media processing with three-dimensional audio
US9892743B2 (en) 2012-12-27 2018-02-13 Avaya Inc. Security surveillance via three-dimensional audio space presentation
US9301069B2 (en) * 2012-12-27 2016-03-29 Avaya Inc. Immersive 3D sound space for searching audio
US10203839B2 (en) * 2012-12-27 2019-02-12 Avaya Inc. Three-dimensional generalized space
US9736609B2 (en) 2013-02-07 2017-08-15 Qualcomm Incorporated Determining renderers for spherical harmonic coefficients
US9959875B2 (en) * 2013-03-01 2018-05-01 Qualcomm Incorporated Specifying spherical harmonic and/or higher order ambisonics coefficients in bitstreams
US10635383B2 (en) 2013-04-04 2020-04-28 Nokia Technologies Oy Visual audio processing apparatus
EP2997573A4 (de) 2013-05-17 2017-01-18 Nokia Technologies OY Räumliche objektorientierte audiovorrichtung
US9420393B2 (en) * 2013-05-29 2016-08-16 Qualcomm Incorporated Binaural rendering of spherical harmonic coefficients
US20140358565A1 (en) 2013-05-29 2014-12-04 Qualcomm Incorporated Compression of decomposed representations of a sound field
EP2824661A1 (de) * 2013-07-11 2015-01-14 Thomson Licensing Verfahren und Vorrichtung zur Erzeugung aus einer Koeffizientendomänenrepräsentation von HOA-Signalen eine gemischte Raum-/Koeffizientendomänenrepräsentation der besagten HOA-Signale
DE102013013378A1 (de) * 2013-08-10 2015-02-12 Advanced Acoustic Sf Gmbh Aufteilung virtueller Schallquellen
US9807538B2 (en) 2013-10-07 2017-10-31 Dolby Laboratories Licensing Corporation Spatial audio processing system and method
EP2866475A1 (de) * 2013-10-23 2015-04-29 Thomson Licensing Verfahren und Vorrichtung zur Decodierung einer Audioschallfelddarstellung für Audiowiedergabe mittels 2D-Einstellungen
US9922656B2 (en) 2014-01-30 2018-03-20 Qualcomm Incorporated Transitioning of ambient higher-order ambisonic coefficients
EP2930958A1 (de) * 2014-04-07 2015-10-14 Harman Becker Automotive Systems GmbH Schallwellenfelderzeugung
US10770087B2 (en) 2014-05-16 2020-09-08 Qualcomm Incorporated Selecting codebooks for coding vectors decomposed from higher-order ambisonic audio signals
JP6388551B2 (ja) * 2015-02-27 2018-09-12 アルパイン株式会社 複数領域音場再現システムおよび方法
DE102015008000A1 (de) * 2015-06-24 2016-12-29 Saalakustik.De Gmbh Verfahren zur Schallwiedergabe in Reflexionsumgebungen, insbesondere in Hörräumen
EP3402221B1 (de) * 2016-01-08 2020-04-08 Sony Corporation Audioverarbeitungsvorrichtung, -verfahren und -programm
EP3402223B1 (de) 2016-01-08 2020-10-07 Sony Corporation Audioverarbeitungsvorrichtung, -verfahren und -programm
EP4376444A2 (de) 2016-08-01 2024-05-29 Magic Leap, Inc. System der gemischten realität mit verräumlichtem audio
US11032663B2 (en) * 2016-09-29 2021-06-08 The Trustees Of Princeton University System and method for virtual navigation of sound fields through interpolation of signals from an array of microphone assemblies
US20180124540A1 (en) * 2016-10-31 2018-05-03 Google Llc Projection-based audio coding
FR3060830A1 (fr) * 2016-12-21 2018-06-22 Orange Traitement en sous-bandes d'un contenu ambisonique reel pour un decodage perfectionne
US10182303B1 (en) * 2017-07-12 2019-01-15 Google Llc Ambisonics sound field navigation using directional decomposition and path distance estimation
US10764684B1 (en) * 2017-09-29 2020-09-01 Katherine A. Franco Binaural audio using an arbitrarily shaped microphone array
US10721559B2 (en) 2018-02-09 2020-07-21 Dolby Laboratories Licensing Corporation Methods, apparatus and systems for audio sound field capture
US11875089B2 (en) * 2018-03-02 2024-01-16 Sonitor Technologies As Acoustic positioning transmitter and receiver system and method
US10771913B2 (en) 2018-05-11 2020-09-08 Dts, Inc. Determining sound locations in multi-channel audio
CN110740404B (zh) * 2019-09-27 2020-12-25 广州励丰文化科技股份有限公司 一种音频相关性的处理方法及音频处理装置
CN110740416B (zh) * 2019-09-27 2021-04-06 广州励丰文化科技股份有限公司 一种音频信号处理方法及装置
EP4085660A4 (de) 2019-12-30 2024-05-22 Comhear Inc. Verfahren zum bereitstellen eines räumlichen schallfeldes
CN111537058B (zh) * 2020-04-16 2022-04-29 哈尔滨工程大学 一种基于Helmholtz方程最小二乘法的声场分离方法
US11743670B2 (en) 2020-12-18 2023-08-29 Qualcomm Incorporated Correlation-based rendering with multiple distributed streams accounting for an occlusion for six degree of freedom applications
CN113791385A (zh) * 2021-09-15 2021-12-14 张维翔 一种三维定位方法及系统

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS53114201U (de) * 1977-02-18 1978-09-11
US4731848A (en) * 1984-10-22 1988-03-15 Northwestern University Spatial reverberator
JP2569872B2 (ja) * 1990-03-02 1997-01-08 ヤマハ株式会社 音場制御装置
JP3578783B2 (ja) * 1993-09-24 2004-10-20 ヤマハ株式会社 電子楽器の音像定位装置
US5745584A (en) * 1993-12-14 1998-04-28 Taylor Group Of Companies, Inc. Sound bubble structures for sound reproducing arrays
GB9726338D0 (en) * 1997-12-13 1998-02-11 Central Research Lab Ltd A method of processing an audio signal
US7231054B1 (en) * 1999-09-24 2007-06-12 Creative Technology Ltd Method and apparatus for three-dimensional audio display
US7340062B2 (en) * 2000-03-14 2008-03-04 Revit Lawrence J Sound reproduction method and apparatus for assessing real-world performance of hearing and hearing aids
CA2406926A1 (en) * 2000-04-19 2001-11-01 Sonic Solutions Multi-channel surround sound mastering and reproduction techniques that preserve spatial harmonics in three dimensions

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2004049299A1 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9460726B2 (en) 2010-03-26 2016-10-04 Dolby Laboratories Licensing Corporation Method and device for decoding an audio soundfield representation for audio playback
US10582329B2 (en) 2016-01-08 2020-03-03 Sony Corporation Audio processing device and method

Also Published As

Publication number Publication date
BR0316718A (pt) 2005-10-18
US7706543B2 (en) 2010-04-27
US20060045275A1 (en) 2006-03-02
FR2847376B1 (fr) 2005-02-04
JP2006506918A (ja) 2006-02-23
BRPI0316718B1 (pt) 2021-11-23
FR2847376A1 (fr) 2004-05-21
ATE322065T1 (de) 2006-04-15
EP1563485B1 (de) 2006-03-29
AU2003290190A1 (en) 2004-06-18
DE60304358D1 (de) 2006-05-18
KR100964353B1 (ko) 2010-06-17
CN1735922B (zh) 2010-05-12
ES2261994T3 (es) 2006-11-16
KR20050083928A (ko) 2005-08-26
CN1735922A (zh) 2006-02-15
DE60304358T2 (de) 2006-12-07
JP4343845B2 (ja) 2009-10-14
WO2004049299A1 (fr) 2004-06-10
ZA200503969B (en) 2006-09-27

Similar Documents

Publication Publication Date Title
EP1563485B1 (de) Method for processing audio data and acquisition device for applying same
EP1992198B1 (de) Optimization of the binaural surround-sound effect by multi-channel encoding
EP1836876A2 (de) Method and device for individualizing HRTFs through modeling
EP1586220B1 (de) Method and device for controlling a reproduction unit using a multi-channel signal
EP3475943B1 (de) Method for converting and stereophonically encoding a three-dimensional audio signal
JP5611970B2 (ja) Converter and method for converting an audio signal
EP1479266A2 (de) Method and device for controlling an arrangement for reproducing a sound field
WO2005069272A1 (fr) Method for sound synthesis and spatialization
EP3400599B1 (de) Improved ambisonic encoder for a sound source with multiple reflections
EP3025514B1 (de) Sound spatialization with room effect
FR3065137A1 (fr) Sound spatialization method
Ifergan et al. On the selection of the number of beamformers in beamforming-based binaural reproduction
WO2021212287A1 (zh) Audio signal processing method, audio processing device, and recording apparatus
WO2005096268A2 (fr) Method for processing sound data, in particular in an ambisonic context
EP3384688B1 (de) Successive decompositions of audio filters
FR3040253B1 (fr) Method for measuring the PHRTF filters of a listener, booth for implementing the method, and methods for producing a personalized multichannel soundtrack
FR3069693B1 (fr) Method and system for processing an audio signal including encoding in the ambisonic format
EP3484185A1 (de) Modeling a set of acoustic transfer functions of a person, 3D sound card and 3D sound reproduction system
EP3449643B1 (de) Method and system for transmitting a 360° audio signal
FR3114209A1 (fr) Sound reproduction system with virtualization of the reverberant field

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20050530

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

DAX Request for extension of the european patent (deleted)
AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060329

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; WARNING: LAPSES OF ITALIAN PATENTS WITH EFFECTIVE DATE BEFORE 2007 MAY HAVE OCCURRED AT ANY TIME BEFORE 2007. THE CORRECT EFFECTIVE DATE MAY BE DIFFERENT FROM THE ONE RECORDED.

Effective date: 20060329

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060329

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060329

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060329

Ref country code: IE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060329

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060329

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REF Corresponds to:

Ref document number: 60304358

Country of ref document: DE

Date of ref document: 20060518

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060629

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060629

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060629

GBT Gb: translation of ep patent filed (gb section 77(6)(a)/1977)

Effective date: 20060712

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060829

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
REG Reference to a national code

Ref country code: IE

Ref legal event code: FD4D

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2261994

Country of ref document: ES

Kind code of ref document: T3

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20061130

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20061130

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20070102

BERE Be: lapsed

Owner name: FRANCE TELECOM

Effective date: 20061130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060630

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060329

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060329

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060329

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20071130

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20061113

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20071130

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060329

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060930

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060329

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20101217

Year of fee payment: 8

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20120731

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20111130

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20221020

Year of fee payment: 20

Ref country code: GB

Payment date: 20221021

Year of fee payment: 20

Ref country code: ES

Payment date: 20221201

Year of fee payment: 20

Ref country code: DE

Payment date: 20221020

Year of fee payment: 20

REG Reference to a national code

Ref country code: DE

Ref legal event code: R071

Ref document number: 60304358

Country of ref document: DE

REG Reference to a national code

Ref country code: ES

Ref legal event code: FD2A

Effective date: 20231124

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20

Expiry date: 20231112

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20231112

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20231114
