EP2539892B1 - Multichannel audio stream compression - Google Patents

Multichannel audio stream compression

Info

Publication number
EP2539892B1
EP2539892B1 (application EP11708920.1A)
Authority
EP
European Patent Office
Prior art keywords
sources
source
space
spatial
signals
Prior art date
Legal status: Active
Application number
EP11708920.1A
Other languages
German (de)
French (fr)
Other versions
EP2539892A1 (en)
Inventor
Adrien Daniel
Rozenn Nicol
Current Assignee
Orange SA
Original Assignee
Orange SA
Priority date
Filing date
Publication date
Application filed by Orange SA
Publication of EP2539892A1
Application granted
Publication of EP2539892B1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • G10L19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, using predictive techniques
    • G10L19/16 Vocoder architecture
    • G10L19/18 Vocoders using multiple modes
    • G10L19/20 Vocoders using multiple modes using sound class specific coding, hybrid encoders or object based coding

Definitions

  • the present invention generally relates to the compression of multichannel audio streams - that is to say streams comprising a plurality of audio signals - intended to be processed by an audio system comprising a plurality of loudspeakers in order to reproduce a spatialized sound scene.
  • the compression means apply to audio streams encoded according to a 5.1, 6.1, 7.1, 10.2 or 22.2 multichannel coding format, or according to an ambisonic coding format commonly referred to by the acronym "HOA" for "Higher Order Ambisonics".
  • the HOA ambisonic encoding format is detailed in particular in the document Daniel, J., Acoustic field representation, application to the transmission and reproduction of complex sound scenes in a multimedia context, 2000, Thesis of the University Pierre and Marie Curie (Paris VI): Paris.
  • the compression performed on the audio streams may in particular be introduced prior to a transmission, broadcasting or storage step, for example on an optical disc.
  • Another possible alternative is to mix the different streams to obtain a mono or stereo signal.
  • this technique is used in particular in low bit-rate "MPEG Surround" coding, that is to say coding whose bit rate is typically of the order of 64 kbit/s for 5 to 7 channels. This operation is conventionally referred to as a "downmix".
  • the mono or stereo signal can then be encoded according to a conventional compression scheme to obtain a compressed stream. Spatial information is additionally computed and added to the compressed stream.
  • this spatial information is, for example, the delay between two channels ("ICTD" for "Inter-Channel Time Difference"), the energy difference between two channels ("ICLD" for "Inter-Channel Level Difference") and the correlation between two channels ("ICC" for "Inter-Channel Coherence"), as illustrated by the sketch below.
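Purely as an illustration of these prior-art cues (the present text does not specify how they are computed), the following sketch estimates ICTD, ICLD and ICC for one frame of a two-channel signal; the function name and the broadband, single-frame treatment are assumptions.

```python
import numpy as np

def interchannel_cues(left, right, fs):
    """Illustrative computation of ICTD, ICLD and ICC for one frame of a
    two-channel signal (hypothetical helper, not taken from the text)."""
    # ICLD: energy difference between the two channels, in dB
    e_l, e_r = np.sum(left ** 2) + 1e-12, np.sum(right ** 2) + 1e-12
    icld_db = 10.0 * np.log10(e_l / e_r)

    # normalized cross-correlation between the channels
    xcorr = np.correlate(left, right, mode="full") / np.sqrt(e_l * e_r)

    # ICC: maximum of the normalized cross-correlation
    icc = float(np.max(np.abs(xcorr)))

    # ICTD: lag (in seconds) at which the correlation peaks
    lag = int(np.argmax(np.abs(xcorr))) - (len(right) - 1)
    ictd = lag / fs
    return ictd, icld_db, icc
```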
  • the coding of the mono or stereo signal resulting from the "downmix" operation is carried out on the basis of the ill-suited hypothesis of a monophonic or stereophonic perception, and thus does not take into account the characteristics specific to a spatial perception of the multichannel signal, especially when the audio stream has a large number of channels, typically greater than or equal to 7.
  • the degradation that is inaudible on the signal resulting from the "downmix" operation can become audible on a multi-loudspeaker rendering device fed with the multichannel stream resulting from the "upmix" processing, in particular because of the binaural unmasking phenomenon.
  • the document WO2009/067741 describes a method for encoding parametric representations of sound fields.
  • the pressure field, sampled temporally and spatially in a three-dimensional target zone, can first be parameterized by decomposition onto orthogonal basis functions, and second be parameterized using the spatial and temporal correlations between the parameters of the first parameter set.
  • the present invention aims to improve the situation.
  • the compression method proposes a solution that exploits the psychoperceptive and cognitive properties of a listener's spatial audio perception in order to compress the multichannel audio stream. These properties include the spatial masking of a predominant source over the other sources, which reduces a listener's ability to locate the latter. Sound information that is not exploited by the listener's auditory system can thus be left out, without the risk of introducing audible artifacts into the spatialized rendering, unlike the compression techniques of the prior art.
  • the method according to the invention makes it possible to exploit the interactions between the different sources, since the spatial resolution of each source is determined not only according to the characteristics of said source, but also according to those of the other sources in the space. Compared with other compression techniques that treat each signal separately, the compression ratio obtained is potentially higher.
  • the signals of the audio stream include information representing the sound scene in a basis of spherical harmonics.
  • the method may comprise a step of transposing the information included in the signals of the audio stream representing the sound scene into a basis of spherical harmonics, thus making it possible to convert the stream.
  • the compressed stream can also be generated by subdividing the space into subspaces and truncating, for each of the subspaces, the order of representation of the signals in the basis of spherical harmonics, until a spatial resolution is obtained that is substantially equal to the maximum value of the spatial resolutions associated with the sources present in the subspace under consideration.
  • the truncation of the order of representation of the signals makes it possible to reduce the spatial resolution of the representation of the signals.
  • in the case of an HOA representation, the sound scene can be described by a set of signals corresponding to the coefficients of the decomposition of the acoustic wave onto the basis of spherical harmonics.
  • this representation has the property of scalability, in the sense that the coefficients are hierarchical and the coefficients of the first orders contain a complete description of the sound scene; the higher-order coefficients only refine the spatial information.
  • the truncation of the order of representation amounts, in this case, to eliminating the higher-order components until the determined resolution is reached, as in the sketch below.
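As a minimal illustration of this scalability, the sketch below keeps only the coefficients up to a target order in an HOA frame; an order-N three-dimensional representation carries (N+1)^2 channels, so truncation simply drops the trailing channels. The array layout and function name are assumptions, not part of the present text.

```python
import numpy as np

def truncate_hoa_order(hoa_frame, target_order):
    """Keep only the spherical-harmonic coefficients up to `target_order`.

    `hoa_frame` is assumed to be an array of shape (channels, samples),
    with channels sorted by ascending order n (ACN-like layout), so an
    order-N 3D representation has (N + 1) ** 2 channels.
    """
    kept_channels = (target_order + 1) ** 2
    if kept_channels > hoa_frame.shape[0]:
        raise ValueError("target_order exceeds the order of the input frame")
    # higher-order coefficients only refine spatial detail, so dropping
    # them lowers the spatial resolution without losing the scene itself
    return hoa_frame[:kept_channels, :]

# example: reduce an order-4 frame (25 channels) to order 2 (9 channels)
frame = np.random.randn(25, 1024)
coarse = truncate_hoa_order(frame, target_order=2)  # shape (9, 1024)
```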
  • the subdivision of the space into subspaces can be dynamic over time.
  • a dynamic subdivision makes it possible to group, in the same subspace, adjacent sources whose perceived spatial resolutions are similar.
  • the various steps of the compression methods are determined by instructions of computer programs.
  • the invention also relates to computer programs on an information medium, these programs being suitable for implementation in a computer and respectively comprising instructions adapted to carrying out the steps of the compression methods that have just been described.
  • these programs can use any programming language, and be in the form of source code, object code, or intermediate code between source code and object code, such as a partially compiled form, or in any other desirable form.
  • the invention also relates to a computer-readable information medium, comprising instructions of a computer program as mentioned above.
  • the information carrier may be any entity or device capable of storing the program.
  • the medium may comprise a storage means, such as a ROM, for example a CD-ROM or a microelectronic circuit ROM, or a magnetic recording means, for example a diskette (floppy disk) or a hard disk.
  • the information medium may be a transmissible medium such as an electrical or optical signal, which may be conveyed via an electrical or optical cable, by radio or by other means.
  • the program according to the invention may in particular be downloaded over an Internet-type network.
  • the information carrier may be an integrated circuit in which the program is incorporated, the circuit being adapted to execute or to be used in the execution of the methods in question.
  • the identification unit can be configured to identify only audible sources.
  • the generation unit may be configured to adapt the subdivision of the space into subspaces over time.
  • the device further comprises a conversion unit adapted to transpose information included in the signals of the audio stream into a basis of spherical harmonics.
  • a sound scene SCE is considered, that is to say a real acoustic field, formed by sound signals emitted by a plurality of sources SR, or a synthetic acoustic field obtained by artificial spatialization of monophonic signals.
  • the signal emitted by a sound source or source can be represented by a spatial distribution of energy in a frequency band.
  • when the spatial distribution of the energy is correlated and contiguous in space, the corresponding source is qualified as an extended source; otherwise the source is said to be a point source.
  • the sound scene is picked up by a limited number of sound sensors, to form a multichannel audio stream F comprising a plurality of signals S.
  • the scene can be synthesized by spatialization of monophonic signals.
  • the stream F can be subdivided into time frames T.
  • the stream F can be considered as a description or representation of the sound scene SCE over time.
  • the spatial components of the sound scene SCE can be represented in the HOA domain by spatial components projected onto a basis of spherical harmonics.
  • the term "ambisonic" encoding defines the step of obtaining these spatial components of the field in the basis of spherical harmonics. This encoding thus makes it possible to represent the sound scene in the form of ambisonic signals.
  • in a step 10, by spatio-frequency analysis of the signals S, the sources SR are identified, and, for each identified source SR, a frequency band of the source (or the central frequency of said frequency band), an energy level and a spatial position are determined.
  • to identify the sources, it is in particular possible to perform a time/frequency analysis of each of the signals S constituting the stream F, in order to extract an energy level per frequency band for each frame T.
  • each identified source SR is associated with the following quantities: its frequency band (or the center frequency of said frequency band), its energy level and its spatial position.
  • the frequency band of the source, or the center frequency of said frequency band, can be obtained directly from the time/frequency analysis used to identify each source SR.
  • in a step 20, a spatial resolution RS is calculated for each of the sources SR identified during step 10, by implementing a psychoacoustic model.
  • the spatial resolution RS calculated for a source corresponds to an optimal resolution beyond which an average listener does not perceive a significant increase in the precision of the localization of said source.
  • the spatial resolution RS also corresponds to a maximum spatial degradation applicable to the corresponding source SR without significantly impairing a listener's ability to locate said source SR in the presence of the other sources SR.
  • if the spatial resolution RS is equal to 1 degree for one of the sources SR, it is considered that the listener is not able to locate said source SR with a precision better than 1 degree.
  • to each source SR corresponds a specific spatial resolution RS.
  • the spatial resolution RS of one of the sources SR can also be defined as the minimum audible angle associated with said source SR, in the sense, for example, of the Mills experiment of 1958, presented in the document A. W. Mills, "On the Minimum Audible Angle," The Journal of the Acoustical Society of America, vol. 30, Apr. 1958, pp. 237-246.
  • the minimum audible angle of the source SR is substantially equivalent to the measurement made, under the same conditions as those described in the Mills experiment, for a target source within the meaning of A. W. Mills having the same characteristics as the source SR.
  • the psychoacoustic model can therefore be described by a function f(s_c, sd_1, sd_2, ..., sd_N), where s_c represents the source SR for which the spatial resolution RS is to be obtained, and sd_1, sd_2, ..., sd_N represent all or part of the other sources SR.
  • the sources SR may each be described by a tuple (f_c, l, θ, φ), where f_c is the center frequency, l the energy level, θ the angular position in azimuth and φ the angular position in elevation.
  • the psychoacoustic model can also be constructed from models describing the capabilities of a listener according to the parameters described above, and/or from test results. For the construction of the model, it is furthermore possible to assume that the listener is always facing the source SR for which the spatial resolution RS is calculated, in which case the listener's ability to separate the sources is maximal. An illustrative interface for such a model is sketched below.
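The text does not give a closed-form expression for this masking function; purely to illustrate the interface implied by f(s_c, sd_1, ..., sd_N), the sketch below derives a spatial resolution for one source from its own descriptors and those of the other sources. The descriptor class, the angular-distance weighting and the numeric constants are all placeholder assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Source:
    center_freq_hz: float    # f_c
    level_db: float           # l, energy level
    azimuth_deg: float        # angular position in azimuth
    elevation_deg: float      # angular position in elevation

def angular_distance_deg(a, b):
    """Great-circle angle between two source directions, in degrees."""
    az1, el1 = math.radians(a.azimuth_deg), math.radians(a.elevation_deg)
    az2, el2 = math.radians(b.azimuth_deg), math.radians(b.elevation_deg)
    cos_angle = (math.sin(el1) * math.sin(el2)
                 + math.cos(el1) * math.cos(el2) * math.cos(az1 - az2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

def spatial_resolution(target, others, base_resolution_deg=1.0):
    """Illustrative f(s_c, sd_1, ..., sd_N): the resolution granted to
    `target` is coarsened by nearby, louder maskers (hypothetical rule)."""
    resolution = base_resolution_deg
    for masker in others:
        level_excess_db = masker.level_db - target.level_db
        if level_excess_db <= 0:
            continue  # quieter sources are assumed not to mask spatially
        proximity = max(0.0, 1.0 - angular_distance_deg(target, masker) / 90.0)
        # the closer and louder the masker, the coarser the usable resolution
        resolution += 0.1 * level_excess_db * proximity
    return resolution
```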
  • a compressed stream F c containing compressed signals S c is then generated, so that the compressed stream F c comprises the information necessary for the reproduction of each source SR with the corresponding spatial resolution RS calculated during step 20.
  • this also amounts to generating the compressed stream F c by reducing the amount of spatial information initially contained in the stream F for each source SR, while retaining the information necessary for the restitution of each source SR with at least the corresponding spatial resolution RS. It should therefore be noted that the compressed stream F c carries a smaller amount of information than the stream F.
  • if the spatial resolution RS is equal to 1 degree for one of the sources SR, said source SR will have to be encoded in the compressed stream F c so as to allow an average listener, when the stream is rendered by an audio system, to locate the source SR with an accuracy of 1 degree.
  • encoding the source SR with a higher accuracy would not bring a significant gain in the listener's ability to locate the source SR.
  • thus, even if the stream F comprises the information necessary to achieve a resolution of 0.5 degree for the source SR, the compressed stream F c only includes the information needed to render the source SR with an accuracy of 1 degree.
  • figure 2 illustrates the steps of an embodiment of the compression method, in a basis of spherical harmonics, for example in the HOA domain, applied to the stream F.
  • the method may comprise a step 100 of transforming the stream F into a basis of spherical harmonics.
  • this step 100 is optional if the stream F is already encoded in a basis of spherical harmonics.
  • this transformation may correspond to a projection of the information included in the signals S onto a basis of spherical harmonics.
  • in step 100, an acoustic wave corresponding to the one that would be obtained from an audio reproduction system fed with the signals S of the stream F is simulated.
  • the simulated acoustic wave is then decomposed onto the basis of spherical harmonics, by projection onto this basis, or by simulating a synthetic sound recording with an HOA encoding device such as a sphere of microphones.
  • this last possibility is for example described in the document Moreau, S., "Study and realization of advanced spatial encoding tools for the sound spatialization technique Higher Order Ambisonics: 3D microphone and distance control", University of Maine, Le Mans, France, 2006.
  • decomposition coefficients C are obtained, forming signals S HOA corresponding to the signals S in an HOA encoding format.
  • the method comprises a step 110 of time/frequency analysis of the signals S HOA, in order to extract, for each signal S HOA, for each frame T and for each frequency band, an energy level E.
  • the method comprises a step 120 in which, for each frame T and for each frequency band, a spatial projection Pr of the energy levels E onto a sphere is calculated.
  • a model is thus obtained for determining the energy level E as a function of the direction, for each frame T and for each frequency band.
  • the spatial projection Pr of the energy levels E can be calculated by carrying out an inverse transformation of the signals S HOA into a domain of spatial variables. For example, an acoustic wave corresponding to the signals S HOA is reconstructed by a linear combination of spherical harmonics weighted by the values of the HOA components. A spatial evolution of the acoustic wave on a sphere is thus obtained.
  • the spatial projection Pr of the energy levels is then constructed by spatially sampling the sphere, the number of samples being chosen as a function of the desired resolution, as in the sketch below.
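As an illustration of such a projection, the sketch below reconstructs an energy map on a sampled sphere from the coefficients of one HOA frame and frequency band, using real spherical harmonics built from SciPy's complex ones; the ACN-like channel ordering and the normalization convention are assumptions (HOA normalizations such as SN3D or N3D differ by constant factors).

```python
import numpy as np
from scipy.special import sph_harm

def real_sph_harm(n, m, azimuth, colatitude):
    """Real-valued spherical harmonic Y_nm (assumed convention)."""
    if m == 0:
        return np.real(sph_harm(0, n, azimuth, colatitude))
    if m > 0:
        return np.sqrt(2) * (-1) ** m * np.real(sph_harm(m, n, azimuth, colatitude))
    return np.sqrt(2) * (-1) ** m * np.imag(sph_harm(-m, n, azimuth, colatitude))

def energy_map(hoa_coeffs, n_az=72, n_el=36):
    """Project the band energy of one HOA frame onto a sampled sphere.

    `hoa_coeffs` has (N + 1) ** 2 entries (ACN-like ordering assumed),
    one coefficient per spherical-harmonic channel for one frame and band.
    """
    order = int(np.sqrt(len(hoa_coeffs))) - 1
    az = np.linspace(0, 2 * np.pi, n_az, endpoint=False)
    col = np.linspace(0, np.pi, n_el)                 # colatitude grid
    az_grid, col_grid = np.meshgrid(az, col)

    pressure = np.zeros_like(az_grid)
    ch = 0
    for n in range(order + 1):
        for m in range(-n, n + 1):
            pressure += hoa_coeffs[ch] * real_sph_harm(n, m, az_grid, col_grid)
            ch += 1
    return pressure ** 2  # energy as squared reconstructed pressure
```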
  • the method comprises a step 130 during which, for each frame T, the sources SR are identified, together with their spatial position and their respective energy. To do this, all the directions of the spatial projection Pr for which the energy level E is non-zero are searched. Then, for each direction in which the energy level is non-zero, the correlation with the energy levels present in the neighboring directions is calculated. For example, for each frequency band, the energy fluctuations over time are determined for each direction, possibly taking into account the frames preceding and/or following the frame T under consideration. To increase the accuracy in time, it is possible to calculate the correlation over overlapping time ranges, and then to sub-sample the results thus obtained for the frequency band.
  • at the end of step 130, it is thus possible to describe the sound scene SCE in the form of a set of sources SR whose position, spatial extent and energy are known; a simplified sketch of such a grouping of directions into sources is given below.
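The grouping of directions into sources is not prescribed in detail here; as a simplified stand-in for step 130, the sketch below thresholds the sampled energy map and treats each local maximum as one source SR. The thresholding rule and the 4-neighbour local-maximum test are illustrative assumptions.

```python
import numpy as np

def identify_sources(energy, az, col, threshold_ratio=0.05):
    """Very simplified stand-in for step 130.

    `energy` is the sampled energy map of shape (n_el, n_az), and `az`
    and `col` are the 1-D azimuth / colatitude grids used to sample it.
    """
    sources = []
    thresh = threshold_ratio * energy.max()
    n_el, n_az = energy.shape
    for i in range(n_el):
        for j in range(n_az):
            e = energy[i, j]
            if e < thresh:
                continue
            # 4-neighbour local-maximum test (azimuth wraps around)
            neighbours = [
                energy[i, (j - 1) % n_az], energy[i, (j + 1) % n_az],
                energy[max(i - 1, 0), j], energy[min(i + 1, n_el - 1), j],
            ]
            if e >= max(neighbours):
                sources.append({"azimuth": az[j], "colatitude": col[i], "energy": e})
    return sources
```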
  • optionally, in a step 135, a subset of the sources SR identified in step 130 is selected. For example, only the sources SR that are audible to an average listener are selected. To determine whether a source is audible, a simultaneous energy masking analysis taking binaural unmasking into account can in particular be implemented.
  • in a step 140, the corresponding spatial resolution RS is determined, using a psychoacoustic spatial masking model, for each source SR identified during step 130 and possibly selected during step 135.
  • for this purpose, the masking power exerted by each identified source SR on the other identified sources SR is evaluated in each region of the space and in each frequency band. More specifically, for each identified source SR, the spatial resolution RS with which the source SR is perceived is determined, in particular according to its position, its frequency band and its energy level.
  • in a step 150, the compressed stream F c containing the compressed signals S c is generated, so that the compressed stream F c comprises the information necessary for the reproduction of each source SR with at least the corresponding spatial resolution RS calculated during step 140.
  • this operation amounts to compressing the stream F by adapting the spatial resolution of the signals S HOA as a function of the spatial resolution RS obtained for each identified source SR.
  • the space is decomposed into a set of subspaces, so that the union of the subspaces is substantially equal to the whole space. For each of these subspaces, a sub-basis of spherical harmonics is constructed.
  • a suitable construction method may be that described in the document Pomberger H. & Zotter F.
  • a dynamic decomposition has the advantage of being able to group, in the same subspace, adjacent sources whose perceived spatial resolution is substantially equal. For each of the subspaces, the order of representation of the signals S HOA in the basis of spherical harmonics is then truncated down to a spatial resolution corresponding to the maximum value of the spatial resolutions RS associated with the sources SR present in the subspace under consideration, as in the sketch below.
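As a schematic illustration of this step, the sketch below chooses, for each subspace, a truncation order from the coarsest spatial resolution RS among the sources it contains, then keeps only the corresponding channels; the resolution-to-order mapping is a placeholder assumption, not a rule stated in the present text.

```python
def order_for_resolution(resolution_deg, max_order):
    """Hypothetical mapping: coarser resolution -> lower HOA order.

    Placeholder rule assuming roughly (90 / order) degrees of usable
    resolution per order; the actual mapping is not given in the text.
    """
    for order in range(1, max_order + 1):
        if 90.0 / order <= resolution_deg:
            return order
    return max_order

def truncate_per_subspace(subspaces, max_order):
    """`subspaces` maps a subspace id to a dict holding the HOA channels
    of that subspace ('channels', ordered by ascending order n) and the
    spatial resolutions RS of the sources it contains ('resolutions')."""
    out = {}
    for sid, sub in subspaces.items():
        coarsest_rs = max(sub["resolutions"])       # max value of the RS
        order = order_for_resolution(coarsest_rs, max_order)
        kept = (order + 1) ** 2                     # channels to keep
        out[sid] = sub["channels"][:kept]
    return out
```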
  • figure 3 shows, in a block diagram, a multichannel audio stream compression device 200, according to one embodiment.
  • the device 200 is particularly suitable for implementing the method according to the invention.
  • the device 200 includes an input 210 for receiving the multi-channel audio stream F describing the sound scene SCE produced by a plurality of SR sources in a space.
  • the device 200 delivers on an output 260 the compressed stream F c .
  • the device 200 comprises an identification unit 220 of the sources SR coupled to the input 210 so as to receive the stream F.
  • the identification unit 220 is adapted to identify the sources SR from the stream F, and to determine for each identified SR source a frequency band, an energy level and a spatial position in the space.
  • the identification unit 220 outputs, on an output, the frequency band, the energy level and the spatial position in the space of each identified source SR.
  • the identification unit 220 may be configured to identify only the audible SR sources.
  • the device 200 comprises a generation unit 250, coupled to the output of the identification unit 220, adapted to form the compressed stream F c from the information necessary to render each identified source SR with at least the corresponding spatial resolution RS.
  • figure 4 shows, in a block diagram, a multichannel audio stream compression device 300, according to one embodiment.
  • the device 300 includes an input 310 for receiving the multi-channel audio stream F describing the sound scene SCE produced by a plurality of SR sources in a space.
  • the device 300 delivers the compressed stream F c on an output 390.
  • the device 300 may comprise a conversion unit 320 adapted to transpose information included in the signals S of the audio stream F representing the sound scene SCE into a basis of spherical harmonics, when the stream F comprises signals S intended to directly feed loudspeakers, such as signals S of 5.1, 6.1, 7.1, 10.2 or 22.2 type.
  • the conversion unit 320 outputs signals S HOA described in a basis of spherical harmonics.
  • the device 300 includes an identification unit 330 of the SR sources coupled to the output of the conversion unit 320 to receive the S HOA signals.
  • the identification unit 330 is adapted to identify the sources SR from the stream F, and to determine for each of the identified sources SR a frequency band, an energy level and a spatial position in the space.
  • the identification unit 330 is configured to calculate a spatial projection of the energy levels of the sources on a sphere and to search for the directions of the spatial projection whose energy level is non-zero.
  • the identification unit 330 delivers, on an output, the frequency band, the energy level and the spatial position in the space of each identified source SR.
  • the identification unit 330 may be configured to identify only the audible SR sources.
  • the device 300 includes a generation unit 360, coupled to the output of the unit 340, adapted to form the compressed stream F c from the information necessary to render each identified source SR with at least the corresponding spatial resolution RS.
  • the generation unit 360 is particularly adapted to produce the compressed stream F c by subdividing the space into subspaces and truncating, for each of the subspaces, the order of representation of the signals in the basis of spherical harmonics, in order to obtain a spatial resolution substantially equal to the maximum value of the spatial resolutions associated with the sources present in the subspace under consideration.
  • the subdivision of the space into subspaces can also be dynamic over time.
  • figure 5 shows, in a block diagram, a processing device 400 for implementing the compression method according to the invention.
  • the device 400 comprises an interface 420 coupled to an input 410 for receiving the stream F and to an output for delivering the compressed stream F c.
  • the interface 420 is for example an interface for accessing a communication network, a storage device and/or a medium reader.
  • the device 400 also comprises a processor 440 coupled to a memory 450.
  • the processor 440 is configured to communicate with the interface 420.
  • the processor is adapted to execute computer programs, included in the memory 450, respectively comprising instructions adapted to the implementation of the steps of the compression methods which have just been described.
  • the memory 450 may be a combination of elements chosen from the following list: a RAM; a ROM, for example a CD-ROM or a microelectronic circuit ROM; a magnetic recording means, for example a diskette or a hard disk; a transmissible medium such as an electrical or optical signal, which can be conveyed via an electrical or optical cable, by radio or by other means.
  • the computer program may in particular be downloaded over an Internet-type network.
  • the memory 450 may be an integrated circuit in which the program is incorporated, the circuit being adapted to execute or to be used in the execution of the processes in question.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Stereophonic System (AREA)

Description

The present invention generally relates to the compression of multichannel audio streams - that is to say streams comprising a plurality of audio signals - intended to be processed by an audio system comprising a plurality of loudspeakers in order to reproduce a spatialized sound scene. In particular, the compression means apply to audio streams encoded according to a 5.1, 6.1, 7.1, 10.2 or 22.2 multichannel coding format, or according to an ambisonic coding format commonly referred to by the acronym "HOA" for "Higher Order Ambisonics". The HOA ambisonic encoding format is detailed in particular in the document Daniel, J., Acoustic field representation, application to the transmission and reproduction of complex sound scenes in a multimedia context, 2000, Thesis of the University Pierre and Marie Curie (Paris VI): Paris. The compression performed on the audio streams may in particular be introduced prior to a transmission, broadcasting or storage step, for example on an optical disc.

To reduce the amount of information necessary to represent a multichannel audio stream, it is possible to code separately the different signals constituting said stream according to a conventional audio stream compression scheme, generally exploiting the frequency masking properties observed in the perception of a sound signal by a listener. Examples include "MPEG-1/2 Audio Layer 3" coding, more commonly referred to by its acronym MP3, and Advanced Audio Coding ("AAC"). Since the signals are considered separately, the possible redundancies between the signals are barely exploited. This solution is suitable for encoding high bit-rate multichannel audio streams, typically having a bit rate greater than or equal to 128 kbit/s per channel in the case of MP3, and 64 kbit/s per channel in the case of AAC. Thus, the separate encoding of the signals of a stream is not suited to the production of streams whose bit rate is typically of the order of 64 kbit/s for 5 to 7 channels, without a significant reduction in the sound quality level.

Another possible alternative is to mix the different streams to obtain a mono or stereo signal. This technique is used in particular in low bit-rate "MPEG Surround" coding, that is to say coding whose bit rate is typically of the order of 64 kbit/s for 5 to 7 channels. This operation is conventionally referred to as a "downmix". The mono or stereo signal can then be encoded according to a conventional compression scheme to obtain a compressed stream. Spatial information is additionally computed and then added to the compressed stream. This spatial information is, for example, the delay between two channels ("ICTD" for "Inter-Channel Time Difference"), the energy difference between two channels ("ICLD" for "Inter-Channel Level Difference") and the correlation between two channels ("ICC" for "Inter-Channel Coherence").

The coding of the mono or stereo signal resulting from the "downmix" operation is carried out on the basis of the ill-suited hypothesis of a monophonic or stereophonic perception, and thus does not take into account the characteristics specific to a spatial perception of the multichannel signal, especially when the audio stream has a large number of channels, typically greater than or equal to 7.

Thus, the degradation that is inaudible on the signal resulting from the "downmix" operation can become audible on a multi-loudspeaker rendering device fed with the multichannel stream resulting from the "upmix" processing, in particular because of the binaural unmasking phenomenon, described in particular in the document Saberi, K., Dostal, L., Sadralodabai, T., and Bull, V., "Free-field release from masking," Journal of the Acoustical Society of America, vol. 90, 1991, pp. 1355-1370.

The document WO2009/067741 describes a method for encoding parametric representations of sound fields. The pressure field, sampled temporally and spatially in a three-dimensional target zone, can first be parameterized by decomposition onto orthogonal basis functions, and second be parameterized using the spatial and temporal correlations between the parameters of the first parameter set.

There is therefore a need to compress spatialized audio streams more efficiently while maintaining a perceived sound quality at least equivalent to that of state-of-the-art techniques.

The present invention aims to improve the situation.

According to a first aspect, there is provided a method of compressing an audio stream comprising a plurality of signals. The audio stream describes a sound scene produced by a plurality of sources in a space. The method comprises the following steps:

  • identifying the sources from the audio stream;
  • determining, for each of the identified sources, a frequency band, an energy level and a spatial position in the space;
  • determining, for each identified source, a spatial resolution corresponding to the smallest variation of position of said source in the space that a listener is likely to perceive, based on:
    • ○ the frequency band, the energy level and the spatial position of said source; and
    • ○ the frequency band, the energy level and the spatial position of the other identified sources;
  • generating a compressed stream comprising the information necessary to render each identified source with at least the corresponding spatial resolution.

The compression method proposes a solution that exploits the psychoperceptive and cognitive properties of a listener's spatial audio perception in order to compress the multichannel audio stream. These properties include the spatial masking of a predominant source over the other sources, which reduces a listener's ability to locate the latter. Sound information that is not exploited by the listener's auditory system can thus be left out, without the risk of introducing audible artifacts into the spatialized rendering system, unlike the compression techniques of the prior art.

In addition, the method according to the invention makes it possible to exploit the interactions between the different sources, since the spatial resolution of each source is determined not only according to the characteristics of said source, but also according to those of the other sources in the space. Compared with other compression techniques that treat each signal separately, the compression ratio obtained is potentially higher.

It is possible to identify, in the space, only the sources that are audible to a listener, which makes it possible to further reduce the information to be coded. For example, a subset of the sound sources is listed using a simultaneous energy masking analysis that takes binaural unmasking into account. Indeed, non-audible sources do not necessarily need to be considered in the implementation of the psychoacoustic spatial masking model. The complexity of the method, in the algorithmic sense of the term, can thus be reduced.

In one embodiment, the signals of the audio stream include information representing the sound scene in a basis of spherical harmonics. Alternatively, the method may comprise a step of transposing the information included in the signals of the audio stream representing the sound scene into a basis of spherical harmonics, thus making it possible to convert the stream.

In this embodiment, the compressed stream can also be generated by subdividing the space into subspaces and truncating, for each of the subspaces, the order of representation of the signals in the basis of spherical harmonics, until a spatial resolution is obtained that is substantially equal to the maximum value of the spatial resolutions associated with the sources present in the subspace under consideration.

The truncation of the order of representation of the signals makes it possible to reduce the spatial resolution of the representation of the signals. In the case of an HOA representation, the sound scene can be described by a set of signals corresponding to the coefficients of the decomposition of the acoustic wave onto the basis of spherical harmonics. This representation has the property of scalability, in the sense that the coefficients are hierarchical and the coefficients of the first orders contain a complete description of the sound scene; the higher-order coefficients only refine the spatial information. The truncation of the order of representation amounts, in this case, to eliminating the higher-order components until the determined resolution is reached.

In this embodiment, the subdivision of the space into subspaces can be dynamic over time. A dynamic subdivision makes it possible to group, in the same subspace, adjacent sources whose perceived spatial resolutions are similar.

In a particular embodiment, the various steps of the compression methods are determined by computer program instructions.

Accordingly, the invention also relates to computer programs on an information medium, these programs being suitable for implementation in a computer and respectively comprising instructions adapted to carrying out the steps of the compression methods that have just been described.

These programs can use any programming language, and be in the form of source code, object code, or intermediate code between source code and object code, such as a partially compiled form, or in any other desirable form.

The invention also relates to a computer-readable information medium comprising instructions of a computer program as mentioned above.

The information medium may be any entity or device capable of storing the program. For example, the medium may comprise a storage means, such as a ROM, for example a CD-ROM or a microelectronic circuit ROM, or a magnetic recording means, for example a diskette (floppy disk) or a hard disk.

Moreover, the information medium may be a transmissible medium such as an electrical or optical signal, which may be conveyed via an electrical or optical cable, by radio or by other means. The program according to the invention may in particular be downloaded over an Internet-type network.

Alternatively, the information medium may be an integrated circuit in which the program is incorporated, the circuit being adapted to execute, or to be used in the execution of, the methods in question.

According to a second aspect, there is provided a multichannel audio stream compression device adapted to implement the method according to the first aspect. The device comprises an input for receiving a multichannel audio stream describing a sound scene produced by a plurality of sources in a space, and an output for delivering a compressed stream. The device further comprises:

  • a source identification unit, coupled to the input, adapted to identify the sources from the stream and to determine, for each of the identified sources, a frequency band, an energy level and a spatial position in the space;
  • a spatial resolution determination unit, coupled to the identification unit, adapted to determine, for each identified source, a spatial resolution corresponding to the smallest variation of position of said source in the space that a listener is likely to perceive, based on:
    • ○ the frequency band, the energy level and the spatial position of said source; and
    • ○ the frequency band, the energy level and the spatial position of the other identified sources;
  • a compressed stream generation unit, coupled to the spatial resolution determination unit, adapted to form the compressed stream from the information necessary to render each identified source with at least the corresponding spatial resolution, and to deliver the compressed stream on the output.

The identification unit can be configured to identify only the audible sources.

In one embodiment, the generation unit may be adapted to produce the compressed stream from the signals, when these signals include information representing the sound scene in a basis of spherical harmonics, by:

  • subdividing the space into subspaces, and
  • truncating, for each of the subspaces, the order of representation of the signals in the basis of spherical harmonics, until a spatial resolution is obtained that is substantially equal to the maximum value of the spatial resolutions associated with the sources present in the subspace under consideration.

The generation unit may be configured to adapt the subdivision of the space into subspaces over time.

In one embodiment, the device further comprises a conversion unit adapted to transpose information included in the signals of the audio stream into a basis of spherical harmonics.

Other aspects, objects and advantages of the invention will become apparent on reading the description of one of its embodiments.

The invention will also be better understood with the aid of the drawings, in which:

  • figure 1 illustrates, in a block diagram, the main steps of the compression method applied to a multichannel audio stream;
  • figure 2 illustrates, in a block diagram, the steps of an embodiment of the compression method, in a basis of spherical harmonics, for example in the HOA domain, applied to a multichannel audio stream;
  • figure 3 shows, in a schematic diagram, a multichannel audio stream compression device;
  • figure 4 shows, in a schematic diagram, a multichannel audio stream compression device according to another embodiment;
  • figure 5 illustrates, in a schematic diagram, a processing device for implementing the compression method.

In the present description, a sound scene SCE is considered, that is to say a real acoustic field formed by sound signals emitted by a plurality of sources SR, or a synthetic acoustic field obtained by artificial spatialization of monophonic signals. The signal emitted by a sound source, or source, can be represented by a spatial distribution of energy in a frequency band. When the spatial distribution of the energy is correlated and contiguous in space, the corresponding source is qualified as an extended source; otherwise the source is said to be a point source. The sound scene is picked up by a limited number of sound sensors to form a multichannel audio stream F comprising a plurality of signals S. Alternatively, the scene can be synthesized by spatialization of monophonic signals. The stream F can be subdivided into time frames T. The stream F can be considered as a description or representation of the sound scene SCE over time. The spatial components of the sound scene SCE can be represented in the HOA domain by spatial components projected onto a basis of spherical harmonics. The term ambisonic encoding (from the English word "ambisonic") defines the step of obtaining these spatial components of the field in the basis of spherical harmonics. This encoding thus makes it possible to represent the sound scene in the form of ambisonic signals.
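For reference, the ambisonic encoding step can be recalled in the conventional notation of the Daniel thesis cited above; this is a recollection of the standard formulation under one common convention, not a quotation from the present text. A far-field source carrying a signal s and arriving from azimuth θ and elevation δ is encoded by weighting the signal with the spherical harmonics evaluated in that direction:

$$ B_{mn}^{\sigma} = s \cdot Y_{mn}^{\sigma}(\theta, \delta), \qquad 0 \le m \le M,\; 0 \le n \le m,\; \sigma = \pm 1, $$

where M is the ambisonic order; an order-M three-dimensional representation therefore comprises (M+1)^2 components.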

Figure 1 shows the main steps of the compression method applied to the stream F.

In a step 10, the sources SR are identified by spatio-frequency analysis of the signals S, and, for each identified source SR, a frequency band of the source or the center frequency of said frequency band, an energy level and a spatial position are determined.

To identify the sources, a time/frequency analysis of each of the signals S constituting the stream F can in particular be carried out, so as to extract an energy level per frequency band for each frame T. Results of a time/frequency analysis performed prior to the implementation of the method according to the invention, for example during a possible compression of the signals S by frequency masking techniques, can also be exploited during step 10 to identify the sources SR.

In step 10, the following quantities are associated with each identified source SR: its frequency band or the center frequency of said frequency band, its energy level and its spatial position. In particular, the frequency band of the source or the center frequency of said frequency band can be obtained directly from the time/frequency analysis used to identify each source SR.

Suitable source identification or separation methods are described in the document Arberet, S. "Estimation robuste et apprentissage aveugle de modèles pour la séparation de sources sonores", Thesis of the University of Rennes 1, 2008, as are beamforming methods such as that described in the document Veen, B. D. V. & Buckley, K. M. "Beamforming: a versatile approach to spatial filtering", IEEE ASSP Magazine, 1988, 4-24. If the source SR under consideration is an extended source, the spatial position may correspond to the spatial barycenter of said extended source, and a measurement of the width of the spatial extent of said source is also performed. Optionally, it is possible to select only a subset of the sources SR identified during step 10. For example, only the sources SR audible to an average listener will be selected. To determine whether a source is audible, a simultaneous energy masking analysis taking binaural unmasking into account can in particular be implemented, such as that described in the document Saberi, K., Dostal, L., Sadralodabai, T., and Bull, V., "Free-field release from masking", Journal of the Acoustical Society of America, vol. 90, 1991, pp. 1355-1370.

In a step 20, a spatial resolution RS is calculated for each of the sources SR identified during step 10, by applying a psychoacoustic model. The spatial resolution RS calculated for a source corresponds to an optimal resolution beyond which an average listener does not perceive a significant increase in the precision with which said source is localized. The spatial resolution RS also corresponds to the maximum spatial degradation applicable to the corresponding source SR without significantly impairing a listener's ability to localize said source SR in the presence of the other sources SR.

By way of nonlimiting example, if the spatial resolution RS is equal to 1 degree for one of the sources SR, it is considered that the listener is not able to localize said source SR with a precision better than 1 degree.

Depending on the characteristics of the source SR under consideration, the psychoacoustic model returns a suitable spatial resolution. Each source SR thus has its own spatial resolution RS. The spatial resolution RS of one of the sources SR can also be defined as the minimum audible angle associated with said source SR, in the sense, for example, of the 1958 Mills experiment presented in the document A.W. Mills, "On the Minimum Audible Angle", The Journal of the Acoustical Society of America, vol. 30, Apr. 1958, pp. 237-246. According to this definition, the minimum audible angle of the source SR is substantially equivalent to the measurement made, under the same conditions as those described in the Mills experiment, for a target source in the sense of A.W. Mills having the same characteristics as the source SR.

The spatial resolution RS associated with one of the sources SR depends in particular on the following parameters:

  • the center frequency of the frequency band of the source SR;
  • the energy level of the source SR;
  • the spatial position of the source SR;
  • the center frequency of the frequency band of each of the other sources SR;
  • the energy level of each of the other sources SR;
  • the spatial position of each of the other sources SR.

The psychoacoustic model can therefore be described by a function f(sc, sd1, sd2, ..., sdN), where sc represents the source SR for which the spatial resolution RS is to be obtained, and sd1, sd2, ..., sdN represent all or part of the other sources SR. Each source SR can be described by a quadruplet {fc, I, θ, ϕ}, where fc is the center frequency, I the energy level, θ the angular position in azimuth, and ϕ the angular position in elevation.
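By way of illustration only, the following Python sketch shows one possible organization of such a function f. The quadruplet fields follow the description above; the numerical constants, the frontal/lateral widening and the masking term are assumptions made for the example, not values prescribed by the invention.

```python
from dataclasses import dataclass
from math import cos, radians

@dataclass
class Source:
    fc: float         # center frequency of the band (Hz)
    level: float      # energy level I (dB SPL)
    azimuth: float    # angular position theta (degrees)
    elevation: float  # angular position phi (degrees)

def spatial_resolution(sc: Source, distractors: list[Source]) -> float:
    """Illustrative f(sc, sd1, ..., sdN): returns a resolution RS in degrees."""
    # Assumed baseline: the minimum audible angle is smallest for frontal
    # sources (about 1 degree) and grows toward the sides, in the spirit of
    # the Mills experiment cited above.
    base = 1.0 / max(abs(cos(radians(sc.azimuth))), 0.1)
    # Assumed masking term: louder distractors in nearby frequency bands
    # degrade the resolution with which sc is perceived.
    widening = 0.0
    for sd in distractors:
        freq_proximity = 1.0 / (1.0 + abs(sd.fc - sc.fc) / 100.0)
        level_excess = max(sd.level - sc.level, 0.0)
        widening += 0.1 * freq_proximity * level_excess
    return base + widening
```

The actual model would be fitted to listening-test data or to published localization models, as described in the next paragraph.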

The psychoacoustic model can also be built from models describing a listener's abilities as a function of the parameters described above, and/or from test results. For the construction of the model, it is moreover possible to assume that the listener always faces the source SR for which the spatial resolution RS is calculated, the case in which the listener's ability to separate the sources is maximal.

In a step 30, a compressed stream Fc comprising compressed signals Sc is generated, such that the compressed stream Fc contains the information necessary to render each source SR with the corresponding spatial resolution RS calculated in step 20. This also amounts to generating the compressed stream Fc by reducing the amount of spatial information initially contained in the stream F for each source SR, while keeping the information necessary to render each source SR with at least the corresponding spatial resolution RS. It should therefore be noted that the compressed stream Fc consequently carries a smaller amount of information than the stream F.

By way of nonlimiting example, if the spatial resolution RS is equal to 1 degree for one of the sources SR, said source SR will have to be encoded in the compressed stream Fc such that, when rendered by an audio system, an average listener can localize the source SR with a precision of 1 degree. Note in this example that encoding the source SR with a higher resolution, for example 0.5 degree, will not bring a significant gain in the listener's ability to localize the source SR more precisely. For example, if the stream F contains the information necessary to reach a resolution of 0.5 degree for the source SR, the compressed stream Fc will contain only the information necessary to render the source SR with a precision of 1 degree.

Figure 2 illustrates the steps of an embodiment of the compression method, in a basis of spherical harmonics, for example in the HOA domain, applied to the stream F.

The method can comprise a step 100 of transforming the stream F into a basis of spherical harmonics. This step 100 is optional if the stream F is already encoded in a basis of spherical harmonics. Typically, this transformation can correspond to a projection of the information contained in the signals S onto a basis of spherical harmonics.

In one embodiment of step 100, an acoustic wave is simulated, corresponding to the wave that would be obtained from an audio reproduction system fed with the signals S of the stream F. The simulated acoustic wave is then decomposed on a basis of spherical harmonics, by projection onto this basis, or by simulating a synthetic sound pick-up by an HOA encoding device such as a sphere of microphones. This latter possibility is described, for example, in the document Moreau, S. "Etude et réalisation d'outils avancés d'encodage spatial pour la technique de spatialisation sonore Higher Order Ambisonics: microphone 3D et contrôle de la distance", Université du Maine, Le Mans, France, 2006. Decomposition coefficients C are thus obtained, forming signals SHOA corresponding to the signals S in an HOA encoding format.
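Purely as an illustration of such a projection, the sketch below encodes monophonic point sources, modeled as plane waves, into an order-N set of HOA signals using real spherical harmonics. The function names are invented for the example, and the normalization and sign conventions (N3D/SN3D, Condon-Shortley phase) vary between HOA implementations; this is one common construction, not the encoder of the invention.

```python
import numpy as np
from scipy.special import sph_harm

def real_sh_vector(order, azimuth, elevation):
    """Real spherical harmonics up to 'order' for one direction (radians)."""
    polar = np.pi / 2.0 - elevation            # scipy uses the polar angle
    values = []
    for n in range(order + 1):
        for m in range(-n, n + 1):
            y = sph_harm(abs(m), n, azimuth, polar)
            if m > 0:
                values.append(np.sqrt(2) * (-1) ** m * y.real)
            elif m < 0:
                values.append(np.sqrt(2) * (-1) ** m * y.imag)
            else:
                values.append(y.real)
    return np.array(values)                    # length (order + 1) ** 2

def encode_hoa(signals, directions, order=3):
    """Encode monophonic source signals, as plane waves, into HOA signals.

    signals:    array of shape (num_sources, num_samples)
    directions: list of (azimuth, elevation) in radians, one per source
    returns:    S_HOA of shape ((order + 1) ** 2, num_samples)
    """
    Y = np.stack([real_sh_vector(order, az, el) for az, el in directions])
    return Y.T @ np.asarray(signals)
```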

The method comprises a step 110 of time/frequency analysis of the signals SHOA to extract, for each signal SHOA, for each frame T and for each frequency band, an energy level E.
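A minimal sketch of this analysis is given below, assuming an STFT whose frames play the role of the frames T and whose bins are grouped into the chosen frequency bands; the band edges and window length are arbitrary choices for the example.

```python
import numpy as np
from scipy.signal import stft

def band_energies(s_hoa, fs, band_edges, nperseg=1024):
    """Energy level E per HOA signal, per frame T and per frequency band.

    s_hoa:      array of shape (num_components, num_samples)
    band_edges: band boundaries in Hz, e.g. [0, 200, 400, 800, 1600]
    returns:    array of shape (num_components, num_frames, num_bands)
    """
    freqs, _, Z = stft(s_hoa, fs=fs, nperseg=nperseg)  # Z: (comp, freq, frame)
    power = np.abs(Z) ** 2
    num_bands = len(band_edges) - 1
    E = np.zeros((s_hoa.shape[0], Z.shape[-1], num_bands))
    for b in range(num_bands):
        mask = (freqs >= band_edges[b]) & (freqs < band_edges[b + 1])
        E[:, :, b] = power[:, mask, :].sum(axis=1)     # sum bins within band
    return E
```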

The method comprises a step 120 in which, for each frame T and for each frequency band, a spatial projection Pr of the energy levels E onto a sphere is calculated. A model is thus obtained which gives the energy level E as a function of direction, for each frame T and for each frequency band. The spatial projection Pr of the energy levels can in particular be calculated by performing an inverse transformation of the signals SHOA into a domain of space variables. For example, an acoustic wave corresponding to the signals SHOA is reconstructed by a linear combination of the spherical harmonics weighted by the values of the HOA components. A spatial evolution of the acoustic wave on a sphere is thus obtained. The spatial projection Pr of the energy levels is then constructed by spatially sampling the sphere, the number of samples being chosen as a function of the desired resolution.
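Continuing the same illustration, the projection Pr can be sketched as follows, reusing the real_sh_vector helper assumed in the step 100 sketch; the grid density is an arbitrary choice governing the desired resolution.

```python
import numpy as np

def sphere_grid(n_az=72, n_el=36):
    """Regular sampling of the sphere in azimuth/elevation (radians)."""
    az = np.linspace(-np.pi, np.pi, n_az, endpoint=False)
    el = np.linspace(-np.pi / 2, np.pi / 2, n_el)
    return [(a, e) for a in az for e in el]

def spatial_projection(hoa_frame, order, grid):
    """Project one frame/band of HOA coefficients onto a sampled sphere.

    hoa_frame: vector of length (order + 1) ** 2, e.g. the per-band
               coefficients of one frame T
    returns:   energy Pr for each direction of the grid
    """
    Y = np.stack([real_sh_vector(order, a, e) for a, e in grid])
    pressure = Y @ hoa_frame          # linear combination of spherical harmonics
    return np.abs(pressure) ** 2      # energy as a function of direction
```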

The method comprises a step 130 during which, for each frame T, the sources SR, their spatial positions and their respective energies are identified. To this end, all the directions of the spatial projection Pr for which the energy level E is non-zero are searched for. Then, for each direction in which the energy level is non-zero, the correlation with the energy levels present in the neighboring directions is calculated. For example, for each frequency band, the energy fluctuations over time are determined for each direction, possibly taking into account the frames T preceding and/or following said frame T. To increase the temporal precision, it is possible to calculate the correlation over overlapping time ranges and then to sub-sample the results thus obtained for the frequency band.

If the energy level is correlated over a set of directions, an extended source is identified in said directions, and the corresponding energy level is calculated by summing the energy levels associated with this set of directions. If the energy level is not correlated with the energy levels present in the neighboring directions, a source is identified and its energy level corresponds to that given by the spatial projection Pr in this direction. At the end of step 130, it is thus possible to describe the sound scene SCE as a set of sources SR whose position, spatial extent and energy are known.
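One possible, non-normative sketch of this grouping is given below. It assumes the direction grid of the previous sketch, a precomputed neighborhood relation on that grid, and arbitrary thresholds for the non-zero-energy test and for the correlation.

```python
import numpy as np

def identify_sources(pr_frames, grid, neighbors, energy_eps=1e-8, corr_thresh=0.8):
    """Group sphere directions into sources from the projections Pr.

    pr_frames: array of shape (num_frames, num_directions) for one frequency
               band (the projection of step 120 over successive frames T)
    neighbors: dict mapping a direction index to its neighboring indices
    returns:   list of sources as dicts {directions, position, energy}
    """
    mean_energy = pr_frames.mean(axis=0)
    active = set(np.flatnonzero(mean_energy > energy_eps))
    sources, visited = [], set()
    for d in sorted(active):
        if d in visited:
            continue
        # Grow a region of neighboring directions whose temporal energy
        # fluctuations are correlated with those of direction d.
        region, stack = {d}, [d]
        while stack:
            cur = stack.pop()
            for nb in neighbors[cur]:
                if nb in active and nb not in region:
                    c = np.corrcoef(pr_frames[:, d], pr_frames[:, nb])[0, 1]
                    if c > corr_thresh:
                        region.add(nb)
                        stack.append(nb)
        visited |= region
        idx = sorted(region)
        weights = mean_energy[idx]
        barycenter = np.average(np.array([grid[i] for i in idx]),
                                axis=0, weights=weights)  # rough spatial barycenter
        sources.append({"directions": idx,
                        "position": tuple(barycenter),
                        "energy": float(weights.sum())})
    return sources
```

A region containing a single direction corresponds to a point source; a larger region corresponds to an extended source whose energy is the sum over its directions, as described above.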

In an optional step 135, a subset of the sources SR identified during step 130 is selected. For example, only the sources SR audible to an average listener will be selected. To determine whether a source is audible, a simultaneous energy masking analysis taking binaural unmasking into account can in particular be implemented.

In a step 140, the corresponding spatial resolution RS is determined, using a psychoacoustic spatial masking model, for each source SR identified during step 130 and possibly selected during step 135. Typically, for a frame T, the masking power exerted by each identified source SR on the other identified sources SR is evaluated in each region of space and in each frequency band. More specifically, for each identified source SR, the spatial resolution RS with which the source SR is perceived is determined, in particular as a function of its position, its frequency band and its energy level.

In a step 150, the compressed stream Fc comprising the compressed signals Sc is generated, such that the compressed stream Fc contains the information necessary to render each source SR with at least the corresponding spatial resolution RS calculated in step 140. This operation amounts to compressing the stream F by adapting the spatial resolution of the signals SHOA as a function of the spatial resolution RS obtained for each identified source SR. In one embodiment of step 150, the space is decomposed into a set of subspaces, such that the union of the subspaces is substantially equal to the space. For each of these subspaces, a sub-basis of spherical harmonics is constructed. For example, a suitable construction method may be that described in the document Pomberger H. & Zotter F. "An Ambisonics format for flexible playback layouts", Ambisonics Symposium 2009, 2009. The eigenfunctions of the spherical harmonics basis of the complete space are recombined to form, for each of the subspaces, a sub-basis representing that subspace only. From the signals obtained in step 110, for a given frame T and a given frequency band, by projecting the energy in this frequency band onto each of the sub-bases representing the subspaces, a set of additional representations of the original representation is obtained, each restricted to one of the subspaces. The decomposition of the space can either be static or vary from one frame T to the next. A dynamic decomposition has the advantage of being able to group in the same subspace adjacent sources whose perceived spatial resolutions are substantially equal. For each of the subspaces, the order of representation of the signals SHOA in the basis of spherical harmonics is then truncated until a spatial resolution is obtained corresponding to the maximum value of the spatial resolutions RS associated with the sources SR present in the subspace under consideration.
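For illustration, this truncation can be sketched as below for one subspace, one frame T and one frequency band. The rule of thumb relating an ambisonic order N to an angular resolution of roughly 360/(2N + 1) degrees is an assumption made for the example, as is the omission of the sub-basis construction of Pomberger and Zotter.

```python
def order_for_resolution(rs_degrees):
    """Smallest ambisonic order N whose nominal resolution reaches rs_degrees.

    Assumes, purely as a rule of thumb, that an order-N representation
    resolves roughly 360 / (2N + 1) degrees in azimuth.
    """
    n = 0
    while 360.0 / (2 * n + 1) > rs_degrees:
        n += 1
    return n

def truncate_subspace(hoa_coeffs, rs_of_sources_in_subspace):
    """Keep only the coefficients needed for the coarsest tolerable resolution.

    hoa_coeffs: vector of length (N_full + 1) ** 2 for one subspace,
                one frame T and one frequency band
    rs_of_sources_in_subspace: spatial resolutions RS (degrees) of the
                sources SR present in this subspace
    """
    # Following the description, the maximum RS value in the subspace is used.
    rs_max = max(rs_of_sources_in_subspace)
    n_kept = order_for_resolution(rs_max)
    return hoa_coeffs[: (n_kept + 1) ** 2]
```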

In addition to the spatial resolution degradation in the compressed stream Fc with respect to the stream F, it is also possible to compress the compressed stream Fc further by exploiting energy masking information. However, in order to take binaural unmasking effects into account, the worst case in terms of masking should be considered (see the sketch after this list), namely:

  • on the one hand, the lowest masking threshold among those of all the sources SR present in the subspace under consideration;
  • and jointly, for each source SR, its lowest masking threshold owing to its spatial position in the subspace under consideration.
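A minimal sketch of this worst-case choice, assuming the masking thresholds have already been computed per source and per candidate spatial position (their computation is outside this illustration):

```python
def worst_case_threshold(per_source_thresholds):
    """Worst-case masking threshold for one subspace and one frequency band.

    per_source_thresholds: for each source SR in the subspace, the list of
    masking thresholds (dB) obtained for the spatial positions it can take
    in the subspace. The safest (lowest) value is retained so that binaural
    unmasking does not reveal quantization noise.
    """
    per_source_lowest = [min(t) for t in per_source_thresholds]
    return min(per_source_lowest)
```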

Figure 3 shows, in a schematic diagram, a multichannel audio stream compression device 200 according to one embodiment. The device 200 is notably suitable for implementing the method according to the invention.

As shown in figure 3, the device 200 comprises an input 210 for receiving the multichannel audio stream F describing the sound scene SCE produced by a plurality of sources SR in a space. The device 200 delivers the compressed stream Fc on an output 260.

The device 200 comprises a source identification unit 220 coupled to the input 210 so as to receive the stream F. The identification unit 220 is adapted to identify the sources SR from the stream F, and to determine, for each of the identified sources SR, a frequency band, an energy level and a spatial position in the space. The identification unit 220 delivers, on an output, the frequency band, the energy level and the spatial position in the space of each identified source SR. In particular, the identification unit 220 can be configured to identify only the audible sources SR.

The device 200 comprises a unit 230 for determining the spatial resolution RS, coupled to the output of the identification unit 220, the spatial resolution RS corresponding to the smallest variation in position of said source in the space that a listener is able to perceive. The determination unit 230, using for example a psychoacoustic model 240, provides on an output the spatial resolution RS for each identified source SR, as a function:

  • of the frequency band, the energy level and the spatial position of said source; and
  • of the frequency band, the energy level and the spatial position of at least a subset of the other identified sources.

The device 200 comprises a generation unit 250, coupled to the output of the identification unit 220, adapted to form the compressed stream Fc from the information necessary to render each identified source SR with at least the corresponding spatial resolution RS.

Figure 4 shows, in a block diagram, a multichannel audio stream compression device 300 according to one embodiment. As shown in figure 4, the device 300 comprises an input 310 for receiving the multichannel audio stream F describing the sound scene SCE produced by a plurality of sources SR in a space. The device 300 delivers the compressed stream Fc on an output 390.

The device 300 can comprise a conversion unit 320 adapted to transpose information contained in the signals S of the audio stream F representing the sound scene SCE into a basis of spherical harmonics, when the stream F comprises signals S intended to feed loudspeakers directly, such as signals S of the 5.1, 6.1, 7.1, 10.2 or 22.2 type. The conversion unit 320 outputs signals SHOA described in a basis of spherical harmonics.

The device 300 comprises a source identification unit 330 coupled to the output of the conversion unit 320 so as to receive the signals SHOA. The identification unit 330 is adapted to identify the sources SR from the stream F, and to determine, for each of the identified sources SR, a frequency band, an energy level and a spatial position in the space. To this end, the identification unit 330 is configured to calculate a spatial projection of the energy levels of the sources onto a sphere and to search for the directions of the spatial projection whose energy level is non-zero. The identification unit 330 delivers, on an output, the frequency band, the energy level and the spatial position in the space of each identified source SR. In particular, the identification unit 330 can be configured to identify only the audible sources SR.

The device 300 comprises a unit 340 for determining the spatial resolution RS, coupled to the output of the identification unit 330, the spatial resolution RS corresponding to the smallest variation in position of said source in the space that a listener is able to perceive. The determination unit 340, using for example a psychoacoustic model 350, delivers on an output the spatial resolution RS for each identified source SR, as a function:

  • of the frequency band, the energy level and the spatial position of said source; and
  • of the frequency band, the energy level and the spatial position of at least a subset of the other identified sources.

The device 300 comprises a generation unit 360, coupled to the output of the determination unit 340, adapted to form the compressed stream Fc from the information necessary to render each identified source SR with at least the corresponding spatial resolution RS. The generation unit 360 is in particular adapted to produce the compressed stream Fc by subdividing the space into subspaces and by truncating, for each of the subspaces, an order of representation of the signals in the basis of spherical harmonics, until a spatial resolution is obtained that is substantially equal to the maximum value of the spatial resolutions associated with the sources present in the subspace under consideration. The subdivision of the space into subspaces can moreover be dynamic over time.

Figure 5 shows a processing device 400 for implementing the compression method according to the invention.

The device 400 comprises an interface 420 coupled to an input 410 for receiving the stream F and to an output for delivering the compressed stream Fc. The interface 420 is, for example, an interface for accessing a communication network, a storage device and/or a media reader.

The device 400 also comprises a processor 440 coupled to a memory 450. The processor 440 is configured to communicate with the interface 420. In particular, the processor is adapted to execute computer programs stored in the memory 450, respectively comprising instructions adapted to carry out the steps of the compression methods that have just been described. The memory 450 can be a combination of elements chosen from the following list: a RAM, a ROM, for example a CD-ROM or a microelectronic circuit ROM, or else a magnetic recording means, for example a diskette or a hard disk, or a transmissible medium such as an electrical or optical signal, which can be conveyed via an electrical or optical cable, by radio or by other means. The computer program can in particular be downloaded from an Internet-type network. Alternatively, the memory 450 can be an integrated circuit in which the program is incorporated, the circuit being adapted to execute, or to be used in the execution of, the methods in question.

Claims (13)

  1. Method for compression of an audio stream comprising a plurality of signals, said audio stream describing a sound scene produced by a plurality of sources in a space, characterized in that it includes the following steps:
    • from the audio stream, identification (10; 120, 130, 135) of the sources;
    • determination of a frequency band, of an energy level and of a spatial position in the space for each of the identified sources;
    • determination (20;140), for each identified source, of a spatial resolution corresponding to a smallest variation in position of said source in the space that a listener is able to perceive, as a function:
    ○ of the frequency band, of the energy level and of the spatial position of said source; and
    ○ of the frequency band, of the energy level and of the spatial position of the other identified sources;
    • generation (30;150) of a compressed stream including the necessary information to restore each identified source with at least the corresponding spatial resolution.
  2. Method according to Claim 1, in which the step of identification of the sources includes a step of identification of audible sources only.
  3. Method according to Claim 1 or 2, in which the signals of the audio stream comprise information representing the sound scene in a basis of spherical harmonics.
  4. Method according to Claim 1 or 2, characterized in that it includes a step of transposition (100) of the information comprised in the signals of the audio stream representing the sound scene in a basis of spherical harmonics.
  5. Method according to any one of Claims 3 to 4, in which the step of generation (150) of the compressed stream is carried out by sub-dividing the space into sub-spaces, and by truncating, for each of the sub-spaces, an order of representation of the signals in the basis of spherical harmonics, until a spatial resolution is obtained that is substantially equal to the maximum value of the spatial resolutions associated with the sources present in the sub-space under consideration.
  6. Method according to Claim 5, in which the subdivision of the space into sub-spaces is dynamic over time.
  7. Computer program including instructions for the implementation of the method according to any one of Claims 1 to 6 when this program is executed by a processor.
  8. Information medium readable by a computer, and including instructions of a computer program according to Claim 7.
  9. Device (200; 300; 400) for compression of multi-channel audio streams, comprising an input (210;310;410) for receiving a multi-channel audio stream describing a sound scene produced by a plurality of sources in a space, and an output (260;390;430) for delivering a compressed stream, characterized in that it includes:
    • a sources identification unit (220;330;440,450), coupled to the input (210;310;410), adapted for identifying the sources, from the stream, and for determining a frequency band, an energy level and a spatial position in the space for each of the identified sources;
    • a spatial resolution determination unit (230; 340; 440,450), coupled to the identification unit (220;330;440,450) adapted for determining, for each identified source, a spatial resolution corresponding to a smallest variation in position of said source in the space that a listener is able to perceive, as a function
    ○ of the frequency band, of the energy level and of the spatial position of said source; and
    ○ of the frequency band, of the energy level and of the spatial position of the other identified sources;
    • a unit for generation (250;360;440,450) of the compressed stream, coupled to the spatial resolution determination unit (230; 340; 440,450), adapted for forming the compressed stream from the necessary information for restoring each identified source with at least the corresponding spatial resolution, and delivering the compressed stream on the output (260;390;440,450).
  10. Device according to Claim 9, in which the identification unit (220;330;440,450) is configured to identify audible sources only.
  11. Device according to any one of Claims 9 to 10, in which the generation unit (360) is adapted for producing the compressed stream from the signals when the latter include information representing the sound scene in a basis of spherical harmonics by:
    • sub-dividing the space into sub-spaces, and
    • truncating, for each of the sub-spaces, an order of representation of the signals in the basis of spherical harmonics, until a spatial resolution is obtained that is substantially equal to the maximum value of the spatial resolutions associated with the sources present in the sub-space under consideration.
  12. Device according to Claim 11, in which the generation unit (360) is configured to adapt the subdivision of the space into sub-spaces over time.
  13. Device according to any one of Claims 11 to 12, furthermore comprising a conversion unit (320) adapted for transposing information comprised in the signals of the audio stream in a basis of spherical harmonics.
EP11708920.1A 2010-02-26 2011-02-10 Multichannel audio stream compression Active EP2539892B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1051420 2010-02-26
PCT/FR2011/050282 WO2011104463A1 (en) 2010-02-26 2011-02-10 Multichannel audio stream compression

Publications (2)

Publication Number Publication Date
EP2539892A1 EP2539892A1 (en) 2013-01-02
EP2539892B1 EP2539892B1 (en) 2014-04-02

Family

ID=42670337

Family Applications (1)

Application Number Title Priority Date Filing Date
EP11708920.1A Active EP2539892B1 (en) 2010-02-26 2011-02-10 Multichannel audio stream compression

Country Status (3)

Country Link
US (1) US9058803B2 (en)
EP (1) EP2539892B1 (en)
WO (1) WO2011104463A1 (en)


Also Published As

Publication number Publication date
EP2539892A1 (en) 2013-01-02
WO2011104463A1 (en) 2011-09-01
US20120314878A1 (en) 2012-12-13
US9058803B2 (en) 2015-06-16



Effective date: 20140402

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140402

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140804

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602011005897

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140402

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140402

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140402

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140402

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 5

26N No opposition filed

Effective date: 20150106

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140402

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602011005897

Country of ref document: DE

Effective date: 20150106

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20141119

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20150228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140402

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20150210

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140402

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20150228

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20150228

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20150210

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140402

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 7

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20110210

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140402

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140402

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 8

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140402

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140402

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230119

Year of fee payment: 13

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240123

Year of fee payment: 14

Ref country code: GB

Payment date: 20240123

Year of fee payment: 14