EP2374124A1 - Codage perfectionne de signaux audionumériques multicanaux - Google Patents
Codage perfectionné de signaux audionumériques multicanaux (Improved coding of multichannel digital audio signals)
Info
- Publication number
- EP2374124A1 (application EP09803839A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- sources
- sound
- module
- data
- signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
- H04S3/008—Systems employing more than two channels, e.g. quadraphonic in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used in stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/03—Application of parametric coding in stereophonic audio systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used in stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/07—Synergistic effects of band splitting and sub-band processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
- H04S3/02—Systems employing more than two channels, e.g. quadraphonic of the matrix type, i.e. in which input signals are combined algebraically, e.g. after having been phase shifted with respect to each other
Definitions
- the present invention relates to the field of encoding / decoding multichannel digital audio signals. More particularly, the present invention relates to the parametric encoding / decoding of multichannel audio signals.
- This type of coding / decoding is based on the extraction of spatialization parameters so that at decoding, the spatial perception of the listener can be reconstituted.
- Such a coding technique is known as "Binaural Cue Coding" (BCC); it aims on the one hand to extract and then code the auditory spatialization cues, and on the other hand to code a monophonic or stereophonic signal obtained from a downmix of the original multichannel signal.
- This parametric approach is a low-bit-rate coding scheme.
- The main advantage of this coding approach is to allow a better compression ratio than conventional multichannel digital audio compression methods, while ensuring backward compatibility of the resulting compressed format with existing coding formats and broadcasting systems.
- FIG. 1 describes such a coding/decoding system, in which the coder 100 constructs a sum signal ("downmix") S_s by matrixing, in module 110, the channels of the original multichannel signal S, and provides, via a parameter extraction module 120, a reduced set of parameters P which characterize the spatial content of the original multichannel signal.
- the multichannel signal is reconstructed (S ') by a synthesis module 160 which takes into account both the sum signal and the transmitted parameters P.
- the sum signal has a reduced number of channels. These channels can be encoded by a conventional audio encoder before transmission or storage.
- the sum signal has two channels and is compatible with conventional stereo broadcasting. Before transmission or storage, this sum signal can thus be encoded by any conventional stereo encoder. The signal thus coded is then compatible with the devices comprising the corresponding decoder which reconstruct the sum signal while ignoring the spatial data.
- the sum signal contains two channels
- a stereophonic reproduction must make it possible to respect the relative position of the sound sources in the reconstructed sound space.
- the left / right positioning of the sound sources must be able to be respected.
- the resulting sum signal is then transmitted to the decoder in the form of a time signal.
- The transition from the time-frequency domain back to the time domain involves interactions between frequency bands and neighboring time frames that introduce troublesome audible artifacts.
- There is therefore a need for a per-frequency-band parametric coding/decoding technique which makes it possible to limit the defects introduced by converting signals from the time-frequency domain to the time domain, and to control the spatial coherence between the multichannel audio signal and the sum signal resulting from a downmix of the sound sources.
- the present invention improves the situation. For this purpose, it proposes a method of encoding a multichannel audio signal representing a sound scene comprising a plurality of sound sources. The method is such that it comprises a step of decomposing the multichannel signal into frequency bands and the following steps per frequency band: obtaining data representative of the direction of the sound sources of the sound scene;
- The mixing matrix takes into account the source direction information. This makes it possible to adapt the resulting sum signal for good spatial sound reproduction when this signal is reconstructed at the decoder.
- The sum signal is thus adapted to the reproduction characteristics of the multichannel signal and to possible overlaps of the positions of the sound sources.
- The spatial coherence between the sum signal and the multichannel signal is thus respected.
- the data representative of the direction are directivity information representative of the distribution of the sound sources in the sound scene.
- the directivity information associated with a source gives not only the direction of the source but also the shape, or spatial distribution, of the source, ie the interaction that this source can have with other sources of the sound stage.
- Knowing this directivity information associated with the sum signal will allow the decoder to obtain a signal of better quality which takes into account interchannel redundancies in a global manner and the probable phase oppositions between channels.
- the coding of the directivity information is performed by a parametric representation method.
- This method is of low complexity and adapts particularly to the case of synthetic sound scenes representing an ideal coding situation.
- Alternatively, the coding of the directivity information is performed by a principal component analysis method delivering basic directivity vectors associated with gains allowing reconstruction of the initial directivities. This makes it possible to code the directivities of complex sound scenes that cannot easily be represented by a model.
- the coding of the directivity information is performed by a combination of a principal component analysis method and a parametric representation method.
- The method further includes encoding secondary sources among the unselected sources of the sound scene and inserting coding information for the secondary sources into the bit stream.
- The coding of the secondary sources thus makes it possible to provide additional precision in the decoded signal, in particular for complex signals such as, for example, ambisonic signals.
- the present invention also relates to a method for decoding a multichannel audio signal representing a sound scene comprising a plurality of sound sources, from a bit stream and a sum signal.
- the method is such that it comprises the following steps: extraction in the bit stream and decoding of data representative of the direction of the sound sources in the sound scene;
- The decoded direction data thus make it possible to recover the inverse of the mixing matrix used at the encoder.
- This mixing matrix makes it possible to find, from the sum signal, the main sources that will be rendered in space with a good spatial coherence.
- The adaptation step thus makes it possible to find the directions of the sources to be spatialized so as to obtain a sound reproduction that is coherent with the rendering system.
- The reconstructed signal is then well adapted to the reproduction characteristics of the multichannel signal while avoiding possible overlaps of the positions of the sound sources.
- the decoding method further comprises the following steps:
- decoding the secondary sources from the extracted coding information - grouping secondary sources with the main sources for spatialization.
- the present invention also relates to an encoder of a multichannel audio signal representing a sound scene having a plurality of sound sources.
- The encoder is such that it comprises: a module for decomposing the multichannel signal into frequency bands;
- a matrixing module which applies the determined matrix to the selected main sources to obtain a sum signal with a reduced number of channels;
- a coding module for the data representative of the direction of the sound sources.
- the decoder is such that it comprises:
- A storage means readable by a computer or a processor, possibly removable and integrated or not into the encoder, stores a computer program implementing an encoding method and/or a decoding method according to the invention.
- FIG. 2 illustrates an encoder and a coding method according to one embodiment of the invention
- FIG. 3a illustrates a first embodiment of the coding of the directivities according to the invention
- FIG. 3b illustrates a second embodiment of the coding of the directivities according to the invention
- FIG. 4 illustrates a flowchart representing the steps of determining a mixing matrix according to one embodiment of the invention
- FIG. 5a illustrates an example of distribution of sound sources around a listener
- FIG. 5b illustrates the adaptation of the distribution of sound sources around a listener to adapt the direction data of the sound sources according to one embodiment of the invention
- FIG. 6 illustrates a decoder and a decoding method according to one embodiment of the invention
- FIGS. 7a and 7b respectively represent an example of a device comprising an encoder and an exemplary device comprising a decoder according to the invention.
- FIG. 2 illustrates, in the form of a block diagram, an encoder according to one embodiment of the invention, as well as the steps of a coding method according to one embodiment of the invention.
- the encoder thus illustrated comprises a time-frequency transform module 210 which receives as input an original multichannel signal representing a sound scene comprising a plurality of sound sources.
- This module therefore performs a step T of calculating the time-frequency transform of the original multichannel signal S m .
- This transform is carried out for example by a short-term Fourier transform.
- Each of the n_x channels of the original signal is windowed on the current time frame, then the Fourier transform of the windowed signal is computed using a fast Fourier transform (FFT) algorithm on n_FFT points.
- This yields a complex matrix X of size n_FFT × n_x containing the coefficients of the original multichannel signal in the frequency domain.
- The subsequent processing by the encoder is done per frequency band. This is achieved by cutting the matrix of coefficients X into a set of sub-matrices X_j, each containing the frequency coefficients of the j-th band.
- The signal for a given frequency band j is thus obtained.
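The windowing, per-channel FFT, and band splitting just described can be sketched as follows (an illustration, not the patent's implementation; the Hann window, frame size, and band edges are assumptions):

```python
import numpy as np

def stft_frame(frame, n_fft):
    """Window each channel of one time frame (samples x channels) and
    take its FFT, giving the complex matrix X of size n_fft x n_x."""
    win = np.hanning(frame.shape[0])          # analysis window (assumed Hann)
    return np.fft.fft(frame * win[:, None], n_fft, axis=0)

def split_bands(X, band_edges):
    """Cut the coefficient matrix X into sub-matrices X_j, one per band."""
    return [X[lo:hi, :] for lo, hi in zip(band_edges[:-1], band_edges[1:])]

# Toy 4-channel frame of 256 samples
rng = np.random.default_rng(0)
frame = rng.standard_normal((256, 4))
X = stft_frame(frame, 256)
bands = split_bands(X, [0, 32, 96, 129])      # 3 bands up to the Nyquist bin
```

Each sub-matrix in `bands` then plays the role of one X_j in the per-band processing.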
- A module 220 for obtaining direction data of the sound sources makes it possible, in a step OBT, to determine on the one hand the direction data associated with each source of the sound scene and, on the other hand, the sources of the sound scene for the given frequency band.
- the direction data can be for example arrival direction data of a source that corresponds to the position of the source. Data of this type are for example described in the document of M.
- The direction data may be intensity difference data between the sound sources. These intensity differences make it possible to define average positions of the sources. Examples are the CLDs ("Channel Level Differences") used in the standardized MPEG Surround encoder.
- the data representative of the directions of the sources are directional information.
- the directivity information is representative of the spatial distribution of the sound sources in the sound scene.
- The directivities are vectors of the same dimension as the number n_s of channels of the multichannel signal S_m.
- Each source is associated with a vector of directivity.
- the directivity vector associated with a source corresponds to the weighting function to be applied to this source before playing it on a loudspeaker, so as to reproduce at best a direction of arrival and a width of source.
- the directivity vector makes it possible to faithfully represent the radiation of a sound source.
- The directivity vector is obtained by applying an inverse spherical Fourier transform to the components of the ambisonic orders.
- The ambisonic signals correspond to a decomposition into spherical harmonics, hence the direct correspondence with the directivity of the sources.
- the set of directivity vectors therefore constitutes a large amount of data that would be too expensive to transmit directly for low coding rate applications.
- two methods of representing the directivities can for example be used.
- the Cod.Di coding module 230 for directivity information can thus implement one of the two methods described below or a combination of the two methods.
- A first method is a parametric modeling method that makes it possible to exploit a priori knowledge of the signal format used. It consists of transmitting only a very small number of parameters and reconstructing the directivities according to known schemes.
- the associated directivity is known as a function of the direction of arrival of the sound source.
- A search for peaks in the directivity diagram, by analogy with sinusoidal analysis as explained for example in "Computer modeling of musical sound (analysis, transformation, synthesis)", Sylvain Marchand, PhD thesis, University Bordeaux 1, makes it possible to detect the direction of arrival relatively accurately.
- A parametric representation can also use a dictionary of simple shapes to represent the directivities.
- Each element of the dictionary is associated with a datum, for example the corresponding azimuth, and with a gain acting on the amplitude of that dictionary directivity vector. It is thus possible, from a dictionary of directivity shapes, to deduce the shape, or combination of shapes, that best reconstructs the initial directivity.
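A minimal sketch of such a dictionary search (hypothetical; the dictionary entries and the least-squares gain rule are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def best_dictionary_match(d, dictionary):
    """Find the dictionary shape and gain that best reconstruct the
    directivity d. For each candidate shape s, the optimal gain is the
    least-squares projection g = <d, s> / <s, s>; the pair with the
    smallest squared error is kept."""
    best = None
    for idx, s in enumerate(dictionary):
        g = float(np.dot(d, s) / np.dot(s, s))
        err = float(np.sum((d - g * s) ** 2))
        if best is None or err < best[2]:
            best = (idx, g, err)
    return best  # (index, gain, squared error)

# Toy dictionary of simple directivity shapes (hypothetical entries)
dico = [np.array([1.0, 0.0, 0.0]),                 # concentrated left
        np.array([0.0, 0.0, 1.0]),                 # concentrated right
        np.array([1.0, 1.0, 1.0]) / np.sqrt(3)]    # omnidirectional
idx, gain, err = best_dictionary_match(np.array([0.5, 0.0, 0.0]), dico)
```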
- The directivity coding module 230 comprises a parametric modeling module which outputs directivity parameters P. These parameters are then quantized by the quantization module 240.
- This first method makes it possible to obtain a very good level of compression when the scene corresponds to an ideal coding. This will be particularly the case on synthetic soundtracks.
- the representation of the directivity information is in the form of a linear combination of a limited number of basic directivities.
- This method is based on the fact that the set of directivities at a given moment generally has a reduced dimension. Indeed, only a small number of sources is active at a given moment and the directivity for each source varies little with the frequency. It is thus possible to represent all the directivities in a group of frequency bands from a very small number of well-chosen basic directivities.
- the parameters transmitted are then the basic directivity vectors for the group of bands considered, and for each directivity to be coded, the coefficients to be applied to the basic directivities to reconstruct the directivity considered.
- This method is based on principal component analysis (PCA, "ACP" in French).
- This tool is developed at length by I.T. Jolliffe in "Principal Component Analysis", Springer, 2002.
- the representation of the directivities is therefore done from basic directivities.
- the matrix of directivities Di is written as the linear combination of these basic directivities.
- Di = G_D · D_B, where D_B is the matrix of the basic directivities for all the bands and G_D is the matrix of the associated gains.
- The number of rows of the gain matrix G_D represents the total number of sources of the sound scene, and its number of columns the number of basic directivity vectors.
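The decomposition Di = G_D · D_B can be obtained in practice via a truncated SVD, which is the standard way to compute a PCA basis. A sketch (illustrative only; function and variable names are assumptions):

```python
import numpy as np

def pca_directivity_basis(Di, n_basis):
    """Decompose the directivity matrix Di (sources x channels) into
    Di ~ G_D @ D_B, where the rows of D_B are n_basis orthonormal basic
    directivities and G_D holds the associated gains per source."""
    U, s, Vt = np.linalg.svd(Di, full_matrices=False)
    D_B = Vt[:n_basis, :]                  # basic directivity vectors
    G_D = U[:, :n_basis] * s[:n_basis]     # associated gains
    return G_D, D_B

rng = np.random.default_rng(1)
# 6 source directivities that live in a 2-dimensional subspace
base = rng.standard_normal((2, 5))
Di = rng.standard_normal((6, 2)) @ base
G_D, D_B = pca_directivity_basis(Di, 2)
recon = G_D @ D_B                          # exact here, since rank(Di) = 2
```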
- The basic directivities are sent per group of bands considered, in order to represent the directivities more accurately.
- For example, two groups of basic directivities can be used: one for low frequencies and one for high frequencies.
- The limit between these two groups can for example be chosen between 5 and 7 kHz.
- The coding module 230 comprises a principal component analysis module delivering basic directivity vectors D_B and associated coefficients, or gain vectors, G_D.
- a limited number of directivity vectors will be encoded and transmitted.
- The number of basic vectors to be transmitted may be fixed, or else selected by the coder using, for example, a threshold on the mean squared error between the original directivity and the reconstructed directivity. Thus, if the error is below the threshold, the basis vectors selected so far are sufficient, and it is not necessary to code an additional basis vector.
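The threshold-based selection described above can be sketched as follows (a minimal illustration, assuming an SVD-based PCA as before; names and the threshold value are not from the patent):

```python
import numpy as np

def select_n_basis(Di, threshold):
    """Add basic directivity vectors one by one until the mean squared
    reconstruction error of Di falls below the threshold."""
    U, s, Vt = np.linalg.svd(Di, full_matrices=False)
    for n in range(1, len(s) + 1):
        recon = (U[:, :n] * s[:n]) @ Vt[:n, :]
        mse = float(np.mean((Di - recon) ** 2))
        if mse < threshold:
            return n
    return len(s)

rng = np.random.default_rng(2)
# Rank-3 toy directivity matrix: 3 basis vectors reconstruct it exactly
Di = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))
n_needed = select_n_basis(Di, 1e-10)
```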
- the coding of the directivities is achieved by a combination of the two representations listed above.
- FIG. 3a illustrates in a detailed manner, the directivity coding block 230, in a first variant embodiment.
- This coding mode uses the two diagrams of representation of the directivities.
- a module 310 performs parametric modeling as previously explained to provide directional parameters (P).
- a module 320 performs principal component analysis to provide both basic directivity vectors (D B ) and associated coefficients (G D ).
- A selection module 330 selects, frequency band by frequency band, the best coding mode for the directivity by choosing the best directivity-reconstruction/bit-rate compromise.
- The representation (parametric, or linear combination of basic directivities) is chosen so as to optimize compression efficiency.
- a selection criterion is, for example, the minimization of the mean squared error.
- a perceptual weighting may possibly be used for the choice of the directivity coding mode. This weighting is intended for example to promote the reconstruction of the directivities in the frontal area, for which the ear is more sensitive.
- The error function to be minimized in the case of the PCA coding model can take the form of a mean squared error, possibly perceptually weighted as described above.
- The directivity parameters from the selection module are then quantized by the quantization module 240 of FIG. 2.
- A parametric modeling module 340 performs modeling for a certain number of directivities and outputs both directivity parameters P for the modeled directivities and the unmodeled, or residual, directivities Di_R.
- These residual directivities Di_R are then processed by a principal component analysis module 350, which outputs basic directivity vectors D_B and associated coefficients G_D.
- the directivity parameters, the basic directivity vectors as well as the coefficients are provided at the input of the quantization module 240 of FIG. 2.
- Quantization Q is performed by reducing the accuracy as a function of perceptual data and then applying entropy coding. The redundancy between frequency bands or between successive frames can also be exploited to reduce the bit rate; intra-frame or inter-frame prediction of the parameters can therefore be used. In general, the standard quantization methods can be used. Moreover, since the vectors to be quantized are orthonormal, this property can be exploited during the scalar quantization of the components of each vector: for a vector of dimension N, only N−1 components need to be quantized, the last component being recomputable from the unit norm.
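A sketch of this unit-norm trick (an illustration under assumptions not in the patent: a uniform scalar quantizer, and one extra sign bit for the recomputed last component, since the unit norm only fixes its magnitude):

```python
import numpy as np

def quantize_unit_vector(v, step=1 / 64):
    """Scalar-quantize only the first N-1 components of a unit-norm
    vector; the magnitude of the last component is recomputed from the
    unit norm, so only its sign needs to be transmitted."""
    q = np.round(v[:-1] / step) * step                 # N-1 quantized values
    last_sq = max(0.0, 1.0 - float(np.sum(q ** 2)))    # from the unit norm
    sign = 1.0 if v[-1] >= 0 else -1.0                 # one extra sign bit
    return np.append(q, sign * np.sqrt(last_sq))

v = np.array([0.6, 0.0, -0.8])                         # unit-norm input
vq = quantize_unit_vector(v)
```

Note that the reconstructed vector is unit-norm by construction, so the quantization error is pushed into the last component rather than into the norm.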
- The parameters intended for the decoder are decoded by the internal decoding module 235 in order to recover the same information that the decoder will have after receiving the coded direction data for the main sources selected by the module 260 described later. Principal directions are thus obtained.
- When the direction data take the form of directions of arrival of the sources, the information can be used as is.
- Otherwise, the module 235 determines a single position per source by averaging the directivities. This average can for example be calculated as the barycenter of the directivity vector. These single positions, or principal directions, are then used by the module 275, which first determines the directions of the main sources and adapts them according to a spatial coherence criterion, knowing the multichannel signal reproduction system.
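One way such a barycenter could be computed (illustrative only; the squared-gain weighting and the loudspeaker azimuths are assumptions, not specified by the patent):

```python
import numpy as np

def principal_direction(directivity, speaker_azimuths_deg):
    """Reduce a directivity vector (one weight per channel) to a single
    direction: the barycenter of the loudspeaker unit vectors, weighted
    by the squared directivity gains."""
    az = np.radians(speaker_azimuths_deg)
    unit = np.stack([np.cos(az), np.sin(az)], axis=1)   # 2-D unit vectors
    w = directivity ** 2                                # energy weights
    bary = (w[:, None] * unit).sum(axis=0) / w.sum()
    return float(np.degrees(np.arctan2(bary[1], bary[0])))

# Hypothetical 5.0 layout; a source panned equally between L (30°) and C (0°)
azimuths = [30.0, -30.0, 0.0, 110.0, -110.0]
d = np.array([0.7, 0.0, 0.7, 0.0, 0.0])
angle = principal_direction(d, azimuths)   # bisector of 30° and 0°
```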
- The reproduction takes place through two loudspeakers located in front of the listener.
- The steps implemented by the module 275 are described with reference to FIG. 4.
- Figure 5a represents an original sound scene with 4 sound sources (A, B, C and D) distributed around the listener.
- Sources C and D are located behind the listener, who is at the center of the circle. Sources C and D are brought to the front of the stage by symmetry.
- Figure 5b illustrates in the form of arrows, this operation.
- Step E31 of FIG. 4 carries out a test to determine whether the preceding operation creates an overlap of the positions of the sources in space.
- This is for example the case for sources B and D, which after the operation of step E30 are located at a distance that no longer differentiates them.
- Step E32 then modifies the position of one of the two sources in question to place it at a minimum distance e_min that allows the listener to differentiate them.
- The spacing is applied symmetrically with respect to the point equidistant from the two sources, to minimize the displacement of each. If the sources are placed too close to the limit of the sound image (extreme left or right), the source closest to this limit is positioned at this limit position, and the other source is placed at the minimum spacing relative to the first. In the example illustrated in FIG. 5b, it is source B that is shifted so that the distance e_min separates sources B and D.
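The spacing rule of step E32 as just described can be sketched as follows (a minimal illustration; the ±45° image limit and the numeric values are assumptions):

```python
def enforce_min_spacing(a, b, e_min, limit=45.0):
    """Ensure two source azimuths (degrees, |az| <= limit) are at least
    e_min apart: normally spread them symmetrically about their
    midpoint; if that would cross the sound-image limit, pin the
    nearer source at the limit and shift only the other one."""
    lo, hi = sorted((a, b))
    if hi - lo >= e_min:
        return a, b                          # already distinguishable
    mid = (lo + hi) / 2.0
    lo, hi = mid - e_min / 2.0, mid + e_min / 2.0
    if lo < -limit:                          # pinned at the left limit
        lo, hi = -limit, -limit + e_min
    elif hi > limit:                         # pinned at the right limit
        lo, hi = limit - e_min, limit
    return (lo, hi) if a <= b else (hi, lo)

b1, d1 = enforce_min_spacing(10.0, 12.0, 8.0)   # symmetric spread
b2, d2 = enforce_min_spacing(44.0, 45.0, 8.0)   # near the right edge
```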
- If the test of step E31 is negative, the positions of the sources are maintained and step E33 is implemented. This step consists of constructing a mixing matrix from the source position information defined in the previous steps.
- In the case of signal reproduction by a 5.1-type system, the loudspeakers are distributed around the listener. It is then not necessary to implement step E30, which brings the sources located behind the listener to the front.
- Step E32, which modifies the distance between two sources, remains applicable: when positioning a sound source between two loudspeakers of a 5.1 setup, two sources may end up at a distance that does not allow the listener to differentiate them.
- The directions of the sources are therefore modified to obtain a minimum distance between two sources, as explained above.
- the mixing matrix is thus determined in step E33, as a function of the directions obtained after or without modifications.
- This matrix is constructed in such a way as to ensure the spatial coherence of the sum signal, i.e. if rendered alone, the sum signal already makes it possible to obtain a sound scene in which the relative positions of the sound sources are respected: a frontal source in the original scene will be perceived in front of the listener, a source on the left will be perceived on the left, a source further left will be perceived further left, and likewise on the right.
- For example, the weighting coefficients are set to 1 for the left channel and 0 for the right channel to represent a signal at the −45° position, and vice versa to represent a signal at +45°.
- For a source in the center, the matrixing coefficients for the left channel and the right channel must be equal.
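A mixing matrix satisfying these constraints can be built, for instance, with a constant-power sine/cosine pan law (an illustrative choice; the patent only states the endpoint and center constraints, not this particular law):

```python
import numpy as np

def stereo_mix_matrix(azimuths_deg):
    """Build a 2 x n mixing matrix for sources at the given azimuths
    (degrees, -45 = full left, +45 = full right) using a constant-power
    sine/cosine pan law: full left gives gains (1, 0), full right
    (0, 1), and a central source equal gains on both channels."""
    theta = (np.asarray(azimuths_deg, float) + 45.0) / 90.0 * np.pi / 2.0
    return np.vstack([np.cos(theta), np.sin(theta)])   # rows: left, right

M = stereo_mix_matrix([-45.0, 0.0, 45.0])
```

With this law every column has unit power, so a source keeps the same perceived level wherever it is panned.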
- The encoder as described herein further comprises a selection module 260 able to select, in a selection step, the main sources (S_princ) from among all the sources of the sound scene to be coded (S_tot).
- A particular embodiment uses a principal component analysis (PCA) method in each frequency band in block 220 to extract all the sources of the sound scene (S_tot).
- The most important sources are then selected by the module 260 to constitute the main sources (S_princ), which are then matrixed by the module 270, using the matrix M defined by the module 275, to construct a sum signal (S_s) (or "downmix").
- This per-band sum signal undergoes an inverse time-frequency transform T⁻¹ in the inverse transform module 290 in order to provide a time-domain sum signal (S_s).
- This sum signal is then encoded by a speech coder or an audio coder of the state of the art (for example: G.729.1 or MPEG-4 AAC).
- Secondary sources (S_sec) may be encoded by a coding module 280 and added to the bitstream in the bitstream building module 250. For these secondary sources, i.e. the sources that are not transmitted directly in the sum signal, there are different processing alternatives.
- The coding module 280 may, in one embodiment, be a short-term Fourier transform coding module. The secondary sources can then be separately encoded using the aforementioned audio or speech coders. In a variant of this coding, the transform coefficients of these secondary sources can be coded directly, only in the bands considered important.
- Alternatively, the secondary sources can be encoded by parametric representations; these representations can be in the form of a spectral envelope or a temporal envelope.
- This module performs a base change step in order to express the sound scene using the plane wave decomposition of the acoustic field.
- the original surround signal is seen as the angular Fourier transform of a sound field.
- the first plane wave decomposition operation therefore corresponds to taking the omnidirectional component of the ambiophonic signal as representing the zero angular frequency (this component is therefore a real component).
- the following surround components are combined to obtain the complex coefficients of the angular Fourier transform.
- the first component represents the real part
- the second component represents the imaginary part.
- O For a two-dimensional representation, for an order O, we obtain O + 1 complex components.
- a Short Term Fourier Transform (on the time dimension) is then applied to obtain the Fourier transforms (in the frequency domain) of each angular harmonic. This step then integrates the transformation step T of the module 210. the complete angular transform by recreating the harmonics of negative frequencies by Hermitian symmetry.
- an inverse Fourier transform is then carried out along the angular-frequency dimension to pass into the domain of the directivities.
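The steps above can be sketched for a 2D ambisonic signal as follows; the channel ordering (W, X1, Y1, ..., XO, YO) and the scaling conventions are assumptions made for illustration.

```python
import numpy as np

def ambisonics_to_directivities(channels, n_dir):
    """Sketch of the preprocessing P for a 2D ambisonic signal of order
    O, given as 2*O+1 channels (W, X1, Y1, ..., XO, YO): the channels
    are grouped into the O+1 complex angular Fourier coefficients, the
    negative angular frequencies are recreated by Hermitian symmetry,
    and an inverse FFT over the angular dimension yields n_dir
    directivity-domain signals. Scaling conventions are assumptions."""
    W = channels[0]
    order = (len(channels) - 1) // 2
    spec = np.zeros((n_dir,) + W.shape, dtype=complex)
    spec[0] = W                          # zero angular frequency (real)
    for m in range(1, order + 1):
        c_m = channels[2 * m - 1] + 1j * channels[2 * m]
        spec[m] = c_m
        spec[-m] = np.conj(c_m)          # Hermitian symmetry
    # inverse transform over angular frequencies -> directivity domain
    return np.real(np.fft.ifft(spec, axis=0)) * n_dir

# A purely omnidirectional field: every directivity receives the same signal
d = ambisonics_to_directivities([np.ones(4), np.zeros(4), np.zeros(4)], 8)
```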
- This preprocessing step P allows the coder to work in a signal space whose physical and perceptual interpretation is simpler, making it possible to exploit knowledge of spatial auditory perception more effectively and thus improve coding performance.
- encoding the surround signals nevertheless remains possible without this preprocessing step.
- Figure 6 now describes a decoder and a decoding method in one embodiment of the invention.
- This decoder receives as input the bit stream F_b constructed by the encoder described above, as well as the sum signal S_s. As for the encoder, all processing is done frame by frame. To simplify the notation, the description of the decoder which follows covers the processing of a single time frame and omits the temporal dependence from the notation; in the decoder, this same processing is applied successively to all the time frames of the signal.
- the decoder thus described comprises a module 650 (Decod. Fb) for decoding the information contained in the received bit stream F_b.
- the direction information, and more particularly here the directivities, is extracted from the bit stream.
- the operations of this bitstream decoding module depend on the directivity coding methods used at the encoder. The decoded data can take the form of basic directivity vectors D_B and associated coefficients G_D and/or modeling parameters P. These data are then transmitted to a directivity-information reconstruction module 660, which decodes the directivities by operations inverse to those performed at coding.
- the number of directivities to be reconstructed is equal to the number n_tot of sources in the frequency band considered, each source being associated with a directivity vector.
- the matrix of directivities D_i is written as a linear combination of these basic directivities:
- D_i = G_D · D_B
- where D_B is the matrix of the basic directivities for all the bands
- and G_D is the matrix of the associated gains.
- This gain matrix has a number of rows equal to the total number of sources n_tot, and a number of columns equal to the number of basic directivity vectors.
- in one embodiment, the basic directivities are decoded by group of frequency bands considered, in order to represent the directivities more accurately.
- a vector of gains associated with the basic directivities is then decoded for each band.
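The reconstruction D_i = G_D · D_B can be sketched as a plain matrix product; the sizes below are illustrative.

```python
import numpy as np

def reconstruct_directivities(G_D, D_B):
    """Directivity reconstruction as the linear combination Di = G_D . D_B:
    G_D has one row per source (n_tot rows) and one column per basic
    directivity vector; D_B stacks the basic directivity vectors as rows.
    Per-band gain vectors would select the rows of G_D for that band."""
    return G_D @ D_B

# 2 sources expressed on 3 basic directivities sampled at 4 directions
D_B = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 1.0]])
G_D = np.array([[1.0, 0.0, 0.0],
                [0.5, 0.5, 0.0]])
D = reconstruct_directivities(G_D, D_B)
```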
- a module 690 for defining the principal directions of the sources and for determining the mixing matrix N receives this decoded direction or directivity information.
- This module first calculates the main directions, for example by averaging the received directivities. From these directions, a mixing matrix inverse to that used at coding is determined. Knowing the panning laws used for the mixing matrix at the encoder, the decoder is able to reconstruct the inverse mixing matrix from the direction information corresponding to the directions of the main sources.
- the directivity information is transmitted separately for each source.
- the directivities relative to the main sources and the directivities of the secondary sources are well identified.
- this decoder does not need any other information to calculate this matrix, since it depends only on the direction information received in the bit stream.
- the number of rows of the matrix N corresponds to the number of channels of the sum signal, and the number of columns corresponds to the number of main sources transmitted.
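A minimal sketch of module 690 under assumed conventions follows: directions are taken as the angle of each directivity's centroid on the circle, and a cos/sin panning law (an assumption — the actual law must match the encoder's) rebuilds the mixing matrix, whose pseudo-inverse gives N.

```python
import numpy as np

def inverse_mixing_matrix(directivities):
    """Sketch of module 690: the main direction of each source is
    estimated as the angle of the centroid of its directivity (sampled
    on the circle), the encoder's stereo mixing matrix is rebuilt with
    a cos/sin panning law, and its pseudo-inverse gives N. The cos/sin
    law is an illustrative assumption; the decoder must use the same
    panning law as the encoder."""
    n_src, n_dir = directivities.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_dir, endpoint=False)
    # main direction: angle of the directivity-weighted centroid
    theta = np.angle(directivities @ np.exp(1j * angles))
    # encoder-side mixing matrix: one row per source, one column per channel
    M_mix = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    # inverse mixing matrix N: one row per sum-signal channel, one column per source
    return np.linalg.pinv(M_mix)

# Two sources pointing at 0 and 90 degrees on a 4-point circle
N = inverse_mixing_matrix(np.array([[1.0, 0.0, 0.0, 0.0],
                                    [0.0, 1.0, 0.0, 0.0]]))
```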
- the inverse matrix N thus defined is then used by the dematrixing module 620.
- the decoder thus receives, in parallel with the bit stream, the sum signal S_s. The latter first undergoes a time-frequency transform in the transform module T 610 to obtain a sum signal per frequency band, S_sf.
- This transform is carried out using, for example, the short-term Fourier transform. Other transforms or filter banks may also be used, including non-uniform filter banks following a perceptual scale (e.g. Bark). To avoid discontinuities when reconstructing the signal from this transform, an overlap-add method is used. For the time frame considered, the step of calculating the transform of the sum signal yields a matrix F of frequency coefficients.
- the entire processing is done in frequency bands.
- the matrix F of coefficients is divided into a plurality of submatrices F_j, each containing the frequency coefficients of the j-th band.
- Different choices for the frequency division of the bands are possible.
- in one embodiment, bands symmetrical with respect to the zero frequency of the short-term Fourier transform are chosen.
- the description of the decoding steps performed by the decoder will be made for a given frequency band. The steps are of course carried out for each of the frequency bands to be processed.
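The transform and band-splitting steps above can be sketched as follows; the Hann window and the band edges are illustrative assumptions.

```python
import numpy as np

def sum_signal_bands(frame, band_edges):
    """Sketch of the transform module T 610 followed by band splitting:
    a Hann-windowed FFT of one time frame of each sum-signal channel
    gives the coefficient matrix F (rows = frequency bins, columns =
    channels), which is then cut into submatrices F_j per band. The
    Hann window and the band edges are illustrative assumptions."""
    frame = np.atleast_2d(frame)              # (n_channels, frame_len)
    F = np.fft.rfft(frame * np.hanning(frame.shape[1]), axis=1).T
    return [F[lo:hi, :] for lo, hi in zip(band_edges[:-1], band_edges[1:])]

# A 2-channel frame of 8 samples split into 2 bands (5 rfft bins total)
bands = sum_signal_bands(np.ones((2, 8)), [0, 2, 5])
```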
- the frequency coefficients of the transform of the sum signal in the frequency band considered are matrixed by the module 620 with the matrix N determined as described above, so as to recover the main sources of the sound scene. More precisely, the matrix S_princ of the frequency coefficients, for the current frequency band, of the n_princ main sources is obtained according to the relation: S_princ = B · N
- where N is of dimension n_f × n_princ and B is a matrix of dimension n_bin × n_f, n_bin being the number of frequency components (or bins) retained in the frequency band considered.
- the rows of B are the frequency components in the current frequency band, and its columns correspond to the channels of the sum signal.
- the rows of S_princ are the frequency components in the current frequency band, and each column corresponds to a main source.
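The dematrixing relation S_princ = B · N reduces to a single matrix product, sketched here with illustrative sizes.

```python
import numpy as np

def dematrix_main_sources(B, N):
    """Dematrixing step of module 620 for one frequency band: the sum-
    signal coefficients B (rows = frequency bins, columns = sum-signal
    channels) are multiplied by the inverse mixing matrix N
    (n_f x n_princ) to recover the main sources: S_princ = B @ N,
    one column per main source."""
    return B @ N

# 3 bins of a 2-channel sum signal; with N = identity the two channels
# are returned unchanged as the two main sources
B = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
S_princ = dematrix_main_sources(B, np.eye(2))
```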
- the additional, or secondary, sources coded in the bitstream are then decoded for the current band by the bitstream decoding module 650.
- This decoding module then decodes, in addition to the directional information, the secondary sources.
- the decoding of the secondary sources is carried out by operations inverse to those performed at coding.
- the corresponding data are decoded to reconstruct the matrix S_sec of the frequency coefficients, in the current band, of the n_sec secondary sources.
- the shape of the matrix S_sec is similar to that of the matrix S_princ: the rows are the frequency components in the current frequency band, and each column corresponds to a secondary source.
- the frequency coefficients of the multichannel signal reconstructed in the band are calculated in the spatialization module 630, according to the relation:
- Y = S · D^T, where Y is the reconstructed signal in the band, S gathers the frequency coefficients of the main and secondary sources, and D is the matrix of directivities.
- the rows of the matrix Y are the frequency components in the current frequency band, and each column corresponds to a channel of the multichannel signal to be reconstructed.
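The spatialization step can be sketched as below; here D is stored with one directivity row per source, so the product reads S @ D, which is the relation Y = S D^T under the transposed storage of D. Sizes are illustrative.

```python
import numpy as np

def spatialize(S_princ, S_sec, D):
    """Spatialization step of module 630 for one band: main and
    secondary source coefficients are concatenated column-wise into S,
    then projected on the directivities. Here D stores one directivity
    row per source (n_tot x n_channels), so the product reads S @ D;
    with the transposed storage of D this is the relation Y = S D^T."""
    S = np.hstack([S_princ, S_sec])           # (n_bin, n_tot)
    return S @ D                               # (n_bin, n_channels)

# One main and one secondary source, each routed to one of two channels
Y = spatialize(np.array([[1.0], [2.0]]),      # main source, 2 bins
               np.array([[0.5], [0.0]]),      # secondary source
               np.array([[1.0, 0.0],          # directivity of source 1
                         [0.0, 1.0]]))        # directivity of source 2
```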
- the corresponding time signals are then obtained by the inverse Fourier transform T^-1, using a fast algorithm implemented by the inverse transform module 640. This gives the multichannel signal S_m on the current time frame.
- the different time frames are then combined by a conventional overlap-add method to reconstruct the complete multichannel signal.
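The overlap-add reconstruction can be sketched as follows; the synthesis-window normalisation a real implementation would apply is omitted.

```python
import numpy as np

def overlap_add(frames, hop):
    """Overlap-add of successive time frames (one row per frame) with
    hop size `hop`, as used to reconstruct each channel of the complete
    multichannel signal; synthesis-window normalisation, which a real
    implementation would apply, is omitted from this sketch."""
    n_frames, frame_len = frames.shape
    out = np.zeros((n_frames - 1) * hop + frame_len)
    for i, f in enumerate(frames):
        out[i * hop:i * hop + frame_len] += f
    return out

# Two constant frames of length 4 with 50% overlap
y = overlap_add(np.ones((2, 4)), hop=2)
```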
- temporal or frequency smoothing of the parameters can be used, both at analysis and at synthesis, to ensure smooth transitions in the sound scene.
- a flag signaling a sudden change of the sound scene may be reserved in the bit stream, so that the decoder can disable smoothing when a rapid change in the composition of the sound scene is detected.
- conventional methods of adapting the resolution of the time-frequency analysis can be used (change in the size of the analysis and synthesis windows over time).
- at the encoder, a base change module can perform a preprocessing to obtain a plane wave decomposition of the signals.
- at the decoder, a base change module 670 performs the inverse operation P^-1 on the plane-wave signals to recover the original multichannel signal.
- the encoders and decoders described with reference to FIGS. 2 and 6 can be integrated into multimedia equipment such as a set-top box or a computer, or into communication equipment such as a mobile telephone or personal digital assistant.
- FIG. 7a represents an example of such multimedia equipment, or coding device, comprising an encoder according to the invention. This device comprises a processor PROC cooperating with a memory block BM having a storage and/or working memory MEM.
- the device comprises an input module adapted to receive a multichannel signal representing a sound scene, either via a communication network or by reading content stored on a storage medium.
- This multimedia equipment may also include means for capturing such a multichannel signal.
- the memory block BM may advantageously comprise a computer program comprising code instructions for implementing the steps of the coding method within the meaning of the invention when these instructions are executed by the processor PROC, in particular the steps of decomposing the multichannel signal into frequency bands and the subsequent steps performed per frequency band.
- FIG. 2 shows the steps of an algorithm of such a computer program.
- the computer program can also be stored on a memory medium readable by a reader of the device or downloadable in the memory space of the equipment.
- the device comprises an output module capable of transmitting a bit stream Fb and a sum signal Ss resulting from the coding of the multichannel signal.
- FIG. 7b illustrates an example of multimedia equipment or decoding device comprising a decoder according to the invention.
- This device comprises a PROC processor cooperating with a memory block BM having a storage and / or working memory MEM.
- the device comprises an input module adapted to receive a bit stream Fb and a sum signal S_s coming, for example, from a communication network. These input signals may alternatively be read from a storage medium.
- the memory block may advantageously comprise a computer program comprising code instructions for implementing the steps of the decoding method in the sense of the invention when these instructions are executed by the processor PROC, in particular the steps of extracting from the bitstream and decoding the data representative of the direction of the sound sources in the sound scene.
- FIG. 6 shows the steps of an algorithm of such a computer program.
- the computer program can also be stored on a memory medium readable by a reader of the device or downloadable in the memory space of the equipment.
- the device comprises an output module capable of transmitting a multichannel signal decoded by the decoding method implemented by the equipment.
- This multimedia equipment may also include speaker-type reproduction means or communication means capable of transmitting this multi-channel signal.
- Such multimedia equipment may include both the encoder and the decoder according to the invention; the input signal is then the original multichannel signal, and the output signal the decoded multichannel signal.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0858563 | 2008-12-15 | ||
PCT/FR2009/052492 WO2010076460A1 (fr) | 2008-12-15 | 2009-12-11 | Codage perfectionne de signaux audionumériques multicanaux |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2374124A1 true EP2374124A1 (fr) | 2011-10-12 |
EP2374124B1 EP2374124B1 (fr) | 2013-05-29 |
Family
ID=40763760
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP09803839.1A Active EP2374124B1 (fr) | 2008-12-15 | 2009-12-11 | Codage perfectionne de signaux audionumériques multicanaux |
Country Status (4)
Country | Link |
---|---|
US (1) | US8817991B2 (fr) |
EP (1) | EP2374124B1 (fr) |
ES (1) | ES2435792T3 (fr) |
WO (1) | WO2010076460A1 (fr) |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011104463A1 (fr) * | 2010-02-26 | 2011-09-01 | France Telecom | Compression de flux audio multicanal |
EP2661746B1 (fr) * | 2011-01-05 | 2018-08-01 | Nokia Technologies Oy | Codage et/ou décodage de multiples canaux |
EP2770506A4 (fr) * | 2011-10-19 | 2015-02-25 | Panasonic Ip Corp America | Dispositif de codage et procédé de codage |
EP2665208A1 (fr) | 2012-05-14 | 2013-11-20 | Thomson Licensing | Procédé et appareil de compression et de décompression d'une représentation de signaux d'ambiophonie d'ordre supérieur |
US9516446B2 (en) | 2012-07-20 | 2016-12-06 | Qualcomm Incorporated | Scalable downmix design for object-based surround codec with cluster analysis by synthesis |
US9761229B2 (en) * | 2012-07-20 | 2017-09-12 | Qualcomm Incorporated | Systems, methods, apparatus, and computer-readable media for audio object clustering |
PT2880654T (pt) * | 2012-08-03 | 2017-12-07 | Fraunhofer Ges Forschung | Descodificador e método para um conceito paramétrico generalizado de codificação de objeto de áudio espacial para caixas de downmix/upmix multicanal |
US9466305B2 (en) | 2013-05-29 | 2016-10-11 | Qualcomm Incorporated | Performing positional analysis to code spherical harmonic coefficients |
US9980074B2 (en) * | 2013-05-29 | 2018-05-22 | Qualcomm Incorporated | Quantization step sizes for compression of spatial components of a sound field |
US9338573B2 (en) | 2013-07-30 | 2016-05-10 | Dts, Inc. | Matrix decoder with constant-power pairwise panning |
JP6612753B2 (ja) * | 2013-11-27 | 2019-11-27 | ディーティーエス・インコーポレイテッド | 高チャンネル数マルチチャンネルオーディオのためのマルチプレットベースのマトリックスミキシング |
US9502045B2 (en) | 2014-01-30 | 2016-11-22 | Qualcomm Incorporated | Coding independent frames of ambient higher-order ambisonic coefficients |
US9922656B2 (en) | 2014-01-30 | 2018-03-20 | Qualcomm Incorporated | Transitioning of ambient higher-order ambisonic coefficients |
CN109036441B (zh) * | 2014-03-24 | 2023-06-06 | 杜比国际公司 | 对高阶高保真立体声信号应用动态范围压缩的方法和设备 |
US9852737B2 (en) * | 2014-05-16 | 2017-12-26 | Qualcomm Incorporated | Coding vectors decomposed from higher-order ambisonics audio signals |
US10770087B2 (en) | 2014-05-16 | 2020-09-08 | Qualcomm Incorporated | Selecting codebooks for coding vectors decomposed from higher-order ambisonic audio signals |
US9620137B2 (en) | 2014-05-16 | 2017-04-11 | Qualcomm Incorporated | Determining between scalar and vector quantization in higher order ambisonic coefficients |
US9847087B2 (en) * | 2014-05-16 | 2017-12-19 | Qualcomm Incorporated | Higher order ambisonics signal compression |
US9747910B2 (en) | 2014-09-26 | 2017-08-29 | Qualcomm Incorporated | Switching between predictive and non-predictive quantization techniques in a higher order ambisonics (HOA) framework |
DK3007467T3 (da) * | 2014-10-06 | 2017-11-27 | Oticon As | Høreapparat, der omfatter en lydkildeadskillelsesenhed med lav latenstid |
CN106297820A (zh) | 2015-05-14 | 2017-01-04 | 杜比实验室特许公司 | 具有基于迭代加权的源方向确定的音频源分离 |
FR3048808A1 (fr) * | 2016-03-10 | 2017-09-15 | Orange | Codage et decodage optimise d'informations de spatialisation pour le codage et le decodage parametrique d'un signal audio multicanal |
MC200185B1 (fr) * | 2016-09-16 | 2017-10-04 | Coronal Audio | Dispositif et procédé de captation et traitement d'un champ acoustique tridimensionnel |
MC200186B1 (fr) | 2016-09-30 | 2017-10-18 | Coronal Encoding | Procédé de conversion, d'encodage stéréophonique, de décodage et de transcodage d'un signal audio tridimensionnel |
CN110114826B (zh) | 2016-11-08 | 2023-09-05 | 弗劳恩霍夫应用研究促进协会 | 使用相位补偿对多声道信号进行下混合或上混合的装置和方法 |
FR3060830A1 (fr) * | 2016-12-21 | 2018-06-22 | Orange | Traitement en sous-bandes d'un contenu ambisonique reel pour un decodage perfectionne |
KR102675420B1 (ko) | 2018-04-05 | 2024-06-17 | 텔레호낙티에볼라게트 엘엠 에릭슨(피유비엘) | 컴포트 노이즈 생성 지원 |
CN109258509B (zh) * | 2018-11-16 | 2023-05-02 | 太原理工大学 | 一种生猪异常声音智能监测系统与方法 |
CN116978387A (zh) * | 2019-07-02 | 2023-10-31 | 杜比国际公司 | 用于离散指向性数据的表示、编码和解码的方法、设备和系统 |
WO2021107941A1 (fr) * | 2019-11-27 | 2021-06-03 | Vitalchains Corporation | Procédé et système de séparation de sons à partir de différentes sources |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101339854B1 (ko) | 2006-03-15 | 2014-02-06 | 오렌지 | 주 성분 분석에 의해 다중채널 오디오 신호를 인코딩하기 위한 장치 및 방법 |
US8379868B2 (en) * | 2006-05-17 | 2013-02-19 | Creative Technology Ltd | Spatial audio coding based on universal spatial cues |
- 2009
- 2009-12-11 US US13/139,611 patent/US8817991B2/en active Active
- 2009-12-11 WO PCT/FR2009/052492 patent/WO2010076460A1/fr active Application Filing
- 2009-12-11 ES ES09803839T patent/ES2435792T3/es active Active
- 2009-12-11 EP EP09803839.1A patent/EP2374124B1/fr active Active
Non-Patent Citations (1)
Title |
---|
See references of WO2010076460A1 * |
Also Published As
Publication number | Publication date |
---|---|
ES2435792T3 (es) | 2013-12-23 |
US8817991B2 (en) | 2014-08-26 |
EP2374124B1 (fr) | 2013-05-29 |
US20110249822A1 (en) | 2011-10-13 |
WO2010076460A1 (fr) | 2010-07-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2374124B1 (fr) | Codage perfectionne de signaux audionumériques multicanaux | |
EP2374123B1 (fr) | Codage perfectionne de signaux audionumeriques multicanaux | |
EP2002424B1 (fr) | Dispositif et procede de codage scalable d'un signal audio multi-canal selon une analyse en composante principale | |
EP2898707B1 (fr) | Calibration optimisee d'un systeme de restitution sonore multi haut-parleurs | |
EP2691952B1 (fr) | Allocation par sous-bandes de bits de quantification de paramètres d'information spatiale pour un codage paramétrique | |
EP2539892B1 (fr) | Compression de flux audio multicanal | |
EP3427260B1 (fr) | Codage et décodage optimisé d'informations de spatialisation pour le codage et le décodage paramétrique d'un signal audio multicanal | |
EP2304721B1 (fr) | Synthese spatiale de signaux audio multicanaux | |
EP1600042B1 (fr) | Procede de traitement de donnees sonores compressees, pour spatialisation | |
FR2966634A1 (fr) | Codage/decodage parametrique stereo ameliore pour les canaux en opposition de phase | |
EP2143102B1 (fr) | Procede de codage et decodage audio, codeur audio, decodeur audio et programmes d'ordinateur associes | |
EP2168121B1 (fr) | Quantification apres transformation lineaire combinant les signaux audio d'une scene sonore, codeur associe | |
WO2017103418A1 (fr) | Traitement de réduction de canaux adaptatif pour le codage d'un signal audio multicanal | |
EP2489039A1 (fr) | Codage/décodage paramétrique bas débit optimisé | |
FR3049084A1 (fr) | ||
EP4042418B1 (fr) | Détermination de corrections à appliquer a un signal audio multicanal, codage et décodage associés | |
WO2024213555A1 (fr) | Traitement optimisé de réduction de canaux d'un signal audio stéréophonique | |
EP2198425A1 (fr) | Procede, module et programme d'ordinateur avec quantification en fonction des vecteurs de gerzon | |
- WO2023232823A1 (fr) | Codage audio spatialisé avec adaptation d'un traitement de décorrélation |
FR3112015A1 (fr) | Codage optimisé d’une information représentative d’une image spatiale d’un signal audio multicanal | |
WO2009081002A1 (fr) | Traitement d'un flux audio 3d en fonction d'un niveau de presence de composantes spatiales |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20110708 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
|
DAX | Request for extension of the european patent (deleted) | ||
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: VIRETTE, DAVID Inventor name: JAILLET, FLORENT |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 614840 Country of ref document: AT Kind code of ref document: T Effective date: 20130615 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: LANGUAGE OF EP DOCUMENT: FRENCH |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602009016110 Country of ref document: DE Effective date: 20130725 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PUE Owner name: ORANGE, FR Free format text: FORMER OWNER: FRANCE TELECOM, FR |
|
RAP2 | Party data changed (patent owner data changed or rights of a patent transferred) |
Owner name: ORANGE |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 614840 Country of ref document: AT Kind code of ref document: T Effective date: 20130529 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130830 Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130829 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130929 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130930 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: VDEP Effective date: 20130529 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130829 |
|
REG | Reference to a national code |
Ref country code: ES Ref legal event code: FG2A Ref document number: 2435792 Country of ref document: ES Kind code of ref document: T3 Effective date: 20131223 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20140303 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602009016110 Country of ref document: DE Effective date: 20140303 |
|
BERE | Be: lapsed |
Owner name: FRANCE TELECOM Effective date: 20131231 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20131211 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20131231 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20131231 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20131231 Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20131211 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20091211 Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130529 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 7 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 8 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 9 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: IT Payment date: 20221122 Year of fee payment: 14 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20231121 Year of fee payment: 15 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20231122 Year of fee payment: 15 Ref country code: DE Payment date: 20231121 Year of fee payment: 15 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: ES Payment date: 20240102 Year of fee payment: 15 |