EP1316087B1 - Concealment of transmission errors in an audio signal - Google Patents

Concealment of transmission errors in an audio signal

Info

Publication number
EP1316087B1
Authority
EP
European Patent Office
Prior art keywords
signal
samples
synthesis
decoded
voiced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP01969857A
Other languages
English (en)
French (fr)
Other versions
EP1316087A1 (de)
Inventor
Balazs Kovesi
Dominique Massaloux
David Deleam
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
France Telecom SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by France Telecom SA filed Critical France Telecom SA
Publication of EP1316087A1 publication Critical patent/EP1316087A1/de
Application granted granted Critical
Publication of EP1316087B1 publication Critical patent/EP1316087B1/de
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current


Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/005Correction of errors induced by the transmission channel, if related to the coding algorithm

Definitions

  • the present invention relates to techniques for concealing consecutive transmission errors in transmission systems using any type of digital coding of the speech and/or sound signal.
  • the coded values are then transformed into a bit stream which will be transmitted on a transmission channel.
  • disturbances may affect the transmitted signal and produce errors in the bitstream received by the decoder. These errors may occur in isolation in the bit stream, but very frequently they occur in bursts, in which case a whole packet of bits corresponding to a complete portion of the signal is erroneous or not received at all. This type of problem occurs, for example, in transmission over mobile networks. It is also found in transmission over packet networks, in particular Internet-type networks.
  • a general object of the invention is to improve, for any speech and sound compression system, the subjective quality of the signal restored at the decoder when a set of consecutive coded data has been lost, whether because of poor transmission-channel quality or because a packet was lost or not received in a packet transmission system.
  • Most predictive coding algorithms provide erased-frame recovery techniques ([GSM-FR], [REC G.723.1A], [SALAMI], [HONKANEN], [COX-2], [CHEN-2], [CHEN-3], [CHEN-4], [CHEN-5], [CHEN-6], [CHEN-7], [KROON-2], [WATKINS]).
  • the decoder is informed of the occurrence of an erased frame in one way or another, for example, in the case of mobile radio systems, by transmission of the frame-erasure information from the channel decoder.
  • the purpose of the erased-frame recovery devices is to extrapolate the parameters of the erased frame from the last frame (or last several frames) considered valid.
  • Some parameters manipulated or coded by predictive coders have a strong inter-frame correlation (this is the case of the short-term prediction parameters, also called "LPC" parameters, for "Linear Predictive Coding" (see [RABINER]), which represent the spectral envelope, and of the long-term prediction parameters for voiced sounds, for example). Because of this correlation, it is much more advantageous to reuse the parameters of the last valid frame to synthesize the erased frame than to use erroneous or random parameters.
  • the procedures for concealing erased frames are strongly tied to the decoder and use modules of this decoder, such as the signal synthesis module. They also use intermediate signals available within this decoder, such as the past excitation signal stored during the processing of the valid frames preceding the erased frames.
  • the techniques for reconstructing erased frames also depend on the coding structure used: algorithms such as [PICTEL, MAHIEUX-2] aim at regenerating the lost transform coefficients from the values taken by these coefficients before the erasure.
  • the energy of the synthesis signal thus generated is controlled by means of a gain calculated and adapted sample by sample.
  • the gain for the control of the synthesis signal is advantageously calculated as a function of at least one of the following parameters: energy values previously stored for the samples corresponding to valid data, fundamental period for the voiced sounds, or any parameter characterizing the frequency spectrum.
  • the gain applied to the synthesis signal decreases progressively as a function of the time during which the synthesis samples are generated.
  • stationary and non-stationary sounds are discriminated in the valid data, and different adaptation laws of this gain (e.g. different rates of decrease) are used, on the one hand for the samples generated after valid data corresponding to stationary sounds and on the other hand for the samples generated after valid data corresponding to non-stationary sounds.
  • the contents of the memories used for the decoding process are updated according to the generated synthesis samples.
  • a coding analogous to that implemented at the transmitter is applied, at least in part, to the synthesized samples, optionally followed by an (at least partial) decoding operation, the data obtained being used to regenerate the decoder memories.
  • this possibly partial coding-decoding operation can advantageously be used to regenerate the first erased frame, because it makes it possible to exploit the contents of the decoder memories from before the interruption when these memories contain information not provided by the last valid decoded samples (e.g. in the case of transform coders with overlapping windows, see paragraph 5.2.2.2.1 point 10).
  • an excitation signal is generated at the input of the short-term prediction operator; in a voiced zone it is the sum of a harmonic component and a weakly harmonic or non-harmonic component, while in an unvoiced zone it is limited to the non-harmonic component.
  • the harmonic component is advantageously obtained by implementing a filtering by means of the long-term prediction operator applied to a residual signal calculated by implementing inverse short-term filtering on the stored samples.
  • the other component can be determined using a long-term prediction operator to which pseudo-random disturbances (e.g. perturbations of the gain or of the period) are applied.
  • the harmonic component represents the low frequencies of the spectrum, while the other component represents the high-frequency portion.
  • the long-term prediction operator is determined from the stored valid frame samples, with a number of samples used for this estimate varying between a minimum value and a value equal to at least two times the fundamental period estimated for voiced sound.
  • the residual signal is advantageously modified by non-linear processing to eliminate amplitude peaks.
  • the voice activity is detected by estimating noise parameters when the signal is considered as non-active, and parameters of the synthesized signal are adjusted to those of the estimated noise.
  • the spectral envelope of the noise of the valid decoded samples is estimated, and a synthesized signal is generated that evolves towards a signal having the same spectral envelope.
  • the invention also proposes a method for processing sound signals, characterized in that it implements a discrimination between speech and musical sounds and, when musical sounds are detected, a method of the aforementioned type is implemented without estimating a long-term prediction operator, the excitation signal being limited to a non-harmonic component obtained, for example, by generating uniform white noise.
  • the invention further relates to a device for concealing transmission errors in a digital audio signal, which receives as input a decoded signal transmitted to it by a decoder and which generates the missing or erroneous samples of this decoded signal, characterized in that it comprises processing means capable of implementing the aforementioned method.
  • It also relates to a transmission system comprising at least one encoder, at least one transmission channel, a module able to detect that transmitted data has been lost or is highly erroneous, at least one decoder and an error concealment device which receives the decoded signal, characterized in that this error concealment device is a device of the aforementioned type.
  • FIG. 1 presents a device for coding and decoding the digital audio signal, comprising an encoder 1, a transmission channel 2, a module 3 making it possible to detect that data transmitted has been lost or is highly erroneous, a decoder 4, and a module 5 for concealing errors or lost packets according to a possible embodiment of the invention.
  • in addition to the indication of erased data, this module receives the decoded signal during valid periods and transmits to the decoder the signals used for its update.
  • the memory of decoded samples, which contains a sufficient number of samples for the regeneration of any subsequently erased periods, is updated.
  • the energy of the valid frames is also calculated and the energies corresponding to the last valid processed frames (typically of the order of 5 s) are stored in memory.
  • This spectral envelope is calculated as an LPC [RABINER] [KLEIJN] filter.
  • the analysis is carried out by conventional methods ([KLEIJN]) after windowing of the samples stored during the valid period.
  • an LPC analysis (step 10) is used to obtain the parameters of a filter A(z), the inverse of which is used for the LPC filtering (step 11). Since the coefficients thus calculated do not have to be transmitted, a high order can be used for this analysis, which makes it possible to obtain good performance on musical signals (an illustrative LPC-analysis sketch is given after this list).
  • a method for detecting voiced sounds (processing 12 of FIG. 3: V / NV detection, for "voiced / unvoiced") is used on the last stored data.
  • the voicing decision can be based, for example, on the normalized correlation [KLEIJN] or on the criterion presented in the following exemplary embodiment.
  • when the signal is declared voiced, the parameters enabling the generation of a long-term synthesis filter, also called LTP filter ([KLEIJN]), are calculated (FIG. 3: LTP analysis; the inverse filter is defined by B(z)).
  • the LTP filter is generally represented by a period corresponding to the fundamental period and a gain. The accuracy of this filter can be improved by the use of fractional pitch or a multi-coefficient structure [KROON] (a pitch-estimation sketch is given after this list).
  • the length of the analysis window varies between a minimum value and a value related to the fundamental period of the signal.
  • a residual signal is calculated by inverse LPC filtering (processing 10) of the last stored samples. This signal is then used to generate an excitation signal for the LPC synthesis filter 11 (see below).
  • the synthesis of the replacement samples is carried out by introducing an excitation signal (calculated at 13 from the signal at the output of the inverse LPC filter) into the LPC synthesis filter 11 (1/A(z)) calculated in step 1 (an excitation-and-synthesis sketch is given after this list).
  • This excitation signal is generated in two different ways depending on whether the signal is voiced or unvoiced:
  • in the voiced case, the excitation signal is the sum of two components, one strongly harmonic and the other weakly harmonic or not harmonic at all.
  • the strongly harmonic component is obtained by LTP filtering (processing module 14) of the residual signal mentioned in point 3, using the parameters calculated in point 2.
  • the second component can also be obtained by LTP filtering, made non-periodic by random modifications of its parameters, so as to generate a pseudo-random signal.
  • the residual signal used for generating the excitation is processed to eliminate the amplitude peaks significantly above the average.
  • the energy of the synthesis signal is controlled using a gain calculated and adapted sample by sample. When the erasure period is relatively long, it is necessary to lower the energy of the synthesis signal progressively (a gain-adaptation sketch is given after this list).
  • the gain adaptation law is calculated according to different parameters: the energy values stored before the erasure (see point 1), the fundamental period, and the local stationarity of the signal at the moment of the interruption.
  • if the system includes a module that discriminates between stationary sounds (such as music) and non-stationary sounds (such as speech), different adaptation laws may also be used.
  • the first half of the memory of the last correctly received frame contains fairly accurate information about the first half of the first lost frame (its weight in the overlap-add is greater than that of the current frame). This information can also be used for calculating the adaptive gain.
  • if the system is coupled to a voice activity detection device with estimation of the noise parameters (such as [REC-G.723.1A], [SALAMI-2], [BENYASSINE]), it is particularly interesting to make the parameters used to generate the reconstructed signal tend towards those of the estimated noise: in particular the spectral envelope (interpolation of the LPC filter with that of the estimated noise, the interpolation coefficients evolving over time until the noise filter is obtained) and the energy (level evolving progressively towards that of the noise, for example by windowing). A sketch of this convergence towards the noise parameters is given after this list.
  • the present invention performs a time-domain weighting, with interpolation between the replacement samples generated before communication is restored and the valid decoded samples following the erased period (a cross-fade sketch is given after this list). This operation is a priori independent of the type of coder used.
  • this desynchronization can cause audible impairments that may persist for a long time, or even grow over time if there are instabilities in the structure. In this case it is therefore important to try to resynchronize the encoder and the decoder, i.e. to make an estimate of the decoder memories as close as possible to those of the encoder.
  • the resynchronization techniques depend on the coding structure used. One technique whose principle is general, but whose complexity is potentially significant, is presented in the present patent.
  • one possible method consists in introducing into the decoder, on reception, a coding module of the same type as that present at the transmission side, making it possible to perform the coding-decoding of the samples produced by the techniques mentioned in the preceding paragraph during the erased periods. In this way the memories needed to decode the following samples are filled with data a priori close (subject to a certain stationarity during the erased period) to the data that has been lost. If this stationarity assumption is not satisfied, after a long erased period for example, there is in any case not enough information available to do better (a schematic sketch of this resynchronization is given after this list).
  • This update can be done at the time of the production of the replacement samples, which distributes the complexity over the entire erasure zone, but is cumulative with the synthesis procedure described above.
  • the above procedure can also be limited to an intermediate zone at the beginning of the valid data period following an erased period, the updating procedure then being combined with the decoding operation.
  • TDAC-type digital transform coding/decoding system.
  • Wideband coder (50-7000 Hz) at 24 kbit/s or 32 kbit/s.
  • a bit frame contains the coded parameters obtained by the TDAC transformation of one window. After decoding these parameters and performing the inverse TDAC transformation, an output frame of 20 ms is obtained, which is the sum of the second half of the previous window and the first half of the current window. In Figure 4, the two parts of the windows used for the reconstruction of frame n are marked in bold. Thus, a lost bit frame disturbs the reconstruction of two consecutive frames (the current one and the next one, Figure 5). On the other hand, by correctly replacing the lost parameters, it is possible to recover the parts of the information coming from the preceding and following bit frames (FIG. 6) for the reconstruction of these two frames (an overlap-add sketch is given after this list).
  • the memory of the decoded samples is updated.
  • This memory is used for the LPC and LTP analyses of the past signal in the case of erasure of a bit frame.
  • the LPC analysis is done over a 20 ms signal period (320 samples).
  • the LTP analysis requires more samples to be stored.
  • the number of stored samples is twice the maximum value of the pitch. For example, if the maximum pitch MaxPitch is 320 samples (50 Hz, 20 ms), the last 640 samples (40 ms of signal) are stored.
  • This spectral envelope is calculated as an LPC [RABINER] [KLEIJN] filter.
  • the analysis is carried out by conventional methods ([KLEIJN]). After windowing of the samples stored during the valid period, an LPC analysis is used to calculate an LPC filter A(z) (step 19). For this analysis, a high order (> 100) is used to obtain good performance on musical signals.
  • the synthesis of the replacement samples is carried out by introducing an excitation signal into the LPC synthesis filter (1/A(z)) calculated in step 19.
  • This excitation signal - calculated in a step 20 - is a white noise whose amplitude is chosen to obtain a signal having the same energy as that of the last N samples stored in valid period.
  • the filtering step is referenced 21.
  • this gain G can be calculated as follows:
  • the Durbin algorithm gives the energy of the residual signal. Knowing also the energy of the signal to be modelled, the gain G_LPC of the LPC filter is estimated as the ratio of these two energies.
  • the target energy is estimated as equal to the energy of the last N samples stored during the valid period (N typically does not exceed the length of the signal used for the LPC analysis).
  • the energy of the synthesized signal is the product of the white-noise energy, G² and G_LPC.
  • G is chosen so that this energy equals the target energy (a sketch of this gain computation is given after this list).
  • the energy of the synthesis signal is controlled using a gain calculated and adapted sample by sample. When the erasure period is relatively long, it is necessary to lower the energy of the synthesis signal progressively.
  • the gain adaptation law can be calculated according to various parameters, such as the energy values stored before the erasure and the local stationarity of the signal at the moment of the interruption.
  • if the system is coupled to a device for detecting voice activity or musical signals, with estimation of the noise parameters (such as [REC-G.723.1A], [SALAMI-2], [BENYASSINE]), it is of particular interest to make the parameters used to generate the reconstructed signal tend towards those of the estimated noise: in particular the spectral envelope (interpolation of the LPC filter with that of the estimated noise, the interpolation coefficients evolving over time until the noise filter is obtained) and the energy (level evolving progressively towards that of the noise, for example by windowing).
  • the technique which has just been described has the advantage of being usable with any type of coder; in particular, it makes it possible to remedy the problem of lost bit packets for time-domain or transform coders, on speech and music signals, with good performance: in the present technique, the only signals stored during the periods when the transmitted data are valid are the samples output by the decoder, information that is available regardless of the coding structure used.
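
By way of illustration, the following minimal sketch (Python/NumPy, not code from the patent) outlines the LPC analysis referred to in the list above: the last stored valid samples are windowed, an autocorrelation is computed, and the Levinson-Durbin recursion yields the coefficients of A(z) together with the prediction-error energy, from which a gain such as G_LPC can be derived as the ratio of the two energies. The order, the Hanning window and the regularization constants are assumptions of the sketch.

    import numpy as np

    def levinson_durbin(r, order):
        """Solve the autocorrelation normal equations; return the LPC
        coefficients a (with a[0] = 1) and the final prediction-error energy."""
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for i in range(1, order + 1):
            acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
            k = -acc / err
            a[1:i + 1] += k * a[i - 1::-1][:i]   # a[j] += k * a_old[i - j]
            err *= (1.0 - k * k)
        return a, err

    def lpc_from_history(history, order=20):
        """history: the last stored valid decoded samples (1-D array)."""
        x = history * np.hanning(len(history))               # analysis window
        r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
        r[0] += 1e-9                                          # avoid a singular case
        a, err = levinson_durbin(r, order)
        g_lpc = r[0] / max(err, 1e-12)    # signal energy / residual energy (cf. G_LPC)
        return a, g_lpc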
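
A sketch of the voiced/unvoiced decision and of the LTP (pitch) analysis mentioned above, based on the normalized correlation over the stored history. The pitch range, the length of the analysis segment and the 0.6 voicing threshold are illustrative assumptions, not values prescribed by the patent.

    import numpy as np

    MIN_PITCH, MAX_PITCH = 32, 320     # e.g. 500 Hz .. 50 Hz at 16 kHz (assumed)

    def ltp_analysis(history, vuv_threshold=0.6):
        """history: last stored samples; at least 2 * MAX_PITCH samples are
        needed, which matches the 640-sample memory mentioned in the text."""
        n = MAX_PITCH                            # most recent analysis segment
        x = history[-n:]
        best_lag, best_corr = MIN_PITCH, -1.0
        for lag in range(MIN_PITCH, MAX_PITCH + 1):
            y = history[-n - lag:-lag]           # same segment, 'lag' samples earlier
            c = np.dot(x, y) / (np.sqrt(np.dot(x, x) * np.dot(y, y)) + 1e-12)
            if c > best_corr:
                best_lag, best_corr = lag, c     # maximum of the normalized correlation
        voiced = best_corr > vuv_threshold       # V/NV decision
        past = history[-n - best_lag:-best_lag]
        gain = np.dot(x, past) / (np.dot(past, past) + 1e-12)
        return voiced, best_lag, float(np.clip(gain, 0.0, 1.0)), best_corr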
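
A sketch of the construction of the replacement samples, reusing the hypothetical helpers lpc_from_history() and ltp_analysis() from the two previous sketches: the residual is obtained by inverse filtering with A(z), its amplitude peaks are clipped, a strongly harmonic component is produced by LTP filtering and a weakly harmonic one by LTP filtering with a randomly perturbed period, and the excitation is passed through the synthesis filter 1/A(z). The mixing weights, clipping factor and jitter range are assumptions; the low-/high-frequency split of the two components and the carry-over of filter memories from the valid period are omitted for brevity.

    import numpy as np
    from scipy.signal import lfilter

    def clip_peaks(res, factor=3.0):
        """Non-linear processing of the residual: limit samples to 'factor'
        times the mean absolute amplitude."""
        limit = factor * (np.mean(np.abs(res)) + 1e-12)
        return np.clip(res, -limit, limit)

    def synthesize_erased(history, n_out, a, ltp_lag, ltp_gain, voiced, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        res = clip_peaks(lfilter(a, [1.0], history))   # inverse short-term filter A(z)
        buf = list(res)                                # residual history to be extended
        exc = np.zeros(n_out)
        for i in range(n_out):
            if voiced:
                harmonic = ltp_gain * buf[-ltp_lag]                # LTP prediction
                jitter = int(rng.integers(-2, 3))                  # perturbed period
                weak = ltp_gain * buf[-max(ltp_lag + jitter, 1)]
                exc[i] = 0.7 * harmonic + 0.3 * weak
            else:
                exc[i] = rng.choice(res)               # noise-like excitation
            buf.append(exc[i])
        return lfilter([1.0], a, exc)                  # synthesis filter 1/A(z)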
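
A sketch of the sample-by-sample gain control: the synthesis signal is first scaled towards the energy stored for the last valid frames and then attenuated progressively, with a slower decay for stationary sounds (e.g. music) than for non-stationary sounds (e.g. speech). The decay times below are illustrative assumptions, not values from the patent.

    import numpy as np

    def gain_envelope(n_samples, fs, stationary, start_gain=1.0):
        """Gain decreasing progressively with the time spent in the erased zone."""
        fade_time = 2.0 if stationary else 1.0     # seconds until silence (assumed)
        t = np.arange(n_samples) / float(fs)
        return start_gain * np.clip(1.0 - t / fade_time, 0.0, 1.0)

    def apply_gain(synth, fs, stationary, target_rms, eps=1e-12):
        """Scale the synthesized samples towards the stored valid-frame level,
        then apply the progressive attenuation sample by sample."""
        rms = np.sqrt(np.mean(np.square(synth))) + eps
        return synth * (target_rms / rms) * gain_envelope(len(synth), fs, stationary)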
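
A sketch of the convergence towards the estimated background noise during long erasures, interpolating the synthesis LPC filter and the level towards those of the noise. Interpolating prediction coefficients directly may yield an unstable filter; practical systems generally interpolate in the LSF or autocorrelation domain, and the direct form is kept here only for brevity. The number of frames over which the interpolation completes is an assumption.

    import numpy as np

    def towards_noise(a_signal, a_noise, rms_signal, rms_noise,
                      frame_index, n_frames_to_noise=10):
        """Parameters to use for the 'frame_index'-th concealed frame (0-based):
        the interpolation coefficient grows until the noise filter/level is reached."""
        alpha = min(frame_index / float(n_frames_to_noise), 1.0)
        a = (1.0 - alpha) * np.asarray(a_signal, float) + alpha * np.asarray(a_noise, float)
        rms = (1.0 - alpha) * rms_signal + alpha * rms_noise
        return a, rms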
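
A sketch of the time-domain weighting with interpolation applied when valid decoded samples become available again after an erased period; the fade length and the linear weighting are assumptions of the sketch.

    import numpy as np

    def crossfade_resume(replacement_tail, valid_start, fade_len=80):
        """replacement_tail: last replacement samples kept after the erasure;
        valid_start: first valid decoded samples; both 1-D arrays."""
        n = min(fade_len, len(replacement_tail), len(valid_start))
        w = np.linspace(0.0, 1.0, n, endpoint=False)     # weight of the valid signal
        out = np.array(valid_start, dtype=float)
        out[:n] = (1.0 - w) * replacement_tail[:n] + w * valid_start[:n]
        return out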
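
A schematic sketch of the resynchronization by local re-encoding described above; local_encoder and local_decoder are purely hypothetical objects standing for a coder of the same type as the one used at transmission and for the decoder, both assumed to keep their internal memories updated by their encode()/decode() calls.

    def resynchronize(local_encoder, local_decoder, concealed_frames):
        """concealed_frames: the synthesized frames covering the erased period."""
        for frame in concealed_frames:
            bits = local_encoder.encode(frame)   # coding analogous to the transmitter
            local_decoder.decode(bits)           # refreshes the decoder memories
        # When valid bit frames resume, the decoder continues from memories that are
        # a priori close to those of the remote coder (assuming some stationarity
        # over the erased period), as noted in the text.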
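
An MDCT-style sketch of the TDAC overlap-add structure discussed above, illustrating why one lost bit frame disturbs two consecutive 20 ms output frames and why replacing the lost coefficients lets the halves contributed by the neighbouring windows be recovered. The sine window, the frame length and the sample rate are assumptions, and the quantization performed by the real coder is omitted.

    import numpy as np

    def mdct_pair(N):
        """Forward/inverse MDCT with a sine window (Princen-Bradley condition)."""
        n = np.arange(2 * N)
        k = np.arange(N)
        C = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
        w = np.sin(np.pi / (2 * N) * (n + 0.5))
        fwd = lambda x: C @ (w * x)                     # 2N samples -> N coefficients
        inv = lambda X: (2.0 / N) * w * (C.T @ X)       # N coefficients -> 2N samples
        return fwd, inv

    N, fs = 320, 16000                                  # 20 ms frames (assumed)
    fwd, inv = mdct_pair(N)
    x = np.sin(2 * np.pi * 440.0 * np.arange(6 * N) / fs)
    coded = [fwd(x[m * N:m * N + 2 * N]) for m in range(5)]   # one "bit frame" per window
    dec = [inv(c) for c in coded]
    # Output frame m = 2nd half of window m-1 + 1st half of window m (overlap-add):
    frame = lambda m: dec[m - 1][N:] + dec[m][:N]
    assert np.max(np.abs(frame(2) - x[2 * N:3 * N])) < 1e-8   # perfect reconstruction
    # Losing bit frame m therefore corrupts output frames m and m+1; replacing its
    # coefficients lets the parts coming from windows m-1 and m+1 be recovered.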
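
A sketch of the gain computation spelled out above for the transform-coder embodiment: the white-noise excitation is scaled by G so that, after the LPC synthesis filter whose energy gain is approximated by G_LPC (obtained from the Durbin recursion, cf. the first sketch), the synthesized energy matches that of the last valid samples. Per-sample powers are compared so that the segment lengths need not be equal; this is an assumption of the sketch.

    import numpy as np

    def scaled_noise_excitation(last_valid, g_lpc, n_out, rng=None):
        """last_valid: the last N samples stored in the valid period;
        g_lpc: energy gain of the LPC synthesis filter (signal/residual energy)."""
        if rng is None:
            rng = np.random.default_rng(0)
        target_power = np.mean(np.square(last_valid))     # target energy per sample
        noise = rng.uniform(-1.0, 1.0, n_out)             # uniform white noise
        noise_power = np.mean(np.square(noise))
        # target = noise_power * G**2 * G_LPC  =>  G = sqrt(target / (noise_power * G_LPC))
        g = np.sqrt(target_power / (noise_power * g_lpc + 1e-12))
        return g * noise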

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Detection And Prevention Of Errors In Transmission (AREA)
  • Transmission Systems Not Characterized By The Medium Used For Transmission (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Input Circuits Of Receivers And Coupling Of Receivers And Audio Equipment (AREA)
  • Automobile Manufacture Line, Endless Track Vehicle, Trailer (AREA)
  • Arrangements For Transmission Of Measured Signals (AREA)

Claims (18)

  1. Method for concealing a transmission error in a digital audio signal, in which, upon detection (3) of missing or erroneous samples in a signal, synthesis samples are generated (5) by means of at least one short-term prediction operator and, at least for the voiced sounds, of a long-term prediction operator, estimated as a function of decoded samples of a past decoded signal, the decoded samples being stored beforehand (6) when the transmitted data of the past signal are valid, characterized in that the energy of the synthesis signal thus generated is controlled by means of a gain calculated and adapted sample by sample according to an adaptation law that depends on at least one parameter of the stored decoded samples.
  2. Method according to Claim 1, characterized in that the gain for controlling the synthesis signal is calculated as a function of at least one of the following parameters: previously stored energy values for the samples corresponding to valid data, the fundamental period for voiced sounds, or any parameter characterizing the frequency spectrum.
  3. Method according to one of the preceding claims, characterized in that the gain applied to the synthesis signal decreases progressively as a function of the duration during which the synthesis samples are generated.
  4. Method according to one of the preceding claims, characterized in that stationary sounds and non-stationary sounds are discriminated in the valid data, and in that the gain adaptation laws used to control the synthesis signal differ between, on the one hand, the samples generated following valid data corresponding to stationary sounds and, on the other hand, the samples generated following valid data corresponding to non-stationary sounds.
  5. Method according to one of the preceding claims, characterized in that the content of memories used for the decoding processing is updated as a function of the generated synthesis samples.
  6. Method according to Claim 5, characterized in that a coding analogous to that applied at the transmitter is applied, at least in part, to the synthesized samples, followed where appropriate by an at least partial decoding operation, the data obtained serving to regenerate the memories of the decoder.
  7. Method according to Claim 6, characterized in that the first erased frame is regenerated by means of this coding-decoding operation, by exploiting the content of the decoder memories from before the interruption when these memories contain information that can be exploited in this operation.
  8. Method according to one of the preceding claims, characterized in that an excitation signal is generated at the input of the short-term prediction operator which, in the voiced zone, is the sum of a harmonic component and of a weakly harmonic or non-harmonic component, and which, in the unvoiced zone, is limited to a non-harmonic component.
  9. Method according to Claim 8, characterized in that the harmonic component is obtained by applying a filtering by means of the long-term prediction operator to a residual signal calculated by applying an inverse short-term filtering to the stored samples.
  10. Method according to Claim 9, characterized in that the other component is determined by means of a long-term prediction operator to which pseudo-random disturbances are applied.
  11. Method according to one of Claims 8 to 10, characterized in that, in order to generate a voiced excitation signal, the harmonic component is limited to the low frequencies of the spectrum while the other component is limited to the high frequencies.
  12. Method according to one of the preceding claims, characterized in that the long-term prediction operator is determined from the stored valid frame samples, with a number of samples used for this estimation varying between a minimum value and a value equal to at least twice the fundamental period estimated for the voiced sound.
  13. Method according to one of the preceding claims, characterized in that the residual signal is processed non-linearly in order to suppress amplitude peaks.
  14. Method according to one of the preceding claims, characterized in that voice activity is detected, noise parameters being estimated, and parameters of the synthesized signal being made to tend towards those of the estimated noise.
  15. Method according to Claim 14, characterized in that the spectral envelope of the noise of the valid decoded samples is estimated, and in that a synthesized signal is generated which evolves towards a signal having the same spectral envelope.
  16. Method for processing sound signals, characterized in that a discrimination between speech sounds and musical sounds is implemented and in that, when musical sounds are detected, a method according to one of the preceding claims is implemented without estimating a long-term prediction operator.
  17. Device for concealing transmission errors in a digital audio signal, which receives at its input a decoded signal transmitted to it by a decoder and which generates the missing or erroneous samples in this decoded signal, characterized in that it comprises processing means capable of implementing the method according to one of the preceding claims.
  18. Transmission system comprising at least one coder, at least one transmission channel, a module capable of detecting whether transmitted data have been lost or are highly erroneous, at least one decoder and an error concealment device which receives the decoded signal, characterized in that this error concealment device is a device according to Claim 17.
EP01969857A 2000-09-05 2001-09-05 Übertragungsfehler-verdeckung in einem audiosignal Expired - Lifetime EP1316087B1 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
FR0011285 2000-09-05
FR0011285A FR2813722B1 (fr) 2000-09-05 2000-09-05 Procede et dispositif de dissimulation d'erreurs et systeme de transmission comportant un tel dispositif
PCT/FR2001/002747 WO2002021515A1 (fr) 2000-09-05 2001-09-05 Dissimulation d'erreurs de transmission dans un signal audio

Publications (2)

Publication Number Publication Date
EP1316087A1 EP1316087A1 (de) 2003-06-04
EP1316087B1 true EP1316087B1 (de) 2008-01-02

Family

ID=8853973

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01969857A Expired - Lifetime EP1316087B1 (de) 2000-09-05 2001-09-05 Übertragungsfehler-verdeckung in einem audiosignal

Country Status (11)

Country Link
US (2) US7596489B2 (de)
EP (1) EP1316087B1 (de)
JP (1) JP5062937B2 (de)
AT (1) ATE382932T1 (de)
AU (1) AU2001289991A1 (de)
DE (1) DE60132217T2 (de)
ES (1) ES2298261T3 (de)
FR (1) FR2813722B1 (de)
HK (1) HK1055346A1 (de)
IL (2) IL154728A0 (de)
WO (1) WO2002021515A1 (de)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2679228C2 (ru) * 2013-09-30 2019-02-06 Конинклейке Филипс Н.В. Передискретизация звукового сигнала для кодирования/декодирования с малой задержкой

Families Citing this family (72)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030163304A1 (en) * 2002-02-28 2003-08-28 Fisseha Mekuria Error concealment for voice transmission system
FR2849727B1 (fr) * 2003-01-08 2005-03-18 France Telecom Procede de codage et de decodage audio a debit variable
EP1589330B1 (de) * 2003-01-30 2009-04-22 Fujitsu Limited EINRICHTUNG UND VERFAHREN ZUM VERBERGEN DES VERSCHWINDENS VON AUDIOPAKETEN, EMPFANGSENDGERÄT UND AUDIOKOMMUNIKATIONSSYSTEM
US7835916B2 (en) * 2003-12-19 2010-11-16 Telefonaktiebolaget Lm Ericsson (Publ) Channel signal concealment in multi-channel audio systems
KR100587953B1 (ko) * 2003-12-26 2006-06-08 한국전자통신연구원 대역-분할 광대역 음성 코덱에서의 고대역 오류 은닉 장치 및 그를 이용한 비트스트림 복호화 시스템
JP4761506B2 (ja) * 2005-03-01 2011-08-31 国立大学法人北陸先端科学技術大学院大学 音声処理方法と装置及びプログラム並びに音声システム
DK1869671T3 (da) * 2005-04-28 2009-10-19 Siemens Ag Fremgangsmåde og anordning til stöjundertrykkelse
US7831421B2 (en) * 2005-05-31 2010-11-09 Microsoft Corporation Robust decoder
US8620644B2 (en) * 2005-10-26 2013-12-31 Qualcomm Incorporated Encoder-assisted frame loss concealment techniques for audio coding
US7805297B2 (en) 2005-11-23 2010-09-28 Broadcom Corporation Classification-based frame loss concealment for audio signals
US8417185B2 (en) 2005-12-16 2013-04-09 Vocollect, Inc. Wireless headset and method for robust voice data communication
JP5142727B2 (ja) * 2005-12-27 2013-02-13 パナソニック株式会社 音声復号装置および音声復号方法
US7885419B2 (en) * 2006-02-06 2011-02-08 Vocollect, Inc. Headset terminal with speech functionality
US7773767B2 (en) 2006-02-06 2010-08-10 Vocollect, Inc. Headset terminal with rear stability strap
MX2009000054A (es) * 2006-07-27 2009-01-23 Nec Corp Dispositivo de descodificacion de datos de audio.
US8015000B2 (en) * 2006-08-03 2011-09-06 Broadcom Corporation Classification-based frame loss concealment for audio signals
EP2080194B1 (de) 2006-10-20 2011-12-07 France Telecom Dämpfung von stimmüberlagerung, im besonderen zur erregungserzeugung bei einem decoder in abwesenheit von informationen
EP1921608A1 (de) * 2006-11-13 2008-05-14 Electronics And Telecommunications Research Institute Verfahren für die Einfügung von Vektorinformationen zum Schätzen von Sprachdaten in der Phase der Neusynchronisierung von Schlüsseln, Verfahren zum Übertragen von Vektorinformationen und Verfahren zum Schätzen der Sprachdaten bei der Neusynchronisierung von Schlüsseln unter Verwendung der Vektorinformationen
KR100862662B1 (ko) 2006-11-28 2008-10-10 삼성전자주식회사 프레임 오류 은닉 방법 및 장치, 이를 이용한 오디오 신호복호화 방법 및 장치
JP4504389B2 (ja) * 2007-02-22 2010-07-14 富士通株式会社 隠蔽信号生成装置、隠蔽信号生成方法および隠蔽信号生成プログラム
ES2642091T3 (es) * 2007-03-02 2017-11-15 Iii Holdings 12, Llc Dispositivo de codificación de audio y dispositivo de decodificación de audio
US7853450B2 (en) * 2007-03-30 2010-12-14 Alcatel-Lucent Usa Inc. Digital voice enhancement
US20080249767A1 (en) * 2007-04-05 2008-10-09 Ali Erdem Ertan Method and system for reducing frame erasure related error propagation in predictive speech parameter coding
WO2008146466A1 (ja) * 2007-05-24 2008-12-04 Panasonic Corporation オーディオ復号装置、オーディオ復号方法、プログラム及び集積回路
KR100906766B1 (ko) * 2007-06-18 2009-07-09 한국전자통신연구원 키 재동기 구간의 음성 데이터 예측을 위한 음성 데이터송수신 장치 및 방법
KR101450297B1 (ko) * 2007-09-21 2014-10-13 오렌지 복잡성 분배를 이용하는 디지털 신호에서의 전송 에러 위장
FR2929466A1 (fr) * 2008-03-28 2009-10-02 France Telecom Dissimulation d'erreur de transmission dans un signal numerique dans une structure de decodage hierarchique
CN101588341B (zh) * 2008-05-22 2012-07-04 华为技术有限公司 一种丢帧隐藏的方法及装置
KR20090122143A (ko) * 2008-05-23 2009-11-26 엘지전자 주식회사 오디오 신호 처리 방법 및 장치
MX2011000375A (es) * 2008-07-11 2011-05-19 Fraunhofer Ges Forschung Codificador y decodificador de audio para codificar y decodificar tramas de una señal de audio muestreada.
USD605629S1 (en) 2008-09-29 2009-12-08 Vocollect, Inc. Headset
JP2010164859A (ja) * 2009-01-16 2010-07-29 Sony Corp オーディオ再生装置、情報再生システム、オーディオ再生方法、およびプログラム
CN101609677B (zh) * 2009-03-13 2012-01-04 华为技术有限公司 一种预处理方法、装置及编码设备
US8160287B2 (en) 2009-05-22 2012-04-17 Vocollect, Inc. Headset with adjustable headband
US8438659B2 (en) 2009-11-05 2013-05-07 Vocollect, Inc. Portable computing device and headset interface
PL3364411T3 (pl) * 2009-12-14 2022-10-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Urządzenie do kwantyzacji wektorowej, urządzenie do kodowania głosu, sposób kwantyzacji wektorowej i sposób kodowania głosu
PT2676270T (pt) 2011-02-14 2017-05-02 Fraunhofer Ges Forschung Codificação de uma parte de um sinal de áudio utilizando uma deteção de transiente e um resultado de qualidade
KR101424372B1 (ko) 2011-02-14 2014-08-01 프라운호퍼 게젤샤프트 쭈르 푀르데룽 데어 안겐반텐 포르슝 에. 베. 랩핑 변환을 이용한 정보 신호 표현
BR112013020324B8 (pt) * 2011-02-14 2022-02-08 Fraunhofer Ges Forschung Aparelho e método para supressão de erro em fala unificada de baixo atraso e codificação de áudio
PT3239978T (pt) 2011-02-14 2019-04-02 Fraunhofer Ges Forschung Codificação e descodificação de posições de pulso de faixas de um sinal de áudio
PL2676268T3 (pl) 2011-02-14 2015-05-29 Fraunhofer Ges Forschung Urządzenie i sposób przetwarzania zdekodowanego sygnału audio w domenie widmowej
AR085794A1 (es) 2011-02-14 2013-10-30 Fraunhofer Ges Forschung Prediccion lineal basada en esquema de codificacion utilizando conformacion de ruido de dominio espectral
US8849663B2 (en) * 2011-03-21 2014-09-30 The Intellisis Corporation Systems and methods for segmenting and/or classifying an audio signal from transformed audio information
US9142220B2 (en) 2011-03-25 2015-09-22 The Intellisis Corporation Systems and methods for reconstructing an audio signal from transformed audio information
US9026434B2 (en) * 2011-04-11 2015-05-05 Samsung Electronic Co., Ltd. Frame erasure concealment for a multi rate speech and audio codec
US8620646B2 (en) 2011-08-08 2013-12-31 The Intellisis Corporation System and method for tracking sound pitch across an audio signal using harmonic envelope
US9183850B2 (en) 2011-08-08 2015-11-10 The Intellisis Corporation System and method for tracking sound pitch across an audio signal
US8548803B2 (en) 2011-08-08 2013-10-01 The Intellisis Corporation System and method of processing a sound signal including transforming the sound signal into a frequency-chirp domain
CN104011793B (zh) * 2011-10-21 2016-11-23 三星电子株式会社 帧错误隐藏方法和设备以及音频解码方法和设备
EP2830062B1 (de) * 2012-03-21 2019-11-20 Samsung Electronics Co., Ltd. Verfahren und vorrichtung für hochfrequente codierung/decodierung zur bandbreitenerweiterung
US9123328B2 (en) * 2012-09-26 2015-09-01 Google Technology Holdings LLC Apparatus and method for audio frame loss recovery
US20150302892A1 (en) * 2012-11-27 2015-10-22 Nokia Technologies Oy A shared audio scene apparatus
US9437203B2 (en) * 2013-03-07 2016-09-06 QoSound, Inc. Error concealment for speech decoder
FR3004876A1 (fr) * 2013-04-18 2014-10-24 France Telecom Correction de perte de trame par injection de bruit pondere.
ES2805744T3 (es) 2013-10-31 2021-02-15 Fraunhofer Ges Forschung Decodificador de audio y método para proporcionar una información de audio decodificada usando un ocultamiento de errores en base a una señal de excitación de dominio de tiempo
KR101940740B1 (ko) 2013-10-31 2019-01-22 프라운호퍼 게젤샤프트 쭈르 푀르데룽 데어 안겐반텐 포르슝 에. 베. 시간 도메인 여기 신호를 변형하는 오류 은닉을 사용하여 디코딩된 오디오 정보를 제공하기 위한 오디오 디코더 및 방법
US9437211B1 (en) * 2013-11-18 2016-09-06 QoSound, Inc. Adaptive delay for enhanced speech processing
EP2922056A1 (de) * 2014-03-19 2015-09-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung, Verfahren und zugehöriges Computerprogramm zur Erzeugung eines Fehlerverschleierungssignals unter Verwendung von Leistungskompensation
EP2922055A1 (de) * 2014-03-19 2015-09-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung, Verfahren und zugehöriges Computerprogramm zur Erzeugung eines Fehlerverschleierungssignals mit einzelnen Ersatz-LPC-Repräsentationen für individuelle Codebuchinformationen
EP2922054A1 (de) * 2014-03-19 2015-09-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung, Verfahren und zugehöriges Computerprogramm zur Erzeugung eines Fehlerverschleierungssignals unter Verwendung einer adaptiven Rauschschätzung
TWI602172B (zh) * 2014-08-27 2017-10-11 弗勞恩霍夫爾協會 使用參數以加強隱蔽之用於編碼及解碼音訊內容的編碼器、解碼器及方法
CN107004417B (zh) * 2014-12-09 2021-05-07 杜比国际公司 Mdct域错误掩盖
US9842611B2 (en) 2015-02-06 2017-12-12 Knuedge Incorporated Estimating pitch using peak-to-peak distances
US9922668B2 (en) 2015-02-06 2018-03-20 Knuedge Incorporated Estimating fractional chirp rate with multiple frequency representations
US9870785B2 (en) 2015-02-06 2018-01-16 Knuedge Incorporated Determining features of harmonic signals
MX2018010756A (es) * 2016-03-07 2019-01-14 Fraunhofer Ges Forschung Unidad de ocultamiento de error, decodificador de audio, y método relacionado y programa de computadora que usa características de una representación decodificada de una trama de audio decodificada apropiadamente.
ES2874629T3 (es) * 2016-03-07 2021-11-05 Fraunhofer Ges Forschung Unidad de ocultación de error, decodificador de audio y método y programa informático relacionados que desvanecen una trama de audio ocultada según factores de amortiguamiento diferentes para bandas de frecuencia diferentes
EP3553777B1 (de) * 2018-04-09 2022-07-20 Dolby Laboratories Licensing Corporation Verdecken von paketverlusten mit niedriger komplexität für transcodierte audiosignale
US10763885B2 (en) 2018-11-06 2020-09-01 Stmicroelectronics S.R.L. Method of error concealment, and associated device
WO2020164751A1 (en) 2019-02-13 2020-08-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Decoder and decoding method for lc3 concealment including full frame loss concealment and partial frame loss concealment
CN111063362B (zh) * 2019-12-11 2022-03-22 中国电子科技集团公司第三十研究所 一种数字语音通信噪音消除和语音恢复方法及装置
CN111554309A (zh) * 2020-05-15 2020-08-18 腾讯科技(深圳)有限公司 一种语音处理方法、装置、设备及存储介质

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2746033B2 (ja) * 1992-12-24 1998-04-28 日本電気株式会社 音声復号化装置
CA2142391C (en) * 1994-03-14 2001-05-29 Juin-Hwey Chen Computational complexity reduction during frame erasure or packet loss
US5574825A (en) * 1994-03-14 1996-11-12 Lucent Technologies Inc. Linear prediction coefficient generation during frame erasure or packet loss
US5699485A (en) * 1995-06-07 1997-12-16 Lucent Technologies Inc. Pitch delay modification during frame erasures
CA2177413A1 (en) * 1995-06-07 1996-12-08 Yair Shoham Codebook gain attenuation during frame erasures
US5732389A (en) * 1995-06-07 1998-03-24 Lucent Technologies Inc. Voiced/unvoiced classification of speech for excitation codebook selection in celp speech decoding during frame erasures
EP1686563A3 (de) * 1997-12-24 2007-02-07 Mitsubishi Denki Kabushiki Kaisha Verfahren und System zur Sprachdekodierung
FR2774827B1 (fr) * 1998-02-06 2000-04-14 France Telecom Procede de decodage d'un flux binaire representatif d'un signal audio
US6449590B1 (en) * 1998-08-24 2002-09-10 Conexant Systems, Inc. Speech encoder using warping in long term preprocessing
US6188980B1 (en) * 1998-08-24 2001-02-13 Conexant Systems, Inc. Synchronized encoder-decoder frame concealment using speech coding parameters including line spectral frequencies and filter coefficients
US6240386B1 (en) * 1998-08-24 2001-05-29 Conexant Systems, Inc. Speech codec employing noise classification for noise compensation
US6556966B1 (en) * 1998-08-24 2003-04-29 Conexant Systems, Inc. Codebook structure for changeable pulse multimode speech coding
JP3365360B2 (ja) * 1999-07-28 2003-01-08 日本電気株式会社 音声信号復号方法および音声信号符号化復号方法とその装置
US7590525B2 (en) * 2001-08-17 2009-09-15 Broadcom Corporation Frame erasure concealment for predictive speech coding based on extrapolation of speech waveform

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2679228C2 (ru) * 2013-09-30 2019-02-06 Конинклейке Филипс Н.В. Передискретизация звукового сигнала для кодирования/декодирования с малой задержкой
US10566004B2 (en) 2013-09-30 2020-02-18 Koninklijke Philips N.V. Resampling an audio signal for low-delay encoding/decoding

Also Published As

Publication number Publication date
AU2001289991A1 (en) 2002-03-22
DE60132217T2 (de) 2009-01-29
WO2002021515A1 (fr) 2002-03-14
DE60132217D1 (de) 2008-02-14
HK1055346A1 (en) 2004-01-02
FR2813722A1 (fr) 2002-03-08
EP1316087A1 (de) 2003-06-04
IL154728A (en) 2008-07-08
US20100070271A1 (en) 2010-03-18
IL154728A0 (en) 2003-10-31
ES2298261T3 (es) 2008-05-16
JP5062937B2 (ja) 2012-10-31
US20040010407A1 (en) 2004-01-15
JP2004508597A (ja) 2004-03-18
US7596489B2 (en) 2009-09-29
FR2813722B1 (fr) 2003-01-24
ATE382932T1 (de) 2008-01-15
US8239192B2 (en) 2012-08-07

Similar Documents

Publication Publication Date Title
EP1316087B1 (de) Übertragungsfehler-verdeckung in einem audiosignal
EP2277172B1 (de) Verbergung von übertragungsfehlern in einem digitalsignal in einer hierarchischen decodierungsstruktur
DK1509903T3 (en) METHOD AND APPARATUS FOR EFFECTIVELY HIDDEN FRAMEWORK IN LINEAR PREDICTIVE-BASED SPEECH CODECS
JP5149198B2 (ja) 音声コーデック内の効率的なフレーム消去隠蔽の方法およびデバイス
EP2026330B1 (de) Einrichtung und verfahren zum verbergen verlorener rahmen
EP2080195B1 (de) Synthese verlorener blöcke eines digitalen audiosignals
EP1051703B1 (de) Verfahren zur dekodierung eines audiosignals mit korrektur von übertragungsfehlern
EP3175444B1 (de) Rahmenverlustverwaltung in einem fd-/ldp-übergangskontext
EP2080194B1 (de) Dämpfung von stimmüberlagerung, im besonderen zur erregungserzeugung bei einem decoder in abwesenheit von informationen
KR100216018B1 (ko) 배경음을 엔코딩 및 디코딩하는 방법 및 장치
EP2347411B1 (de) Vor-echo-dämpfung in einem digitalaudiosignal
EP3138095A1 (de) Verbesserte frameverlustkorrektur mit sprachinformationen
Tosun Dynamically adding redundancy for improved error concealment in packet voice coding
FR2830970A1 (fr) Procede et dispositif de synthese de trames de substitution, dans une succession de trames representant un signal de parole
MX2008008477A (es) Metodo y dispositivo para ocultamiento eficiente de borrado de cuadros en codec de voz
KR19990024266A (ko) 코드여기 선형예측 부호화기에서 무성음 검출에 의한 전송률 감소 방시규

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20030324

AK Designated contracting states

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

APBN Date of receipt of notice of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNNOA2E

APBR Date of receipt of statement of grounds of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNNOA3E

APBV Interlocutory revision of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNIRAPE

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REF Corresponds to:

Ref document number: 60132217

Country of ref document: DE

Date of ref document: 20080214

Kind code of ref document: P

GBT Gb: translation of ep patent filed (gb section 77(6)(a)/1977)

Effective date: 20080408

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2298261

Country of ref document: ES

Kind code of ref document: T3

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080102

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080102

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080102

REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1055346

Country of ref document: HK

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080602

REG Reference to a national code

Ref country code: IE

Ref legal event code: FD4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080402

Ref country code: IE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080102

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080102

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20081003

BERE Be: lapsed

Owner name: FRANCE TELECOM

Effective date: 20080930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080930

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080102

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080930

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080905

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080102

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080403

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 16

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 17

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 18

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20200819

Year of fee payment: 20

Ref country code: GB

Payment date: 20200819

Year of fee payment: 20

Ref country code: DE

Payment date: 20200819

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20200824

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: ES

Payment date: 20201001

Year of fee payment: 20

REG Reference to a national code

Ref country code: DE

Ref legal event code: R071

Ref document number: 60132217

Country of ref document: DE

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20

Expiry date: 20210904

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20210904

REG Reference to a national code

Ref country code: ES

Ref legal event code: FD2A

Effective date: 20211228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20210906