EP1316087B1 - Transmission error concealment in an audio signal - Google Patents


Info

Publication number
EP1316087B1
EP1316087B1 (application EP01969857A)
Authority
EP
European Patent Office
Prior art keywords
signal
samples
synthesis
decoded
voiced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP01969857A
Other languages
German (de)
French (fr)
Other versions
EP1316087A1 (en)
Inventor
Balazs Kovesi
Dominique Massaloux
David Deleam
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
France Telecom SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by France Telecom SA
Publication of EP1316087A1 publication Critical patent/EP1316087A1/en
Application granted granted Critical
Publication of EP1316087B1 publication Critical patent/EP1316087B1/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/005Correction of errors induced by the transmission channel, if related to the coding algorithm

Definitions

  • the present invention relates to techniques for concealing consecutive transmission errors in transmission systems using any type of digital coding of the speech and/or sound signal.
  • the coded values are then transformed into a bit stream which will be transmitted on a transmission channel.
  • disturbances may affect the transmitted signal and produce errors on the bitstream received by the decoder. These errors may occur in isolation in the bit stream but occur very frequently in bursts. It is then a packet of bits corresponding to a complete portion of signal which is erroneous or not received. This type of problem occurs for example for transmissions on mobile networks. It is also found in transmission on packet networks and in particular on Internet-type networks.
  • a general object of the invention is to improve, for any compression system of speech and sound, the subjective quality of the speech signal restored to the decoder when due to a poor quality of the transmission channel or as a result of the loss or non-receipt of a packet in a packet transmission system, a set of consecutive encoded data has been lost.
  • Most predictive coding algorithms provide erased-frame recovery techniques ([GSM-FR], [REC G.723.1A], [SALAMI], [HONKANEN], [COX-2], [CHEN-2], [CHEN-3], [CHEN-4], [CHEN-5], [CHEN-6], [CHEN-7], [KROON-2], [WATKINS]).
  • the decoder is informed of the occurrence of an erased frame in one way or another, for example in the case of mobile radio systems by the transmission of the frame erase information from the channel decoder.
  • the purpose of the erased-frame recovery devices is to extrapolate the parameters of the erased frame from the last previous frame (or frames) considered valid.
  • Some parameters manipulated or coded by predictive coders have a strong inter-frame correlation (for example the short-term prediction parameters, also called "LPC" for "Linear Predictive Coding" (see [RABINER]), which represent the spectral envelope, and the long-term prediction parameters for voiced sounds). Because of this correlation, it is much more advantageous to reuse the parameters of the last valid frame to synthesize the erased frame than to use erroneous or random parameters.
  • the procedures for concealing erased frames are strongly tied to the decoder and use modules of this decoder, such as the signal synthesis module. They also use intermediate signals available within this decoder, such as the past excitation signal stored during the processing of the valid frames preceding the erased frames.
  • the techniques for reconstructing erased frames also depend on the coding structure used: algorithms such as [PICTEL, MAHIEUX-2] aim at regenerating the lost transform coefficients from the values taken by these coefficients before the erasure.
  • the energy of the synthesis signal thus generated is controlled by means of a gain calculated and adapted sample by sample.
  • the gain for the control of the synthesis signal is advantageously calculated as a function of at least one of the following parameters: energy values previously stored for the samples corresponding to valid data, fundamental period for the voiced sounds, or any parameter characterizing the frequency spectrum.
  • the gain applied to the synthesis signal decreases progressively as a function of the time during which the synthesis samples are generated.
  • stationary and non-stationary sounds are discriminated in the valid data, and different adaptation laws for this gain (e.g. different decay speeds) are used: one for the samples generated following valid data corresponding to stationary sounds, and another for the samples generated following valid data corresponding to non-stationary sounds.
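The gain-attenuation behaviour described in the bullets above can be sketched as follows. The linear ramp and the decay times are illustrative assumptions, not values taken from the patent; the only property carried over is that the gain decreases sample by sample, and faster for non-stationary (speech-like) than for stationary (music-like) signals.

```python
def attenuation_gains(n_samples, sample_rate=16000, stationary=True):
    """Return one gain per synthesized sample, decaying toward zero.

    The decay is slower for stationary signals (music) than for
    non-stationary ones (speech); both time constants are illustrative.
    """
    decay_seconds = 1.0 if stationary else 0.25
    step = 1.0 / (decay_seconds * sample_rate)
    gains = []
    g = 1.0
    for _ in range(n_samples):
        gains.append(g)
        g = max(0.0, g - step)  # linear decay, clamped at zero
    return gains
```

Each replacement sample is then multiplied by its gain before being output, so that long erasure periods fade out instead of sustaining a possibly wrong synthesis.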
  • the contents of the memories used for the decoding process are updated according to the generated synthesis samples.
  • the synthesized samples are re-encoded with a coding analogous to that implemented at the transmitter, optionally followed by a (possibly partial) decoding operation, the data obtained being used to regenerate the decoder memories.
  • this possibly partial coding-decoding operation can be advantageously used to regenerate the first erased frame because it makes it possible to exploit the contents of the decoder memories before the cutoff, when these memories contain information not provided by the last valid samples decoded (eg in the case of overlap-transform transform coders, see paragraph 5.2.2.2.1 point 10).
  • the input of the short-term prediction operator is an excitation signal which, in a voiced area, is the sum of a harmonic component and a weakly harmonic or non-harmonic component, and which, in an unvoiced area, is limited to the non-harmonic component.
  • the harmonic component is advantageously obtained by implementing a filtering by means of the long-term prediction operator applied to a residual signal calculated by implementing inverse short-term filtering on the stored samples.
  • the other component can be determined using a long-term prediction operator to which pseudo-random disturbances (e.g. perturbations of the gain or of the period) are applied.
  • the harmonic component represents the low frequencies of the spectrum, while the other component represents the high-frequency portion.
  • the long-term prediction operator is determined from the stored valid-frame samples, the number of samples used for this estimate varying between a minimum value and a value equal to at least twice the fundamental period estimated for the voiced sound.
  • the residual signal is advantageously modified by nonlinear processing to eliminate amplitude peaks.
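As a rough sketch of how such a voiced excitation could be built: a one-tap long-term predictor applied to the peak-clipped residual supplies the harmonic component, and the same predictor with a randomly jittered period supplies the weakly harmonic one. The function names, clipping threshold, gain, jitter range, and mixing weight are all illustrative assumptions, not values from the patent.

```python
import random

def clip_peaks(residual, factor=4.0):
    # Nonlinear processing: limit samples whose magnitude is far above
    # the mean magnitude (the threshold factor is an assumption).
    mean_abs = sum(abs(v) for v in residual) / len(residual)
    limit = factor * mean_abs
    return [max(-limit, min(limit, v)) for v in residual]

def voiced_excitation(past_residual, pitch, ltp_gain=0.9, jitter=2, mix=0.7):
    # Harmonic part: one-tap LTP (gain times the sample one pitch period
    # back); weakly harmonic part: same filter with a randomly perturbed
    # period. Negative indices simply read the most recent past samples.
    res = clip_peaks(past_residual)
    out = []
    for n in range(len(res)):
        harmonic = ltp_gain * res[n - pitch]
        lag = pitch + random.randint(-jitter, jitter)
        weakly = ltp_gain * res[n - lag]
        out.append(mix * harmonic + (1.0 - mix) * weakly)
    return out
```

The mixed excitation is then passed through the LPC synthesis filter; in an unvoiced area only the non-harmonic part would be kept.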
  • the voice activity is detected by estimating noise parameters when the signal is considered as non-active, and parameters of the synthesized signal are adjusted to those of the estimated noise.
  • the spectral envelope of the noise is estimated from the valid decoded samples, and a synthesized signal evolving towards a signal having the same spectral envelope is generated.
  • the invention also proposes a method for processing sound signals, characterized in that it discriminates between speech and musical sounds and, when musical sounds are detected, implements a method of the aforementioned type without estimating a long-term prediction operator, the excitation signal being limited to a non-harmonic component obtained for example by generating a uniform white noise.
  • the invention further relates to a transmission error concealment device in an audio-digital signal which receives as input a decoded signal transmitted to it by a decoder and which generates missing or erroneous samples in this decoded signal, characterized in that it comprises processing means capable of implementing the aforementioned method.
  • It also relates to a transmission system comprising at least one encoder, at least one transmission channel, a module able to detect that transmitted data has been lost or is highly erroneous, at least one decoder and an error concealment device which receives the decoded signal, characterized in that this error concealment device is a device of the aforementioned type.
  • FIG. 1 presents a device for coding and decoding the digital audio signal, comprising an encoder 1, a transmission channel 2, a module 3 making it possible to detect that data transmitted has been lost or is highly erroneous, a decoder 4, and a module 5 for concealing errors or lost packets according to a possible embodiment of the invention.
  • this module, in addition to the indication of erased data, receives the decoded signal during valid periods and transmits to the decoder the signals used for its update.
  • the memory of decoded samples, containing a number of samples sufficient to regenerate any subsequently erased periods, is updated.
  • the energy of the valid frames is also calculated and the energies corresponding to the last valid processed frames (typically of the order of 5 s) are stored in memory.
  • This spectral envelope is calculated as an LPC [RABINER] [KLEIJN] filter.
  • the analysis is carried out by conventional methods ([KLEIJN]) after windowing the samples stored during the valid period.
  • an LPC analysis (step 10) is used to obtain the parameters of a filter A(z), whose inverse is used for the LPC filtering (step 11). Since the coefficients thus calculated do not have to be transmitted, a high order can be used for this analysis, which makes it possible to obtain good performance on musical signals.
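A minimal, self-contained sketch of the autocorrelation-method LPC analysis and the corresponding all-pole synthesis filtering 1/A(z). This is generic textbook LPC (convention x[n] ≈ Σ aᵢ·x[n−i]), shown only to make the step concrete; it is not the exact implementation of the patent, and real implementations would also window the signal and use a much higher order.

```python
def lpc_levinson_durbin(x, order):
    """Estimate predictor coefficients a[1..order] from signal x via the
    autocorrelation method and the Levinson-Durbin recursion.
    Returns the coefficients and the final residual energy."""
    n = len(x)
    r = [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(order + 1)]
    a = [0.0] * (order + 1)
    e = r[0]
    for m in range(1, order + 1):
        acc = r[m] - sum(a[j] * r[m - j] for j in range(1, m))
        k = acc / e                      # reflection coefficient
        new_a = a[:]
        new_a[m] = k
        for j in range(1, m):
            new_a[j] = a[j] - k * a[m - j]
        a = new_a
        e *= (1.0 - k * k)               # prediction-error energy update
    return a[1:], e

def lpc_synthesis(a, excitation, history):
    """All-pole filtering 1/A(z): each output sample is the excitation
    plus the prediction from previous output samples."""
    out = list(history)
    for u in excitation:
        y = u + sum(ai * out[-(i + 1)] for i, ai in enumerate(a))
        out.append(y)
    return out[len(history):]
```

For an AR(1)-like signal the recursion recovers the generating coefficient, and feeding the residual (or a substitute excitation) back through `lpc_synthesis` reproduces the signal's spectral envelope.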
  • a method for detecting voiced sounds (processing 12 of FIG. 3: V / NV detection, for "voiced / unvoiced") is used on the last stored data.
  • the normalized correlation ([KLEIJN]) or the criterion presented in the following exemplary embodiment can be used for this purpose.
  • When the signal is declared voiced, the parameters enabling the generation of a long-term synthesis filter, also called LTP filter ([KLEIJN]), are calculated (FIG. 3: LTP analysis; the inverse filter is denoted B(z)).
  • The LTP filter is generally represented by a period corresponding to the fundamental period and a gain. The accuracy of this filter can be improved by the use of fractional pitch or a multi-coefficient structure [KROON].
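The fundamental period (and an associated gain) can be estimated, for instance, by maximizing the normalized correlation mentioned earlier between the stored signal and its lagged version. A sketch under that assumption — the lag bounds are illustrative, and fractional-pitch refinement is omitted:

```python
import math

def estimate_pitch(x, min_lag, max_lag):
    """Return (lag, score): the lag in [min_lag, max_lag] maximizing the
    normalized correlation of x with its past, and that correlation."""
    best_lag, best_score = min_lag, -1.0
    for lag in range(min_lag, min(max_lag, len(x) - 1) + 1):
        num = sum(x[i] * x[i - lag] for i in range(lag, len(x)))
        den = math.sqrt(sum(x[i] ** 2 for i in range(lag, len(x))) *
                        sum(x[i - lag] ** 2 for i in range(lag, len(x))))
        score = num / den if den > 0.0 else 0.0
        if score > best_score:
            best_score, best_lag = score, lag
    return best_lag, best_score
```

The score itself doubles as a crude voicing indicator: values near 1 suggest a strongly periodic (voiced) segment.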
  • the length of the analysis window varies between a minimum value and a value related to the fundamental period of the signal.
  • a residual signal is calculated by LPC inverse filtering (processing 10) of the last stored samples. This signal is then used to generate an excitation signal of the LPC synthesis filter 11 (see below).
  • the synthesis of the replacement samples is carried out by introducing an excitation signal (calculated at 13 from the signal at the output of the inverse LPC filter) in the LPC synthesis filter 11 (1 / A (z)) calculated in 1.
  • This excitation signal is generated in two different ways depending on whether the signal is voiced or unvoiced:
  • the excitation signal is the sum of two signals: one a strongly harmonic component, the other weakly harmonic or not harmonic at all.
  • the strongly harmonic component is obtained by LTP filtering (processing module 14) of the residual signal mentioned in 3, using the parameters calculated in 2.
  • the second component can also be obtained by LTP filtering but made non-periodic by random modifications of the parameters, by generating a pseudo-random signal.
  • the residual signal used for generating the excitation is processed to eliminate the amplitude peaks significantly above the average.
  • the energy of the synthesis signal is controlled using a gain calculated and adapted sample by sample. When the erasure period is relatively long, it is necessary to lower the energy of the synthesis signal progressively.
  • the gain adaptation law is calculated according to different parameters: the energy values stored before the erasure (see 1), the fundamental period, and the local stationarity of the signal at the time of the cutoff.
  • if the system includes a module that discriminates between stationary sounds (such as music) and non-stationary sounds (such as speech), different adaptation laws may also be used.
  • the first half of the memory of the last correctly received frame contains fairly accurate information about the first half of the first lost frame (its weight in the overlap-add is greater than that of the current frame). This information can also be used for calculating the adaptive gain.
  • when the system is coupled to a voice activity detection device with estimation of the noise parameters (as in [REC-G.723.1A], [SALAMI-2], [BENYASSINE]), it is particularly interesting to make the parameters for generating the signal to be reconstructed tend towards those of the estimated noise: in particular the spectral envelope (interpolation of the LPC filter with that of the estimated noise, the interpolation coefficients evolving over time until the noise filter is obtained) and the energy (level gradually evolving towards that of the noise, for example by windowing).
  • the present invention performs a time-domain weighting with interpolation between the replacement samples and the valid decoded samples following the erased period, at the resumption of communication. This operation is a priori independent of the type of coder used.
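The time-domain weighting with interpolation could look like the following linear cross-fade from the last replacement samples to the first valid decoded samples. The linear window is an illustrative choice; the bullet above only requires some interpolative weighting at the boundary.

```python
def crossfade(replacement_tail, valid_head):
    """Linearly fade from the replacement samples to the valid decoded
    samples that follow the erased period (both lists of equal length)."""
    n = len(valid_head)
    assert len(replacement_tail) == n
    return [((n - i) * r + i * v) / n
            for i, (r, v) in enumerate(zip(replacement_tail, valid_head))]
```

The first output sample is entirely the replacement signal and the weight shifts toward the valid signal across the transition zone, avoiding an audible discontinuity.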
  • this desynchronization can cause audible impairments that can persist for a long time, or even increase over time if there are instabilities in the structure. In this case, it is therefore important to try to resynchronize the encoder and the decoder, i.e. to make an estimate of the decoder memories as close as possible to those of the encoder.
  • resynchronization techniques depend on the coding structure used. One such technique, whose principle is general but whose complexity is potentially significant, is presented in the present patent.
  • One possible method consists in introducing into the decoder, on reception, a coding module of the same type as that present at the transmission side, making it possible to perform the coding-decoding of the samples of the signal produced by the techniques mentioned in the preceding paragraph during the erased periods. In this way, the memories needed to decode the following samples are filled with data a priori close (subject to a certain stationarity during the erased period) to those which have been lost. If this stationarity assumption is not met, after a long erased period for example, there is in any case not enough information available to do better.
  • This update can be done at the time of the production of the replacement samples, which distributes the complexity over the entire erasure zone, but is cumulative with the synthesis procedure described above.
  • the above procedure can also be limited to an intermediate zone at the beginning of the valid-data period following an erased period, the updating procedure then being combined with the decoding operation.
  • TDAC-type digital transform coding/decoding system.
  • Wideband coder (50-7000 Hz) at 24 kbit/s or 32 kbit/s.
  • a bit frame contains the encoded parameters obtained by the TDAC transformation on one window. After decoding these parameters and performing the inverse TDAC transformation, an output frame of 20 ms is obtained, which is the sum of the second half of the previous window and the first half of the current window. In Figure 4, the two parts of the windows used for the reconstruction of frame n are marked in bold. Thus, a lost binary frame disrupts the reconstruction of two consecutive frames (the current one and the next one, Figure 5). Conversely, by correctly replacing the lost parameters, it is possible to recover the parts of the information coming from the preceding and following bit frames (FIG. 6) for the reconstruction of these two frames.
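The reconstruction of frame n from the two window halves described above amounts to an overlap-add of already-weighted synthesis windows; a minimal sketch (windowing itself is assumed to have been applied upstream):

```python
def overlap_add_frame(prev_window, curr_window):
    """One output frame = second half of the previous (weighted) synthesis
    window + first half of the current one, as in TDAC overlap-add."""
    assert len(prev_window) == len(curr_window)
    half = len(prev_window) // 2
    return [prev_window[half + i] + curr_window[i] for i in range(half)]
```

This makes the erasure behaviour visible: losing the parameters of one window corrupts both frames it overlaps, and conversely a good replacement of those parameters preserves the halves contributed by the neighbouring, correctly received windows.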
  • the memory of the decoded samples is updated.
  • This memory is used for the LPC and LTP analyzes of the signal passed in the case of erasure of a bit frame.
  • the LPC analysis is done over a 20 ms signal period (320 samples).
  • LTP analysis requires more samples to be stored.
  • the number of stored samples is twice the maximum value of the pitch. For example, if the maximum MaxPitch pitch is 320 samples (50 Hz, 20 ms), the last 640 samples will be stored (40 ms of the signal).
  • This spectral envelope is calculated as an LPC [RABINER] [KLEIJN] filter.
  • the analysis is carried out by conventional methods ([KLEIJN]). After windowing of the samples stored during the valid period, an LPC analysis is used to calculate an LPC filter A(z) (step 19). For this analysis, a high order (> 100) is used to obtain good performance on musical signals.
  • the synthesis of the replacement samples is carried out by introducing an excitation signal into the LPC synthesis filter (1 / A (z)) calculated in step 19.
  • This excitation signal - calculated in a step 20 - is a white noise whose amplitude is chosen to obtain a signal having the same energy as that of the last N samples stored in valid period.
  • the filtering step is referenced 21.
  • this gain G can be calculated as follows:
  • the Durbin algorithm gives the energy of the residual signal. Knowing also the energy of the signal to be modeled, the gain G_LPC of the LPC filter is estimated as the ratio of these two energies.
  • the target energy is estimated as equal to the energy of the last N samples stored during the valid period (N is typically ≤ the length of the signal used for the LPC analysis).
  • the energy of the synthesized signal is the product of the white-noise energy by G² and G_LPC.
  • the gain G is chosen so that this energy is equal to the target energy.
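Putting the three bullets above together: since the output energy is E_noise · G² · G_LPC, the required excitation gain is G = sqrt(E_target / (E_noise · G_LPC)). A sketch (function and variable names are illustrative):

```python
import math
import random

def scaled_noise_excitation(n, target_energy, g_lpc):
    """White-noise excitation scaled so that, after LPC synthesis filtering
    with filter gain g_lpc, the output energy matches target_energy."""
    noise = [random.uniform(-1.0, 1.0) for _ in range(n)]
    e_noise = sum(v * v for v in noise)
    # E_out = e_noise * G**2 * g_lpc  =>  G = sqrt(target / (e_noise * g_lpc))
    g = math.sqrt(target_energy / (e_noise * g_lpc))
    return [g * v for v in noise]
```

By construction, the excitation energy times g_lpc equals the target energy exactly, whatever the realization of the noise.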
  • the energy of the synthesis signal is controlled using a gain calculated and adapted sample by sample. When the erasure period is relatively long, it is necessary to lower the energy of the synthesis signal progressively.
  • the gain adaptation law can be calculated according to various parameters such as energy values stored before the erasure, and local stationarity of the signal at the time of the cutoff.
  • when the system is coupled to a device for detecting voice activity or musical signals, with estimation of the noise parameters (as in [REC-G.723.1A], [SALAMI-2], [BENYASSINE]), it is of particular interest to make the parameters for generating the signal to be reconstructed tend towards those of the estimated noise: in particular the spectral envelope (interpolation of the LPC filter with that of the estimated noise, the interpolation coefficients evolving over time until the noise filter is obtained) and the energy (level gradually evolving towards that of the noise, for example by windowing).
  • the technique just described has the advantage of being usable with any type of coder; in particular, it makes it possible to remedy the problem of lost bit packets for time-domain or transform coders, on speech and music signals, with good performance. Indeed, in the present technique, the only signals stored during the periods when the transmitted data are valid are the samples output by the decoder, information that is available regardless of the coding structure used.


Abstract

A method of concealing transmission errors in a digital audio signal, wherein a signal that has been decoded after transmission is received, the samples decoded while the transmitted data is valid are stored, at least one short-term prediction operator and one long-term prediction operator are estimated as a function of the stored valid samples, and any missing or erroneous samples in the decoded signal are generated using the estimated operators. The energy of the synthesized signal thus generated is controlled by means of a gain that is computed and adapted sample by sample.

Description

1. TECHNICAL FIELD

The present invention relates to techniques for concealing consecutive transmission errors in transmission systems using any type of digital coding of the speech and/or sound signal.

Two main categories of coders are classically distinguished:

  • so-called temporal coders, which compress the digitized signal sample by sample (as in PCM or ADPCM coders [DAUMER][MAITRE], for example);
  • and parametric coders, which analyze successive frames of samples of the signal to be encoded in order to extract, for each of these frames, a certain number of parameters that are then coded and transmitted (as in vocoders [TREMAIN], IMBE coders [HARDWICK], or transform coders [BRANDENBURG]).

There are intermediate categories that supplement the coding of the representative parameters of parametric coders with the coding of a residual time waveform. For simplicity, these coders can be placed in the category of parametric coders.

In this category we find the predictive coders, and in particular the family of analysis-by-synthesis coders such as RPE-LTP ([HELLWIG]) or CELP ([ATAL]).

For all these coders, the coded values are then transformed into a bit stream that is transmitted over a transmission channel. Depending on the quality of this channel and the type of transport, disturbances may affect the transmitted signal and produce errors in the bit stream received by the decoder. These errors may occur in isolation in the bit stream, but very frequently occur in bursts. In that case, it is a packet of bits corresponding to a complete portion of the signal that is erroneous or not received. This type of problem occurs, for example, in transmissions over mobile networks. It is also found in transmissions over packet networks, and in particular over Internet-type networks.

When the transmission system or the modules in charge of reception can detect that the received data are highly erroneous (for example on mobile networks), or that a block of data has not been received (as in packet transmission systems, for example), error concealment procedures are implemented. These procedures make it possible to extrapolate, at the decoder, the samples of the missing signal from the signals and data available from the frames preceding and possibly following the erased zones.

Such techniques have been implemented mainly in the case of parametric coders (techniques for recovering erased frames). They make it possible to strongly limit the subjective degradation of the signal perceived at the decoder in the presence of erased frames. Most of the algorithms developed rely on the technique used for the coder and decoder, and in fact constitute an extension of the decoder.

A general object of the invention is to improve, for any speech and sound compression system, the subjective quality of the speech signal restored at the decoder when, because of poor quality of the transmission channel or following the loss or non-receipt of a packet in a packet transmission system, a set of consecutive coded data has been lost.

To this end, it proposes a technique making it possible to conceal successive transmission errors (error packets) whatever the coding technique used, the proposed technique being usable for example in the case of temporal coders, whose structure a priori lends itself less readily to the concealment of error packets.

2. STATE OF THE PRIOR ART

Most predictive coding algorithms provide techniques for recovering erased frames ([GSM-FR], [REC G.723.1A], [SALAMI], [HONKANEN], [COX-2], [CHEN-2], [CHEN-3], [CHEN-4], [CHEN-5], [CHEN-6], [CHEN-7], [KROON-2], [WATKINS]). The decoder is informed of the occurrence of an erased frame in one way or another, for example in the case of mobile radio systems by the transmission of frame-erasure information from the channel decoder. The purpose of erased-frame recovery devices is to extrapolate the parameters of the erased frame from the last previous frame (or frames) considered valid.

Some parameters manipulated or coded by predictive coders have a strong inter-frame correlation (for example the short-term prediction parameters, also called "LPC" for "Linear Predictive Coding" (see [RABINER]), which represent the spectral envelope, and the long-term prediction parameters for voiced sounds). Because of this correlation, it is much more advantageous to reuse the parameters of the last valid frame to synthesize the erased frame than to use erroneous or random parameters.

For the CELP coding algorithm ("Code Excited Linear Prediction", see [RABINER]), the parameters of the erased frame are conventionally obtained as follows:
  • the LPC filter is obtained from the LPC parameters of the last valid frame, either by copying the parameters or by introducing a certain damping (cf. the G.723.1 coder [REC G.723.1A]);
  • voicing is detected to determine the degree of harmonicity of the signal over the erased frame ([SALAMI]), this detection proceeding as follows:
    • ■ in the case of an unvoiced signal:
      • an excitation signal is generated randomly (drawing of a code word with the slightly damped gain of the past excitation [SALAMI], random selection from the past excitation [CHEN], use of the transmitted codes, possibly totally erroneous [HONKANEN], ...);
    • ■ in the case of a voiced signal:
      • the LTP delay is generally the delay computed for the previous frame, possibly with a slight "jitter" ([SALAMI]), the LTP gain being taken very close to or equal to 1. The excitation signal is limited to the long-term prediction made from the past excitation.

In all the examples cited above, the procedures for concealing erased frames are strongly tied to the decoder and use modules of that decoder, such as the signal synthesis module. They also use intermediate signals available within the decoder, such as the past excitation signal stored during the processing of the valid frames preceding the erased frames.

Most of the methods used to conceal the errors produced by packets lost during the transport of data coded by waveform (time-domain) coders rely on waveform-substitution techniques such as those presented in [GOODMAN], [ERDÖL], [AT&T]. Methods of this type reconstruct the signal by selecting portions of the signal decoded before the lost period and do not rely on synthesis models. Smoothing techniques are also used to avoid the artifacts produced by the concatenation of the different signals.

For transform coders, the techniques for reconstructing erased frames also rely on the coding structure used: algorithms such as [PICTEL, MAHIEUX-2] aim to regenerate the lost transform coefficients from the values taken by these coefficients before the erasure.

The method described in [PARIKH] can be applied to any type of signal; it is based on the construction of a sinusoidal model from the decoded valid signal preceding the erasure, in order to regenerate the lost part of the signal.

Finally, there is a family of erased-frame concealment techniques developed jointly with the channel coding. These methods, such as the one described in [FINGSCHEIDT], make use of information provided by the channel decoder, for example information concerning the degree of reliability of the received parameters. They are fundamentally different from the present invention, which does not presuppose the existence of a channel coder.

The prior art that can be considered closest to the present invention is that described in [COMBESCURE], which proposed, for a transform coder, an erased-frame concealment method equivalent to that used in CELP coders. The drawbacks of the proposed method were the introduction of audible spectral distortions ("synthetic" voice, spurious resonances, etc.), due in particular to the use of poorly controlled long-term synthesis filters (a single harmonic component for voiced sounds, generation of the excitation signal limited to the use of portions of the past residual signal). In addition, in [COMBESCURE] the energy control was carried out at the level of the excitation signal, and the energy target of that signal was kept constant for the entire duration of the erasure, which also generated annoying artifacts. The same remarks apply to document US5884010.

3. PRESENTATION OF THE INVENTION

The invention as defined in claims 1, 17 and 18 allows the concealment of erased frames without marked distortion at higher error rates and/or over longer erased intervals.

In particular, it proposes a method for concealing a transmission error in a digital audio signal in which a decoded signal is received after transmission, the decoded samples are stored when the transmitted data are valid, at least one short-term prediction operator and at least one long-term prediction operator are estimated from the stored valid samples, and any missing or erroneous samples in the decoded signal are generated using the operators thus estimated.

According to a first particularly advantageous aspect of the invention, the energy of the synthesis signal thus generated is controlled by means of a gain computed and adapted sample by sample.

This contributes in particular to improving the performance of the technique over erasure zones of longer duration.

In particular, the gain for controlling the synthesis signal is advantageously computed as a function of at least one of the following parameters: energy values previously stored for the samples corresponding to valid data, the fundamental period for voiced sounds, or any parameter characterizing the frequency spectrum.

Also advantageously, the gain applied to the synthesis signal decreases progressively as a function of the duration during which the synthesis samples are generated.

Also preferably, stationary sounds and non-stationary sounds are discriminated within the valid data, and different adaptation laws for this gain (decay speed, for example) are applied, on the one hand for samples generated following valid data corresponding to stationary sounds, and on the other hand for samples generated following valid data corresponding to non-stationary sounds.

According to another, independent aspect of the invention, the contents of the memories used for the decoding processing are updated as a function of the generated synthesis samples.

In this way, on the one hand the possible desynchronization of the coder and the decoder is limited (see paragraph 5.1.4 below), and on the other hand abrupt discontinuities between the erased zone reconstructed according to the invention and the samples following that zone are avoided.

In particular, a coding analogous to that implemented at the transmitter is applied, at least partially, to the synthesized samples, optionally followed by a (possibly partial) decoding operation, the data obtained being used to regenerate the decoder memories.

In particular, this possibly partial coding-decoding operation can advantageously be used to regenerate the first erased frame, because it makes it possible to exploit the contents of the decoder memories prior to the interruption, when these memories contain information not provided by the last valid decoded samples (for example in the case of transform coders with overlap-add, see paragraph 5.2.2.2.1, point 10).

According to yet another aspect of the invention, an excitation signal is generated at the input of the short-term prediction operator which, in a voiced area, is the sum of a harmonic component and a weakly harmonic or non-harmonic component, and which, in an unvoiced area, is limited to the non-harmonic component.

In particular, the harmonic component is advantageously obtained by filtering, by means of the long-term prediction operator, a residual signal computed by applying inverse short-term filtering to the stored samples.

The other component can be determined using a long-term prediction operator to which pseudo-random perturbations (for example of the gain or of the period) are applied.

Particularly preferably, for the generation of a voiced excitation signal, the harmonic component represents the low frequencies of the spectrum, while the other component represents the high-frequency part.

According to yet another aspect, the long-term prediction operator is determined from the stored samples of valid frames, with the number of samples used for this estimation varying between a minimum value and a value equal to at least twice the fundamental period estimated for the voiced sound.

Moreover, the residual signal is advantageously modified by non-linear processing in order to eliminate amplitude peaks.

Also, according to another advantageous aspect, voice activity is detected, noise parameters are estimated when the signal is considered non-active, and the parameters of the synthesized signal are made to tend towards those of the estimated noise.

Still more preferably, the spectral envelope of the noise in the valid decoded samples is estimated, and a synthesized signal is generated that evolves towards a signal having the same spectral envelope.

The invention also proposes a method for processing sound signals, characterized in that a discrimination between speech and musical sounds is implemented and, when musical sounds are detected, a method of the aforementioned type is implemented without estimation of a long-term prediction operator, the excitation signal being limited to a non-harmonic component obtained, for example, by generating a uniform white noise.

The invention further relates to a device for concealing a transmission error in a digital audio signal, which receives as input a decoded signal transmitted to it by a decoder and which generates any missing or erroneous samples in this decoded signal, characterized in that it comprises processing means capable of implementing the aforementioned method.

It also relates to a transmission system comprising at least one coder, at least one transmission channel, a module capable of detecting that transmitted data have been lost or are highly erroneous, at least one decoder, and an error concealment device which receives the decoded signal, characterized in that this error concealment device is a device of the aforementioned type.

4. PRESENTATION OF THE FIGURES

Other features and advantages of the invention will become apparent from the description which follows, which is purely illustrative and non-limiting and should be read in conjunction with the appended drawings, in which:
  • Figure 1 is a block diagram illustrating a transmission system according to a possible embodiment of the invention;
  • Figures 2 and 3 are block diagrams illustrating an implementation according to a possible embodiment of the invention;
  • Figures 4 to 6 schematically illustrate the windows used with the error concealment method according to a possible embodiment of the invention;
  • Figures 7 and 8 are schematic representations illustrating a possible embodiment of the invention in the case of musical signals.

5. DESCRIPTION OF ONE OR MORE POSSIBLE EMBODIMENTS OF THE INVENTION

5.1 Principle of a possible embodiment

Figure 1 shows a device for coding and decoding a digital audio signal, comprising a coder 1, a transmission channel 2, a module 3 for detecting that transmitted data have been lost or are highly erroneous, a decoder 4, and a module 5 for concealing errors or lost packets according to a possible embodiment of the invention.

Note that this module 5, in addition to the erased-data indication, receives the decoded signal during valid periods and transmits to the decoder signals used for its update.

More precisely, the processing implemented by module 5 is based on:
  1. the storage of the decoded samples when the transmitted data are valid (processing 6);
  2. during a block of erased data, the synthesis of the samples corresponding to the lost data (processing 7);
  3. when transmission is restored, the smoothing between the synthesis samples produced during the erased period and the decoded samples (processing 8);
  4. the updating of the decoder memories (processing 9), this update being carried out either during the generation of the erased samples or at the moment transmission is restored.

5.1.1 During a valid period

After decoding the valid data, the memory of decoded samples is updated; it contains a number of samples sufficient for the regeneration of any subsequently erased periods. Typically, on the order of 20 to 40 ms of signal is stored. The energy of the valid frames is also computed, and the energies corresponding to the last valid frames processed (typically on the order of 5 s) are kept in memory.
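As a rough illustration of this bookkeeping, the rolling stores of decoded samples and frame energies might be kept as follows (the class name, sample rate and buffer sizes are illustrative assumptions, not values from the patent):

```python
from collections import deque

class ValidPeriodMemory:
    """Rolling memories kept while frames are valid: ~20-40 ms of decoded
    samples and the energies of roughly the last 5 s of frames."""
    def __init__(self, sample_rate=8000, frame_len=160):
        self.samples = deque(maxlen=int(0.030 * sample_rate))      # ~30 ms of signal
        self.energies = deque(maxlen=int(5.0 * sample_rate / frame_len))  # ~5 s of frames

    def push_valid_frame(self, frame):
        self.samples.extend(frame)                       # keep only the newest samples
        self.energies.append(sum(s * s for s in frame))  # energy of this valid frame
```

The `deque` with `maxlen` discards the oldest material automatically, which matches the "last frames processed" behaviour described above.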

5.1.2 During a block of erased data

The following operations, illustrated in Figure 3, are carried out:

1. Estimation of the current spectral envelope:

This spectral envelope is computed in the form of an LPC filter [RABINER][KLEIJN]. The analysis is carried out by conventional methods ([KLEIJN]) after windowing of the samples stored during the valid period. In particular, an LPC analysis is implemented (step 10) to obtain the parameters of a filter A(z), whose inverse is used for the LPC filtering (step 11). Since the coefficients thus computed do not have to be transmitted, a high analysis order can be used, which makes it possible to obtain good performance on musical signals.

2. Detection of voiced sounds and calculation of the LTP parameters:

A voiced-sound detection method (processing 12 of Figure 3: V/NV detection, for "voiced/unvoiced") is applied to the last stored data. For example, the normalized correlation ([KLEIJN]) or the criterion presented in the exemplary embodiment below can be used for this purpose.

When the signal is declared voiced, the parameters enabling the generation of a long-term synthesis filter, also called LTP filter ([KLEIJN]), are computed (Figure 3: LTP analysis; B(z) denotes the computed inverse LTP filter). Such a filter is generally represented by a period corresponding to the fundamental period and a gain. The precision of this filter can be improved by the use of fractional pitch or of a multi-coefficient structure [KROON].

When the signal is declared unvoiced, a particular value is assigned to the LTP synthesis filter (see paragraph 4).

It is particularly advantageous, in this estimation of the LTP synthesis filter, to restrict the analyzed zone to the end of the period preceding the erasure. The length of the analysis window then varies between a minimum value and a value related to the fundamental period of the signal.
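A possible sketch of the voiced/unvoiced decision together with a one-coefficient LTP estimate by normalized correlation over the end of the valid period (the search range, voicing threshold and default window length are illustrative assumptions):

```python
import math

def ltp_analysis(x, tmin=20, tmax=120, window=None, threshold=0.6):
    """Search the best long-term period T and gain g over the last samples of x;
    declare the signal voiced when the normalized correlation is high."""
    n = len(x)
    w = window or min(n - tmax, 2 * tmax)    # analysis length at the end of x
    seg = range(n - w, n)
    best = (tmin, 0.0, 0.0)                  # (period, correlation, gain)
    for t in range(tmin, tmax + 1):
        num = sum(x[i] * x[i - t] for i in seg)
        e_cur = sum(x[i] * x[i] for i in seg)
        e_lag = sum(x[i - t] * x[i - t] for i in seg)
        corr = num / math.sqrt(e_cur * e_lag) if e_cur * e_lag > 0 else 0.0
        if corr > best[1]:
            best = (t, corr, num / e_lag if e_lag else 0.0)  # least-squares gain
    period, corr, gain = best
    return ("voiced" if corr > threshold else "unvoiced"), period, gain
```

On a purely periodic input the search locks onto (a multiple of) the fundamental period with a gain close to 1, which is exactly the regime in which the voiced excitation of step 4.1 is built.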

3. Calculation of a residual signal:

A residual signal is computed by inverse LPC filtering (processing 10) of the last stored samples. This signal is then used to generate an excitation signal for the LPC synthesis filter 11 (see below).
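The inverse filtering by A(z) and the later synthesis by 1/A(z) (step 4) are exact inverses of one another; a minimal sketch under that reading (function names are illustrative, and `history` stands for the stored past samples that seed the filter memories):

```python
def lpc_inverse_filter(x, a, history):
    """A(z): e[n] = x[n] - sum_k a[k] * x[n-1-k], seeded with past samples."""
    buf = list(history) + list(x)
    h = len(history)
    return [buf[h + n] - sum(a[k] * buf[h + n - 1 - k] for k in range(len(a)))
            for n in range(len(x))]

def lpc_synthesis_filter(e, a, history):
    """1/A(z): y[n] = e[n] + sum_k a[k] * y[n-1-k]."""
    y = list(history)
    for v in e:
        y.append(v + sum(a[k] * y[-1 - k] for k in range(len(a))))
    return y[len(history):]
```

Feeding the residual of a signal back through the synthesis filter, with the same memories, reconstructs that signal exactly; concealment replaces the residual with a synthetic excitation while keeping the same 1/A(z).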

4. Synthesis of the missing samples:

The synthesis of the replacement samples is carried out by introducing an excitation signal (computed at 13 from the output of the inverse LPC filter) into the LPC synthesis filter 11 (1/A(z)) computed in 1. This excitation signal is generated in two different ways, depending on whether the signal is voiced or unvoiced:

4.1 In the voiced area:

The excitation signal is the sum of two signals: a strongly harmonic component and a less harmonic, or entirely non-harmonic, component.

The strongly harmonic component is obtained by LTP filtering (processing module 14) of the residual signal mentioned in 3, using the parameters computed in 2.

The second component can also be obtained by LTP filtering, but made non-periodic by random modifications of its parameters or by generation of a pseudo-random signal.

It is particularly advantageous to limit the bandwidth of the first component to the low frequencies of the spectrum. Similarly, it is advantageous to limit the second component to the higher frequencies.
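A sketch of such a voiced excitation, mixing a low-pass-filtered long-term (LTP) prediction with a high-pass-filtered pseudo-random component (the one-pole low-pass, the first-difference high-pass, the mixing gains and the noise level are all illustrative assumptions, not the patent's filters):

```python
import random

def voiced_excitation(past_residual, period, n, ltp_gain=1.0,
                      noise_level=0.1, seed=1):
    """Generate n excitation samples: harmonic part = LTP prediction kept in
    the low band, non-harmonic part = noise kept in the high band."""
    rng = random.Random(seed)
    exc = list(past_residual)       # continuation of the past residual signal
    lp, prev_noise = 0.0, 0.0
    out = []
    for _ in range(n):
        harm = ltp_gain * exc[-period]       # long-term prediction, gain ~ 1
        lp = 0.6 * lp + 0.4 * harm           # crude low-pass on harmonic part
        noise = rng.uniform(-noise_level, noise_level)
        hp = 0.5 * (noise - prev_noise)      # crude high-pass on noise part
        prev_noise = noise
        sample = lp + hp
        exc.append(sample)                   # the excitation feeds its own LTP
        out.append(sample)
    return out
```

Because the generated samples are appended back into the excitation buffer, the long-term prediction keeps running on its own output, as it does when the erasure lasts several periods.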

4.2 In the unvoiced area:

When the signal is unvoiced, a non-harmonic excitation signal is generated. It is advantageous to use a generation method similar to that used for voiced sounds, with variations of the parameters (period, gain, signs) making it possible to render it non-harmonic.

4.3 Controlling the amplitude of the residual signal:

When the signal is unvoiced or weakly voiced, the residual signal used to generate the excitation is processed so as to eliminate amplitude peaks significantly above the average.
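One simple realization of this peak limitation is sketched below, following the embodiment described later in this text: the residual vector is split at zero crossings into constant-sign sub-vectors, and any sub-vector whose peak exceeds 1.5 times the mean absolute amplitude is rescaled. The function name is hypothetical:

```python
def limit_peaks(res, factor=1.5):
    # Rescale any constant-sign sub-vector whose peak amplitude exceeds
    # factor * mean absolute amplitude of the whole vector.
    mean_ampl = sum(abs(x) for x in res) / len(res)
    limit = factor * mean_ampl
    out, start = [], 0
    for i in range(1, len(res) + 1):
        if i == len(res) or (res[i] >= 0) != (res[start] >= 0):  # zero crossing
            sub = res[start:i]
            peak = max(abs(x) for x in sub)
            scale = limit / peak if peak > limit else 1.0
            out.extend(x * scale for x in sub)
            start = i
    return out
```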

5. Controlling the energy of the synthesized signal

The energy of the synthesized signal is controlled by a gain that is computed and adapted sample by sample. When the erasure period is relatively long, the energy of the synthesized signal must be lowered progressively. The gain adaptation law is computed from several parameters: the energy values stored before the erasure (see 1), the fundamental period, and the local stationarity of the signal at the moment of the cut.
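A minimal sketch of such sample-by-sample gain adaptation is given below. The linear decay law is an assumption chosen for illustration only; in the patent the law is derived from the stored energies, the fundamental period and the local stationarity:

```python
def apply_fadeout(samples, n_erased_before, decay_per_sample):
    # Gain adapted sample by sample; here a simple linear decay toward silence.
    out = []
    g = max(0.0, 1.0 - decay_per_sample * n_erased_before)  # resume where the previous erased frame left off
    for s in samples:
        out.append(g * s)
        g = max(0.0, g - decay_per_sample)
    return out
```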

If the system includes a module that discriminates between stationary sounds (such as music) and non-stationary sounds (such as speech), different adaptation laws can also be used.

In the case of transform coders with overlap-add, the first half of the memory of the last correctly received frame contains fairly accurate information about the first half of the first lost frame (its weight in the overlap-add is greater than that of the current frame). This information can also be used to compute the adaptive gain.

6. Evolution of the synthesis procedure over time:

For relatively long erasure periods, the synthesis parameters can also be made to evolve. If the system is coupled to a voice activity detection device with estimation of the noise parameters (such as [REC-G.723.1A], [SALAMI-2], [BENYASSINE]), it is particularly advantageous to make the generation parameters of the signal to be reconstructed tend towards those of the estimated noise: in particular for the spectral envelope (interpolation of the LPC filter with that of the estimated noise, the interpolation coefficients evolving over time until the noise filter is reached) and for the energy (level evolving progressively towards that of the noise, for example by windowing).
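The spectral-envelope interpolation can be illustrated by the hypothetical sketch below, a convex combination of two LPC coefficient sets whose mixing factor grows over the erased period. Note, as an aside not stated in the patent, that direct interpolation of predictor coefficients does not guarantee a stable filter; in practice interpolation is often performed on a representation such as line spectral frequencies:

```python
def interp_lpc(a_signal, a_noise, alpha):
    # Convex combination of two LPC coefficient sets; alpha ramps from 0 to 1
    # over the erased period so the envelope tends toward the noise filter.
    return [(1.0 - alpha) * s + alpha * n for s, n in zip(a_signal, a_noise)]
```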

5.1.3 On restoration of the transmission

When transmission is restored, it is particularly important to avoid abrupt breaks between the erased period, reconstructed using the techniques defined in the preceding paragraphs, and the following periods, during which all the transmitted information is available to decode the signal. The present invention performs a weighting in the time domain, with interpolation between the replacement samples preceding the restoration of communication and the valid decoded samples following the erased period. This operation is a priori independent of the type of coder used.
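This time-domain weighting amounts to a cross-fade between the two sample streams. A minimal sketch with a linear ramp follows; the ramp shape and overlap length are assumptions for illustration:

```python
def crossfade(synth, decoded):
    # Linear time-domain weighting: the synthesized samples fade out (1 -> 0)
    # while the valid decoded samples fade in (0 -> 1) over the same span.
    n = len(synth)
    if n == 1:
        return [0.5 * (synth[0] + decoded[0])]
    return [((n - 1 - i) * synth[i] + i * decoded[i]) / (n - 1) for i in range(n)]
```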

In the case of transform coders with overlap-add, this operation is combined with the updating of the memories described in the following paragraph (see the embodiment example).

5.1.4 Updating the decoder memories

When decoding of valid samples resumes after an erased period, degradation can occur if the decoder uses data normally produced in the previous frames and stored. It is important to update these memories properly to avoid such artifacts.

This is particularly important for coding structures using recursive methods, which, for a sample or a sequence of samples, use information obtained after decoding the previous samples. Examples are predictions ([KLEIJN]) that extract redundancy from the signal. This information is normally available both to the coder, which must therefore have performed a form of local decoding for these previous samples, and to the remote decoder at the receiving end. As soon as the transmission channel is disturbed and the remote decoder no longer has the same information as the local decoder at the transmitting end, the coder and the decoder become desynchronized. In the case of highly recursive coding systems, this desynchronization can cause audible impairments that may persist for a long time, or even grow over time if there are instabilities in the structure. In this case it is therefore important to try to resynchronize the coder and the decoder, i.e. to estimate the decoder memories as closely as possible to those of the coder. Resynchronization techniques, however, depend on the coding structure used. One such technique, whose principle is general but whose complexity is potentially significant, is presented in the present patent.

One possible method consists in introducing into the receiving decoder a coding module of the same type as the one present at transmission, allowing the coding-decoding of the signal samples produced, during the erased periods, by the techniques mentioned in the preceding paragraph. In this way the memories needed to decode the following samples are filled with data that are a priori close (subject to a certain stationarity during the erased period) to those that were lost. If this stationarity assumption does not hold, for example after a long erased period, there is in any case not enough information available to do better.

In fact, it is generally not necessary to perform the complete coding of these samples; only the modules needed to update the memories are used.

This update can be performed at the time the replacement samples are produced, which spreads the complexity over the whole erasure zone but adds to the synthesis procedure described above.

When the coding structure permits, the above procedure can also be limited to an intermediate zone at the beginning of the valid data period following an erased period, the update procedure then being combined with the decoding operation.

5.2 Description of particular embodiments

Specific examples of possible implementation are given below. In particular, the case of transform coders of the TDAC or TCDM (MDCT) type ([MAHIEUX]) is addressed.

5.2.1 Description of the device

Digital coding/decoding system based on a TDAC-type transform.

Wideband coder (50-7000 Hz) at 24 kbit/s or 32 kbit/s.

20 ms frames (320 samples).

40 ms windows (640 samples) with 20 ms overlap-add. A bit frame contains the coded parameters obtained by the TDAC transform on one window. After decoding these parameters and applying the inverse TDAC transform, a 20 ms output frame is obtained, which is the sum of the second half of the previous window and the first half of the current window. In figure 4, the two window parts used for the reconstruction of frame n (in the time domain) are marked in bold. Thus, one lost bit frame disturbs the reconstruction of two consecutive frames (the current one and the next, figure 5). Conversely, by correctly replacing the lost parameters, the parts of the information coming from the preceding and following bit frames can be recovered (figure 6) for the reconstruction of these two frames.
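The overlap-add reconstruction described above can be sketched as follows (a hypothetical helper; each window of twice the frame length contributes its first half to the current output frame and its second half to the next one):

```python
def overlap_add_frames(windows, frame_len):
    # Each analysis window covers two frames: its first half is added to the
    # second half of the previous window to reconstruct the current frame.
    frames = []
    prev_second_half = [0.0] * frame_len
    for w in windows:
        first, second = w[:frame_len], w[frame_len:]
        frames.append([a + b for a, b in zip(prev_second_half, first)])
        prev_second_half = second
    return frames
```

This is why one lost window affects two consecutive output frames: its halves enter two different sums.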

5.2.2 Implementation

All the operations described below are implemented at the receiving end, in accordance with figures 1 and 2, either within the erased-frame concealment module that communicates with the decoder, or in the decoder itself (updating of the decoder memories).

5.2.2.1 In a valid period

As described in paragraph 5.1.2, the memory of decoded samples is updated. This memory is used for the LPC and LTP analyses of the past signal in case a bit frame is erased. In the example presented here, the LPC analysis is performed over a 20 ms signal period (320 samples). In general, the LTP analysis requires more samples to be stored. In our example, to perform the LTP analysis correctly, the number of stored samples is equal to twice the maximum pitch value. For example, if the maximum pitch value MaxPitch is set to 320 samples (50 Hz, 20 ms), the last 640 samples are stored (40 ms of signal). The energy of the valid frames is also computed and stored in a circular buffer covering 5 s. When an erased frame is detected, the energy of the last valid frame is compared with the maximum and the minimum of this circular buffer to determine its relative energy.
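The 5 s energy history can be kept in a circular buffer as sketched below (a hypothetical Python illustration; with 20 ms frames, 5 s corresponds to 250 entries):

```python
from collections import deque

class EnergyHistory:
    # Circular buffer of per-frame energies; 250 entries of 20 ms cover 5 s.
    def __init__(self, n_frames=250):
        self.buf = deque(maxlen=n_frames)

    def push(self, frame):
        self.buf.append(sum(x * x for x in frame))

    def relative_position(self, frame):
        # Position of this frame's energy between the stored min and max (0..1).
        e = sum(x * x for x in frame)
        lo, hi = min(self.buf), max(self.buf)
        return 0.5 if hi == lo else (e - lo) / (hi - lo)
```

On detection of an erased frame, `relative_position` of the last valid frame gives its energy relative to the recent extremes.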

5.2.2.2 During a block of erased data

When a bit frame is lost, two different cases are distinguished:

5.2.2.2.1 First bit frame lost after a valid period

First, the stored signal is analyzed to estimate the parameters of the model used to synthesize the regenerated signal. This model then allows 40 ms of signal to be synthesized, corresponding to the lost 40 ms window. By applying the TDAC transform followed by the inverse TDAC transform to this synthesized signal (without coding-decoding of the parameters), the 20 ms output signal is obtained. Thanks to these TDAC / inverse TDAC operations, the information from the previous correctly received window is exploited (see figure 6). At the same time, the decoder memories are updated. Thus the next bit frame, if correctly received, can be decoded normally, and the decoded frames will be automatically synchronized (figure 6).
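The long-term part of this analysis relies on the normalized correlation Corr(T) defined in step 4 of the operations below. A minimal sketch of that measure and of an exhaustive pitch search over it follows (hypothetical names; the sub-multiple checks, the elimination of very short-term correlations via T', and the additional voicing criteria of the patent are omitted):

```python
def corr(m, T, fs):
    # Normalized correlation Corr(T) over the last 2*Tm samples of memory m,
    # with Tm = max(T, fs/200), i.e. a window of at least 5 ms.
    L = len(m)
    Tm = max(T, fs // 200)
    num = 2.0 * sum(m[i] * m[i - T] for i in range(L - 2 * Tm + T, L))
    den = (sum(m[i] ** 2 for i in range(L - 2 * Tm, L))
           + sum(m[i] ** 2 for i in range(L - 2 * Tm + T, L - T)))
    return num / den if den else 0.0

def estimate_pitch(m, min_lag, max_lag, fs):
    # Exhaustive search for the integer lag maximizing Corr(T).
    best_T, best_c = min_lag, -2.0
    for T in range(min_lag, max_lag + 1):
        c = corr(m, T, fs)
        if c > best_c:
            best_T, best_c = T, c
    return best_T, best_c
```

For a perfectly periodic memory, the search returns the true period with a correlation of 1.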

The operations to be performed are as follows:
  1. Windowing of the stored signal. For example, an asymmetric 20 ms Hamming window can be used.
  2. Calculation of the autocorrelation function of the windowed signal.
  3. Determination of the LPC filter coefficients. This is conventionally done using the iterative Levinson-Durbin algorithm. The analysis order can be high, especially when the coder is used to code music sequences.
  4. Voicing detection and long-term analysis of the stored signal, to model the possible periodicity of the signal (voiced sounds). In the embodiment presented, the inventors limited the estimation of the fundamental period Tp to integer values, and computed an estimate of the degree of voicing in the form of the correlation coefficient MaxCorr (see below) evaluated at the selected period. Let Tm = max(T, Fs/200), where Fs is the sampling frequency, so that Fs/200 samples correspond to a duration of 5 ms. To better model the evolution of the signal at the end of the previous frame, the correlation coefficient Corr(T) corresponding to a delay T is computed using only 2*Tm samples at the end of the stored signal:

       Corr(T) = 2 * SUM[i = Lmem-2*Tm+T .. Lmem-1] m(i)*m(i-T) / ( SUM[i = Lmem-2*Tm .. Lmem-1] m(i)^2 + SUM[i = Lmem-2*Tm+T .. Lmem-1-T] m(i)^2 )

     where m(0) ... m(Lmem-1) is the memory of the previously decoded signal. From this formula it can be seen that the length Lmem of this memory must be at least twice the maximum value of the fundamental period (also called the "pitch"), MaxPitch.
    The minimum value of the fundamental period, MinPitch, has also been fixed, corresponding to a frequency of 600 Hz (26 samples at Fs = 16 kHz).
    Corr(T) is computed for T = 2, ..., MaxPitch. If T' is the smallest delay such that Corr(T') < 0 (which eliminates very short-term correlations), MaxCorr is then sought as the maximum of Corr(T) for T' < T <= MaxPitch. Let Tp be the period corresponding to MaxCorr (Corr(Tp) = MaxCorr). MaxCorrMP, the maximum of Corr(T) for T' < T <= 0.75*MinPitch, is also sought. If Tp < MinPitch or MaxCorrMP > 0.7*MaxCorr, and if the energy of the last valid frame is relatively low, the frame is declared unvoiced, since using the LTP prediction would risk producing a very annoying resonance at high frequencies. The pitch chosen is then Tp = MaxPitch/2, and the correlation coefficient MaxCorr is set to a low value (0.25).
    The frame is also considered unvoiced when more than 80% of its energy is concentrated in the last MinPitch samples. This corresponds to a speech onset, but the number of samples is not sufficient to estimate a possible fundamental period; it is better to treat it as an unvoiced frame, and even to decrease the energy of the synthesized signal more quickly (to signal this, DiminFlag = 1 is set).
    In the case where MaxCorr > 0.6, it is checked that a multiple (4, 3 or 2 times) of the fundamental period has not been found. To do this, the local maximum of the correlation is sought around Tp/4, Tp/3 and Tp/2. Let T1 be the position of this maximum, and MaxCorrL = Corr(T1). If T1 > MinPitch and MaxCorrL > 0.75*MaxCorr, T1 is chosen as the new fundamental period.
    If Tp is less than MaxPitch/2, we can check whether the frame is really voiced by looking for the local maximum of the correlation around 2·Tp (denoted Tpp) and checking whether Corr(Tpp) > 0.4. If Corr(Tpp) < 0.4 and the signal energy decreases, we set DiminFlag = 1 and decrease the value of MaxCorr; otherwise we look for the next local maximum between the current Tp and MaxPitch.
    Another voicing criterion consists in checking whether, in at least 2/3 of the cases, the signal delayed by the fundamental period has the same sign as the undelayed signal. This is verified over a length equal to the maximum of 5 ms and 2·Tp.
    It is also checked whether the signal energy tends to decrease. If so, we set DiminFlag = 1 and decrease the value of MaxCorr according to the degree of decrease.
    The voicing decision also takes the signal energy into account: if the energy is strong, the value of MaxCorr is increased, making it more probable that the frame is declared voiced. Conversely, if the energy is very weak, the value of MaxCorr is decreased.
    Finally, the voicing decision is made on the value of MaxCorr: the frame is unvoiced if and only if MaxCorr < 0.4. The fundamental period Tp of an unvoiced frame is bounded: it must be less than or equal to MaxPitch/2.
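By way of illustration, the correlation search of step 4 can be sketched as follows. This is a simplified, non-authoritative sketch (function and variable names are assumptions): it applies only the Corr(T) formula, the T′ exclusion and the final MaxCorr < 0.4 threshold, and omits the additional heuristics described above (sub-multiple check, sign criterion, energy weighting, DiminFlag).

```python
def detect_pitch(m, max_pitch, fs=16000):
    """Simplified pitch search of step 4 (names are assumptions).

    m         : previously decoded samples, len(m) >= 2 * max(max_pitch, fs // 200)
    max_pitch : maximum admissible fundamental period, in samples
    Returns (Tp, MaxCorr, voiced).
    """
    lmem = len(m)

    def corr(T):
        # Normalized correlation over the last 2*Tm samples only,
        # with Tm = max(T, fs / 200)  (fs / 200 samples = 5 ms).
        tm = max(T, fs // 200)
        lo = lmem - 2 * tm + T
        num = 2 * sum(m[i] * m[i - T] for i in range(lo, lmem))
        den = (sum(m[i] ** 2 for i in range(lo, lmem))
               + sum(m[i] ** 2 for i in range(lo - T, lmem - T)))
        return num / den if den else 0.0

    # T' = smallest delay with Corr(T') < 0, discarding very
    # short-term correlations.
    t_prime = 2
    while t_prime <= max_pitch and corr(t_prime) >= 0:
        t_prime += 1

    # MaxCorr = maximum of Corr(T) for T' < T <= MaxPitch.
    best_t, best_c = max_pitch // 2, 0.25    # unvoiced fallback values
    for T in range(t_prime + 1, max_pitch + 1):
        c = corr(T)
        if c > best_c:
            best_t, best_c = T, c

    voiced = best_c >= 0.4                   # final voicing threshold
    return best_t, best_c, voiced
```

On a strongly periodic signal the returned period is the lag maximizing the normalized correlation; a real implementation would add the sub-multiple and energy checks described in the text.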
  5. Calculation of the residual signal by LPC inverse filtering of the last stored samples. This residual signal is stored in the ResMem memory.
  6. Equalization of the energy of the residual signal. In the case of an unvoiced or weakly voiced signal (MaxCorr < 0.7), the energy of the residual signal stored in ResMem can change abruptly from one part of the signal to another. The repetition of this excitation causes a very unpleasant periodic disturbance in the synthesized signal. To avoid this, it is ensured that no peak of significant amplitude occurs in the excitation of a weakly voiced frame. Since the excitation is constructed from the last Tp samples of the residual signal, this vector of Tp samples is processed. The method used in our example is as follows:
    • MeanAmpl: mean of the absolute values of the last Tp samples of the residual signal.
    • If the sample vector to be processed contains n zero crossings, it is cut into n + 1 sub-vectors, the sign of the signal within each sub-vector therefore being constant.
    • The maximum amplitude MaxAmplSv of each sub-vector is determined. If MaxAmplSv > 1.5·MeanAmpl, the sub-vector is multiplied by 1.5·MeanAmpl / MaxAmplSv.
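The three steps above can be sketched as follows (a sketch only; names such as `equalize_residual` are assumptions):

```python
def equalize_residual(res, tp):
    """Sketch of step 6: limit amplitude peaks in the last tp residual
    samples of a weakly voiced frame (function/variable names assumed)."""
    v = res[-tp:]                                 # vector to be processed
    mean_ampl = sum(abs(x) for x in v) / len(v)   # MeanAmpl

    # Split at zero crossings into sub-vectors of constant sign.
    subs, cur = [], [v[0]]
    for prev, x in zip(v, v[1:]):
        if (prev >= 0) != (x >= 0):               # zero crossing
            subs.append(cur)
            cur = []
        cur.append(x)
    subs.append(cur)

    # Scale down any sub-vector whose peak exceeds 1.5 * MeanAmpl.
    out = []
    for sv in subs:
        max_ampl = max(abs(x) for x in sv)        # MaxAmplSv
        if max_ampl > 1.5 * mean_ampl:
            g = 1.5 * mean_ampl / max_ampl
            sv = [g * x for x in sv]
        out.extend(sv)
    return res[:-tp] + out
```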
  7. Preparation of the excitation signal, with a length of 640 samples corresponding to the length of the TDAC window. Two cases are distinguished according to the voicing:
    • In the case of a voiced frame, the excitation signal is the sum of two signals: a strongly harmonic component limited to the low-frequency band of the spectrum, excb, and a less harmonic component limited to the higher frequencies, exch.

    The strongly harmonic component is obtained by third-order LTP filtering of the residual signal:

    excb(i) = 0.15·exc(i−Tp−1) + 0.7·exc(i−Tp) + 0.15·exc(i−Tp+1)

    The coefficients [0.15, 0.7, 0.15] correspond to a low-pass FIR filter with 3 dB of attenuation at Fs/4.
    The second component is also obtained by LTP filtering, made non-periodic by random modification of its fundamental period Tph. Tph is chosen as the integer part of a random real value Tpa; the initial value of Tpa is equal to Tp, and it is then modified sample by sample by adding a random value drawn in [−0.5, 0.5]. In addition, this LTP filtering is combined with a high-pass IIR filtering:

    exch(i) = −0.0635·(exc(i−Tph−1) + exc(i−Tph+1)) + 0.1182·exc(i−Tph) − 0.9926·exch(i−1) − 0.7679·exch(i−2)

    The voiced excitation is then the sum of these two components:

    exc(i) = excb(i) + exch(i)
    • In the case of an unvoiced frame, the excitation signal exc is also obtained by LTP filtering of order 3 with the coefficients [0.15, 0.7, 0.15], but it is made non-periodic by increasing the fundamental period by 1 every 10 samples and by inverting the sign with a probability of 0.2.
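The voiced branch of step 7 can be sketched by combining the excb and exch formulas above. This is a sketch under assumptions: the buffer handling, the names and the lower clamp on Tph are not specified by the text; only the filter coefficients and the period jitter come from it.

```python
import random

def voiced_excitation(res, tp, n):
    """Sketch of step 7, voiced case: build n excitation samples as the
    sum of a harmonic low-band component (excb) and a jittered high-band
    component (exch).  Requires len(res) >= 2 * tp + 2 and tp >= 2."""
    exc = list(res[-2 * tp - 2:])    # past residual feeding the LTP filters
    exch = [0.0, 0.0]                # IIR state of the high-pass branch
    out = []
    tpa = float(tp)                  # real-valued, randomly drifting period
    for _ in range(n):
        i = len(exc)
        # Harmonic component: 3rd-order LTP, low-pass [0.15, 0.7, 0.15].
        excb = (0.15 * exc[i - tp - 1] + 0.7 * exc[i - tp]
                + 0.15 * exc[i - tp + 1])
        # Less harmonic component: LTP with randomly modified period Tph,
        # combined with a high-pass IIR filter.
        tpa += random.uniform(-0.5, 0.5)
        tph = max(2, int(tpa))       # clamp is a guard added for the sketch
        h = (-0.0635 * (exc[i - tph - 1] + exc[i - tph + 1])
             + 0.1182 * exc[i - tph]
             - 0.9926 * exch[-1] - 0.7679 * exch[-2])
        exch.append(h)
        out.append(excb + h)
        exc.append(out[-1])          # the new sample feeds future LTP taps
    return out
```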
  8. Synthesis of the replacement samples by introducing the exc excitation signal into the LPC filter calculated in step 3.
  9. Control of the energy level of the synthesis signal. From the first synthesized replacement frame, the energy tends gradually towards a level set in advance. This level can be defined, for example, as the energy of the lowest-energy output frame found during the last 5 seconds preceding the erasure. Two gain adaptation laws have been defined, chosen according to the DiminFlag flag computed in step 4. The speed of the energy decrease also depends on the fundamental period. A third, more radical adaptation law is used when it is detected that the beginning of the generated signal does not correspond well to the original signal, as explained later (see point 11).
  10. TDAC transformation of the signal synthesized in step 8, as explained at the beginning of this chapter. The TDAC coefficients obtained replace the lost TDAC coefficients; then, performing the inverse TDAC transformation, we obtain the output frame. These operations have three goals:
    • In the case of the first lost window, the information from the previous, correctly received window, which contains half of the data needed to reconstruct the first disturbed frame, is thereby used correctly (Figure 6).
    • The decoder memory is updated for the decoding of the next frame (synchronization of the encoder and the decoder, see section 5.1.4).
    • The continuous transition (without discontinuity) of the output signal is performed automatically when the first correctly received frame arrives after an erased period reconstructed according to the techniques presented above (see paragraph 5.1.3).
  11. The overlap-add technique makes it possible to check whether the synthesized voiced signal corresponds to the original signal or not, because for the first half of the first lost frame the weight of the memory of the last correctly received window is greater (Figure 6). Thus, by computing the correlation between the first half of the first synthesized frame and the first half of the frame obtained after the TDAC and inverse TDAC operations, we can estimate the similarity between the lost frame and the replacement frame. A weak correlation (< 0.65) indicates that the original signal is quite different from the one obtained by the replacement method, and it is then better to reduce the energy of the latter rapidly towards the minimum level.
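The similarity test of step 11 amounts to a normalized correlation between two half-frames; a minimal sketch (function names are assumptions):

```python
def similarity(synth_half, tdac_half):
    """Sketch of step 11: normalized correlation between the first half of
    the synthesized frame and the same half after TDAC / inverse TDAC."""
    num = sum(a * b for a, b in zip(synth_half, tdac_half))
    den = (sum(a * a for a in synth_half)
           * sum(b * b for b in tdac_half)) ** 0.5
    return num / den if den else 0.0

def needs_fast_decay(synth_half, tdac_half, threshold=0.65):
    # A weak correlation triggers a fast energy decrease of the
    # replacement signal towards the minimum level.
    return similarity(synth_half, tdac_half) < threshold
```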

5.2.2.2.2 Lost frames following the first frame of an erased area

In the preceding paragraph, points 1-6 concern the analysis of the decoded signal preceding the first erased frame, allowing the construction of a synthesis model (LPC and possibly LTP) of that signal. For the following erased frames, the analysis is not repeated: the replacement of the lost signal is based on the parameters (LPC coefficients, pitch, MaxCorr, ResMem) computed for the first erased frame. Only the operations corresponding to the synthesis of the signal and to the synchronization of the decoder are therefore carried out, with the following modifications with respect to the first erased frame:
  • In the synthesis part (points 7 and 8), only 320 new samples are generated, because the TDAC transformation window covers the last 320 samples generated during the previous erased frame and these 320 new samples.
  • In the case where the erasure period is relatively long, it is important to make the synthesis parameters evolve towards those of a white noise or of the background noise (see point 5 in paragraph 3.2.2.2). As the system presented in this example does not include VAD/CNG, it is possible, for example, to make one or more of the following modifications:
  • Progressive interpolation of the LPC filter with a flat filter, to make the synthesized signal less colored.
  • Gradual increase of the pitch value.
  • In voiced mode, switching to unvoiced mode after a certain time (for example when the minimum energy is reached).
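For the first modification (evolution of the LPC filter towards a flat filter), the text does not fix the interpolation scheme. A common, stability-preserving choice is bandwidth expansion, which weights each coefficient a_k by γ^k so that the poles of 1/A(z) move towards the origin; the sketch below uses that assumed scheme.

```python
def fade_lpc_to_flat(a, gamma):
    """Flatten LPC coefficients a = [1, a1, ..., ap] by bandwidth
    expansion: a_k -> a_k * gamma**k.  gamma = 1 keeps the original
    filter; gamma -> 0 tends towards the flat filter A(z) = 1, i.e.
    towards white-noise spectral shaping."""
    return [ak * gamma ** k for k, ak in enumerate(a)]
```

Lowering gamma a little on each successive erased frame makes the synthesized signal progressively less colored, as described above.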

5.3 Specific processing for music signals. If the system comprises a speech/music discrimination module, a processing specific to music signals can be implemented after selection of a music synthesis mode. In Figure 7, the music synthesis module is referenced 15, the speech synthesis module 16 and the speech/music switch 17.

Such a processing implements, for example, for the music synthesis module, the following steps, illustrated in Figure 8:

1. Estimation of the current spectral envelope:

This spectral envelope is calculated in the form of an LPC filter [RABINER][KLEIJN]. The analysis is carried out by conventional methods ([KLEIJN]): after windowing of the samples stored during the valid period, an LPC analysis is performed to calculate an LPC filter A(z) (step 19). A high order (> 100) is used for this analysis in order to obtain good performance on music signals.

2. Synthesis of the missing samples:

The synthesis of the replacement samples is carried out by introducing an excitation signal into the LPC synthesis filter (1/A(z)) calculated in step 19. This excitation signal, computed in step 20, is a white noise whose amplitude is chosen so as to obtain a signal having the same energy as that of the last N samples stored during the valid period. In Figure 8, the filtering step is referenced 21.
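As an illustration of steps 20 and 21, passing an excitation through the all-pole synthesis filter 1/A(z) can be sketched as follows (a minimal sketch; the direct-form convention A(z) = 1 + a1·z⁻¹ + … + ap·z⁻ᵖ and the function name are assumptions):

```python
def lpc_synthesis(a, excitation):
    """Filter an excitation through 1/A(z), with a = [1, a1, ..., ap].
    Recursion: y(n) = x(n) - sum_k a_k * y(n - k), zero initial state."""
    p = len(a) - 1
    out = [0.0] * p                  # zero initial filter state
    for x in excitation:
        y = x - sum(a[k] * out[-k] for k in range(1, p + 1))
        out.append(y)
    return out[p:]
```

With a white-noise excitation scaled by the gain G described below, this reproduces the spectral envelope of the signal stored before the erasure.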

Example of the control of the amplitude of the residual signal:

If the excitation is a uniform white noise multiplied by a gain, this gain G can be calculated as follows:

Estimation of the gain of the LPC filter:

The Durbin algorithm gives the energy of the residual signal. Knowing also the energy of the signal to be modeled, the gain GLPC of the LPC filter is estimated as the ratio of these two energies.

Calculation of the target energy:

The target energy is estimated as equal to the energy of the last N samples stored during the valid period (N is typically smaller than the length of the signal used for the LPC analysis).

The energy of the synthesized signal is the product of the energy of the white noise by G² and GLPC. G is chosen so that this energy is equal to the target energy.
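The relation above determines G directly; a minimal sketch (names assumed):

```python
def white_noise_gain(target_energy, noise_energy, g_lpc):
    """E_synth = E_noise * G**2 * G_LPC  =>  G = sqrt(E_target / (E_noise * G_LPC)).

    target_energy : energy of the last N valid decoded samples
    noise_energy  : energy of the unit-gain white-noise excitation
    g_lpc         : LPC filter gain, ratio of the energy of the modeled
                    signal to the residual energy given by the Durbin
                    algorithm
    """
    return (target_energy / (noise_energy * g_lpc)) ** 0.5
```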

3. Control of the energy of the synthesis signal:

As for speech signals, except that the rate of decrease of the energy of the synthesis signal is much slower, and that it does not depend on a fundamental period (non-existent here):

The energy of the synthesis signal is controlled using a gain calculated and adapted sample by sample. In the case where the erasure period is relatively long, it is necessary to lower the energy of the synthesis signal progressively. The gain adaptation law can be calculated as a function of various parameters, such as the energy values stored before the erasure and the local stationarity of the signal at the moment of the interruption.
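The text leaves the adaptation law open; as an assumed example, a sample-by-sample exponential ramp towards a target level can be sketched as:

```python
def apply_gain_ramp(samples, start_gain, target_gain, decay):
    """Assumed example of a sample-by-sample gain adaptation law:
    the gain moves exponentially from start_gain towards target_gain,
    with 0 < decay < 1 controlling the speed of the decrease."""
    g, out = start_gain, []
    for x in samples:
        out.append(g * x)
        g = target_gain + (g - target_gain) * decay
    return out
```

Choosing decay close to 1 gives the slow decrease appropriate for music; speech uses faster laws selected by DiminFlag, as described in step 9.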

6. Evolution of the synthesis procedure over time:

As for speech signals: in the case of relatively long erasure periods, the synthesis parameters can also be made to evolve. If the system is coupled to a device for detecting voice activity or music signals with estimation of the noise parameters (such as [REC-G.723.1A], [SALAMI-2], [BENYASSINE]), it will be particularly advantageous to make the generation parameters of the signal to be reconstructed tend towards those of the estimated noise: in particular as regards the spectral envelope (interpolation of the LPC filter with that of the estimated noise, the interpolation coefficients evolving over time until the noise filter is obtained) and the energy (level evolving progressively towards that of the noise, for example by windowing).

6. GENERAL REMARK

As will be understood, the technique just described has the advantage of being usable with any type of coder; in particular, it makes it possible to remedy the problem of lost bit packets for time-domain or transform coders, on speech and music signals, with good performance. Indeed, in the present technique, the only signals stored during the periods when the transmitted data are valid are the samples output by the decoder, information that is available whatever the coding structure used.

7. BIBLIOGRAPHIC REFERENCES

  • [AT&T] AT&T (D.A. Kapilow, R.V. Cox). "A high quality low-complexity algorithm for frame erasure concealment (FEC) with G.711". Delayed Contribution D.249 (WP 3/16), ITU, May 1999.
  • [ATAL] B.S. Atal and M.R. Schroeder. "Predictive coding of speech signal and subjective error criteria". IEEE Trans. on Acoustics, Speech and Signal Processing, 27:247-254, June 1979.
  • [BENYASSINE] A. Benyassine, E. Shlomot and H.Y. Su. "ITU-T recommendation G.729 Annex B: A silence compression scheme for use with G.729 optimized for V.70 digital simultaneous voice and data applications". IEEE Communication Magazine, September 1997, pp. 56-63.
  • [BRANDENBURG] K.H. Brandenburg and M. Bosi. "Overview of MPEG audio: current and future standards for low-bit-rate audio coding". Journal of Audio Eng. Soc., Vol. 45-1/2, January/February 1997, pp. 4-21.
  • [CHEN] J.H. Chen, R.V. Cox, Y.C. Lin, N. Jayant and M.J. Melchner. "A low-delay CELP coder for the CCITT 16 kb/s speech coding standard". IEEE Journal on Selected Areas on Communications, Vol. 10-5, June 1992, pp. 830-849.
  • [CHEN-2] J.H. Chen, C.R. Watkins. "Linear prediction coefficient generation during frame erasure or packet loss". Patents US5574825, EP0673018.
  • [CHEN-3] J.H. Chen, C.R. Watkins. "Linear prediction coefficient generation during frame erasure or packet loss". Patent 884010.
  • [CHEN-4] J.H. Chen, C.R. Watkins. "Frame erasure or packet loss compensation method". Patents US5550543, EP0707308.
  • [CHEN-5] J.H. Chen. "Excitation signal synthesis during frame erasure or packet loss". Patents US5615298, EP0673017.
  • [CHEN-6] J.H. Chen. "Computational complexity reduction during frame erasure or packet loss". Patent US5717822.
  • [CHEN-7] J.H. Chen. "Computational complexity reduction during frame erasure or packet loss". Patents US940212435, EP0673015.
  • [COX] R.V. Cox. "Three new speech coders from the ITU cover a range of applications". IEEE Communication Magazine, September 1997, pp. 40-47.
  • [COX-2] R.V. Cox. "An improved frame erasure concealment method for ITU-T Rec. G.728". Delayed contribution D.107 (WP 3/16), ITU-T, January 1998.
  • [COMBESCURE] P. Combescure, J. Schnitzler, K. Ficher, R. Kirchherr, C. Lamblin, A. Le Guyader, D. Massaloux, C. Quinquis, J. Stegmann, P. Vary. "A 16, 24, 32 kbit/s Wideband Speech Codec Based on ATCELP". Proc. of ICASSP conference, 1998.
  • [DAUMER] W.R. Daumer, P. Mermelstein, X. Maître and I. Tokizawa. "Overview of the ADPCM coding algorithm". Proc. of GLOBECOM 1984, pp. 23.1.1-23.1.4.
  • [ERDÖL] N. Erdöl, C. Castelluccia, A. Zilouchian. "Recovery of Missing Speech Packets Using the Short-Time Energy and Zero-Crossing Measurements". IEEE Trans. on Speech and Audio Processing, Vol. 1-3, July 1993, pp. 295-303.
  • [FINGSCHEIDT] T. Fingscheidt, P. Vary. "Robust speech decoding: a universal approach to bit error concealment". Proc. of ICASSP conference, 1997, pp. 1667-1670.
  • [GOODMAN] D.J. Goodman, G.B. Lockhart, O.J. Wasem, W.C. Wong. "Waveform Substitution Techniques for Recovering Missing Speech Segments in Packet Voice Communications". IEEE Trans. on Acoustics, Speech and Signal Processing, Vol. ASSP-34, December 1986, pp. 1440-1448.
  • [GSM-FR] Recommendation GSM 06.11. "Substitution and muting of lost frames for full rate speech traffic channels". ETSI/TC SMG, ver. 3.0.1, February 1992.
  • [HARDWICK] J.C. Hardwick and J.S. Lim. "The application of the IMBE speech coder to mobile communications". Proc. of ICASSP conference, 1991, pp. 249-252.
  • [HELLWIG] K. Hellwig, P. Vary, D. Massaloux, J.P. Petit, C. Galand and M. Rosso. "Speech codec for the European mobile radio system". GLOBECOM conference, 1989, pp. 1065-1069.
  • [HONKANEN] T. Honkanen, J. Vainio, P. Kapanen, P. Haavisto, R. Salami, C. Laflamme and J.P. Adoul. "GSM enhanced full rate speech codec". Proc. of ICASSP conference, 1997, pp. 771-774.
  • [KROON] P. Kroon, B.S. Atal. "On the use of pitch predictors with high temporal resolution". IEEE Trans. on Signal Processing, Vol. 39-3, March 1991, pp. 733-735.
  • [KROON-2] P. Kroon. "Linear prediction coefficient generation during frame erasure or packet loss". Patents US5450449, EP0673016.
  • [MAHIEUX] Y. Mahieux, J.P. Petit. "High quality audio transform coding at 64 kbit/s". IEEE Trans. on Com., Vol. 42-11, November 1994, pp. 3010-3019.
  • [MAHIEUX-2] Y. Mahieux. "Dissimulation d'erreurs de transmission" (Concealment of transmission errors). Patent 92 06720, filed June 3, 1992.
  • [MAITRE] X. Maître. "7 kHz audio coding within 64 kbit/s". IEEE Journal on Selected Areas on Communications, Vol. 6-2, February 1988, pp. 283-298.
  • [PARIKH] V.N. Parikh, J.H. Chen, G. Aguilar. "Frame Erasure Concealment Using Sinusoidal Analysis-Synthesis and Its Application to MDCT-Based Codecs". Proc. of ICASSP conference, 2000.
  • [PICTEL] PictureTel Corporation. "Detailed Description of the PTC (PictureTel Transform Coder)". Contribution ITU-T, SG15/WP2/Q6, 8-9 October 1996 Baltimore meeting, TD7.
  • [RABINER] L.R. Rabiner, R.W. Schafer. "Digital processing of speech signals". Bell Laboratories Inc., 1978.
  • [REC G.723.1A] ITU-T Annex A to Recommendation G.723.1. "Silence compression scheme for dual rate speech coder for multimedia communications transmitting at 5.3 & 6.3 kbit/s".
  • [SALAMI] R. Salami, C. Laflamme, J.P. Adoul, A. Kataoka, S. Hayashi, T. Moriya, C. Lamblin, D. Massaloux, S. Proust, P. Kroon and Y. Shoham. "Design and description of CS-ACELP: a toll quality 8 kb/s speech coder". IEEE Trans. on Speech and Audio Processing, Vol. 6-2, March 1998, pp. 116-130.
  • [SALAMI-2] R. Salami, C. Laflamme, J.P. Adoul. "ITU-T G.729 Annex A: Reduced complexity 8 kb/s CS-ACELP codec for digital simultaneous voice and data". IEEE Communication Magazine, September 1997, pp. 56-63.
  • [TREMAIN] T.E. Tremain. "The government standard linear predictive coding algorithm: LPC-10". Speech technology, April 1982, pp. 40-49.
  • [WATKINS] C.R. Watkins, J.H. Chen. "Improving 16 kb/s G.728 LD-CELP Speech Coder for Frame Erasure Channels". Proc. of ICASSP conference, 1995, pp. 241-244.

Claims (18)

  1. Method of concealing transmission error in a digital audio signal in which upon detecting (3) samples that are missing or erroneous in a signal, synthesis samples (5) are generated with the aid of at least one short-term prediction operator and at least for the voiced sounds a long-term prediction operator estimated as a function of decoded samples of a past decoded signal, said decoded samples being stored (6) previously when the transmitted data of said past signal are valid, characterized in that the energy of the synthesis signal thus generated is controlled with the aid of a gain that is calculated and adapted sample by sample according to an adaptation law dependent on at least one parameter of said stored decoded samples.
  2. Method according to Claim 1, characterized in that the gain for controlling the synthesis signal is calculated as a function of at least one of the following parameters: energy values previously stored for the samples corresponding to valid data, pitch period for the voiced sounds, or any parameter characterizing the frequency spectrum.
  3. Method according to one of the preceding claims, characterized in that the gain applied to the synthesis signal decreases progressively as a function of the duration for which the synthesis samples are generated.
  4. Method according to one of the preceding claims, characterized in that the steady sounds and the non-steady sounds are discriminated in the valid data, and gain adaptation laws are implemented for controlling the synthesis signal that differ on the one hand for the samples generated following valid data corresponding to steady sounds and on the other hand for the samples generated following valid data corresponding to non-steady sounds.
  5. Method according to one of the preceding claims, characterized in that the content of memories that are used for the decoding processing is updated as a function of the synthesis samples generated.
  6. Method according to Claim 5, characterized in that a coding analogous to that implemented at the transmitter is implemented at least partially on the synthesized samples, optionally followed by an at least partial decoding operation, the data obtained serving to regenerate the memories of the decoder.
  7. Method according to Claim 6, characterized in that the first erased frame is regenerated by means of this coding-decoding operation, utilizing the content of the memories of the decoder prior to the interruption, when said memories contain information that can be utilized in this operation.
  8. Method according to one of the preceding claims, characterized in that an excitation signal is generated as input to the short-term prediction operator, which signal, in a voiced zone, is the sum of a harmonic component and a weakly harmonic or non-harmonic component, and, in a non-voiced zone, is limited to a non-harmonic component.
  9. Method according to Claim 8, characterized in that the harmonic component is obtained by implementing a filtering by means of the long-term prediction operator applied to a residual signal calculated by implementing an inverse short-term filtering on the stored samples.
  10. Method according to Claim 9, characterized in that the other component is determined with the aid of a long-term prediction operator to which pseudo-random disturbances are applied.
  11. Method according to one of Claims 8 to 10, characterized in that in order to generate a voiced excitation signal, the harmonic component is limited to the low frequencies of the spectrum, while the other component is limited to the high frequencies.
  12. Method according to one of the preceding claims, characterized in that the long-term prediction operator is determined from the stored valid frame samples, with a number of samples used for this estimation varying between a minimum value and a value that is equal to at least twice the pitch period estimated for the voiced sound.
  13. Method according to one of the preceding claims, characterized in that the residual signal is processed in a non-linear manner in order to eliminate amplitude peaks.
  14. Method according to one of the preceding claims, characterized in that voice activity is detected by estimating noise parameters and in that parameters of the synthesized signal are made to tend towards those of the estimated noise.
  15. Method according to Claim 14, characterized in that the spectral envelope of the noise of the valid decoded samples is estimated and a synthesized signal that evolves towards a signal possessing the same spectral envelope is generated.
  16. Method of processing sound signals, characterized in that a discrimination is implemented between the voiced sounds and the musical sounds, and when musical sounds are detected, a method is implemented according to one of the preceding claims without estimating a long-term prediction operator.
  17. Device for concealing transmission error in a digital audio signal which receives as input a decoded signal transmitted to it by a decoder and which generates samples that are missing or erroneous in this decoded signal, characterized in that it comprises processing means suitable for implementing the method according to one of the preceding claims.
  18. Transmission system comprising at least one coder, at least one transmission channel, a module suitable for detecting that transmitted data have been lost or are highly erroneous, at least one decoder and a device for concealing errors which receives the decoded signal, characterized in that this device for concealing errors is a device according to Claim 17.
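The gain control described in Claims 1 to 4 can be illustrated with a minimal sketch. Note that this is not the patented implementation: the exponential decay law, the decay constants, and the function name `attenuate` are hypothetical choices made here for illustration only, since the claims leave the exact adaptation law open.

```python
# Illustrative sketch only -- not the patented implementation.
# Sample-by-sample gain control of a concealment synthesis signal,
# in the spirit of Claims 1-4. The decay constants and the
# steady/non-steady distinction below are hypothetical.

def attenuate(synth, steady, decay_steady=0.9999, decay_transient=0.999):
    """Apply a gain updated for every sample that decreases
    progressively with the duration of concealment (Claim 3).
    A slower decay is assumed after steady sounds than after
    non-steady sounds (Claim 4)."""
    decay = decay_steady if steady else decay_transient
    gain = 1.0
    out = []
    for x in synth:
        out.append(gain * x)   # energy controlled sample by sample (Claim 1)
        gain *= decay          # gain adapted according to the decay law
    return out
```

For example, with an exaggerated transient decay of 0.5, three unit samples would come out as 1.0, 0.5, 0.25, showing the progressive attenuation of the synthesis signal.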
EP01969857A 2000-09-05 2001-09-05 Transmission error concealment in an audio signal Expired - Lifetime EP1316087B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
FR0011285A FR2813722B1 (en) 2000-09-05 2000-09-05 METHOD AND DEVICE FOR CONCEALING ERRORS AND TRANSMISSION SYSTEM COMPRISING SUCH A DEVICE
FR0011285 2000-09-05
PCT/FR2001/002747 WO2002021515A1 (en) 2000-09-05 2001-09-05 Transmission error concealment in an audio signal

Publications (2)

Publication Number Publication Date
EP1316087A1 EP1316087A1 (en) 2003-06-04
EP1316087B1 true EP1316087B1 (en) 2008-01-02

Family

ID=8853973

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01969857A Expired - Lifetime EP1316087B1 (en) 2000-09-05 2001-09-05 Transmission error concealment in an audio signal

Country Status (11)

Country Link
US (2) US7596489B2 (en)
EP (1) EP1316087B1 (en)
JP (1) JP5062937B2 (en)
AT (1) ATE382932T1 (en)
AU (1) AU2001289991A1 (en)
DE (1) DE60132217T2 (en)
ES (1) ES2298261T3 (en)
FR (1) FR2813722B1 (en)
HK (1) HK1055346A1 (en)
IL (2) IL154728A0 (en)
WO (1) WO2002021515A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2679228C2 (en) * 2013-09-30 2019-02-06 Конинклейке Филипс Н.В. Resampling audio signal for low-delay encoding/decoding

Families Citing this family (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030163304A1 (en) * 2002-02-28 2003-08-28 Fisseha Mekuria Error concealment for voice transmission system
FR2849727B1 (en) * 2003-01-08 2005-03-18 France Telecom METHOD FOR AUDIO CODING AND DECODING AT VARIABLE FLOW
DE60327371D1 (en) 2003-01-30 2009-06-04 Fujitsu Ltd DEVICE AND METHOD FOR HIDING THE DISAPPEARANCE OF AUDIOPAKETS, RECEIVER AND AUDIO COMMUNICATION SYSTEM
US7835916B2 (en) * 2003-12-19 2010-11-16 Telefonaktiebolaget Lm Ericsson (Publ) Channel signal concealment in multi-channel audio systems
KR100587953B1 (en) * 2003-12-26 2006-06-08 한국전자통신연구원 Packet loss concealment apparatus for high-band in split-band wideband speech codec, and system for decoding bit-stream using the same
JP4761506B2 (en) * 2005-03-01 2011-08-31 国立大学法人北陸先端科学技術大学院大学 Audio processing method and apparatus, program, and audio system
JP4819881B2 (en) * 2005-04-28 2011-11-24 シーメンス アクチエンゲゼルシヤフト Method and apparatus for suppressing noise
US7831421B2 (en) * 2005-05-31 2010-11-09 Microsoft Corporation Robust decoder
US8620644B2 (en) * 2005-10-26 2013-12-31 Qualcomm Incorporated Encoder-assisted frame loss concealment techniques for audio coding
US7805297B2 (en) 2005-11-23 2010-09-28 Broadcom Corporation Classification-based frame loss concealment for audio signals
US8417185B2 (en) 2005-12-16 2013-04-09 Vocollect, Inc. Wireless headset and method for robust voice data communication
JP5142727B2 (en) * 2005-12-27 2013-02-13 パナソニック株式会社 Speech decoding apparatus and speech decoding method
US7773767B2 (en) 2006-02-06 2010-08-10 Vocollect, Inc. Headset terminal with rear stability strap
US7885419B2 (en) 2006-02-06 2011-02-08 Vocollect, Inc. Headset terminal with speech functionality
JP4678440B2 (en) * 2006-07-27 2011-04-27 日本電気株式会社 Audio data decoding device
US8015000B2 (en) * 2006-08-03 2011-09-06 Broadcom Corporation Classification-based frame loss concealment for audio signals
BRPI0718423B1 (en) * 2006-10-20 2020-03-10 France Telecom METHOD FOR SYNTHESIZING A DIGITAL AUDIO SIGNAL, DIGITAL AUDIO SIGNAL SYNTHESIS DEVICE, DEVICE FOR RECEIVING A DIGITAL AUDIO SIGNAL, AND MEMORY OF A DIGITAL AUDIO SIGNAL SYNTHESIS DEVICE
EP1921608A1 (en) * 2006-11-13 2008-05-14 Electronics And Telecommunications Research Institute Method of inserting vector information for estimating voice data in key re-synchronization period, method of transmitting vector information, and method of estimating voice data in key re-synchronization using vector information
KR100862662B1 (en) 2006-11-28 2008-10-10 삼성전자주식회사 Method and Apparatus of Frame Error Concealment, Method and Apparatus of Decoding Audio using it
JP4504389B2 (en) * 2007-02-22 2010-07-14 富士通株式会社 Concealment signal generation apparatus, concealment signal generation method, and concealment signal generation program
WO2008108080A1 (en) * 2007-03-02 2008-09-12 Panasonic Corporation Audio encoding device and audio decoding device
US7853450B2 (en) * 2007-03-30 2010-12-14 Alcatel-Lucent Usa Inc. Digital voice enhancement
US8126707B2 (en) * 2007-04-05 2012-02-28 Texas Instruments Incorporated Method and system for speech compression
JP5302190B2 (en) * 2007-05-24 2013-10-02 パナソニック株式会社 Audio decoding apparatus, audio decoding method, program, and integrated circuit
KR100906766B1 (en) * 2007-06-18 2009-07-09 한국전자통신연구원 Apparatus and method for transmitting/receiving voice capable of estimating voice data of re-synchronization section
WO2009047461A1 (en) * 2007-09-21 2009-04-16 France Telecom Transmission error dissimulation in a digital signal with complexity distribution
FR2929466A1 (en) * 2008-03-28 2009-10-02 France Telecom DISSIMULATION OF TRANSMISSION ERROR IN A DIGITAL SIGNAL IN A HIERARCHICAL DECODING STRUCTURE
CN101588341B (en) * 2008-05-22 2012-07-04 华为技术有限公司 Lost frame hiding method and device thereof
KR20090122143A (en) * 2008-05-23 2009-11-26 엘지전자 주식회사 A method and apparatus for processing an audio signal
MX2011000375A (en) * 2008-07-11 2011-05-19 Fraunhofer Ges Forschung Audio encoder and decoder for encoding and decoding frames of sampled audio signal.
USD605629S1 (en) 2008-09-29 2009-12-08 Vocollect, Inc. Headset
JP2010164859A (en) * 2009-01-16 2010-07-29 Sony Corp Audio playback device, information reproduction system, audio reproduction method and program
CN101609677B (en) * 2009-03-13 2012-01-04 华为技术有限公司 Preprocessing method, preprocessing device and preprocessing encoding equipment
US8160287B2 (en) 2009-05-22 2012-04-17 Vocollect, Inc. Headset with adjustable headband
US8438659B2 (en) 2009-11-05 2013-05-07 Vocollect, Inc. Portable computing device and headset interface
PT2515299T (en) * 2009-12-14 2018-10-10 Fraunhofer Ges Forschung Vector quantization device, voice coding device, vector quantization method, and voice coding method
TR201903388T4 (en) 2011-02-14 2019-04-22 Fraunhofer Ges Forschung Encoding and decoding the pulse locations of parts of an audio signal.
PL2550653T3 (en) 2011-02-14 2014-09-30 Fraunhofer Ges Forschung Information signal representation using lapped transform
SG192748A1 (en) 2011-02-14 2013-09-30 Fraunhofer Ges Forschung Linear prediction based coding scheme using spectral domain noise shaping
CA2827266C (en) 2011-02-14 2017-02-28 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for coding a portion of an audio signal using a transient detection and a quality result
MX2013009344A (en) 2011-02-14 2013-10-01 Fraunhofer Ges Forschung Apparatus and method for processing a decoded audio signal in a spectral domain.
AR085218A1 (en) * 2011-02-14 2013-09-18 Fraunhofer Ges Forschung APPARATUS AND METHOD FOR HIDDEN ERROR UNIFIED VOICE WITH LOW DELAY AND AUDIO CODING
US8849663B2 (en) * 2011-03-21 2014-09-30 The Intellisis Corporation Systems and methods for segmenting and/or classifying an audio signal from transformed audio information
US9142220B2 (en) 2011-03-25 2015-09-22 The Intellisis Corporation Systems and methods for reconstructing an audio signal from transformed audio information
US9026434B2 (en) * 2011-04-11 2015-05-05 Samsung Electronic Co., Ltd. Frame erasure concealment for a multi rate speech and audio codec
US8620646B2 (en) 2011-08-08 2013-12-31 The Intellisis Corporation System and method for tracking sound pitch across an audio signal using harmonic envelope
US8548803B2 (en) 2011-08-08 2013-10-01 The Intellisis Corporation System and method of processing a sound signal including transforming the sound signal into a frequency-chirp domain
US9183850B2 (en) 2011-08-08 2015-11-10 The Intellisis Corporation System and method for tracking sound pitch across an audio signal
US20130144632A1 (en) 2011-10-21 2013-06-06 Samsung Electronics Co., Ltd. Frame error concealment method and apparatus, and audio decoding method and apparatus
WO2013141638A1 (en) 2012-03-21 2013-09-26 삼성전자 주식회사 Method and apparatus for high-frequency encoding/decoding for bandwidth extension
US9123328B2 (en) * 2012-09-26 2015-09-01 Google Technology Holdings LLC Apparatus and method for audio frame loss recovery
US20150302892A1 (en) * 2012-11-27 2015-10-22 Nokia Technologies Oy A shared audio scene apparatus
US9437203B2 (en) * 2013-03-07 2016-09-06 QoSound, Inc. Error concealment for speech decoder
FR3004876A1 (en) * 2013-04-18 2014-10-24 France Telecom FRAME LOSS CORRECTION BY INJECTION OF WEIGHTED NOISE.
SG10201609186UA (en) 2013-10-31 2016-12-29 Fraunhofer Ges Forschung Audio Decoder And Method For Providing A Decoded Audio Information Using An Error Concealment Modifying A Time Domain Excitation Signal
PT3063760T (en) 2013-10-31 2018-03-22 Fraunhofer Ges Forschung Audio decoder and method for providing a decoded audio information using an error concealment based on a time domain excitation signal
US9437211B1 (en) * 2013-11-18 2016-09-06 QoSound, Inc. Adaptive delay for enhanced speech processing
EP2922055A1 (en) * 2014-03-19 2015-09-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus, method and corresponding computer program for generating an error concealment signal using individual replacement LPC representations for individual codebook information
EP2922054A1 (en) * 2014-03-19 2015-09-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus, method and corresponding computer program for generating an error concealment signal using an adaptive noise estimation
EP2922056A1 (en) 2014-03-19 2015-09-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus, method and corresponding computer program for generating an error concealment signal using power compensation
TWI602172B (en) 2014-08-27 2017-10-11 弗勞恩霍夫爾協會 Encoder, decoder and method for encoding and decoding audio content using parameters for enhancing a concealment
CN107004417B (en) * 2014-12-09 2021-05-07 杜比国际公司 MDCT domain error concealment
US9842611B2 (en) 2015-02-06 2017-12-12 Knuedge Incorporated Estimating pitch using peak-to-peak distances
US9870785B2 (en) 2015-02-06 2018-01-16 Knuedge Incorporated Determining features of harmonic signals
US9922668B2 (en) 2015-02-06 2018-03-20 Knuedge Incorporated Estimating fractional chirp rate with multiple frequency representations
RU2711108C1 (en) * 2016-03-07 2020-01-15 Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. Error concealment unit, an audio decoder and a corresponding method and a computer program subjecting the masked audio frame to attenuation according to different attenuation coefficients for different frequency bands
MX2018010756A (en) * 2016-03-07 2019-01-14 Fraunhofer Ges Forschung Error concealment unit, audio decoder, and related method and computer program using characteristics of a decoded representation of a properly decoded audio frame.
EP3553777B1 (en) * 2018-04-09 2022-07-20 Dolby Laboratories Licensing Corporation Low-complexity packet loss concealment for transcoded audio signals
US10763885B2 (en) 2018-11-06 2020-09-01 Stmicroelectronics S.R.L. Method of error concealment, and associated device
CN111063362B (en) * 2019-12-11 2022-03-22 中国电子科技集团公司第三十研究所 Digital voice communication noise elimination and voice recovery method and device

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2746033B2 (en) * 1992-12-24 1998-04-28 日本電気株式会社 Audio decoding device
CA2142391C (en) * 1994-03-14 2001-05-29 Juin-Hwey Chen Computational complexity reduction during frame erasure or packet loss
US5574825A (en) * 1994-03-14 1996-11-12 Lucent Technologies Inc. Linear prediction coefficient generation during frame erasure or packet loss
CA2177413A1 (en) * 1995-06-07 1996-12-08 Yair Shoham Codebook gain attenuation during frame erasures
US5699485A (en) * 1995-06-07 1997-12-16 Lucent Technologies Inc. Pitch delay modification during frame erasures
US5732389A (en) * 1995-06-07 1998-03-24 Lucent Technologies Inc. Voiced/unvoiced classification of speech for excitation codebook selection in celp speech decoding during frame erasures
EP2154681A3 (en) * 1997-12-24 2011-12-21 Mitsubishi Electric Corporation Method and apparatus for speech decoding
FR2774827B1 (en) * 1998-02-06 2000-04-14 France Telecom METHOD FOR DECODING A BIT STREAM REPRESENTATIVE OF AN AUDIO SIGNAL
US6188980B1 (en) * 1998-08-24 2001-02-13 Conexant Systems, Inc. Synchronized encoder-decoder frame concealment using speech coding parameters including line spectral frequencies and filter coefficients
US6240386B1 (en) * 1998-08-24 2001-05-29 Conexant Systems, Inc. Speech codec employing noise classification for noise compensation
US6449590B1 (en) * 1998-08-24 2002-09-10 Conexant Systems, Inc. Speech encoder using warping in long term preprocessing
US6556966B1 (en) * 1998-08-24 2003-04-29 Conexant Systems, Inc. Codebook structure for changeable pulse multimode speech coding
JP3365360B2 (en) * 1999-07-28 2003-01-08 日本電気株式会社 Audio signal decoding method, audio signal encoding / decoding method and apparatus therefor
US7590525B2 (en) * 2001-08-17 2009-09-15 Broadcom Corporation Frame erasure concealment for predictive speech coding based on extrapolation of speech waveform

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2679228C2 (en) * 2013-09-30 2019-02-06 Конинклейке Филипс Н.В. Resampling audio signal for low-delay encoding/decoding
US10566004B2 (en) 2013-09-30 2020-02-18 Koninklijke Philips N.V. Resampling an audio signal for low-delay encoding/decoding

Also Published As

Publication number Publication date
US8239192B2 (en) 2012-08-07
HK1055346A1 (en) 2004-01-02
DE60132217D1 (en) 2008-02-14
IL154728A0 (en) 2003-10-31
DE60132217T2 (en) 2009-01-29
US20100070271A1 (en) 2010-03-18
US20040010407A1 (en) 2004-01-15
WO2002021515A1 (en) 2002-03-14
US7596489B2 (en) 2009-09-29
EP1316087A1 (en) 2003-06-04
FR2813722B1 (en) 2003-01-24
JP2004508597A (en) 2004-03-18
ES2298261T3 (en) 2008-05-16
AU2001289991A1 (en) 2002-03-22
FR2813722A1 (en) 2002-03-08
JP5062937B2 (en) 2012-10-31
ATE382932T1 (en) 2008-01-15
IL154728A (en) 2008-07-08

Similar Documents

Publication Publication Date Title
EP1316087B1 (en) Transmission error concealment in an audio signal
EP2277172B1 (en) Concealment of transmission error in a digital signal in a hierarchical decoding structure
DK1509903T3 (en) METHOD AND APPARATUS FOR EFFECTIVELY HIDDEN FRAMEWORK IN LINEAR PREDICTIVE-BASED SPEECH CODECS
JP5149198B2 (en) Method and device for efficient frame erasure concealment within a speech codec
EP2026330B1 (en) Device and method for lost frame concealment
EP2080195B1 (en) Synthesis of lost blocks of a digital audio signal
EP1051703B1 (en) Method for decoding an audio signal with transmission error correction
EP3175444B1 (en) Frame loss management in an fd/lpd transition context
EP2080194B1 (en) Attenuation of overvoicing, in particular for generating an excitation at a decoder, in the absence of information
KR100216018B1 (en) Method and apparatus for encoding and decoding of background sounds
EP2347411B1 (en) Pre-echo attenuation in a digital audio signal
EP3138095A1 (en) Improved frame loss correction with voice information
Tosun Dynamically adding redundancy for improved error concealment in packet voice coding
FR2830970A1 (en) Telephone channel transmission speech signal error sample processing has errors identified and preceding/succeeding valid frames found/samples formed following speech signal period and part blocks forming synthesised frame.
MX2008008477A (en) Method and device for efficient frame erasure concealment in speech codecs
KR19990024266A (en) Code reduction in the rate-of-arrival by unvoiced detection in the excitation linear predictive encoder

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20030324

AK Designated contracting states

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

APBN Date of receipt of notice of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNNOA2E

APBR Date of receipt of statement of grounds of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNNOA3E

APBV Interlocutory revision of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNIRAPE

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REF Corresponds to:

Ref document number: 60132217

Country of ref document: DE

Date of ref document: 20080214

Kind code of ref document: P

GBT Gb: translation of ep patent filed (gb section 77(6)(a)/1977)

Effective date: 20080408

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2298261

Country of ref document: ES

Kind code of ref document: T3

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080102

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080102

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080102

REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1055346

Country of ref document: HK

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080602

REG Reference to a national code

Ref country code: IE

Ref legal event code: FD4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080402

Ref country code: IE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080102

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080102

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20081003

BERE Be: lapsed

Owner name: FRANCE TELECOM

Effective date: 20080930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080930

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080102

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080930

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080905

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080102

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080403

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 16

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 17

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 18

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20200819

Year of fee payment: 20

Ref country code: GB

Payment date: 20200819

Year of fee payment: 20

Ref country code: DE

Payment date: 20200819

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20200824

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: ES

Payment date: 20201001

Year of fee payment: 20

REG Reference to a national code

Ref country code: DE

Ref legal event code: R071

Ref document number: 60132217

Country of ref document: DE

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20

Expiry date: 20210904

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20210904

REG Reference to a national code

Ref country code: ES

Ref legal event code: FD2A

Effective date: 20211228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20210906