MXPA99009122A - Method for decoding an audio signal with transmission error correction - Google Patents

Method for decoding an audio signal with transmission error correction

Info

Publication number
MXPA99009122A
MXPA99009122A (application MXPA/A/1999/009122A, also referenced as MX9909122A)
Authority
MX
Mexico
Prior art keywords
section
filter
estimated
synthesis
synthesis filter
Prior art date
Application number
MXPA/A/1999/009122A
Other languages
Spanish (es)
Inventor
Proust Stephane
Original Assignee
France Telecom
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by France Telecom filed Critical France Telecom
Publication of MXPA99009122A publication Critical patent/MXPA99009122A/en

Links

Abstract

An audio signal coded by successive frames is represented by a bit stream (F) received with data (BFI) signalling possible erased frames. For each frame, the method consists in filtering the excitation signal (Ek(n)), formed on the basis of excitation parameters (EX(n)) recovered from the bit stream (valid frame) or estimated otherwise (erased frame), by means of a synthesis filter (22) to obtain a decoded signal (Ŝn(t)). A linear prediction analysis is carried out on the decoded signal obtained up to the preceding frame to estimate a synthesis filter relating to the current frame. The synthesis filters used as long as no frame is erased conform to the estimated synthesis filters. If a frame n0 is erased, the synthesis filter used for a subsequent frame n0+i is determined by a weighted combination of the filter estimated in relation to the frame n0+i with at least one synthesis filter used since the frame n0.

Description

METHOD FOR DECODING AN AUDIO SIGNAL WITH CORRECTION OF TRANSMISSION ERRORS

The present invention relates to the field of digital coding of audio signals. It relates more particularly to a decoding method used to reconstitute an audio signal encoded by a method using a short-term synthesis filter with backward adaptation, or "LPC backward" filter.

Predictive block coding systems analyze the successive frames of audio signal samples (speech or music in general) to be encoded, in order to extract a certain number of parameters for each of these frames. These parameters are quantized to form a bit stream sent over a transmission channel. Depending on the quality of this channel and the type of transport, disturbances can affect the transmitted signal and produce errors in the bit stream received by the decoder. These errors can occur in isolation in the bit stream, but they occur more frequently in bursts, especially in the case of highly disturbed mobile radio channels or of packet transmission networks. In that case it is a whole packet of bits, corresponding to one (or several) signal frame(s), that is erroneous or not received.
Frequently, the transmission system used makes it possible to detect erroneous or missing frames at the decoder. Procedures known as "erased frame recovery" are then used. These procedures allow the decoder to extrapolate the samples of the missing signal from samples reconstituted in the frames preceding, and optionally following, the erased zones.

The present invention aims to improve erased frame recovery techniques, so as to limit to a large extent the subjective degradation of the signal perceived at the decoder in the presence of erased frames. The invention is more particularly concerned with predictive coders using, permanently or intermittently, a linear prediction filter calculated backwards on the synthesis signal, a technique generally referred to as "LPC backward analysis" in the literature, "LPC" standing for Linear Prediction Coding and "backward" indicating that the analysis is carried out on the signal preceding the current frame. This technique is particularly sensitive to transmission errors in general and to frame erasures in particular.

Among linear prediction coding systems, CELP ("Code-Excited Linear Predictive") coders are the most widespread. Backward LPC analysis in a CELP coder was used for the first time in the LD-CELP coder adopted by the ITU-T (see ITU-T Recommendation G.728). This coder made it possible to reduce the bit rate from 64 kbit/s to 16 kbit/s without degradation of the subjectively perceived quality.

Backward LPC analysis consists in carrying out the LPC analysis not on the current frame of the original audio signal, but on the synthesis signal. In practice, the analysis is performed on the samples of the synthesis signal of the frames preceding the current frame, since this signal is available at the same time in the encoder (by means of a local decoder, generally present in analysis-by-synthesis coders) and in the remote decoder. Since this analysis is carried out both in the encoder and in the decoder, the LPC coefficients obtained do not need to be transmitted.

Compared with the more classical "LPC forward" analysis, in which the linear prediction is carried out on the encoder input signal, backward LPC analysis frees up bit rate, which can be used for example to enrich the excitation dictionaries in the case of CELP. It also allows, without increasing the bit rate, a considerable increase in the order of the analysis: the LPC synthesis filter typically has 50 coefficients for the LD-CELP coder, against 10 coefficients for most coders using forward LPC analysis. Thanks to this higher LPC filter order, backward LPC analysis makes it possible to better model musical signals, whose spectrum is noticeably richer than that of speech signals. Another reason why this technique is well suited to the coding of musical signals is that these signals generally have a more stationary spectrum than speech signals, which improves the performance of backward LPC analysis.
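As an illustration only (not part of the original disclosure), the sketch below shows one standard way of carrying out such a backward LPC analysis: the filter A(z) is estimated by the autocorrelation method and the Levinson-Durbin recursion, applied to past synthesis samples that both the coder and the decoder already hold. The window, the order and the function name are illustrative choices, not taken from the patent.

```python
import numpy as np

def backward_lpc(synthesis_history: np.ndarray, order: int = 50) -> np.ndarray:
    """Estimate backward LPC coefficients a_1..a_K of A(z) = 1 + sum_k a_k z^-k
    from past synthesis samples (autocorrelation method + Levinson-Durbin)."""
    x = synthesis_history * np.hamming(len(synthesis_history))  # analysis window (illustrative)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    r[0] += 1e-9  # guard against an all-zero (silent) analysis interval

    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):          # Levinson-Durbin recursion
        acc = r[i]
        for j in range(1, i):
            acc += a[j] * r[i - j]
        k = -acc / err
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)
    return a[1:]                            # a_1 .. a_order
```

Because the same computation is run on the same synthesis samples at both ends of the link, the resulting coefficients never need to be transmitted; this is precisely what frees up bit rate, but also what makes the scheme sensitive to frame erasures.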
However, good performance of backward LPC analysis requires three conditions: (i) a good quality of the synthesis signal, which must be close to the original signal. This imposes a relatively high coding bit rate; 13 kbit/s seems to be the lower limit given the current quality of CELP coders. (ii) A frame of reduced length, or a sufficiently stationary signal. There is in effect a delay of one frame between the signal analyzed and the signal to be encoded, so the frame length must be small with respect to the average stationarity time of the signal. (iii) Few transmission errors between the encoder and the decoder. If the synthesis signals differ, the encoder and the decoder do not compute the same filter; significant divergences may appear and grow, even in the absence of any new disturbance.

The sensitivity of coders/decoders with backward LPC analysis to transmission errors comes mainly from the following recursive phenomenon: the difference between the synthesis signal generated at the encoder (local decoder) and the synthesis signal reconstructed at the decoder by an erased frame recovery device causes a difference between the backward LPC filter calculated in the decoder and the one calculated in the encoder for the following frame, since these filters are calculated on different signals. These filters are used in turn to generate the synthesis signals of the next frame, which will thus differ between the encoder and the decoder. The phenomenon can propagate, amplify and cause serious and irreversible divergences between the encoder and the decoder. Since backward LPC filters generally have a high order (30 to 50 coefficients), their contribution to the spectrum of the synthesis signal is important (high prediction gains).

Numerous coding algorithms use erased frame recovery techniques. The decoder is informed of the occurrence of an erased frame in one way or another (for example, in the case of mobile radio systems, by a frame erasure flag coming from the channel decoder, which detects transmission errors and can correct some of them). The purpose of erased frame recovery devices is to extrapolate the samples of the erased frame from one or several of the last preceding frames considered as valid. Certain devices extrapolate these samples by waveform substitution techniques that directly take samples from the decoded past signal (see D.J. Goodman et al.: "Waveform Substitution Techniques for Recovering Missing Speech Segments in Packet Voice Communications", IEEE Trans. on ASSP, Vol. ASSP-34, No. 6, December 1986).

In the case of predictive coders, of the CELP type for example, the synthesis model used to synthesize the valid frames is reused to replace the samples of the erased frames. The erased frame recovery procedure must then provide the parameters necessary for the synthesis, which are not available for the erased frames (see, for example, ITU-T Recommendations G.723.1 and G.729). Certain parameters manipulated or encoded by predictive coders have a strong inter-frame correlation. This is especially the case of the LPC parameters, and of the long-term prediction parameters (LTP delay and associated gain) for voiced sounds. Because of this correlation, it is more advantageous to reuse the parameters of the last valid frame to synthesize the erased frame than to use erroneous or random parameters.
For the CELP coding algorithm, the parameters of the erased frame are obtained in the following way:
- the LPC filter is obtained from the LPC parameters of the last valid frame, either by simply copying the parameters or by introducing some damping;
- a voiced/unvoiced detection determines the degree of voicing of the signal at the erased frame (see ITU-T Recommendation G.723.1); in the unvoiced case, an excitation signal is generated in a partially random manner, for example by random selection of a codeword and reuse of the slightly damped gain of the past excitation (see ITU-T Recommendation G.729), or by random selection in the past excitation (see ITU-T Recommendation G.728);
- in the voiced case, the LTP delay is usually the one calculated for the previous frame, possibly with a slight "jitter" to avoid an overly prolonged resonant sound, and the LTP gain is taken very close to or equal to 1. The excitation signal is generally limited to the long-term prediction made from the past excitation.

In the case of a coding system using forward LPC analysis, the parameters of the LPC filter are extrapolated in a simple manner from the parameters of the preceding frame: the LPC filter used for the first erased frame is usually the filter of the previous frame, possibly damped (the spectral envelope is made slightly flatter, and the prediction gain decreases). This damping can be obtained by means of a spectral expansion coefficient applied to the filter coefficients or, if these coefficients are represented by LSPs (line spectrum pairs), by imposing a minimum separation of the line spectrum pairs (ITU-T Recommendation G.723.1).

The spectral expansion technique was proposed in the case of the coder of ITU-T Recommendation G.728, which employs a backward LPC analysis: for the first erased frame, a set of LPC parameters is first calculated on the last valid synthesis signal. An expansion factor of 0.97 is applied to this filter, a factor that is iteratively multiplied by 0.97 for each new erased frame. It will be noted that this technique is applied only to erased frames: from the first non-erased frame that follows, the LPC parameters used by the decoder are those normally calculated, i.e. on the synthesis signal. In the case of forward LPC analysis, there is no phenomenon of error memorization in the LPC filters, except when the quantization of the LPC filters uses a prediction (in which case mechanisms allowing the resynchronization of the predictor after a certain number of valid frames are provided, using leakage factors in the prediction, or a prediction of the MA type). In the case of backward analysis, the error propagates through the deviation of the erroneous synthesis signal that is used in the decoder to generate the LPC filters of the valid frames following the erased zone. Improving the synthesis signal produced after the erased frame (extrapolation of the excitation signal and of the gains) is thus a means of ensuring that the following LPC filters (calculated on that synthesis signal) will be closer to those calculated in the encoder.
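The spectral expansion (bandwidth expansion) damping mentioned above can be sketched as follows; the factor 0.97 is the value cited above for the G.728-style concealment, while the function names are illustrative and not taken from the patent.

```python
import numpy as np

def expand_lpc(a: np.ndarray, gamma: float = 0.97) -> np.ndarray:
    """Apply a spectral expansion factor to LPC coefficients a_1..a_K:
    a_k -> gamma**k * a_k, which flattens the spectral envelope and
    lowers the prediction gain."""
    k = np.arange(1, len(a) + 1)
    return a * gamma ** k

def conceal_filter(last_valid_a: np.ndarray, n_erased: int, gamma: float = 0.97) -> np.ndarray:
    """Filter used for the n_erased-th consecutive erased frame: the filter computed
    on the last valid synthesis signal, damped once per erased frame."""
    a = last_valid_a.copy()
    for _ in range(n_erased):
        a = expand_lpc(a, gamma)
    return a
```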
The conditions (i) to (iii) mentioned hereinabove show that a pure backward analysis quickly reaches its limits when it is desired to operate at bit rates substantially below 16 kbit/s. Besides the decrease in the quality of the synthesis signal, which degrades the performance of the backward LPC analysis, it is often necessary, in order to reduce the bit rate, to adopt a longer frame length (from 10 to 30 ms). It is then observed that the degradation occurs mainly after spectral transitions and, more generally, in non-stationary zones. In stationary zones, and for globally more stationary signals such as music, backward LPC analysis retains a very large advantage over forward LPC analysis.

In order to preserve the advantages of backward analysis, especially its good performance on musical signals, while pursuing the reduction of the bit rate, coding systems with mixed "forward/backward" LPC analysis have been developed (see S. Proust et al.: "Dual Rate Low Delay CELP Coding (8 kbit/s, 16 kbit/s) using a Mixed Backward/Forward Adaptive LPC Prediction", Proc. of the IEEE Workshop on Speech Coding for Telecommunications, September 1995, pages 37-38; and French Patent Application No. 97 04684). Combining the two types of LPC analysis provides the advantages of both techniques: forward LPC analysis is used to encode transitions and non-stationary zones, while backward LPC analysis, of higher order, serves to encode the stationary zones. Inserting forward-coded frames between backward-coded frames also allows the encoder and the decoder to reconverge in case of transmission errors, and thus offers a robustness to these errors clearly superior to a pure backward coding. However, stationary signals are mostly coded in backward mode, for which the problem of transmission errors remains crucial.

These mixed forward/backward systems target, for example, multimedia applications over networks with limited or shared resources, or mobile communications of improved quality. For this type of application, the loss of packets of bits is very likely, which heavily penalizes techniques sensitive to frame losses such as backward LPC analysis. The present invention, which makes it possible to greatly reduce the effect of erased frames in systems using backward LPC analysis or mixed forward/backward LPC analysis, is particularly suited to this type of application. It should also be noted that other types of audio coding systems exist which involve both a forward LPC analysis and a backward LPC analysis. The synthesis filter may in particular be a combination (convolution of the impulse responses) of a forward LPC filter and a backward LPC filter (see EP-A-0 782 128).
The coefficients of the forward LPC filter are then calculated by the encoder and transmitted in quantized form, while the coefficients of the backward LPC filter are determined jointly by the encoder and the decoder, according to a backward LPC analysis process performed as explained above, after the synthesis signal has been subjected to inverse filtering by the forward LPC filter.

The object of the present invention is to improve the subjective quality of the speech signal restored by the decoder of a predictive block coding system using backward or mixed forward/backward LPC analysis, when one or more frames are erased because of a poor quality of the transmission channel or following the loss or non-reception of a packet in a packet transmission system.

The invention thus proposes, in the case of a system that constantly uses a backward LPC analysis, a method of decoding a bit stream representative of an audio signal encoded by successive frames, the bit stream being received with information indicating any erased frames, in which, for each frame, an excitation signal is formed from excitation parameters that are recovered from the bit stream if the frame is valid and estimated otherwise if the frame is erased, and the excitation signal is filtered by means of a synthesis filter to obtain a decoded audio signal; in which a linear prediction analysis is performed on the basis of the decoded audio signal obtained up to the previous frame to estimate at least in part a synthesis filter relating to the current frame, the synthesis filters successively used to filter the excitation signal as long as no frame is erased conforming to the estimated synthesis filters; and in which, if a frame n0 is erased, at least one synthesis filter used to filter the excitation signal relating to a subsequent frame n0+i is determined by means of a weighted combination of the synthesis filter estimated in relation to the frame n0+i and of at least one synthesis filter that has been used since the frame n0.

After the occurrence of one or several erased frames, and during a certain number of frames, the backward LPC filters estimated by the decoder on the last synthesis signal are therefore not those effectively used to reconstruct the synthesis signal. The decoder uses for its LPC synthesis a filter that depends on the backward filter thus estimated, but also on the filters used for the synthesis of one or more preceding frames, following the last filter calculated on a valid synthesis signal. This is done with the help of the weighted combination applied to the LPC filters after the erased frame, which performs a smoothing and makes it possible to force a certain spectral stationarity. This combination may vary according to the distance to the last valid frame transmitted. Smoothing the trajectory of the LPC filters used in synthesis after the occurrence of an erased frame has the effect of greatly limiting the divergence phenomena and thus of significantly improving the subjective quality of the decoded signal. The sensitivity of backward LPC analysis to transmission errors is mainly due to the divergence phenomenon explained above.
The main source of degradation is the progressive divergence between the filters calculated in the remote decoder and the filters calculated in the local decoder, a divergence that can create catastrophic distortions in the synthesis signal. It is therefore important to minimize the separation (in terms of spectral distance) between the two calculated filters, and to do so in such a way that this separation tends towards 0 when the number of error-free frames following the erased frame(s) increases (reconvergence property of the coding system). Backward filters, of generally high order, have a major influence on the spectrum of the synthesis signal. The convergence of the filters, favored by the invention, ensures the convergence of the synthesis signals. The subjective quality of the signal synthesized in the presence of erased frames is thereby improved.

If the frame n0+1 following an erased frame n0 is also an erased frame, the synthesis filter used to filter the excitation signal relating to the frame n0+1 is preferably determined from the synthesis filter used to filter the excitation signal relating to the frame n0. These two filters may in particular be identical. The second could also be determined by applying a spectral expansion coefficient as explained above.

In a preferred embodiment, the weighting coefficients used in said weighted combination depend on the number i of frames that separate the frame n0+i from the last erased frame n0, so that the synthesis filter used progressively approaches the estimated synthesis filter. In particular, if each synthesis filter used to filter the excitation signal relating to a frame n is represented by K parameters P̂k(n) (1 ≤ k ≤ K), the parameters P̂k(n0+i) of the synthesis filter used to filter the excitation signal relating to a frame n0+i, following i−1 valid frames (i ≥ 1) preceded by an erased frame n0, can be calculated according to the combination:

P̂k(n0+i) = [1 − α(i)]·Pk(n0+i) + α(i)·P̂k(n0)    (1)

where Pk(n0+i) designates the k-th parameter of the synthesis filter estimated in relation to the frame n0+i, and α(i) is a positive or zero coefficient, decreasing with i from a value α(1) = αmax at most equal to 1. The decrease of the coefficient α(i) makes it possible, in the first valid frames following an erased frame, to have a synthesis filter relatively close to the one used for the frame n0, which was generally determined in good conditions, and then to progressively lose the memory of this frame-n0 filter so as to approach the filter estimated for the frame n0+i.

The parameters P̂k(n) may be the coefficients of the synthesis filter, that is to say its impulse response. The parameters P̂k(n) may also be other representations of these coefficients, such as those classically used in linear prediction coders: reflection coefficients, LAR (log-area-ratio), PARCOR (partial correlation), LSP (line spectrum pairs)...

The coefficient α(i) for i greater than 1 can in particular be calculated by the recurrence:

α(i) = max{0, α(i−1) − β}    (2)

β being a coefficient comprised between 0 and 1.
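Relations (1) and (2) are simple enough to be written directly; the sketch below (illustrative names, not from the patent text) operates on whichever K-parameter representation of the filter is chosen (coefficients, LSPs, etc.).

```python
import numpy as np

def combined_filter(p_estimated: np.ndarray, p_used_n0: np.ndarray, alpha: float) -> np.ndarray:
    """Relation (1): P_hat_k(n0+i) = (1 - alpha) * Pk(n0+i) + alpha * P_hat_k(n0)."""
    return (1.0 - alpha) * p_estimated + alpha * p_used_n0

def next_alpha(alpha: float, beta: float) -> float:
    """Relation (2): alpha(i) = max(0, alpha(i-1) - beta), with 0 < beta < 1."""
    return max(0.0, alpha - beta)
```

Starting from alpha = αmax just after the erasure and decreasing it by β at each subsequent valid frame, the filter actually used slides from the last reliable filter towards the filter estimated on the decoded signal.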
In a preferred embodiment of the invention, the weighting coefficients used in the weighted combination depend on an estimate of a degree of spectral stationarity of the audio signal, so that, in the case of a weakly stationary signal, the synthesis filter used to filter the excitation signal relating to a frame n0+i following an erased frame n0 (i ≥ 1) is closer to the estimated synthesis filter than in the case of a highly stationary signal. This adapts the smoothing of the backward LPC filter, and the spectral stationarity that it induces, to a measure of the actual average spectral stationarity of the signal. The smoothing is increased (and thus the imposed spectral stationarity) when the actual stationarity of the signal rises, and decreased in the opposite case. In the case of strong spectral stationarity, successive backward filters vary very little; the successive filters can thus be strongly smoothed, which limits the risks of divergence and ensures the desired stationarity.

The degree of spectral stationarity of the audio signal can be estimated from information included in each valid frame of the bit stream. In certain systems, it may indeed be decided to devote part of the bit rate to the transmission of this type of information, which allows the decoder to determine whether the encoded signal is more or less stationary. Alternatively, the degree of spectral stationarity of the audio signal can be estimated from a comparative analysis of the synthesis filters successively used by the decoder to filter the excitation signal. The spectral stationarity measure can then be obtained with the help of various methods of measuring spectral distances between the backward LPC filters successively used by the decoder (for example the Itakura distance).

The degree of stationarity of the signal can be taken into account in the calculation of the parameters of the synthesis filter carried out in accordance with relation (1) hereinabove. The weighting coefficient α(i) for i greater than 1 is then an increasing function of the estimated degree of spectral stationarity of the audio signal. The filter used by the decoder then approaches the estimated filter more slowly when the stationarity is high than when it is low. In particular, when α(i) is calculated according to relation (2), the coefficient β can be a decreasing function of the estimated degree of spectral stationarity of the audio signal.

As set forth hereinabove, the method according to the invention applies to purely backward LPC analysis systems, for which the synthesis filter has a transfer function of the form 1/AB(z), where AB(z) is a polynomial in z^-1 whose coefficients are obtained by the decoder from the linear prediction analysis carried out on the decoded audio signal. It is equally applicable to systems in which backward LPC analysis is combined with forward LPC analysis, with a convolution of the impulse responses of the forward and backward LPC filters, in the manner described in EP-A-0 782 128. In this case, the synthesis filter has a transfer function of the form 1/[AF(z)·AB(z)], where AF(z) and AB(z) are polynomials in z^-1, the coefficients of the polynomial AF(z) being obtained from parameters included in the valid frames of the bit stream, and the coefficients of the polynomial AB(z) being obtained by the decoder from the linear prediction analysis carried out on a signal obtained by filtering the decoded audio signal by a filter with transfer function AF(z).
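For the mixed case just mentioned, a minimal sketch of the composite synthesis filtering 1/[AF(z)·AB(z)] is given below: the product of the two polynomials is the convolution of their coefficient vectors, and the excitation is then passed through the resulting all-pole filter. The use of scipy and the function name are assumptions made purely for illustration.

```python
import numpy as np
from scipy.signal import lfilter

def synthesize_mixed(excitation: np.ndarray, af: np.ndarray, ab: np.ndarray) -> np.ndarray:
    """Filter the excitation through 1/[AF(z)*AB(z)].
    af and ab hold the full coefficient vectors [1, c_1, ..., c_K] of AF(z) and AB(z)."""
    a_composite = np.convolve(af, ab)          # polynomial product AF(z)*AB(z)
    return lfilter([1.0], a_composite, excitation)
```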
In the framework of a coding system with mixed forward/backward LPC analysis, the present invention proposes a method of decoding a bit stream representative of an audio signal encoded by successive frames, the bit stream being received with information indicating any erased frames, each valid frame of the bit stream including information indicating which coding mode was applied to encode the audio signal relating to that frame, between a first coding mode, in which the frame contains spectral parameters, and a second coding mode, in which, for each frame, an excitation signal is formed from excitation parameters that are recovered from the bit stream if the frame is valid and estimated otherwise if the frame is erased, and the excitation signal is filtered by means of a synthesis filter to obtain a decoded audio signal, the synthesis filter used to filter the excitation signal being constructed from said spectral parameters if the bit stream indicates the first coding mode; in which a linear prediction analysis is performed on the basis of the decoded audio signal obtained up to the preceding frame to estimate at least in part a synthesis filter relating to the current frame; in which, as long as no frame is erased and the bit stream indicates the second coding mode, the synthesis filters successively used to filter the excitation signal conform to the estimated synthesis filters; and in which, if a frame n0 is erased, the bit stream having indicated the second coding mode for the preceding valid frame, and the frame n0 is followed by several valid frames for which the bit stream indicates the second coding mode, at least one synthesis filter used to filter the excitation signal relating to a subsequent frame n0+i is determined by means of a weighted combination of the synthesis filter estimated in relation to the frame n0+i and of at least one synthesis filter that has been used since the frame n0. These provisions address the case of erasures occurring in periods when the coder operates in backward mode, essentially in the same way as in the case of purely backward coding systems.
The preferred embodiments mentioned hereinabove for purely backward coding systems are directly transposable to the case of mixed forward/backward systems. It is interesting to note that the degree of spectral stationarity of the audio signal, when it is used, can be estimated from the information present in the bit stream to indicate, frame by frame, the coding mode of the audio signal. The estimated degree of spectral stationarity can in particular be deduced from a count of the frames processed according to the second coding mode and of the frames processed according to the first coding mode, belonging to a time interval preceding the current frame and having a duration of the order of N frames, where N is a predefined integer.

As regards an erasure occurring when the coder is about to switch from forward mode to backward mode, it is indicated that, if a frame n0 is erased, the bit stream having indicated the first coding mode (or, as the case may be, the second coding mode) for the preceding valid frame, and the frame n0 is followed by at least one valid frame for which the bit stream indicates the second coding mode, then the synthesis filter used to filter the excitation signal relating to the following frame n0+1 can be determined from the synthesis filter estimated in relation to the frame n0. The filter used to filter the excitation signal relating to the following frame n0+1 can in particular be taken identical to the synthesis filter estimated in relation to the frame n0.

Other features and advantages of the present invention will appear in the following description of non-limiting embodiments, with reference to the accompanying drawings, in which:
Figure 1 is a schematic diagram of an audio coder whose output bit stream can be decoded in accordance with the invention;
Figure 2 is a schematic diagram of an audio decoder using a backward LPC filter in accordance with the present invention;
Figure 3 is a flowchart of a procedure for estimating the spectral stationarity of the signal, applicable in the decoder of Figure 2; and
Figure 4 is a flowchart of the calculation of the backward LPC filter, applicable in the decoder of Figure 2.

The audio coder represented in Figure 1 is a coder with mixed forward/backward LPC analysis. The audio signal to be encoded, Sn(t), is received in the form of successive digital frames indexed by the integer n. Each frame consists of L samples. As an example, a frame can have a duration of 10 ms, i.e. L = 80 for a sampling frequency of 8 kHz. The coder comprises a synthesis filter 5, with transfer function 1/A(z), where A(z) is a polynomial in z^-1. This filter 5 is in principle identical to the synthesis filter used by the associated decoder. The filter 5 receives an excitation signal En(t) provided by a residual coding module 6, and thereby forms locally a version Ŝn(t) of the synthesis signal that the decoder produces in the absence of transmission errors. The excitation signal En(t) provided by the module 6 is characterized by excitation parameters EX(n). The coding performed by module 6 seeks to make the local synthesis signal Ŝn(t) as close as possible to the input signal Sn(t) in the sense of a certain criterion. This criterion conventionally corresponds to a minimization of the coding error Ŝn(t) − Sn(t) filtered by a perceptual weighting filter determined from the coefficients of the synthesis filter 5.
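The specific form of this perceptual weighting filter is not given here; one common choice in CELP coders, shown below only as an assumed example, is W(z) = A(z/γ1)/A(z/γ2) built from the same coefficients as the synthesis filter 5. The γ values and function name are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

def perceptual_weighting(error: np.ndarray, a: np.ndarray,
                         gamma1: float = 0.9, gamma2: float = 0.6) -> np.ndarray:
    """Filter the coding error through W(z) = A(z/gamma1) / A(z/gamma2),
    where a holds a_1..a_K of A(z); A(z/g) has coefficients a_k * g**k."""
    k = np.arange(1, len(a) + 1)
    num = np.concatenate(([1.0], a * gamma1 ** k))   # A(z/gamma1)
    den = np.concatenate(([1.0], a * gamma2 ** k))   # A(z/gamma2)
    return lfilter(num, den, error)
```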
The coding module 6 generally operates on blocks shorter than the frames (sub-frames). The notation EX(n) here denotes the set of excitation parameters determined by module 6 for the sub-frames of frame n. Conventionally, the coding module 6 can determine on the one hand long-term prediction parameters, namely a long-term prediction delay and an associated gain accounting for the pitch of the voice, and on the other hand a residual excitation sequence and an associated gain. The form of the residual excitation sequence depends on the type of coder considered. In the case of an MP-LPC type coder, it corresponds to a set of pulses whose positions and/or amplitudes are quantized. In the case of a CELP type coder, it corresponds to a codeword belonging to a predetermined dictionary.

The polynomial A(z), inverse of the transfer function of the synthesis filter 5, is of the form:

A(z) = 1 + Σ_{k=1}^{K} ak(n)·z^-k    (3)

where the ak(n) are the linear prediction coefficients determined for frame n. As symbolized by the switch 7 of Figure 1, they are provided either by a forward LPC analysis module 10 or by a backward LPC analysis module 12, according to the value of a bit d(n) determined by a decision module 8 that distinguishes the frames for which forward LPC analysis is performed (d(n) = 0) from the frames for which backward LPC analysis is performed (d(n) = 1).

The signal to be encoded Sn(t) is provided to the linear prediction analysis module 10, which performs the forward LPC analysis of the signal Sn(t). A storage module 11 receives the signal Sn(t) and stores it over an analysis time interval that typically covers several frames up to the current frame. The module 10 performs a linear prediction calculation of order KF (typically KF ≈ 10) over this interval of the signal Sn(t), to determine a linear prediction filter whose transfer function AF(z) is of the form:

AF(z) = 1 + Σ_{k=1}^{KF} PFk(n)·z^-k    (4)

where PFk(n) designates the prediction coefficient of order k obtained after the processing of frame n. The linear prediction analysis methods that can be used to calculate these coefficients PFk(n) are well known in the art of digital coding; reference may be made, for example, to the works "Digital Processing of Speech Signals" by L.R. Rabiner and R.W. Schafer, Prentice-Hall Int., 1978, and "Linear Prediction of Speech" by J.D. Markel and A.H. Gray, Springer Verlag, Berlin Heidelberg, 1976. When d(n) = 0 (forward mode), the coefficients PFk(n) calculated by module 10 are provided to the synthesis filter 5, i.e. K = KF and ak(n) = PFk(n) for 1 ≤ k ≤ K. The module 10 also proceeds to the quantization of the forward LPC filter. It thus determines quantization parameters Q(n) for each frame for which d(n) = 0. Different quantization methods can be applied. The parameters Q(n) determined for frame n can directly represent the coefficients PFk(n) of the filter; the quantization can also operate on the reflection coefficients, the LAR (log-area-ratio), the LSP (line spectrum pairs)... The coefficients PFk(n) that are provided to the filter 5 when d(n) = 0 correspond to the quantized values. The local synthesis signal Ŝn(t) is provided to the linear prediction analysis module 12, which performs the backward LPC analysis.
A storage module 13 receives the signal Ŝn(t) and stores it over an analysis time interval that typically covers several frames up to the frame preceding the current frame. Module 12 performs a linear prediction calculation of order KB (typically KB ≈ 50) over this interval of the synthesis signal, to determine a linear prediction filter whose transfer function AB(z) is of the form:

AB(z) = 1 + Σ_{k=1}^{KB} PBk(n)·z^-k    (5)

where PBk(n) designates the prediction coefficient of order k obtained after the processing of frame n−1. The prediction methods employed by the module 12 may be the same as those employed by the module 10. However, the module 12 does not need to quantize the filter AB(z). When d(n) = 1 (backward mode), the coefficients PBk(n) calculated by the module 12 are provided to the synthesis filter 5, that is to say K = KB and ak(n) = PBk(n) for 1 ≤ k ≤ K.

Each of the modules 10, 12 provides a prediction gain GF(n), GB(n), which it has maximized to obtain its respective prediction coefficients PFk(n), PBk(n). The decision module 8 analyzes the values of these gains GF(n), GB(n) as the frames proceed, in order to decide the instants where the coder will operate in forward mode and in backward mode. In general, when the gain GB(n) of the backward prediction is relatively high with respect to the gain GF(n) of the forward prediction, it can be assumed that the signal to be encoded is rather stationary. When this circumstance occurs over a large number of consecutive frames, it is judicious to operate the coder in backward mode, so module 8 takes d(n) = 1. Conversely, in non-stationary zones, it takes d(n) = 0. For a detailed forward/backward decision method, reference is made to French Patent Application No. 97 04684.

In Figure 1, reference 14 designates the output multiplexer of the coder, which forms the bit stream F. For each frame, F includes the forward/backward decision bit d(n). When d(n) = 0 (forward mode), frame n of the stream F includes the spectral parameters Q(n) that quantize the coefficients PFk(n) of the forward LPC filter. The remainder of the frame includes the excitation parameters EX(n) determined by module 6. When d(n) = 1 (backward mode), frame n of the stream F contains no spectral parameters Q(n). The output bit rate being the same, more bits are available for the encoding of the residual excitation. Module 6 can thus enrich the coding of the residue, either by allocating more bits to the quantization of certain parameters (LTP delay, gains...), or by increasing the size of the CELP dictionaries. As an example, the bit rate can be 11.8 kbit/s for an ACELP type coder (CELP with algebraic dictionaries) operating in the telephone band (300 - 3400 Hz), with 10 ms frames (L = 80), a forward LPC analysis of order KF = 10, a backward LPC analysis of order KB = 30, and a division of each frame into two sub-frames (the forward and backward LPC filters calculated for each frame being used in the processing of the second sub-frame, while in the processing of the first sub-frame an interpolation between these filters and those calculated for the preceding frame is used).

The decoder, whose principle diagram is shown in Figure 2, receives, in addition to the bit stream F, BFI information indicating the erased frames.
The output bit stream F of the coder is generally subjected to a channel coder that introduces redundancy according to a code having detection and/or correction capabilities for transmission errors. Upstream of the audio decoder, an associated channel decoder exploits this redundancy to detect transmission errors and optionally correct some of them. If the transmission of a frame is so bad that the correction capabilities of the channel decoder are insufficient, it activates the BFI indicator so that the audio decoder adopts the appropriate behavior.

In Figure 2, reference 20 designates the decoder input demultiplexer, which delivers, for each valid frame n of the received bit stream, the forward/backward decision bit d(n), the excitation parameters EX(n) and, if d(n) = 0, the spectral parameters Q(n). When a frame n is indicated as erased, the decoder considers that its coding mode is identical to that of the last valid frame; it therefore adopts the value d(n) = d(n−1). For a valid frame in forward mode (d(n) = 0 read from the stream F), module 21 calculates the coefficients PFk(n) of the forward LPC filter (1 ≤ k ≤ KF) from the received quantization indices Q(n). The switches 23, 24 being in the positions shown in Figure 2, the calculated coefficients PFk(n) are provided to the synthesis filter 22, whose transfer function is thus 1/A(z) = 1/AF(z), with AF(z) given by relation (4). If d(n) = 0 for an erased frame, the decoder continues to operate in forward mode, the synthesis filter 22 receiving KF coefficients ak(n) provided by an estimation module 36. In the case of a frame n in backward mode (d(n) = 1 read from the stream, or retained in case of erasure), the coefficients of the synthesis filter 22 are the coefficients P̂k(n) (1 ≤ k ≤ K = KB) determined by a module 25 for calculating the backward LPC filter, which will be described later. The transfer function of the synthesis filter 22 is then 1/A(z), with

A(z) = 1 + Σ_{k=1}^{KB} P̂k(n)·z^-k

The synthesis filter 22 receives for frame n an excitation signal En(t) delivered by a module 26 for synthesizing the LPC coding residue. For a valid frame n, the synthesis module 26 calculates the excitation signal En(t) from the excitation parameters EX(n) read from the stream, the switch 27 being in the position shown in Figure 2. In this case, the excitation signal En(t) produced by the synthesis module 26 is identical to the excitation signal En(t) delivered for the same frame by the module 6 of the coder. As in the coder, the way of calculating the excitation signal depends on the forward/backward decision bit d(n). The output signal Ŝn(t) of the filter 22 constitutes the synthesis signal obtained by the decoder. In a conventional manner, this synthesis signal can then be subjected to one or more shaping postfilters provided in the decoder (not shown).

The synthesis signal Ŝn(t) is provided to a linear prediction analysis module 30, which performs the backward LPC analysis in the same manner as the module 12 of the coder of Figure 1, to estimate a synthesis filter whose coefficients, denoted Pk(n) (1 ≤ k ≤ KB), are provided to the calculation module 25. The coefficients Pk(n) relating to frame n are obtained by taking into account the synthesis signal up to frame n−1. A storage module 31 receives the signal Ŝn(t) and stores it over the same analysis time interval as the module 13 of Figure 1.
The analysis module 30 then proceeds to the same calculations as the module 12 on the basis of the memorized synthesis signal. As long as no frame is erased, the module 25 delivers coefficients P̂k(n) equal to the estimated coefficients Pk(n) provided by the analysis module 30.
Consequently, when no frame is erased, the synthesis signal Ŝn(t) delivered by the decoder is exactly identical to the synthesis signal Ŝn(t) determined by the coder, provided, of course, that there are no erroneous bits in the valid frames of the stream F. The excitation parameters EX(n) received by the decoder, as well as the coefficients PFk(n) of the forward LPC filter if d(n) = 0, are memorized over at least one frame by the respective modules 33, 34, in order to be able to restore the excitation parameters and/or the forward LPC parameters if an erased frame occurs. The parameters then used are the estimates provided by the respective modules 35, 36 on the basis of the contents of the memories 33, 34 when the BFI information indicates an erased frame. The estimation methods usable by the modules 35 and 36 can be chosen from the methods mentioned above. In particular, the excitation parameters can be estimated by the module 35 by taking into account information on the more or less voiced character of the synthesis signal Ŝn(t), provided by a voiced/unvoiced detector 37.

The recovery of the coefficients of the backward LPC filter when an erased frame is signalled is handled by the calculation of the coefficients P̂k(n) performed by the module 25. This calculation advantageously depends on an estimate Istat(n) of the degree of stationarity of the audio signal, obtained by a stationarity estimation module 38. This module 38 can operate in accordance with the flowchart represented in Figure 3. According to this procedure, module 38 uses two counters whose values are denoted N0 and N1. Their ratio N1/N0 is representative of the proportion of backward-coded frames relative to forward-coded frames over a time interval defined by a number N, whose duration is of the order of N signal frames (typically N ≈ 100, i.e. a range of the order of 1 second). The degree of stationarity Istat(n) estimated for frame n is a function f of the numbers N0 and N1. It can in particular be a binary function, for example: f(N0, N1) = 1 if N1 > 4·N0 (highly stationary signal); f(N0, N1) = 0 if N1 ≤ 4·N0 (weakly stationary signal). If the energy E(Ŝn) of the synthesis signal Ŝn(t) delivered by the filter 22 over the current frame n is below a threshold chosen so that frames with too little energy are ignored (test 40), the counters N0 and N1 are not modified for frame n, and the module 38 directly calculates the degree of stationarity Istat(n) in step 41. Otherwise, it examines in test 42 the coding mode indicated for frame n (d(n) read from the stream, or d(n) = d(n−1) in case of erasure). If d(n) = 0, the counter N0 is incremented in step 43. If d(n) = 1, the counter N1 is incremented in step 44. The module 38 then calculates the degree of stationarity Istat(n) in step 41, unless the sum N0 + N1 reaches the number N (test 45), in which case the values of the two counters N0 and N1 are first divided by 2.
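The Figure 3 procedure can be summarized by the following sketch; the class name and the energy threshold are illustrative assumptions, while the counter logic follows steps 40 to 45 as described above.

```python
class StationarityEstimator:
    """Sketch of the Figure 3 procedure (module 38): counts forward-coded (N0) and
    backward-coded (N1) frames over a horizon of about N frames and derives a
    binary stationarity flag Istat(n)."""

    def __init__(self, horizon: int = 100, energy_threshold: float = 1e-4):
        self.N = horizon
        self.energy_threshold = energy_threshold
        self.n0 = 0   # frames coded in forward mode
        self.n1 = 0   # frames coded in backward mode

    def update(self, frame_energy: float, d: int) -> int:
        # Test 40: ignore frames with too little energy
        if frame_energy >= self.energy_threshold:
            if d == 0:
                self.n0 += 1                      # step 43
            else:
                self.n1 += 1                      # step 44
            if self.n0 + self.n1 >= self.N:       # test 45
                self.n0 //= 2
                self.n1 //= 2
        # Step 41: binary stationarity decision f(N0, N1)
        return 1 if self.n1 > 4 * self.n0 else 0
```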
The procedure for calculating the coefficients P̂k(n) (1 ≤ k ≤ KB) by means of the module 25 can follow the flowchart of Figure 4. It will be noted that this procedure is executed for every frame n, valid or erased, coded forward or backward. The calculated filter depends on a weighting coefficient α, which itself depends on the number of frames elapsed since the last erased frame and on the successively estimated degrees of stationarity. The index of the last erased frame preceding the current frame is denoted n0.

At the beginning of the processing relating to a frame n, the module 25 produces the KB coefficients P̂k(n) which, in the case where d(n) = 1, are provided to the filter 22 to synthesize the signal Ŝn(t) of frame n. If d(n) = 0, these coefficients P̂k(n) are simply calculated and memorized. This calculation is carried out in step 50 according to the relation:

P̂k(n) = (1 − α)·Pk(n) + α·P̂k(n0)    (6)

where the Pk(n) are the coefficients estimated by the module 30 in relation to frame n (that is to say, taking into account the synthesis signal up to frame n−1), the P̂k(n0) are the coefficients that the module 25 calculated in relation to the last erased frame n0, and α is the weighting coefficient, initialized to 0. The relation (6) corresponds to the relation (1) when at least one valid frame n0+i follows the erased frame n0 (i = 1, 2, ...).
If the frame n is valid (test 51), the module 25 examines the forward/backward decision bit d(n) read from the stream in step 52. If d(n) = 1, the module 25 calculates the new value of the coefficient α according to relation (2) in steps 53 to 57, the coefficient β being chosen as a decreasing function of the degree of stationarity Istat(n) estimated by the module 38 in relation to frame n. If Istat(n) = 0 in step 53 (weakly stationary signal), the coefficient α is decreased by a quantity β0 in step 54. If Istat(n) = 1 in step 53 (highly stationary signal), the coefficient α is decreased by a quantity β1 in step 55. In the case where the degree of stationarity Istat(n) is determined in a binary manner as explained above, the quantities β0 and β1 can be respectively equal to 0.5 and 0.1. In step 56, the new value of α is compared with 0. The processing relating to frame n is terminated if α ≥ 0. If α < 0, this coefficient α is set to 0 in step 57. In the case of a frame n coded forward (d(n) = 0 in step 52), the coefficient α is directly set to 0 in step 57.

In the case where the frame n is erased (test 51), the index n of the current frame is assigned to the index n0 that designates the last erased frame, and the coefficient α is initialized to its maximum value αmax in step 58 (0 < αmax ≤ 1). The maximum value αmax of the coefficient α may be less than 1. However, αmax = 1 is preferably chosen. In this way, when a frame n0 is erased, the next filter P̂k(n0+1) calculated by the module 25 corresponds to the filter that it calculated after the reception of the last valid frame. If several erased frames follow one another, the filter calculated by the module 25 remains the same as that calculated after the reception of the last valid frame.

If the first valid frame received after an erasure is coded forward (d(n0+1) = 0), the synthesis filter 22 receives the valid coefficients PFk(n0+1) calculated by the module 21 as well as a valid excitation signal. Consequently, the synthesized signal Ŝn0+1(t) is relatively reliable, as is the estimate Pk(n0+2) of the synthesis filter made by the analysis module 30. Thanks to the setting of the coefficient α to 0 in step 57, this estimate Pk(n0+2) can be adopted by the calculation module 25 for the next frame n0+2.

If the first valid frame received after the erasure is coded backward (d(n0+1) = 1), the synthesis filter 22 receives the coefficients P̂k(n0+1) for this valid frame. With the choice αmax = 1, it is completely avoided to take into account, in the calculation of these coefficients, the estimate Pk(n0+1) that was determined in an unreliable manner by the module 30 from the synthesis signal Ŝn0(t) of the erased frame n0 (Ŝn0(t) having been obtained by filtering an erratic excitation signal). If the following frames n0+2, ... are also coded backward, the synthesis filter used is smoothed with the help of the coefficient α, whose value is decreased more or less quickly depending on whether the signal is in a weakly stationary or a highly stationary zone. At the end of a certain number of frames (10 frames in the highly stationary case, and 2 frames in the weakly stationary case, with the indicated values of β0 and β1), the coefficient α becomes zero, that is to say that the filter P̂k(n0+i) used, if the coding mode is still backward, becomes identical to the filter Pk(n0+i) estimated by the module 30 from the synthesis signal.
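Gathering steps 50 to 58, the per-frame behavior of module 25 can be sketched as follows (illustrative names; the state is assumed to start with alpha = 0 and an arbitrary filter for p_n0, which is irrelevant while alpha is 0).

```python
def module25_step(state, frame_valid, d, p_estimated, istat,
                  alpha_max=1.0, beta0=0.5, beta1=0.1):
    """One frame of the Figure 4 procedure (module 25), as a sketch.

    state holds 'alpha' and 'p_n0' (the filter calculated for the last erased frame).
    p_estimated are the coefficients Pk(n) from the backward analysis (module 30).
    Returns the coefficients actually used/memorized for the frame, i.e. P_hat_k(n)."""
    # Step 50: relation (6)
    p_used = [(1.0 - state['alpha']) * pe + state['alpha'] * p0
              for pe, p0 in zip(p_estimated, state['p_n0'])]

    if not frame_valid:                              # test 51: erased frame
        state['p_n0'] = p_used                       # frame n becomes the last erased frame n0
        state['alpha'] = alpha_max                   # step 58
    elif d == 1:                                     # valid, backward-coded frame
        beta = beta0 if istat == 0 else beta1        # steps 53-55
        state['alpha'] = max(0.0, state['alpha'] - beta)   # relation (2), steps 56-57
    else:                                            # valid, forward-coded frame
        state['alpha'] = 0.0                         # step 57

    return p_used
```

With β0 = 0.5 and β1 = 0.1 this reproduces the 2-frame and 10-frame smoothing horizons mentioned above.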
In the foregoing, the example of a mixed forward/backward coding system has been described in detail. The use of the invention is very similar in the case of a purely backward coder:
- the output stream F contains neither the decision bit d(n) nor the spectral parameters Q(n), but only the excitation parameters EX(n);
- the functional units 7, 8, 10 and 11 of the coder of Figure 1 are not necessary, the coefficients PBk(n) calculated by the backward LPC analysis module 12 being used directly by the synthesis filter 5;
- the functional units 21, 23, 24, 34 and 36 of the decoder of Figure 2 are not necessary, the coefficients P̂k(n) calculated by the module 25 being used directly by the synthesis filter 22.
The decision bit d(n) not being available at the decoder, the degree of stationarity Istat(n), if it is used by the calculation module 25, must be calculated in another way. If the transmitted bit stream does not contain any particular information allowing the decoder to estimate the stationarity, this estimate may be based on a comparative analysis of the synthesis filters P̂k(n) successively calculated by the module 25. If the spectral distances measured between these successive filters are relatively small over a certain time interval, it can be estimated that the signal is highly stationary.

Claims (33)

1. A method of decoding a bit stream (F) representative of an audio signal (Sn(t)) encoded by successive frames, the bit stream being received with information (BFI) indicating any erased frames, in which, for each frame, an excitation signal (Ek(n)) is formed from excitation parameters (EX(n)) that are recovered from the bit stream if the frame is valid and estimated otherwise if the frame is erased, and the excitation signal is filtered by means of a synthesis filter (22) to obtain a decoded audio signal (Ŝn(t)), and in which a linear prediction analysis is performed on the basis of the decoded audio signal obtained up to the preceding frame to estimate at least in part a synthesis filter relating to the current frame, the synthesis filters successively used to filter the excitation signal as long as no frame is erased conforming to the estimated synthesis filters, characterized in that, if a frame n0 is erased, at least one synthesis filter used for filtering the excitation signal relating to a subsequent frame n0+i is determined by a weighted combination of the synthesis filter estimated in relation to the frame n0+i and of at least one synthesis filter that has been used since the frame n0.

2. A method according to claim 1, in which, if the frame n0+1 following an erased frame n0 is also an erased frame, the synthesis filter used to filter the excitation signal relating to the frame n0+1 is determined from the synthesis filter used to filter the excitation signal relating to the frame n0.

3. A method according to claim 1 or 2, in which the weighting coefficients (α(i), 1−α(i)) used in said weighted combination depend on the number i of frames separating the frame n0+i from the last erased frame n0, so that the synthesis filter used progressively approaches the estimated synthesis filter.

4. A method according to claim 3, in which each synthesis filter used to filter the excitation signal relating to a frame n is represented by K parameters P̂k(n) (1 ≤ k ≤ K), and in which the parameters P̂k(n0+i) of the synthesis filter used to filter the excitation signal relating to a frame n0+i, following i−1 valid frames (i ≥ 1) preceded by an erased frame n0, are calculated according to the combination: P̂k(n0+i) = [1 − α(i)]·Pk(n0+i) + α(i)·P̂k(n0), where Pk(n0+i) designates the k-th parameter of the synthesis filter estimated in relation to the frame n0+i, and α(i) is a positive or zero coefficient, decreasing with i from a value α(1) = αmax at most equal to 1.

5. A method according to claim 4, in which αmax = 1.

6. A method according to claim 4 or 5, in which the coefficient α(i) for i > 1 is calculated by the recurrence α(i) = max{0, α(i−1) − β}, β being a coefficient comprised between 0 and 1.

7. A method according to any one of claims 1 to 6, in which the weighting coefficients employed in said weighted combination depend on an estimate of a degree of spectral stationarity of the audio signal (Istat(n)), so that, in the case of a weakly stationary signal, the synthesis filter used to filter the excitation signal relating to a frame n0+i following an erased frame n0 (i ≥ 1) is closer to the estimated synthesis filter than in the case of a highly stationary signal.

8. A method according to claim 7, in which the degree of spectral stationarity of the audio signal (Istat(n)) is estimated from information included in each valid frame of the bit stream.
9. A method according to claim 7, in which the degree of spectral stationarity of the audio signal (Istat(n)) is estimated from a comparative analysis of the synthesis filters successively used to filter the excitation signal.

10. A method according to claim 4 and any one of claims 7 to 9, in which the weighting coefficient α(i) for i > 1 is an increasing function of the estimated degree of spectral stationarity of the audio signal (Istat(n)).

11. A method according to claims 6 and 10, in which the coefficient β is a decreasing function of the estimated degree of spectral stationarity of the audio signal (Istat(n)).
12. A method according to claim 11, in which the degree of spectral stationarity of the audio signal (Istat(n)) is estimated in a binary manner, the coefficient β being equal to 0.5 or 0.1 according to the estimated degree of spectral stationarity.

13. A method according to any one of claims 1 to 12, in which the synthesis filter (22) has a transfer function of the form 1/AB(z), where AB(z) is a polynomial in z^-1 whose coefficients (PBk(n)) are obtained from said linear prediction analysis carried out on the decoded audio signal (Ŝn(t)).

14. A method according to any one of claims 1 to 12, in which the synthesis filter (22) has a transfer function of the form 1/[AF(z)·AB(z)], where AF(z) and AB(z) are polynomials in z^-1, the coefficients (PFk(n)) of the polynomial AF(z) being obtained from parameters (Q(n)) included in the valid frames of the bit stream, and the coefficients (PBk(n)) of the polynomial AB(z) being obtained from said linear prediction analysis carried out on a signal obtained by filtering the decoded audio signal (Ŝn(t)) by a filter with transfer function AF(z).

15. A method of decoding a bit stream (F) representative of an audio signal (Sn(t)) encoded by successive frames, the bit stream being received with information (BFI) indicating any erased frames, each valid frame of the bit stream including information (d(n)) indicating which coding mode was applied to encode the audio signal relating to that frame, between a first coding mode, in which the frame contains spectral parameters (Q(n)), and a second coding mode, in which, for each frame, an excitation signal (Ek(n)) is formed from excitation parameters (EX(n)) that are recovered from the bit stream if the frame is valid and estimated otherwise if the frame is erased, and the excitation signal is filtered by means of a synthesis filter (22) to obtain a decoded audio signal (Ŝn(t)), the synthesis filter used to filter the excitation signal being constructed from said spectral parameters if the bit stream indicates the first coding mode, in which a linear prediction analysis is performed on the basis of the decoded audio signal obtained up to the preceding frame to estimate at least in part a synthesis filter relating to the current frame, and in which, as long as no frame is erased and the bit stream indicates the second coding mode, the synthesis filters successively used to filter the excitation signal conform to the estimated synthesis filters, characterized in that, if a frame n0 is erased, the bit stream having indicated the second coding mode for the preceding valid frame, and the frame n0 is followed by several valid frames for which the bit stream indicates the second coding mode, at least one synthesis filter used to filter the excitation signal relating to a subsequent frame n0+i is determined by a weighted combination of the synthesis filter estimated in relation to the frame n0+i and of at least one synthesis filter that has been used since the frame n0.

16. A method according to claim 15, in which, if a frame n0 is erased and followed by at least one valid frame for which the bit stream indicates the second coding mode, the synthesis filter used to filter the excitation signal relating to the following frame n0+1 is determined from the synthesis filter estimated in relation to the frame n0.
17. The method according to claim 15 or 16, in which, if two consecutive sections n0 and n0+1 are both erased, the binary stream having indicated the second coding mode for the preceding valid section, the synthesis filter used to filter the excitation signal relating to the section n0+1 is determined from the synthesis filter used to filter the excitation signal relating to the section n0.
18. The method according to any one of claims 15 to 17, in which the weighting coefficients (α(i), 1 − α(i)) used in said weighted combination depend on the number i of sections separating the section n0+i from the last erased section n0, so that the synthesis filter used progressively approaches the estimated synthesis filter.
19. The method according to claim 18, in which each synthesis filter used to filter the excitation signal relating to a section n for which the binary stream indicates the second coding mode is represented by K parameters Pk(n) (1 ≤ k ≤ K), and in which the parameters Pk(n0+i) of the synthesis filter used to filter the excitation signal relating to a section n0+i for which the binary stream indicates the second coding mode, following i−1 valid sections (i ≥ 1) preceded by an erased section n0, are calculated according to the combination: Pk(n0+i) = [1 − α(i)]·P̂k(n0+i) + α(i)·Pk(n0), where P̂k(n0+i) designates the k-th parameter of the synthesis filter estimated in relation to the section n0+i, and α(i) is a positive or zero coefficient, decreasing with i from a value α(1) = αmax at most equal to 1.
20. The method according to claim 19, in which αmax = 1.
21. The method according to claim 19 or 20, in which the coefficient α(i) for i > 1 is calculated by the recurrence α(i) = max{0, α(i−1) − β}, where β is a coefficient between 0 and 1.
22. The method according to any one of claims 15 to 21, in which the weighting coefficients used in said weighted combination depend on an estimate (Istat(n)) of a degree of spectral stationarity of the audio signal, so that, in the case of a non-stationary signal, the synthesis filter used to filter the excitation signal relating to a section n0+i following an erased section n0, and for which the binary stream indicates the second coding mode (i ≥ 1), is closer to the estimated synthesis filter than in the case of a highly stationary signal.
23. The method according to claim 22, in which the degree of spectral stationarity of the audio signal (Istat(n)) is estimated from information (d(n)) included in each valid section of the binary stream (F).
24. The method according to claim 23, in which the information from which the degree of spectral stationarity of the audio signal (Istat(n)) is estimated is the information (d(n)) indicating the coding mode of the audio signal.
25. The method according to claim 24, in which the estimated degree of spectral stationarity (Istat(n)) is deduced from a count of the sections treated according to the second coding mode and of the sections treated according to the first coding mode, belonging to a time interval which precedes the current section and has a duration of the order of N sections, where N is a predefined integer.
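The combination defined in claims 19 to 21 above amounts to a short interpolation routine. The sketch below is illustrative only (the claims leave open which representation the parameters Pk(n) use, e.g. prediction coefficients or line spectral frequencies); it applies the claim-19 formula with the claim-21 recurrence, with αmax = 1 and β = 0.5 as example values.

```python
import numpy as np

def recovered_filter(p_estimated, p_n0, i, alpha_max=1.0, beta=0.5):
    """Filter parameters used for section n0+i after an erasure at section n0.

    p_estimated : P̂k(n0+i), filter estimated by backward analysis for section n0+i
    p_n0        : Pk(n0), filter that was used for the erased section n0
    i           : number of sections elapsed since the erasure (i >= 1)

    Implements Pk(n0+i) = [1 - alpha(i)] * P̂k(n0+i) + alpha(i) * Pk(n0),
    with alpha(1) = alpha_max and alpha(i) = max(0, alpha(i-1) - beta) for i > 1.
    """
    alpha = alpha_max
    for _ in range(1, i):                     # claim-21 recurrence
        alpha = max(0.0, alpha - beta)
    p_estimated = np.asarray(p_estimated, dtype=float)
    p_n0 = np.asarray(p_n0, dtype=float)
    return (1.0 - alpha) * p_estimated + alpha * p_n0
```

With β = 0.5 the contribution of the filter used for the erased section n0 vanishes from the third valid section after the erasure; with β = 0.1, as claim 31 provides for a stationary signal, it fades out over roughly ten sections.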
26. The method according to claim 25, in which the degree of spectral stationarity (Istat(n)) is estimated recursively with the aid of two counters, one whose value N0 is incremented for each section treated according to the first coding mode, and the other whose value N1 is incremented for each section treated according to the second coding mode, the values of the two counters being decreased together when the sum of these two values reaches the number N, the estimated degree of spectral stationarity being an increasing function of the ratio N1/N0.
27. The method according to claim 26, in which the estimated degree of spectral stationarity (Istat(n)) is a binary function of the ratio N1/N0.
28. The method according to claim 22, in which the degree of spectral stationarity of the audio signal (Istat(n)) is estimated from a comparative analysis of the synthesis filters successively used to filter the excitation signal (Ek(n)).
29. The method according to claim 19 and any one of claims 22 to 28, in which the weighting coefficient α(i) for i > 1 is an increasing function of the estimated degree of spectral stationarity of the audio signal (Istat(n)).
30. The method according to claims 21 and 29, in which the coefficient β is a decreasing function of the estimated degree of spectral stationarity of the audio signal (Istat(n)).
31. The method according to claims 27 and 30, in which the coefficient β takes the value 0.5 or 0.1 according to the estimated degree of spectral stationarity (Istat(n)).
32. The method according to any one of claims 15 to 31, in which the synthesis filter used when the binary stream indicates the second coding mode has a transfer function of the form 1/AB(z), where AB(z) is a polynomial in z⁻¹ whose coefficients (PBk(n)) are obtained from said linear prediction analysis carried out on the decoded audio signal (Σn(t)).
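Claims 25 to 27 and 31 above can be read as a small bookkeeping routine driven by the coding-mode information d(n). The sketch below is one possible reading of it; the window length N and the decision threshold on N1/N0 are assumptions, the claims only requiring the estimate to be an increasing (here binary) function of that ratio.

```python
class StationarityEstimator:
    """Binary estimate of spectral stationarity from the coding-mode info d(n)."""

    def __init__(self, window=100, threshold=4.0):
        self.N = window              # counting-window duration in sections (assumption)
        self.threshold = threshold   # decision threshold on N1/N0 (assumption)
        self.n0 = 0                  # sections coded with the first mode
        self.n1 = 0                  # sections coded with the second mode

    def update(self, second_mode: bool) -> bool:
        """Update the counters for one valid section and return Istat(n)."""
        if second_mode:
            self.n1 += 1
        else:
            self.n0 += 1
        if self.n0 + self.n1 >= self.N:      # claim 26: decrease both counters together
            self.n0 = (self.n0 + 1) // 2
            self.n1 = (self.n1 + 1) // 2
        ratio = self.n1 / max(self.n0, 1)
        return ratio > self.threshold         # claim 27: binary Istat(n)


def fading_coefficient(stationary: bool) -> float:
    """Claim 31: beta = 0.1 for a stationary signal, 0.5 otherwise."""
    return 0.1 if stationary else 0.5


# Example use for one valid section coded in the second mode:
# estimator = StationarityEstimator()
# beta = fading_coefficient(estimator.update(second_mode=True))
```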
33. The method according to any one of claims 15 to 31, in which the synthesis filter used when the binary stream indicates the second coding mode has a transfer function of the form 1/[AF(z)·AB(z)], where AF(z) and AB(z) are polynomials in z⁻¹, the coefficients (PFk(n)) of the polynomial AF(z) being obtained from parameters (Q(n)) included in the valid sections of the binary stream, and the coefficients (PBk(n)) of the polynomial AB(z) being obtained from said linear prediction analysis carried out on a signal obtained by filtering the decoded audio signal (Σn(t)) by a filter with transfer function AF(z).
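Claims 32 and 33 only fix the shape of the transfer function. As an illustration (not the patent's own implementation), the all-pole filtering can be realised with a standard direct-form filter; in the claim-33 case the forward and backward polynomials are cascaded, here by convolving their coefficients. The function and argument names are assumptions made for the example.

```python
import numpy as np
from scipy.signal import lfilter

def synthesize_section(excitation, a_backward, a_forward=None, zi=None):
    """Filter one excitation section Ek(n) through the synthesis filter.

    a_backward : [1, aB1, ..., aBK]  coefficients of AB(z)               (claim 32)
    a_forward  : [1, aF1, ..., aFP]  coefficients of AF(z), or None;
                 when given, the filter becomes 1/[AF(z)*AB(z)]          (claim 33)
    zi         : internal filter state carried over from the previous section
    """
    a = np.asarray(a_backward, dtype=float)
    if a_forward is not None:
        a = np.convolve(np.asarray(a_forward, dtype=float), a)   # cascade AF(z)*AB(z)
    if zi is None:
        zi = np.zeros(len(a) - 1)
    y, zf = lfilter([1.0], a, excitation, zi=zi)                  # all-pole filter 1/A(z)
    return y, zf
```

Returning zf and passing it back as zi for the next section preserves the filter memory across section boundaries, which matters since the coefficients are updated from one section to the next.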
MXPA/A/1999/009122A 1998-02-06 1999-10-05 Method for decoding an audio signal with transmission error correction MXPA99009122A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FR98/01441 1998-02-06

Publications (1)

Publication Number Publication Date
MXPA99009122A true MXPA99009122A (en) 2000-09-04

Family

ID=

Similar Documents

Publication Publication Date Title
US8239192B2 (en) Transmission error concealment in audio signal
EP2017829B1 (en) Forward error correction in speech coding
EP2026330B1 (en) Device and method for lost frame concealment
JP3565869B2 (en) Audio signal decoding method with correction of transmission error
US6202046B1 (en) Background noise/speech classification method
US8639519B2 (en) Method and apparatus for selective signal coding based on core encoder performance
KR102307492B1 (en) Audio coding device, audio coding method, audio coding program, audio decoding device, audio decoding method, and audio decoding program
CN110931025A (en) Apparatus and method for improved concealment of adaptive codebooks in ACELP-like concealment with improved pulse resynchronization
US7302385B2 (en) Speech restoration system and method for concealing packet losses
JPH06502930A (en) Error protection for multimode speech coders
MXPA99009122A (en) Method for decoding an audio signal with transmission error correction
KR20220006510A (en) Methods and devices for detecting attack in a sound signal and coding the detected attack
JPH10177399A (en) Voice coding method, voice decoding method and voice coding/decoding method
JPH03245199A (en) Error compensating system
McGrath et al. A Real Time Implementation of a 4800 bps Self Excited Vocoder