EP1035538B1 - Multimode quantizing of the prediction residual in a speech coder - Google Patents


Info

Publication number
EP1035538B1
Authority
EP
European Patent Office
Prior art keywords
vector
vectors
weak
predictor
strong
Prior art date
Legal status
Expired - Lifetime
Application number
EP20000200874
Other languages
German (de)
French (fr)
Other versions
EP1035538A3 (en
EP1035538A2 (en
Inventor
Jacek Stachurski
Alan V. Mccree
Current Assignee
Texas Instruments Inc
Original Assignee
Texas Instruments Inc
Application filed by Texas Instruments Inc
Publication of EP1035538A2
Publication of EP1035538A3
Application granted
Publication of EP1035538B1
Anticipated expiration
Legal status: Expired - Lifetime (current)

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 — Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 — Speech or audio signals analysis-synthesis techniques using predictive techniques
    • G10L19/16 — Vocoder architecture
    • G10L19/18 — Vocoders using multiple modes



Description

    FIELD OF THE INVENTION
  • The present invention relates generally to the field of electronic devices and, more particularly, to circuitry and methods for speech coding, transmission, storage, and synthesis.
  • The performance of digital speech systems using low bit rates has become increasingly important with current and foreseeable digital communications. One digital speech method, linear predictive coding (LPC), uses a parametric model to mimic human speech. In this approach only the parameters of the speech model are transmitted across the communication channel (or stored), and a synthesizer regenerates the speech with the same perceptual characteristics as the input speech waveform. Periodic updating of the model parameters requires fewer bits than direct representation of the speech signal, so a reasonable LPC vocoder can operate at bit rates as low as 2-3 Kbps (kilobits per second) whereas the public telephone system uses 64 Kbps (8-bit PCM codewords at 8,000 samples per second). See, for example, McCree et al, A 2.4 Kbit/s MELP Coder Candidate for the New U.S. Federal Standard, Proc. IEEE Int. Conf. ASSP 200 (1996) and US Patent No. 5,699,477.
  • However, the speech output from such LPC vocoders is not acceptable in many applications because it does not always sound like natural human speech, especially in the presence of background noise. And there is a demand for a speech vocoder with at least telephone quality speech at a bit rate of about 4 Kbps. Various approaches to improve quality include enhancing the estimation of the parameters of a mixed excitation linear prediction (MELP) system and more efficient quantization of them. See Yeldener et al, A Mixed Sinusoidally Excited Linear Prediction coder at 4 kb/s and Below, Proc. IEEE Int. Conf. Acoust.,Speech,Signal Processing (1998) and Shlomot et al, Combined Harmonic and Waveform Coding of Speech at Low Bit Rates, IEEE ... 585 (1998). Moreover, United States Patent 5,749,065 describes codebook based predictive coding with a codebook for male speech and a codebook for female speech.
  • SUMMARY OF THE INVENTION
  • The present application provides linear predictive speech coding /decoding methods as set forth in the independent claims.
  • Additionally, both strongly predictive and weakly predictive codebooks may be used but with a weak predictor replacing a strong predictor which otherwise would have followed a weak predictor.
  • This has the advantages including maintenance of low bit rates but with increased performance and avoidance of error propagation by a series of strong predictors.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Specific embodiments of the present invention will now be described in further detail, by way of example, with reference to the accompanying drawings in which:
  • Figures 1a-1b are flow diagrams of preferred embodiments.
  • Figures 2a-2b illustrate preferred embodiment coder and decoder in block format; and
  • Figures 3a-3d show an LP residual and its Fourier transforms.
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • First preferred embodiments classify the spectra of the linear prediction (LP) residual (in a MELP coder) into classes of spectra (vectors) and vector quantize each class separately. For example, one first preferred embodiment classifies the spectra into long vectors (many harmonics which correspond roughly to low pitch frequency as typical of male speech) and short vectors (few harmonics which correspond roughly to high pitch frequency as typical of female speech). These spectra are then vector quantized with separate codebooks to facilitate encoding of vectors with different numbers of components (harmonics). Figure 1a shows the classification flow and includes an overlap of the classes.
  • Second preferred embodiments allow for predictive coding of the spectra (or alternatively, other parameters such as line spectral frequencies or LSFs) and a selection of either the strong or weak predictor based on best approximation but with the proviso that a first strong predictor which otherwise follows a weak predictor is replaced with a weak predictor. This deters error propagation by a sequence of strong predictors of an error in a weak predictor preceding the series of strong predictors. Figure 1b illustrates a predictive coding control flow.
  • Figures 2a-2b illustrate preferred embodiment MELP coding (analysis) and decoding (synthesis) in block format. In particular, the Linear Prediction Analysis determines the LPC coefficients a(j), j = 1, 2, ..., M, for an input frame of digital speech samples {y(n)} by setting e(n) = y(n) - Σ_{j=1}^{M} a(j)y(n-j) and minimizing Σ_n e(n)². Typically, M, the order of the linear prediction filter, is taken to be about 10-12; the sampling rate used to form the samples y(n) is 8000 Hz (the same as the public telephone network sampling for digital transmission); and the number of samples {y(n)} in a frame is often 160 (a 20 msec frame) or 180 (a 22.5 msec frame). A frame of samples may be generated by various windowing operations applied to the input speech samples. The name "linear prediction" arises from the interpretation of e(n) = y(n) - Σ_{j=1}^{M} a(j)y(n-j) as the error in predicting y(n) by the linear combination of preceding samples Σ_{j=1}^{M} a(j)y(n-j). Thus minimizing Σ_n e(n)² yields the {a(j)} which furnish the best linear prediction. The coefficients {a(j)} may be converted to LSFs for quantization and transmission.
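The autocorrelation method with the Levinson-Durbin recursion is the standard way to find the {a(j)} minimizing Σ e(n)². A minimal pure-Python sketch follows; the function names and the zero-extension of y(n) for n < 0 are illustrative, not from the patent:

```python
def lpc_coefficients(y, order):
    """Autocorrelation method + Levinson-Durbin recursion.
    Returns [a(1), ..., a(M)] so that y(n) is predicted by
    sum_{j=1..M} a(j) * y(n-j)."""
    n = len(y)
    # autocorrelation r[k] = sum_n y(n) y(n-k)
    r = [sum(y[i] * y[i - k] for i in range(k, n)) for k in range(order + 1)]
    a = [0.0] * (order + 1)          # a[0] is unused
    err = r[0]
    for m in range(1, order + 1):
        k = (r[m] - sum(a[j] * r[m - j] for j in range(1, m))) / err
        new_a = a[:]
        new_a[m] = k                 # reflection coefficient becomes a(m)
        for j in range(1, m):
            new_a[j] = a[j] - k * a[m - j]
        a = new_a
        err *= 1.0 - k * k           # prediction-error energy update
    return a[1:]

def lp_residual(y, a):
    """e(n) = y(n) - sum_{j=1..M} a(j) y(n-j), taking y(n) = 0 for n < 0."""
    return [y[n] - sum(aj * y[n - 1 - j]
                       for j, aj in enumerate(a) if n - 1 - j >= 0)
            for n in range(len(y))]
```

For a pure AR(1) sequence y(n) = 0.9 y(n-1), the order-1 analysis recovers a(1) ≈ 0.9 and the residual is essentially zero after the first sample.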
  • The {e(n)} form the LP residual for the frame and ideally would be the excitation for the synthesis filter 1/A(z), where A(z) = 1 - Σ_{j=1}^{M} a(j)z^{-j} is the transfer function of the prediction-error filter. Of course, the LP residual is not available at the decoder; so the task of the encoder is to represent the LP residual so that the decoder can generate the LP excitation from the encoded parameters.
  • The Band-Pass Voicing for a frequency band of samples (typically two to five bands, such as 0-500 Hz, 500-1000 Hz, 1000-2000 Hz, 2000-3000 Hz, and 3000-4000 Hz) determines whether the LP excitation derived from the LP residual {e(n)} should be periodic (voiced) or white noise (unvoiced) for a particular band.
  • The Pitch Analysis determines the pitch period (smallest period in voiced frames) by low pass filtering {y(n)} and then correlating {y(n)} with {y(n+m)} for various m; interpolations provide for fractional sample intervals. The resultant pitch period is denoted pT where p is a real number, typically constrained to be in the range 20 to 132 and T is the sampling interval of 1/8 millisecond. Thus p is the number of samples in a pitch period. The LP residual {e(n)} in voiced bands should be a combination of pitch-frequency harmonics.
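The lag search described above can be sketched as follows; the low-pass prefilter and the fractional-lag interpolation mentioned in the text are omitted, so this hypothetical estimate_pitch returns integer lags only:

```python
def estimate_pitch(y, p_min=20, p_max=132):
    """Pick the integer lag m in [p_min, p_max] maximizing the normalized
    correlation between y(n) and y(n+m) over the frame.  The coder also
    low-pass filters y and interpolates fractional lags; both are omitted
    here for brevity."""
    n = len(y)
    best_m, best_c = p_min, -1.0
    for m in range(p_min, min(p_max, n - 1) + 1):
        num = sum(y[i] * y[i + m] for i in range(n - m))
        den = (sum(y[i] ** 2 for i in range(n - m))
               * sum(y[i + m] ** 2 for i in range(n - m))) ** 0.5
        c = num / den if den > 0 else 0.0
        if c > best_c:
            best_c, best_m = c, m
    return best_m
```

On an exactly periodic frame with period 26 samples the search returns 26, provided the search range excludes the multiples 52, 78, ..., which correlate equally well.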
  • Fourier Coeff. Estimation provides coding of the LP residual for voiced bands. The following sections describe this in detail.
  • Gain Analysis sets the overall energy level for a frame.
  • The encoding (and decoding) may be implemented with a digital signal processor (DSP) such as the TMS320C30 manufactured by Texas Instruments which can be programmed to perform the analysis or synthesis essentially in real time.
  • Figure 3a illustrates an LP residual {e(n)} for a voiced frame and includes about eight pitch periods with each pitch period about 26 samples. Figure 3b shows the magnitudes of the discrete Fourier transform coefficients {E(j)} for one particular period of the LP residual, and Figure 3c shows the magnitudes of the {E(j)} for all eight pitch periods. For a voiced frame with pitch period equal to pT, the Fourier coefficients peak at about 1/pT, 2/pT, 3/pT, ..., k/pT, ...; that is, at the fundamental frequency 1/pT and its harmonics. Of course, p may not be an integer, and the magnitudes of the Fourier coefficients at the fundamental-frequency harmonics, denoted X[1], X[2], ..., X[k], ..., must be estimated. These estimates will be quantized, transmitted, and used by the decoder to create the LP excitation.
  • The {X[k]} may be estimated by various methods: for example, apply a discrete Fourier transform to the samples of a single period (or small number of periods) of e(n) as in Figures 3b-3c; alternatively, the {E(j)} can be interpolated. Indeed, one interpolation approach applies a 512-point discrete Fourier transform to an extended version of the LP residual, which allows use of a fast Fourier transform. In particular, extend the LP residual {e(n)} of 160 samples to 512 samples by setting e512(n) = e(n) for n = 0, 1, ... 159, and e512(n) = 0 for n = 160, 161, ..., 511. Then the discrete Fourier transform magnitudes appear as in Figure 3d with coefficients E512(j) which essentially interpolate the coefficients E(j) of Figures 3b-3c. Estimate the peaks X[k] at frequencies k/pT. The preferred embodiment only uses the magnitudes of the Fourier coefficients, although the phases could also be used. Because the LP residual components {e(n)} are real, the discrete Fourier transform coefficients {E(j)} are conjugate symmetric: E(k) = E*(N-k) for an N-point discrete Fourier transform. Thus only half of the {E(j)} need be used for magnitude considerations.
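The zero-padding interpolation can be sketched directly. Since the bin spacing of an nfft-point DFT is fs/nfft Hz and T = 1/fs, harmonic k/pT falls at bin k·nfft/p; this illustrative harmonic_magnitudes evaluates a direct DFT only at the bins nearest each harmonic (a real coder would compute a full 512-point FFT):

```python
import cmath

def harmonic_magnitudes(e, p, nfft=512, fs=8000):
    """Estimate X[k], the spectrum magnitudes at the pitch harmonics k/pT,
    by zero-padding e(n) to nfft samples and reading the DFT bin nearest
    each harmonic (bin k*nfft/p)."""
    mags = []
    k = 1
    while k * nfft / p < nfft / 2:        # keep harmonics below fs/2
        b = round(k * nfft / p)           # nearest bin to harmonic k
        coeff = sum(e[t] * cmath.exp(-2j * cmath.pi * b * t / nfft)
                    for t in range(len(e)))
        mags.append(abs(coeff))
        k += 1
    return mags
```

For an impulse train of period 32 samples over a 160-sample frame (p = 32, five impulses), all 15 sub-Nyquist harmonics come out with equal magnitude, as expected for a flat residual spectrum.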
  • Once the estimated magnitudes of the Fourier coefficients X[k] for the fundamental pitch frequency and harmonics k/pT have been found, they must be transmitted with a minimal number of bits. The preferred embodiments use vector quantization of the spectra. That is, treat the set of Fourier coefficients X[1], X[2], ... X[k], ... as a vector in a multi-dimensional quantization, and transmit only the index of the output quantized vector. Note that there are [p] or [p]+1 coefficients, but only half of the components are significant due to their conjugate symmetry. Thus for a short pitch period such as pT = 4 milliseconds (p = 32), the fundamental frequency 1/pT (= 250 Hz) is high and there are 32 harmonics, but only 16 would be significant (not counting the DC component). Similarly, for a long pitch period such as pT = 12 milliseconds (p = 96), the fundamental frequency (= 83 Hz) is low and there are 48 significant harmonics.
  • In general, the set of output quantized vectors may be created by adaptive selection with a clustering method from a set of input training vectors. For example, a large number of randomly selected vectors (spectra) from various speakers can be used to form a codebook (or codebooks with multistep vector quantization). Thus a quantized and coded version of an input spectrum X[1], X[2], ... X[k], ... can be transmitted as the index of the quantized vector in the codebook, which may take 20 bits.
  • As illustrated in Figure 1a, the first preferred embodiments proceed with vector quantization of the Fourier coefficient spectra as follows. First, classify a Fourier coefficient spectrum (vector) according to the corresponding pitch period: if the pitch period is less than 55T, the vector is a "short" vector, and if the pitch period is more than 45T, the vector is a "long" vector. Some vectors will qualify as both short and long vectors. Vector quantize the short vectors with a codebook of 20-component vectors, and vector quantize the long vectors with a codebook of 45-component vectors. As described previously, conjugate symmetry of the Fourier coefficients implies only the first half of the vector components are significant and used. And for short vectors with less than 20 significant components, expand to 20 components by appending components equal to 1. Analogously for long vectors with fewer than 45 significant components, expand to 45 components by appending components equal to 1. Each codebook has 2^20 output quantized vectors, so 20 bits will index the output quantized vectors in each codebook. One bit could be used to select the codebook, but the pitch is transmitted and can be used to determine whether the 20 bits are long or short vector quantization.
  • For a vector classified as both short and long, use the same classification as the preceding frame's vector; this avoids discontinuities and provides a hysteresis by the classification overlap. Further, if the preceding frame was unvoiced, then take the vector as short if the pitch period is less than 50T and long otherwise.
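The short/long decision with its hysteresis region might look like the following sketch; classify_vector and the None encoding of an unvoiced previous frame are assumed names, not from the patent:

```python
def classify_vector(p, prev_class=None):
    """Classify a harmonic vector as 'short' or 'long' from its pitch period
    p (in samples, i.e. pitch period = p*T).  In the 45 < p < 55 overlap the
    previous frame's class is reused (hysteresis); if the previous frame was
    unvoiced (encoded here as prev_class=None), a 50-sample threshold applies."""
    if p <= 45:
        return 'short'
    if p >= 55:
        return 'long'
    if prev_class in ('short', 'long'):
        return prev_class                 # hysteresis in the overlap region
    return 'short' if p < 50 else 'long'  # previous frame unvoiced
```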
  • Apply a weighting factor to the metric defining distance between vectors. The distance is used both for the clustering of training vectors (which creates the codebook) and for the quantization of Fourier component vectors by minimum distance. In general, define a distance between vectors X1 and X2 by d(X1, X2) = (X1 - X2)^T W (X1 - X2) with W a matrix of weights. Thus define matrices W_short for short vectors and matrices W_long for long vectors; further, the weights may depend upon the length of the vector to be quantized. Then for short vectors take W_short[j,k] very small for either j or k larger than 20; this will render the components X1[k] and X2[k] irrelevant for k larger than 20. Further, take W_short[j,k] decreasing as j and k increase from 1 to 20 to emphasize the lower vector components. That is, the quantization will depend primarily upon the Fourier coefficients for the fundamental and low harmonics of the pitch frequency. Analogously, take W_long[j,k] very small for j or k larger than 45.
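With a diagonal W, the weighted distance and the append-ones expansion reduce to a few lines; the particular decreasing weights below are purely illustrative, not values from the patent:

```python
def pad_vector(x, dim):
    """Expand a harmonic-magnitude vector to the fixed codebook dimension by
    appending components equal to 1 (per the text); components beyond the
    codebook dimension are dropped, mirroring the near-zero weights."""
    return (list(x) + [1.0] * (dim - len(x)))[:dim]

def weighted_distance(x1, x2, w):
    """d(X1, X2) = (X1 - X2)^T W (X1 - X2) for a diagonal weight matrix W,
    given as the list w of its diagonal entries."""
    return sum(wj * (a - b) ** 2 for wj, a, b in zip(w, x1, x2))

# Purely illustrative decreasing weights for 20-component short vectors,
# emphasizing the fundamental and low harmonics:
w_short = [1.0 / (1 + 0.1 * j) for j in range(20)]
```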
  • Further, the use of predictive coding could be included to reduce the magnitudes and decrease the quantization noise as described in the following.
  • Predictive coding
  • A differential (predictive) approach will decrease the quantization noise. That is, rather than vector quantize a spectrum X[1], X[2], ... X[k], ..., first generate a prediction of the spectrum from the preceding one or more frames' quantized spectra (vectors) and just quantize the difference. If the current frame's vector can be well approximated from the prior frames' vectors, then a "strong" prediction can be used in which the difference between the current frame's vector and a strong predictor may be small. Contrarily, if the current frame's vector cannot be well approximated from the prior frames' vectors, then a "weak" prediction (including no prediction) can be used in which the difference between the current frame's vector and a predictor may be large. For example, a simple prediction of the current frame's vector X could be the preceding frame's quantized vector Y, or more generally a multiple αY with α a weight factor (between 0 and 1). Indeed, α could be a diagonal matrix with different factors for different vector components. For α values in the range 0.7-1.0, the predictor αY is close to Y and if also close to X, the difference vector X-αY to be quantized is small compared to X. This would be a strong predictor, and the decoder recovers an estimate for X by Q(X-αY) + αY with the first term the quantized difference vector X-αY and the second term from the previous frame and likely the dominant term. Conversely, for α values in the range 0.0-0.3, the predictor is weak in that the difference vector X-αY to be quantized is likely comparable to X. In fact, α = 0 is no prediction at all and the vector to be quantized is X itself.
  • The advantage of strong predictors follows from the fact that with the same size codebooks, quantizing something likely to be small (strong-predictor difference) will give better average results than quantizing something likely to be large (weak-predictor difference).
  • Thus train four codebooks: (1) short vectors and strong prediction, (2) short vectors and weak prediction, (3) long vectors and strong prediction, and (4) long vectors and weak prediction. Then process a vector as illustrated in the top portion of Figure 1b: first the vector X is classified as short or long; next, the strong and weak predictor vectors, Xstrong and Xweak, are generated from previous frames' quantized vectors and the strong predictor and weak predictor codebooks are used for vector quantization of X-Xstrong and X-Xweak, respectively. Then the two results (Q(X-Xstrong) + Xstrong and Q(X-Xweak) + Xweak) are compared to the input vector and the better approximation (strong or weak predictor) is selected. A bit is transmitted (to indicate whether a strong or weak predictor was used) along with the 20-bit codebook index for the quantization vector. The pitch determines whether the vector was long or short.
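The strong/weak selection step can be sketched as follows, assuming the simple one-frame predictor αY from the text; the α values (0.8 and 0.2) and the toy codebooks are illustrative, not values from the patent:

```python
def nearest(codebook, v):
    """Codebook entry minimizing squared error to v."""
    return min(codebook, key=lambda c: sum((a - b) ** 2 for a, b in zip(c, v)))

def encode_frame(x, y_prev, cb_strong, cb_weak, a_strong=0.8, a_weak=0.2):
    """Quantize the difference X - alpha*Y against both the strong and weak
    codebooks, reconstruct both ways, and keep whichever reconstruction is
    closer to X.  Returns (strong_bit, quantized_difference, reconstruction)."""
    results = []
    for alpha, cb in ((a_strong, cb_strong), (a_weak, cb_weak)):
        pred = [alpha * v for v in y_prev]            # predictor alpha*Y
        q = nearest(cb, [a - b for a, b in zip(x, pred)])
        rec = [qq + pp for qq, pp in zip(q, pred)]    # Q(X - aY) + aY
        err = sum((a - b) ** 2 for a, b in zip(x, rec))
        results.append((err, q, rec))
    use_strong = results[0][0] <= results[1][0]
    err, q, rec = results[0] if use_strong else results[1]
    return use_strong, q, rec
```

The returned strong_bit corresponds to the transmitted predictor-selection bit; the codebook index (here the entry itself) corresponds to the 20-bit index.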
  • In a frame erasure the parameters (i.e., LSFs, Fourier coefficients, pitch, ...) corresponding to the current frame are considered lost or unreliable and the frame is reconstructed based on the parameters from the previous frames. In the presence of frame erasures the error resulting from missing a set of parameters will propagate throughout the series of frames for which a strong prediction is used. If the error occurs in the middle of the series, the exact evolution of the predicted parameters is compromised and some perceptual distortion is usually introduced. When a frame erasure happens within a region where a weak predictor is consistently selected, the effect of the error will be localized (it will be quickly reduced by the weak prediction). The largest degradation in the reconstructed frame is observed whenever a frame erasure occurs for a frame with a weak predictor followed by a series of frames for which a strong predictor is chosen. In this case the evolution of the parameters is built up on a parameter very different from that which is supposed to start the evolution.
  • Thus a second preferred embodiment analyzes the predictors used in a series of frames and controls their sequencing. In particular, for a current frame which otherwise would use a strong predictor immediately following a frame which used a weak predictor, one preferred embodiment modifies the current frame to use the weak predictor but does not affect the next frame's predictor. Figure 1b illustrates the decisions.
  • A simple example will illustrate the effect of this preferred embodiment. Presume a sequence of frames with Fourier coefficient vectors X1, X2, X3, ... and presume the first frame uses a weak predictor and the second, third, fourth, ... frames use strong predictors, but the preferred embodiment replaces the second frame's strong predictor with a weak predictor. Thus the transmitted quantized difference vector for the first frame is Q(X1-X1weak) and without erasure the decoder recovers X1 as Q(X1-X1weak) + X1weak with the first term likely the dominant term due to weak prediction. Similarly, the usual decoder recovers X2 as Q(X2 -X2strong) + X2strong with the second term dominant, and analogously for X3, X4, ... In contrast, the preferred embodiment decoder recovers X2 as Q(X2 -X2weak) + X2weak but with the first term likely dominant.
  • Note that the decoder recreates X1weak from the preceding reconstructed frames' vectors X0, X-1, ... , and similarly for X2strong and X2weak recreated from reconstructed X1, X0, ..., and likewise for the other predictors.
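As a rough sketch of this recovery loop, the following scalar Python simplification models the predictor as alpha times the last reconstructed value (alpha near 1 standing in for a strong predictor, near 0 for a weak one) and replaces the vector quantizer Q with rounding; the scalar values, alpha, and the quantizer are illustrative assumptions, not values from the patent.

```python
def encode_frame(x, recon_history, alpha, quantize=lambda v: round(v, 2)):
    # Transmit Q(X - pred), where pred is recreated from the previously
    # reconstructed values (here simply alpha * the last one).
    pred = alpha * recon_history[-1]
    return quantize(x - pred)

def decode_frame(q_diff, recon_history, alpha):
    # Recover X as Q(X - pred) + pred; the decoder rebuilds the same
    # predictor from its own reconstructed history.
    pred = alpha * recon_history[-1]
    x_hat = q_diff + pred
    recon_history.append(x_hat)
    return x_hat
```

With a weak predictor (say alpha = 0.2) the transmitted difference Q(X - pred) carries most of the value; with a strong one (alpha = 0.8) the predictor term dominates.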
  • Now with an erasure of the first frame's parameters, the vector Q(X1 - X1weak) is lost and the decoder reconstructs X1 by, for example, simply repeating the reconstructed X0 from the prior frame. However, this may not be a very good approximation, because originally a weak predictor was used. Then for the second frame, the usual decoder reconstructs X2 as Q(X2 - X2strong) + Y2strong, with Y2strong the strong predictor recreated from X0, X0, ... rather than from X1, X0, ... because X1 was lost and replaced by the possibly poor approximation X0. Thus the error is roughly X2strong - Y2strong, which is likely large because the strong predictor is the dominant term compared to the difference term Q(X2 - X2strong). And this error also propagates into the reconstruction of X3, X4, ...
  • Contrarily, the preferred embodiment reconstructs X2 as Q(X2 - X2weak) + Y2weak, with Y2weak the weak predictor recreated from X0, X0, ... rather than from X1, X0, ..., again because X1 was lost and replaced by the possibly poor approximation X0. Thus the error is roughly X2weak - Y2weak, which is likely small because the weak predictor is the smaller term compared to the difference term Q(X2 - X2weak). And this smaller error likewise applies to the reconstruction of X3, X4, ...
  • Indeed for the case of the predictors X2strong = αX1 with α = 0.8 and X2weak = αX1 with α = 0.2, the usual decoder error would be 0.8(X1 - X0) for reconstruction of X2 and the preferred embodiment decoder error would be 0.2(X1 - X0).
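This numerical claim can be checked with a small scalar simulation under the same simplifying assumptions (predictor equal to alpha times the previous reconstructed value, quantization omitted); the values chosen for X0, X1, X2 are arbitrary illustrations, not from the patent.

```python
def frame2_error(alpha, x0, x1, x2):
    # Frame 1 is erased: the decoder repeats X0 in place of X1.
    d2 = x2 - alpha * x1          # encoder's difference for frame 2 (not erased)
    x1_hat = x0                   # concealment of the erased frame 1
    x2_hat = d2 + alpha * x1_hat  # decoder reconstruction of frame 2
    return x2_hat - x2            # works out to alpha * (x0 - x1)

x0, x1, x2 = 1.0, 3.0, 2.5
err_strong = frame2_error(0.8, x0, x1, x2)  # 0.8 * (X0 - X1)
err_weak = frame2_error(0.2, x0, x1, x2)    # 0.2 * (X0 - X1)
```

The reconstruction error scales with alpha, matching the 0.8(X1 - X0) versus 0.2(X1 - X0) comparison above.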
  • Alternative second preferred embodiments modify two (or more) successive frames' strong predictors after a weak-predictor frame to be weak predictors. That is, a sequence weak, strong, strong, strong, ... would be changed to weak, weak, weak, strong, ...
  • The foregoing replacement of strong predictors by weak predictors provides a tradeoff of increased error robustness for slightly decreased quality (the weak predictors being used in place of better strong predictors).

Claims (3)

  1. A method of linear predictive speech coding, comprising the steps of:
    classifying linear prediction residual Fourier coefficients into two or more classes of vectors;
    for each class of vectors providing at least one vector quantization codebook; and
    encoding said vectors with said codebooks; the method being characterised in that
    said classes of vectors overlap and a vector in two or more classes is encoded using the class of a vector in a preceding frame.
  2. A coding method as claimed in claim 1, wherein predictions of said vectors are encoded, said predictions using strong and weak predictors;
       said method comprising the step of:
    replacing a strong predictor following a weak predictor with a weak predictor.
  3. A method of linear predictive speech decoding, comprising the steps of:
    interpreting linear prediction residual Fourier coefficients as members of two or more overlapping classes of vectors with each class having at least one vector quantization codebook; and
    decoding such an encoded vector using the codebook of the class of a vector in a preceding frame.
EP20000200874 1999-03-12 2000-03-13 Multimode quantizing of the prediction residual in a speech coder Expired - Lifetime EP1035538B1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US12408999P 1999-03-12 1999-03-12
US12411299P 1999-03-12 1999-03-12
US124112P 1999-03-12
US124089P 1999-03-12

Publications (3)

Publication Number Publication Date
EP1035538A2 EP1035538A2 (en) 2000-09-13
EP1035538A3 EP1035538A3 (en) 2003-04-23
EP1035538B1 true EP1035538B1 (en) 2005-07-27

Family

ID=26822196

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20000200874 Expired - Lifetime EP1035538B1 (en) 1999-03-12 2000-03-13 Multimode quantizing of the prediction residual in a speech coder

Country Status (3)

Country Link
EP (1) EP1035538B1 (en)
JP (1) JP2000305597A (en)
DE (1) DE60021455T2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8126707B2 (en) * 2007-04-05 2012-02-28 Texas Instruments Incorporated Method and system for speech compression

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2392640A1 (en) 2002-07-05 2004-01-05 Voiceage Corporation A method and device for efficient in-band dim-and-burst signaling and half-rate max operation in variable bit-rate wideband speech coding for cdma wireless systems
CA2415105A1 (en) * 2002-12-24 2004-06-24 Voiceage Corporation A method and device for robust predictive vector quantization of linear prediction parameters in variable bit rate speech coding

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3557662B2 (en) * 1994-08-30 2004-08-25 ソニー株式会社 Speech encoding method and speech decoding method, and speech encoding device and speech decoding device
JPH08179796A (en) * 1994-12-21 1996-07-12 Sony Corp Voice coding method


Also Published As

Publication number Publication date
DE60021455T2 (en) 2006-05-24
JP2000305597A (en) 2000-11-02
EP1035538A3 (en) 2003-04-23
DE60021455D1 (en) 2005-09-01
EP1035538A2 (en) 2000-09-13


Legal Events

PUAI: Public reference made under article 153(3) EPC to a published international application that has entered the European phase
AK (A2): Designated contracting states: AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE
AX: Request for extension of the European patent: AL LT LV MK RO SI
PUAL: Search report despatched
17P: Request for examination filed, effective 20031023
AKX: Designation fees paid: DE FR GB
17Q: First examination report despatched, effective 20040206
GRAP / GRAS / GRAA: Intention to grant communicated, grant fee paid, (expected) grant
AK (B1): Designated contracting states: DE FR GB
REG (GB, FG4D): Reference to a national code
REF: Corresponds to DE document 60021455, date of ref document 20050901
ET: FR translation filed
26N / PLBE / STAA: No opposition filed within time limit, effective 20060428
PGFP: Annual fees paid to national office (year of fee payment 16): FR 20150224, GB 20150224, DE 20150331
REG (FR, PLFP): Year of fee payment 16
REG (DE, R119): Reference to a national code, document 60021455
GBPC: GB European patent ceased through non-payment of renewal fee, effective 20160313
REG (FR, ST): Effective date 20161130
PG25: Lapsed in contracting states because of non-payment of due fees: GB 20160313, FR 20160331, DE 20161001