WO1998005029A1 - Speech coding - Google Patents

Speech coding

Info

Publication number
WO1998005029A1
WO1998005029A1 PCT/GB1997/002037 GB9702037W WO9805029A1 WO 1998005029 A1 WO1998005029 A1 WO 1998005029A1 GB 9702037 W GB9702037 W GB 9702037W WO 9805029 A1 WO9805029 A1 WO 9805029A1
Authority
WO
WIPO (PCT)
Prior art keywords
phase
spectrum
signal
magnitude
decoder
Prior art date
Application number
PCT/GB1997/002037
Other languages
French (fr)
Inventor
Hung Bun Choi
Xiaoqin Sun
Barry Michael George Cheetham
Original Assignee
British Telecommunications Public Limited Company
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by British Telecommunications Public Limited Company filed Critical British Telecommunications Public Limited Company
Priority to EP97933782A priority Critical patent/EP0917709B1/en
Priority to AU37024/97A priority patent/AU3702497A/en
Priority to US09/029,832 priority patent/US6219637B1/en
Priority to JP10508614A priority patent/JP2000515992A/en
Priority to DE69702261T priority patent/DE69702261T2/en
Publication of WO1998005029A1 publication Critical patent/WO1998005029A1/en

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

A decoder for speech signals has means for receiving magnitude spectral information for synthesis of a time-varying signal; means (7) for computing, from the magnitude spectral information, phase spectrum information corresponding to a minimum phase filter which has a magnitude spectrum corresponding to the magnitude spectral information; means (8) for generating, from the magnitude spectral information and the phase spectral information, the time-varying signal; and phase adjustment means (31, 32) operable to modify the phase spectrum of the signal.

Description

SPEECH CODING
The present invention is concerned with speech coding and decoding, and especially with systems in which the coding process fails to convey all or any of the phase information contained in the signal being coded.
According to one aspect of the present invention there is provided a decoder for speech signals comprising: means for receiving magnitude spectral information for synthesis of a time- varying signal; means for computing, from the magnitude spectral information, phase spectrum information corresponding to a minimum phase filter which has a magnitude spectrum corresponding to the magnitude spectral information; means for generating, from the magnitude spectral information and the phase spectral information, the time-varying signal; and phase adjustment means operable to modify the phase spectrum of the signal.
In another aspect the invention provides a decoder for decoding speech signals comprising information defining the response of a minimum phase synthesis filter and, for synthesis of an excitation signal, magnitude spectral information, the decoder comprising: means for generating, from the magnitude spectral information, an excitation signal; a synthesis filter controlled by the response information and connected to filter the excitation signal; and phase adjustment means for estimating a phase-adjustment signal to modify the phase of the signal.
In a further aspect, the invention provides a method of coding and decoding speech signals, comprising:
(a) generating signals representing the magnitude spectrum of the speech signal;
(b) receiving the signals;
(c) generating from the received signals a synthetic speech signal having a magnitude spectrum determined by the received signals and having a phase spectrum which corresponds to a transfer function having, when considered as a z- plane plot, at least one pole outside the unit circle.
Some embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which: Figure 1 is a block diagram of a known speech coder and decoder;
Figure 2 illustrates a model of the human vocal system; Figure 3 is a block diagram of a speech decoder according to one embodiment of the present invention;
Figures 4 and 5 are charts showing test results obtained for the decoder of Figure 3;
Figure 6 is a graph of the shape of a (known) Rosenberg pulse; Figure 7 is a block diagram of a second form of speech decoder according to the invention;
Figure 8 is a block diagram of a known type of speech coder; Figure 9 is a block diagram of a third embodiment of decoder in accordance with the invention, for use with the coder of Figure 8; and Figure 10 is a z-plane plot illustrating the invention.
This first example assumes that a sinusoidal transform coding (STC) technique is employed for the coding and decoding of speech signals. This technique was proposed by McAulay and Quatieri and is described in their papers
"Speech Analysis/Synthesis based on a Sinusoidal Representation", R. J. McAulay and T. F. Quatieri, IEEE Trans. Acoust. Speech Signal Process., ASSP-34, pp. 744-754, 1986, and "Low-rate Speech Coding based on the Sinusoidal Model" by the same authors, in "Advances in Speech Signal Processing", Ed. S. Furui and M. M. Sondhi, Marcel Dekker Inc., 1992. The principles are illustrated in Figure 1, where a coder receives speech samples s(n) in digital form at an input 1; segments of speech of typically 20 ms duration are subject to Fourier analysis in a Fast Fourier Transform unit 2 to determine the short-term frequency spectrum of the speech. Specifically, it is the amplitudes and frequencies of the peaks in the magnitude spectrum that are of interest, the frequencies being assumed - in the case of voiced speech - to be harmonics of a pitch frequency which is derived by a pitch detector 3. The phase spectrum is, in the interests of transmission efficiency, not to be transmitted, and a representation of the magnitude spectrum, for transmission to a decoder, is in this example obtained by fitting an envelope to the magnitude spectrum and characterising this envelope by a set of coefficients (e.g. LSP (line spectral pair) coefficients). This function is performed by a conversion unit 4, which receives the Fourier coefficients and performs the curve fit, and a unit 5, which converts the envelope to LSP coefficients which form the output of the coder.
The corresponding decoder is also shown in Figure 1. This receives the envelope information but, lacking the phase information, has to reconstruct the phase spectrum based on some assumption. The assumption used is that the magnitude spectrum represented by the received LSP coefficients is the magnitude spectrum of a minimum-phase transfer function - which amounts to the assumption that the human vocal system can be regarded as a minimum phase filter impulsively excited. Thus a unit 6 derives the magnitude spectrum from the received LSP coefficients and a unit 7 calculates the phase spectrum which corresponds to this magnitude spectrum based on the minimum phase assumption. From the two spectra a sinusoidal synthesiser 8 generates the sum of a set of sinusoids, harmonic with the pitch frequency, having amplitudes and phases determined by the spectra.
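The patent does not spell out how unit 7 performs this computation; a standard way to obtain the phase of a minimum-phase system from its magnitude spectrum is the real-cepstrum (Hilbert transform) relationship. The sketch below is therefore only an illustration of that general technique, with function and variable names of my own choosing.

```python
import numpy as np

def minimum_phase_from_magnitude(mag):
    """Phase of a minimum-phase system whose magnitude spectrum is `mag`.

    `mag` holds magnitude samples at N uniformly spaced frequencies around the
    whole unit circle (so it is symmetric about the Nyquist frequency).
    """
    n_fft = len(mag)
    # real cepstrum of the log-magnitude
    cep = np.fft.ifft(np.log(np.maximum(mag, 1e-12))).real
    # fold the cepstrum so that its Fourier transform is the log of the
    # minimum-phase spectrum: keep c[0], double the causal part, drop the rest
    folded = np.zeros_like(cep)
    folded[0] = cep[0]
    folded[1:(n_fft + 1) // 2] = 2.0 * cep[1:(n_fft + 1) // 2]
    if n_fft % 2 == 0:
        folded[n_fft // 2] = cep[n_fft // 2]
    # imaginary part of the log-spectrum is the minimum-phase phase
    return np.fft.fft(folded).imag
```

The sinusoidal synthesiser 8 would then sample the magnitude spectrum and this phase spectrum at the pitch harmonics.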
In sinusoidal speech synthesis, a synthetic speech signal y(n) is constructed by the sum of sine waves:
y(n) = Σ_{k=1}^{N} Ak cos(ωk n + φk)    (1)

where Ak and φk represent the amplitude and phase of each sine wave component associated with the frequency track ωk, and N is the number of sinusoids.
Although this is not a prerequisite, it is common to assume that the sinusoids are harmonically related, thus:
y(n) = Σ_{k=1}^{N} Ak(n) cos(ψk(n) + φk(n))    (2)

where

ψk(n) = k ∫_0^n ω0(m) dm    (3)

and φk(n) represents the instantaneous relative phase of the harmonics, ψk(n) represents the instantaneous linear phase component, and ω0(n) is the instantaneous fundamental pitch frequency.
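As a concrete illustration of equations 1 to 3, the short sketch below synthesises one voiced segment from harmonic amplitudes and phases, assuming a pitch frequency that is constant over the segment; it is a minimal example rather than the synthesiser of the embodiments, and the names are chosen here for clarity.

```python
import numpy as np

def synthesise_harmonics(amps, phases, w0, n_samples):
    """Equation 2 with a constant pitch frequency over the segment.

    amps, phases: A_k and phi_k for harmonics k = 1..N
    w0: fundamental frequency in radians per sample
    """
    n = np.arange(n_samples)
    y = np.zeros(n_samples)
    for k, (a_k, phi_k) in enumerate(zip(amps, phases), start=1):
        psi_k = k * w0 * n          # equation 3 reduces to a linear phase for constant w0
        y += a_k * np.cos(psi_k + phi_k)
    return y
```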
A simple example of sinusoidal synthesis is the overlap and add technique. In this scheme Ak(n), ω0(n) and ψk(n) are updated periodically, and are assumed to be constant for the duration of a short, for example 10 ms, frame. The ℓth signal frame is thus synthesised as follows:

y^ℓ(n) = Σ_{k=1}^{N} Ak^ℓ cos(k ω0^ℓ n + φk^ℓ)    (4)

Note that this is essentially an inverse discrete Fourier transform. Discontinuities at frame boundaries are avoided by combining adjacent frames as follows:

y(n) = W(n) y^{ℓ-1}(n) + W(n - T) y^ℓ(n - T)    (5)

where W(n) is an overlap and add window, for example triangular or trapezoidal, T is the frame duration expressed as a number of sample periods, and

W(n) + W(n - T) = 1    (6)
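A minimal overlap-and-add loop in the spirit of equations 4 to 6 might look as follows; the triangular window and the helper `synthesise_harmonics` sketched above are assumptions of this illustration, not details taken from the patent.

```python
import numpy as np

def overlap_add_synthesis(frames, T):
    """frames: one dict per frame with keys 'amps', 'phases', 'w0'; T: frame length in samples."""
    # window of length 2T that ramps up over the first T samples and down over
    # the last T, so that shifted copies satisfy W(n) + W(n - T) = 1 (equation 6)
    w = np.concatenate([np.arange(T) / T, 1.0 - np.arange(T) / T])
    out = np.zeros(T * (len(frames) + 1))
    for i, f in enumerate(frames):
        y = synthesise_harmonics(f['amps'], f['phases'], f['w0'], 2 * T)
        out[i * T:(i + 2) * T] += w * y     # crossfade of adjacent frames (equation 5)
    return out
```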
In an alternative approach, y(n) may be calculated continuously by interpolating the amplitude and phase terms in equation 2. In such schemes, the magnitude component Ak(n) is often interpolated linearly between updates, whilst a number of techniques have been reported for interpolating the phase component. In one approach (McAulay and Quatieri) the instantaneous combined phase (ψk(n) + φk(n)) and pitch frequency ω0(n) are specified at each update point. The interpolated phase trajectory can then be represented by a cubic polynomial. In another approach (Kleijn) ψk(n) and φk(n) are interpolated separately. In this case φk(n) is specified directly at the update points and linearly interpolated, whilst the instantaneous linear phase component ψk(n) is specified at the update points in terms of the pitch frequency ω0(n), and only requires a quadratic polynomial interpolation.
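For the separately interpolated scheme, the quadratic behaviour of ψk(n) follows from integrating a linearly interpolated pitch frequency over the frame. A small sketch of that step (the naming and framing are mine, not the reference's):

```python
import numpy as np

def linear_phase_track(psi_start, k, w0_start, w0_end, T):
    """Quadratic interpolation of the linear phase of harmonic k over one frame.

    psi_start: psi_k at the start of the frame
    w0_start, w0_end: pitch frequency (rad/sample) at the two update points
    T: frame length in samples
    """
    n = np.arange(T)
    # integrating w0(n) = w0_start + (w0_end - w0_start) * n / T gives a quadratic in n
    return psi_start + k * (w0_start * n + (w0_end - w0_start) * n ** 2 / (2 * T))
```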
From the discussion presented above, it is clear that a sinusoidal synthesiser can be generalised as a unit that produces a continuous signal y(n) from periodically updated values of Ak(n), ω0(n) and φk(n). The number of sinusoids may be fixed or time-varying.
Thus we are interested in sinusoidal synthesis schemes where the original phase information is unavailable and φk must be derived in some manner at the synthesiser.
Whilst the system of Figure 1 produces reasonably satisfactory results, the coder and decoder now to be described offer alternative assumptions as to the phase spectrum. The notion that the human vocal apparatus can be viewed as an impulsive excitation e(n), consisting of a regular series of delta functions, driving a time-varying filter H(z) (where z is the z-transform variable) can be refined by considering H(z) to be formed by three filters, as illustrated in Figure 2, namely a glottal filter 20 having a transfer function G(z), a vocal tract filter 21 having a transfer function V(z) and a lip radiation filter 22 with a transfer function L(z). In this description, the time-domain representations of variables and the impulse responses of filters are shown in lower case, whilst their z-transforms and frequency domain representations are denoted by the same letters in upper case. Thus we may write for the speech signal s(n):
s(n) = e(n) ⊗ h(n) = e(n) ⊗ g(n) ⊗ v(n) ⊗ l(n)    (7)

or

S(z) = E(z)H(z) = E(z)G(z)V(z)L(z)    (8)
Since the spectrum of e(n) is a series of lines at the pitch frequency harmonics, it follows that at the frequency of each harmonic the magnitude of s is:

|S(e^{jω})| = |E(e^{jω})| |H(e^{jω})| = A |H(e^{jω})|    (9)

where A is a constant determined by the amplitude of e(n), and the phase is:

arg(S(e^{jω})) = arg(E(e^{jω})) + arg(H(e^{jω})) = 2mπ + arg(H(e^{jω}))    (10)

where m is any integer.

Assuming that the magnitude spectrum at the decoder of Figure 1 corresponds to |H(e^{jω})|, the regenerated speech will be degraded to the extent that the phase spectrum used differs from arg(H(e^{jω})).
Considering now the components G, V and L, minimum phase is a good assumption for the vocal tract transfer function V(z). Typically this may be represented by an all-pole model having the transfer function

V(z) = 1 / ∏_{i=1}^{P} (1 − p_i z⁻¹)    (11)

where p_i are the poles of the transfer function and are directly related to the formant frequencies of the speech, and P is the number of poles.
The lip radiation filter may be regarded as a differentiator for which:
L(z) = 1 − αz⁻¹    (12)

where α represents a single zero having a value close to unity (typically 0.95).
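To make equations 11 and 12 concrete, the sketch below builds an all-pole vocal tract filter from a handful of pole positions and follows it with the lip-radiation differentiator; the formant frequencies, bandwidths and sampling rate are arbitrary illustrative choices, not values from the patent.

```python
import numpy as np
from scipy.signal import lfilter

fs = 8000
pitch_hz = 100
alpha = 0.95                                   # lip radiation zero, equation 12
formants_hz, bandwidths_hz = [500, 1500, 2500], [80, 120, 160]

# conjugate pole pairs p_i for the all-pole vocal tract V(z), equation 11
poles = []
for f, bw in zip(formants_hz, bandwidths_hz):
    r = np.exp(-np.pi * bw / fs)
    poles += [r * np.exp(2j * np.pi * f / fs), r * np.exp(-2j * np.pi * f / fs)]
a_vocal = np.real(np.poly(poles))              # denominator coefficients of V(z)

e = np.zeros(fs // 4)                          # 250 ms of impulsive excitation e(n)
e[::fs // pitch_hz] = 1.0

s = lfilter([1.0], a_vocal, e)                 # vocal tract V(z)
s = lfilter([1.0, -alpha], [1.0], s)           # lip radiation L(z) = 1 - alpha z^-1
```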
Whilst the minimum phase assumption is good for V(z) and L(z), it is believed to be less valid for G(z). Noting that any filter transfer function can be represented as the product of a minimum phase function and an all pass filter, we may suppose that:
G(z) = G_min(z) G_ap(z)    (13)
The decoder shortly to be described with reference to Figure 3 is based on the assumption that the magnitude spectrum associated with G is that corresponding to

G_min(z) = 1 / [(1 − β1 z⁻¹)(1 − β2 z⁻¹)]    (14)
The decoder proceeds on the assumption that an appropriate transfer function for G_ap is

G_ap(z) = [(1 − β1 z⁻¹)(1 − β2 z⁻¹)] / [(1 − β1 z)(1 − β2 z)]    (15)

which has zeros inside the unit circle at β1 and β2 and poles outside it at 1/β1 and 1/β2. The corresponding phase spectrum for G_ap is

φ_F(ω) = arg G_ap(e^{jω}) = 2 tan⁻¹( β1 sin ω / (1 − β1 cos ω) ) + 2 tan⁻¹( β2 sin ω / (1 − β2 cos ω) )    (16)
In the decoder of Figure 3, items 6, 7 and 8 are as in Figure 1. However, the phase spectrum computed at 7 is adjusted. A unit 31 receives the pitch frequency and calculates values of φ_F in accordance with Equation (16) for the relevant values of ω - i.e. harmonics of the pitch frequency for the current frame of speech. These are then added in an adder 32 to the minimum-phase values, prior to the sinusoidal synthesiser 8. Experiments were conducted on the decoder of Figure 3, with a fixed value β1 = β2 = 0.8 (though - as will be discussed below - varying β is also possible). These showed an improvement in measured phase error (as shown in Figure 4) and also in subjective tests (Figure 5) in which listeners were asked to listen to the output of four decoders and place them in order of preference for speech quality. The choices were scored: first choice = 4, second = 3, third = 2 and fourth = 1; and the scores added.
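In code, the operation of units 31 and 32 amounts to evaluating Equation 16 at the harmonics of the current pitch frequency and adding the result to the minimum-phase values. The sketch below uses the fixed β1 = β2 = 0.8 of the experiments and naming of my own; it is an illustration of the adjustment, not a transcription of the patent's implementation.

```python
import numpy as np

def allpass_phase(w, beta1=0.8, beta2=0.8):
    """Equation 16: phase of G_ap(e^{jw}) with zeros at beta1, beta2 and poles at their reciprocals."""
    return (2 * np.arctan2(beta1 * np.sin(w), 1 - beta1 * np.cos(w))
            + 2 * np.arctan2(beta2 * np.sin(w), 1 - beta2 * np.cos(w)))

def adjusted_phases(min_phase_at_harmonics, w0, n_harmonics):
    """Unit 32: add the unit-31 adjustment to the minimum-phase values at each harmonic k*w0."""
    k = np.arange(1, n_harmonics + 1)
    return min_phase_at_harmonics + allpass_phase(k * w0)
```

The adjusted phases, together with the harmonic amplitudes, then drive the sinusoidal synthesiser 8.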
The results include figures for a Rosenberg pulse. As described by
A. E. Rosenberg in "Effect of Glottal Pulse Shape on the Quality of Natural Vowels", J. Acoust. Soc. of America, Vol. 49, No. 2, 1971, pp. 583-590, this is a pulse shape postulated for the output of the glottal filter G. The shape of a Rosenberg pulse is shown in Figure 6 and is defined as:

g(t) = A( 3(t/TP)² − 2(t/TP)³ )        0 ≤ t ≤ TP
g(t) = A( 1 − ((t − TP)/TN)² )         TP < t ≤ TP + TN
g(t) = 0                               TP + TN < t ≤ p        (17)

where p is the pitch period and TP and TN are the glottal opening and closing times respectively.
An alternative to Equation 16, therefore, is to apply at 31 a computed phase equal to the phase of g(t) from Equation (17), as shown in Figure 7. However, in order that the component of the Rosenberg pulse spectrum that can be represented by a minimum phase transfer function is not applied twice, the magnitude spectrum corresponding to Equation 17 is calculated at 71 and subtracted from the amplitude values before they are processed by the phase spectrum calculation unit 7. The results given are for TP = 0.33p, TN = 0.1p.
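For the Figure 7 arrangement, the Rosenberg pulse of Equation 17 and its spectrum can be computed directly. The sketch below, with the TP = 0.33p and TN = 0.1p values quoted above and an 80-sample pitch period chosen purely for illustration, is my own rendering rather than the patent's code.

```python
import numpy as np

def rosenberg_pulse(p, tp_ratio=0.33, tn_ratio=0.1, amplitude=1.0):
    """One pitch period of the Rosenberg glottal pulse, equation 17 (p, TP, TN in samples)."""
    tp, tn = tp_ratio * p, tn_ratio * p
    t = np.arange(p, dtype=float)
    g = np.zeros(p)
    rising = t <= tp
    falling = (t > tp) & (t <= tp + tn)
    g[rising] = amplitude * (3 * (t[rising] / tp) ** 2 - 2 * (t[rising] / tp) ** 3)
    g[falling] = amplitude * (1 - ((t[falling] - tp) / tn) ** 2)
    return g

g = rosenberg_pulse(p=80)               # e.g. an 80-sample pitch period at 8 kHz (100 Hz)
G = np.fft.rfft(g)
harmonic_phase = np.angle(G[1:])        # phase applied at unit 31 in the Figure 7 arrangement
harmonic_magnitude = np.abs(G[1:])      # magnitude of the Rosenberg spectrum, as calculated at unit 71
```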
The same considerations may be applied to arrangements in which a coder attempts to deconvolve the glottal excitation and the vocal tract response - so-called linear predictive coders. Here (Figure 8) input speech is analysed (60) frame-by-frame to determine parameters of a filter having a spectral response similar to that of the input speech. The coder then sets up a filter 61 having the inverse of this response and the speech signal is passed through this inverse filter to produce a residual signal r(n) which ideally would have a flat spectrum and which in practice is flatter than that of the original speech. The coder transmits details of the filter response, along with information (63) to enable the decoder to construct (64) an excitation signal which is to some extent similar to the residual signal and can be used by the decoder to drive a synthesis filter 65 to produce an output speech signal. Many proposals have been made for different ways of transmitting the residual information, e.g.
(a) sending, for voiced speech, a pitch period and gain value to control a pulse generator and, for unvoiced speech, a gain value to control a noise generator;
(b) a quantised version of the residual (RELP coding);
(c) a vector-quantised version of the residual (CELP coding);
(d) a coded representation of an irregular pulse train (MPLPC coding);
(e) particulars of a single cycle of the residual by which the decoder may synthesise a repeating sequence of frame length (Prototype waveform interpolation or PWI) (see W. B. Kleijn, "Encoding Speech using prototype Waveforms", IEEE Trans. Speech and Audio Processing, Vol. 1, No. 4, October 1993, pp. 386-399, and W. B. Kleijn and J. Haagen, "A Speech Coder based on Decomposition of Characteristic Waveforms", Proc. ICASSP, 1995, pp. 508-511).
In the event that the phase information about the excitation is omitted from the transmission, then a similar situation arises to that described in relation to Figure 2, namely that assumptions need to be made as to the phase spectrum to be employed. Whether phase information for the synthesis filter is included is not an issue, since LPC analysis generally produces a minimum phase transfer function in any case; it is therefore immaterial for the purposes of the present discussion whether the phase response is included in the transmitted filter information (typically a set of filter coefficients) or whether it is computed at the decoder on the basis of a minimum phase assumption.
Of particular interest in this context are PWI coders, where commonly the extracted prototypical residual pitch cycle is analysed using a Fourier transform. Rather than simply quantising the Fourier coefficients, a saving in transmission capacity can be made by sending only the magnitude and the pitch period. Thus in the arrangement of Figure 9, where items identical to those in Figure 8 carry the same reference numerals, the excitation unit 63 - here operating according to the PWI principle and producing at its output sets of Fourier coefficients - is followed by a unit 80 which extracts only the magnitude information and transmits this to the decoder. At the decoder a unit 91 - analogous to unit 31 in Figure 3 - calculates the phase adjustment values φ_F using Equation 16 and controls the phase of an excitation generator 64. In this example, β1 is fixed at 0.95 whilst β2 is controlled as a function of the pitch period p, in accordance with the following table:
Table I: The value of β2 used for each range of pitch periods (in Hz).
These values are chosen so that the all-pass transfer function of Equation 15 has a phase response equivalent to that part of the phase spectrum of a Rosenberg pulse having TP = 0.4p and TN = 0.16p which is not modelled by the LPC synthesis filter 65. As before, the adjustment is added in an adder 83 and converted back into Fourier coefficients before passing to the PWI excitation generator 64.
The calculation unit 91 may be realised by a digital signal processing unit programmed to implement Equation 16.
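One possible digital-signal-processing realisation of this part of the decoder is sketched below: the received harmonic magnitudes are given the Equation 16 phase and inverse-transformed into one prototype excitation cycle for generator 64. The helper `allpass_phase` is the one sketched earlier, `beta2_for_pitch` merely stands in for Table I (whose values are not reproduced above), and none of the names are taken from the patent.

```python
import numpy as np

def beta2_for_pitch(pitch_period):
    # placeholder for Table I: the decoder selects beta2 from the pitch period;
    # the actual table values are not reproduced here
    return 0.9

def prototype_cycle(harmonic_mags, pitch_period, beta1=0.95):
    """Rebuild one pitch cycle of excitation from magnitude-only harmonic information.

    harmonic_mags: magnitudes of harmonics 1..N (N no greater than pitch_period // 2).
    """
    k = np.arange(1, len(harmonic_mags) + 1)
    w0 = 2 * np.pi / pitch_period
    phases = allpass_phase(k * w0, beta1, beta2_for_pitch(pitch_period))
    spectrum = np.zeros(pitch_period // 2 + 1, dtype=complex)
    spectrum[1:len(harmonic_mags) + 1] = harmonic_mags * np.exp(1j * phases)
    return np.fft.irfft(spectrum, n=pitch_period)
```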
It is of interest to consider the effect of these adjustments in terms of poles and zeros on the z-plane. The supposed total transfer function H(z) is the product of G, V and L and thus has, inside the unit circle, P poles at p_i and one zero at α, and, outside the unit circle, two poles at 1/β1 and 1/β2, as illustrated in Figure 10. The effect of the LPC analysis is to produce an inverse filter 61 which flattens the spectrum by means of zeros approximately coinciding with the poles at p_i. The filter, being a minimum phase filter, cannot produce zeros outside the unit circle at 1/β1 and 1/β2 but instead produces zeros at β1 and β2, which tend to flatten the magnitude response, but not the phase response (the filter cannot produce a pole to cancel the zero at α, but as β1 usually has a similar value to α it is common to assume that the α zero and the 1/β1 pole cancel in the magnitude spectrum, so that the inverse filter has zeros just at p_i and β2). Thus the residual has a phase spectrum represented in the z-plane by two zeros at β1 and β2 (where the β's have values corresponding to the original signal) and poles at 1/β1 and 1/β2 (where the β's have values as determined by the LPC analysis). This information having been lost, it is approximated by the all-pass filter computation according to Equations (15) and (16), which have zeros and poles at these positions.
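The behaviour described here can be checked numerically: on the unit circle the Equation 15 transfer function has unit magnitude everywhere but a frequency-dependent phase. A short verification sketch, with β values chosen only for illustration:

```python
import numpy as np

beta1, beta2 = 0.95, 0.9                 # illustrative values only
w = np.linspace(0.01, np.pi - 0.01, 512)
z = np.exp(1j * w)

# Equation 15 evaluated on the unit circle: zeros at beta1, beta2; poles at 1/beta1, 1/beta2
G = ((1 - beta1 / z) * (1 - beta2 / z)) / ((1 - beta1 * z) * (1 - beta2 * z))

print(np.allclose(np.abs(G), 1.0))       # True: the magnitude response is flat (all-pass)
print(np.ptp(np.unwrap(np.angle(G))))    # non-zero: the phase response is not flat
```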
This description assumes a phase adjustment determined at all frequencies by Equation 16. However one may alternatively apply Equation 16 only in the lower part of the frequency range - up to a limit which may be fixed or may depend on the nature of the speech - and apply a random phase to higher frequency components.
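A straightforward reading of this variant is to apply Equation 16 to harmonics below some cut-off frequency and to draw random phases above it; the cut-off, the random-number handling and the names below are illustrative choices of mine (the helper `allpass_phase` is the one sketched earlier).

```python
import numpy as np

def mixed_phase(w0, n_harmonics, cutoff_rad=0.5 * np.pi, rng=None):
    """Equation 16 phase for harmonics below cutoff_rad, uniform random phase above it."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.arange(1, n_harmonics + 1) * w0
    phase = allpass_phase(w)
    high = w > cutoff_rad
    phase[high] = rng.uniform(-np.pi, np.pi, int(high.sum()))
    return phase
```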
The arrangements so far described for Figure 9 are designed primarily for voiced speech. To accommodate unvoiced speech, the coder has, in conventional manner, a voiced/unvoiced speech detector 92 which causes the decoder to switch, via a switch 93, between the excitation generator 64 and a noise generator whose amplitude is controlled by a gain signal from the coder.
Although the adjustment has been illustrated by addition of phase values, this is not the only way of achieving the desired result; for example the synthesis filter 65 could instead be followed (or preceded) by an all-pass filter having the response of Equation (15).
It should be noted that, although the decoders described have been presented in terms of the decoding of signals coded and transmitted thereto, they may equally well serve to generate speech from coded signals stored and later retrieved - i.e. they could form part of a speech synthesiser.

Claims

1. A decoder for speech signals comprising: means for receiving magnitude spectral information for synthesis of a time-varying signal; means for computing, from the magnitude spectral information, phase spectrum information corresponding to a minimum phase filter which has a magnitude spectrum corresponding to the magnitude spectral information; means for generating, from the magnitude spectral information and the phase spectral information, the time-varying signal; and phase adjustment means operable to modify the phase spectrum of the signal.
2. A decoder for decoding speech signals comprising information defining the response of a minimum phase synthesis filter and, for synthesis of an excitation signal, magnitude spectral information, the decoder comprising: means for generating, from the magnitude spectral information, an excitation signal; a synthesis filter controlled by the response information and connected to filter the excitation signal; and phase adjustment means for estimating a phase-adjustment signal to modify the phase of the signal.
3. A decoder according to Claim 2 in which the excitation generating means are connected to receive the phase adjustment signal so as to generate an excitation having a phase spectrum determined thereby.
4. A decoder according to Claim 1 or Claim 2 in which the phase adjustment means are arranged in operation to modify the phase of the signal after generation thereof.
5. A decoder according to any one of the preceding claims in which the phase adjustment means are operable to adjust the phase in accordance with the transfer function of an all-pass filter having, in a z-plane representation, at least one pole outside the unit circle.
6. A decoder according to any one of the preceding claims in which the phase adjustment means are operable to adjust the phase in accordance with the transfer function of an all-pass filter having, in a z-plane representation, two poles outside the unit circle.
7. A decoder according to claim 5 or 6 in which the adjustment means are arranged in operation to vary the position of the or a said pole as a function of pitch period information received by the decoder.
8. A method of coding and decoding speech signals, comprising:
(a) generating signals representing the magnitude spectrum of the speech signal;
(b) receiving the signals; (c) generating from the received signals a synthetic speech signal having a magnitude spectrum determined by the received signals and having a phase spectrum which corresponds to a transfer function having, when considered as a z- plane plot, at least one pole outside the unit circle.
9. A method according to claim 8 in which the phase spectrum of the synthetic speech signal is determined by computing a minimum-phase spectrum from the received signals and forming a composite phase spectrum which is the combination of the minimum-phase spectrum and a spectrum corresponding to the said pole(s).
10. A method according to claim 8 in which the signals include signals defining a minimum-phase synthesis filter and the phase spectrum of the synthetic speech signal is determined by the defined synthesis filter and by a phase spectrum corresponding to the said pole(s).
PCT/GB1997/002037 1996-07-30 1997-07-28 Speech coding WO1998005029A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP97933782A EP0917709B1 (en) 1996-07-30 1997-07-28 Speech coding
AU37024/97A AU3702497A (en) 1996-07-30 1997-07-28 Speech coding
US09/029,832 US6219637B1 (en) 1996-07-30 1997-07-28 Speech coding/decoding using phase spectrum corresponding to a transfer function having at least one pole outside the unit circle
JP10508614A JP2000515992A (en) 1996-07-30 1997-07-28 Language coding
DE69702261T DE69702261T2 (en) 1996-07-30 1997-07-28 LANGUAGE CODING

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP96305576.9 1996-07-30
EP96305576 1996-07-30

Publications (1)

Publication Number Publication Date
WO1998005029A1 true WO1998005029A1 (en) 1998-02-05

Family

ID=8225033

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1997/002037 WO1998005029A1 (en) 1996-07-30 1997-07-28 Speech coding

Country Status (6)

Country Link
US (1) US6219637B1 (en)
EP (1) EP0917709B1 (en)
JP (1) JP2000515992A (en)
AU (1) AU3702497A (en)
DE (1) DE69702261T2 (en)
WO (1) WO1998005029A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0987680A1 (en) * 1998-09-17 2000-03-22 BRITISH TELECOMMUNICATIONS public limited company Audio signal processing
US6535847B1 (en) 1998-09-17 2003-03-18 British Telecommunications Public Limited Company Audio signal processing
EP1617416A2 (en) * 1999-07-19 2006-01-18 Qualcom Incorporated Method and apparatus for subsampling phase spectrum information

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3644263B2 (en) * 1998-07-31 2005-04-27 ヤマハ株式会社 Waveform forming apparatus and method
US7039581B1 (en) * 1999-09-22 2006-05-02 Texas Instruments Incorporated Hybrid speed coding and system
US20030048129A1 (en) * 2001-09-07 2003-03-13 Arthur Sheiman Time varying filter with zero and/or pole migration
US7353168B2 (en) * 2001-10-03 2008-04-01 Broadcom Corporation Method and apparatus to eliminate discontinuities in adaptively filtered signals
JP2005532585A (en) * 2002-07-08 2005-10-27 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Audio coding
PL376861A1 (en) * 2002-11-29 2006-01-09 Koninklijke Philips Electronics N.V. Coding an audio signal
GB2398981B (en) * 2003-02-27 2005-09-14 Motorola Inc Speech communication unit and method for synthesising speech therein
KR101019936B1 (en) * 2005-12-02 2011-03-09 퀄컴 인코포레이티드 Systems, methods, and apparatus for alignment of speech waveforms
JP6011039B2 (en) * 2011-06-07 2016-10-19 ヤマハ株式会社 Speech synthesis apparatus and speech synthesis method
KR101475894B1 (en) * 2013-06-21 2014-12-23 서울대학교산학협력단 Method and apparatus for improving disordered voice
JP6345780B2 (en) 2013-11-22 2018-06-20 クゥアルコム・インコーポレイテッドQualcomm Incorporated Selective phase compensation in highband coding.
JP6637082B2 (en) * 2015-12-10 2020-01-29 ▲華▼侃如 Speech analysis and synthesis method based on harmonic model and sound source-vocal tract feature decomposition
CN113114160B (en) * 2021-05-25 2024-04-02 东南大学 Linear frequency modulation signal noise reduction method based on time-varying filter

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0259950A1 (en) * 1986-09-11 1988-03-16 AT&T Corp. Digital speech sinusoidal vocoder with transmission of only a subset of harmonics
EP0698876A2 (en) * 1994-08-23 1996-02-28 Sony Corporation Method of decoding encoded speech signals

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4475227A (en) * 1982-04-14 1984-10-02 At&T Bell Laboratories Adaptive prediction
JPS6031325A (en) * 1983-07-29 1985-02-18 Nec Corp System and circuit of forecast stop adpcm coding
EP0243561B1 (en) * 1986-04-30 1991-04-10 International Business Machines Corporation Tone detection process and device for implementing said process
US4969192A (en) * 1987-04-06 1990-11-06 Voicecraft, Inc. Vector adaptive predictive coder for speech and audio
GB9417185D0 (en) * 1994-08-25 1994-10-12 Adaptive Audio Ltd Sounds recording and reproduction systems

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0259950A1 (en) * 1986-09-11 1988-03-16 AT&T Corp. Digital speech sinusoidal vocoder with transmission of only a subset of harmonics
EP0698876A2 (en) * 1994-08-23 1996-02-28 Sony Corporation Method of decoding encoded speech signals

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ARJMAND M M ET AL: "Pitch-congruent baseband speech coding", PROCEEDINGS OF ICASSP 83. IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, BOSTON, MA, USA, 14-16 APRIL 1983, vol. 3, 14 April 1983 (1983-04-14) - 16 April 1983 (1983-04-16), 1983, NEW YORK, NY, USA, IEEE, USA, pages 1324 - 1327 vol.3, XP002022589 *
KAZUNORI OZAWA: "A 4.8 KB/S HIGH-QUALITY SPEECH CODING USING VARIOUS TYPES OF EXCITATION SIGNALS", PROCEEDINGS OF THE EUROPEAN CONFERENCE ON SPEECH COMMUNICATION AND TECHNOLOGY (EUROSPEECH), PARIS, SEPT. 26 - 28, 1989, vol. 1, 26 September 1989 (1989-09-26), TUBACH J P;MARIANI J J, pages 306 - 309, XP000209626 *
MCAULAY R ET AL: "19 SINE-WAVE AMPLITUDE CODING AT LOW DATA RATES", ADVANCES IN SPEECH CODING, VANCOUVER, SEPT. 5 - 8, 1989, no. -, 1 January 1991 (1991-01-01), ATAL B S;CUPERMAN V; GERSHO A, pages 203 - 213, XP000419275 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0987680A1 (en) * 1998-09-17 2000-03-22 BRITISH TELECOMMUNICATIONS public limited company Audio signal processing
US6535847B1 (en) 1998-09-17 2003-03-18 British Telecommunications Public Limited Company Audio signal processing
EP1617416A2 (en) * 1999-07-19 2006-01-18 Qualcom Incorporated Method and apparatus for subsampling phase spectrum information
EP1617416A3 (en) * 1999-07-19 2006-05-03 Qualcom Incorporated Method and apparatus for subsampling phase spectrum information
KR100752001B1 (en) * 1999-07-19 2007-08-28 콸콤 인코포레이티드 Method and apparatus for subsampling phase spectrum information
KR100754580B1 (en) * 1999-07-19 2007-09-05 콸콤 인코포레이티드 Method and apparatus for subsampling phase spectrum information

Also Published As

Publication number Publication date
AU3702497A (en) 1998-02-20
DE69702261T2 (en) 2001-01-25
DE69702261D1 (en) 2000-07-13
JP2000515992A (en) 2000-11-28
EP0917709B1 (en) 2000-06-07
EP0917709A1 (en) 1999-05-26
US6219637B1 (en) 2001-04-17

Similar Documents

Publication Publication Date Title
US4937873A (en) Computationally efficient sine wave synthesis for acoustic waveform processing
AU656787B2 (en) Auditory model for parametrization of speech
JP2787179B2 (en) Speech synthesis method for speech synthesis system
US6219637B1 (en) Speech coding/decoding using phase spectrum corresponding to a transfer function having at least one pole outside the unit circle
US5001758A (en) Voice coding process and device for implementing said process
Moulines et al. Time-domain and frequency-domain techniques for prosodic modification of speech
EP1141946B1 (en) Coded enhancement feature for improved performance in coding communication signals
US20020052736A1 (en) Harmonic-noise speech coding algorithm and coder using cepstrum analysis method
USRE43099E1 (en) Speech coder methods and systems
JPH10307599A (en) Waveform interpolating voice coding using spline
Quatieri et al. Phase coherence in speech reconstruction for enhancement and coding applications
JP3191926B2 (en) Sound waveform coding method
Pantazis et al. Analysis/synthesis of speech based on an adaptive quasi-harmonic plus noise model
US6173256B1 (en) Method and apparatus for audio representation of speech that has been encoded according to the LPC principle, through adding noise to constituent signals therein
Sun et al. Phase modelling of speech excitation for low bit-rate sinusoidal transform coding
CA2124713C (en) Long term predictor
Burnett et al. A mixed prototype waveform/CELP coder for sub 3 kbit/s
McCree Low-bit-rate speech coding
Fries Hybrid time-and frequency-domain speech synthesis with extended glottal source generation
JPH07261798A (en) Voice analyzing and synthesizing device
Rank Exploiting improved parameter smoothing within a hybrid concatenative/LPC speech synthesizer
Yang et al. High-quality harmonic coding at very low bit rates
Andrews Design of a high quality 2400 bit per second enhanced multiband excitation vocoder
JPH09258796A (en) Voice synthesizing method
Cheetham et al. All-pass excitation phase modelling for low bit-rate speech coding

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH KE LS MW SD SZ UG ZW AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 09029832

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1997933782

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1997933782

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA

WWG Wipo information: grant in national office

Ref document number: 1997933782

Country of ref document: EP