GB2367990A - Speech processing system
- Publication number: GB2367990A
- Application number: GB0020310A
- Authority: GB (United Kingdom)
- Prior art keywords: values, audio signal, parameter values, speech, received
- Legal status: Withdrawn
Classifications
- G10L15/02 - Feature extraction for speech recognition; Selection of recognition unit
- G10L19/0018 - Speech coding using phonetic or linguistical decoding of the source; Reconstruction using text-to-speech synthesis
- G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
- G10L15/20 - Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
Abstract
An audio transmission system is provided which is operable to receive sets of signal values representative of a speech signal generated by a speech source 7. The system includes a statistical analysis unit 21 which is operable to determine parameters representative of each of the sets of signal values, and an encoder 71 to encode these parameter values and then to transmit them to a receiver circuit 75. At the receiver, the encoded parameters are decoded at 76 to recover the parameter values. The recovered parameter values may then be used to resynthesise the speech using a speech synthesiser 79, or they may be used by a speech recognition system 77 to generate a recognition result. The statistical analysis unit 21 applies a set of received audio signal values to a stored predetermined function to give a probability density for parameters of a predetermined audio model, from which the parameter values best representing the set are then derived. A measure of the quality of the audio signal may be determined so that an appropriate coding and/or decoding method can be employed.
Description
SPEECH PROCESSING SYSTEM

The present invention relates to an apparatus for and method of speech transmission. The invention particularly relates to the statistical processing of an input speech signal to derive parameter values defining the speech production system which generated the speech, and to the subsequent transmission of those parameter values.
A number of speech transmission systems have been proposed. These can be categorised into systems which transmit essentially the speech signals and those which parameterise the speech first (based on some speech model) and transmit the parameters. The problem with the first technique is that it requires a relatively large bandwidth to transmit the speech signals compared with parameterising the speech signals first and then transmitting the parameters. However, parameterising the speech results in more complex transmitter circuitry in order to determine the parameter values and more complex receiver circuitry in order to regenerate the speech from the transmitted parameters. Further, the second technique also reduces the quality of the regenerated speech at the receiver, since some speech information will inevitably be lost through the parameterisation.
Many different techniques are known to parameterise speech signals. One of the most commonly used techniques is based on a linear prediction analysis of the speech. With this technique, the entire speech signal is divided into a number of time frames (typically having a duration of 10 to 30 ms) and a set of linear prediction parameters (or coefficients) is calculated to represent the speech within each time frame. This linear prediction analysis assumes that the value of a current speech sample can be predicted from a linear weighted combination of the k most recent speech samples. Based on this model, the task of the linear prediction analysis is to identify the values of the weightings (or coefficients) which minimise the mean squared error between the actual value of the current speech sample and the predicted value of the current speech sample.
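By way of illustration, this kind of conventional linear prediction analysis can be sketched as follows. This is a minimal numpy sketch under the stated least-squares formulation; the function name and the use of a direct least-squares solve (rather than, say, the Levinson-Durbin recursion) are illustrative choices, not details from the patent.

```python
import numpy as np

def linear_prediction_coefficients(frame: np.ndarray, k: int) -> np.ndarray:
    """Estimate k linear prediction coefficients for one frame by minimising
    the mean squared error between each sample and its prediction from the
    k most recent samples."""
    N = len(frame)
    # Each row holds the k samples preceding the corresponding target sample.
    S = np.column_stack([frame[k - i - 1:N - i - 1] for i in range(k)])
    target = frame[k:]
    coeffs, *_ = np.linalg.lstsq(S, target, rcond=None)
    return coeffs
```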
One of the problems with this linear prediction analysis is that it analyses the speech within each frame in isolation from the speech within other frames. It also assumes that the same number of weightings or coefficients represent the speech within each time frame. As a result, errors are introduced into the representation.
An aim of the present invention is to provide an alternative technique for parameterising speech prior to transmission from a transmitting terminal to a receiving terminal.
According to one aspect, the present invention provides an audio encoding system comprising: a memory for storing a probability density function for parameters of a predetermined audio model which is assumed to have generated a set of received audio signal values; means for applying the set of received audio signal values to the probability density function; means for processing said function, with the set of received audio signal values applied, to determine samples of parameter values from said probability density function; means for analysing at least some of the determined samples to determine parameter values that are representative of the received audio signal values; and means for encoding said determined parameter values to generate encoded data representative of the received audio signal values.
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which:

Figure 1 is a schematic block diagram illustrating the principal components of a speech encoding and transmission system;

Figure 2a is a flow chart illustrating the processing steps performed by the transmission side of the system shown in Figure 1;

Figure 2b is a flow chart illustrating the processing steps performed by the receiving side of the system shown in Figure 1;

Figure 3 is a block diagram representing a model employed by a statistical analysis unit which forms part of the speech transmission system shown in Figure 1;

Figure 4 is a flow chart illustrating the processing steps performed by a model order selection unit forming part of the statistical analysis unit shown in Figure 1;

Figure 5 is a flow chart illustrating the main processing steps employed by a Simulation Smoother which forms part of the statistical analysis unit shown in Figure 1;

Figure 6 is a block diagram illustrating the main processing components of the statistical analysis unit shown in Figure 1;

Figure 7 is a memory map illustrating the data that is stored in a memory which forms part of the statistical analysis unit shown in Figure 1;

Figure 8 is a flow chart illustrating the main processing steps performed by the statistical analysis unit shown in Figure 6;

Figure 9a is a histogram for the model order of an auto regressive filter model which forms part of the model shown in Figure 3;

Figure 9b is a histogram for the variance of process noise modelled by the model shown in Figure 3; and

Figure 9c is a histogram for a third coefficient of the AR filter model.
Embodiments of the present invention can be implemented in computer hardware, computer software or a mix of computer hardware and software. When implemented using computer software, the program instructions that make the programmable hardware operate in accordance with the present invention may be supplied on, for example, a storage device such as a magnetic disc, or by downloading the software from a computer device over a computer network.
Figure 1 shows a speech encoding and transmission system. Electrical signals representative of input speech from the microphone 7 are input to a filter 15 which removes unwanted frequencies (in this embodiment, frequencies above 8 kHz). The filtered signal is then sampled (at a rate of 16 kHz) and digitised by an analogue to digital converter 17, and the digitised speech samples are then stored in a buffer 19. Sequential blocks (or frames) of speech samples are then passed from the buffer 19 to a statistical analysis unit 21 which performs a statistical analysis of each frame of speech samples in sequence to determine, amongst other things, a set of auto regressive (AR) coefficients representative of the speech within the frame. In this embodiment, the AR coefficients output by the statistical analysis unit 21 are input to a channel encoder 71 which encodes the sequences of AR filter coefficients so that they are in a more suitable form for transmission through a communications channel. The encoded AR filter coefficients are then passed to a transmitter 73 where the encoded data is used to modulate a carrier signal which is then transmitted to a remote receiver 75. The receiver 75 demodulates the received signal to recover the encoded data, which is then decoded by a decoder 76. The sequence of AR coefficients output by the decoder is then either passed to a speech recognition unit 77, which compares the sequences with stored reference models (not shown) to generate a recognition result, or to a speech synthesis unit 79 which regenerates the speech and outputs it via a loudspeaker 81. As shown, prior to application to the speech synthesis unit 79, the sequences of AR coefficients may also pass through an optional processing unit 83 (shown in phantom) which can be used to manipulate the characteristics of the speech that is synthesised.
Figure 2a is a flow chart illustrating the processing steps performed by the channel encoder 71 of the system shown in Figure 1. As shown, in step S101 the channel encoder 71 receives the speech parameters and the quality indicator for the current segment of speech to be transmitted. The processing then proceeds to step S103 where the channel encoder 71 determines whether or not the speech parameters to be transmitted were generated from a high quality speech signal. If they were, then the processing proceeds to step S105 where the channel encoder 71 encodes the speech parameters using an efficient encoding technique which minimises the data to be transmitted. If, on the other hand, step S103 determines that the speech parameters were derived from low quality speech, then the processing proceeds to step S107 where the channel encoder 71 uses a less efficient encoding technique (or no encoding) which minimises the information lost in the encoding. After step S105 or S107, the processing proceeds to step S109 where the channel encoder 71 outputs the data to be transmitted to the transmitter unit 73, which transmits the encoded speech to the receiver 75. In this embodiment, the channel encoder 71 also encodes and transmits the quality indicator to the receiver 75 so that the receiver can decode the encoded speech parameters using the appropriate decoding technique. In order that the receiver can always decode the encoded speech quality indicator, the channel encoder 71 encodes this information using a standard encoding technique regardless of what the quality indicator is.
Figure 2b is a flow chart illustrating the processing steps performed by the decoder 76 shown in Figure 1. As shown, in step S111, the decoder 76 recovers the quality indicator from the received encoded speech data. The processing then proceeds to step S113 where the decoder 76 determines whether or not the received encoded speech parameters were generated from a high quality speech signal. If they were, then the processing proceeds to step S115 where the decoder uses a decoding technique corresponding to the efficient encoding technique used in step S105. If, on the other hand, the decoder 76 determines that the received encoded speech data was generated from a low quality speech signal, then the processing proceeds to step S117 where the decoder 76 decodes the received data using a decoding technique corresponding to the less efficient encoding technique used in step S107. The processing then proceeds to step S119 where the decoded speech parameters are output either to the speech recognition unit 77 or to the speech synthesis unit 79.
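The quality-dependent branching performed by the channel encoder 71 and the decoder 76 (steps S101 to S119) might be sketched as follows. This is a minimal sketch only: the function names, the choice of the two codecs (coarse 8-bit quantisation versus raw floats, with parameters assumed to lie in [-1, 1]) and the one-byte quality flag are all illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def encode_segment(params: np.ndarray, high_quality: bool) -> bytes:
    """Channel-encode one segment's speech parameters (cf. steps S101-S109).
    The quality indicator itself is always encoded the same fixed way so the
    receiver can recover it regardless of which codec was used."""
    header = bytes([1 if high_quality else 0])
    if high_quality:
        # Efficient codec: coarse 8-bit quantisation minimises data sent.
        payload = np.clip(np.round(params * 127), -128, 127).astype(np.int8).tobytes()
    else:
        # Less efficient codec: raw float64 minimises information lost.
        payload = params.astype(np.float64).tobytes()
    return header + payload

def decode_segment(data: bytes) -> np.ndarray:
    """Decode one received segment (cf. steps S111-S119)."""
    if data[0] == 1:
        return np.frombuffer(data[1:], dtype=np.int8).astype(float) / 127.0
    return np.frombuffer(data[1:], dtype=np.float64)
```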
Statistical Analysis Unit - Theory and Overview

As mentioned above, the statistical analysis unit 21 analyses the speech within successive frames of the input speech signal. In most speech processing systems, the frames are overlapping. However, in this embodiment, the frames of speech are non-overlapping and have a duration of 20 ms which, with the 16 kHz sampling rate of the analogue to digital converter 17, results in a frame size of 320 samples.
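For concreteness, the framing described here (non-overlapping 20 ms frames of 320 samples at 16 kHz) might be sketched as follows; the function name is an illustrative assumption:

```python
import numpy as np

FRAME_SIZE = 320  # 20 ms at a 16 kHz sampling rate

def frames(samples: np.ndarray):
    """Yield successive non-overlapping frames of 320 speech samples."""
    for start in range(0, len(samples) - FRAME_SIZE + 1, FRAME_SIZE):
        yield samples[start:start + FRAME_SIZE]
```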
In order to perform the statistical analysis on each of the frames, the analysis unit 21 assumes that there is an underlying process which generated each sample within the frame. The model of this process used in this embodiment is shown in Figure 3. As shown, the process is modelled by a speech source 31 which generates, at time t = n, a raw speech sample s(n). Since there are physical constraints on the movement of the speech articulators, there is some correlation between neighbouring speech samples. Therefore, in this embodiment, the speech source 31 is modelled by an auto regressive (AR) process.
In other words, the statistical analysis unit 21 assumes that a current raw speech sample (s(n)) can be determined from a linear weighted combination of the most recent previous raw speech samples, i.e.:

$$s(n) = a_1 s(n-1) + a_2 s(n-2) + \dots + a_k s(n-k) + e(n) \tag{1}$$

where $a_1, a_2, \dots, a_k$ are the AR filter coefficients representing the amount of correlation between the speech samples; k is the AR filter model order; and e(n) represents random process noise which is involved in the generation of the raw speech samples. As those skilled in the art of speech processing will appreciate, these AR filter coefficients are the same coefficients that the linear prediction analysis estimates, albeit using a different processing technique.
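As an illustration of equation (1), the following sketch generates raw speech samples from an AR process with given coefficients. It is a minimal sketch; the coefficient and variance values shown are arbitrary, not taken from the patent.

```python
import numpy as np

def generate_ar(a: np.ndarray, n_samples: int, sigma_e: float, rng=None) -> np.ndarray:
    """Generate samples from s(n) = a1*s(n-1) + ... + ak*s(n-k) + e(n),
    with e(n) zero-mean Gaussian process noise of variance sigma_e**2."""
    rng = rng or np.random.default_rng(0)
    k = len(a)
    s = np.zeros(n_samples + k)          # k leading zeros as initial conditions
    for n in range(k, n_samples + k):
        s[n] = a @ s[n - k:n][::-1] + rng.normal(0.0, sigma_e)
    return s[k:]

raw = generate_ar(np.array([0.8, -0.2]), 320, 0.1)   # one 320-sample frame
```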
As shown in Figure 3, the raw speech samples s(n) generated by the speech source are input to a channel 33 which models the acoustic environment between the speech source 31 and the output of the analogue to digital converter 17. Ideally, the channel 33 should simply attenuate the speech as it travels from the source 31 to the microphone. However, due to reverberation and other distortive effects, the signal (y(n)) output by the analogue to digital converter 17 will depend not only on the current raw speech sample (s(n)) but will also depend upon previous raw speech samples. Therefore, in this embodiment, the statistical analysis unit 21 models the channel 33 by a moving average (MA) filter, i.e.:
$$y(n) = h_0 s(n) + h_1 s(n-1) + h_2 s(n-2) + \dots + h_r s(n-r) + \varepsilon(n) \tag{2}$$

where y(n) represents the signal sample output by the analogue to digital converter 17 at time t = n; $h_0, h_1, h_2, \dots, h_r$ are the channel filter coefficients representing the amount of distortion within the channel 33; r is the channel filter model order; and ε(n) represents a random additive measurement noise component.

For the current frame of speech being processed, the filter coefficients for both the speech source and the channel are assumed to be constant but unknown. Therefore, considering all N samples (where N = 320) in the current frame being processed gives:

$$\begin{aligned} s(n) &= a_1 s(n-1) + a_2 s(n-2) + \dots + a_k s(n-k) + e(n) \\ s(n-1) &= a_1 s(n-2) + a_2 s(n-3) + \dots + a_k s(n-k-1) + e(n-1) \\ &\;\;\vdots \\ s(n-N+1) &= a_1 s(n-N) + a_2 s(n-N-1) + \dots + a_k s(n-k-N+1) + e(n-N+1) \end{aligned} \tag{3}$$

which can be written in vector form as:

$$s(n) = S\,a + e(n) \tag{4}$$

where

$$S = \begin{bmatrix} s(n-1) & s(n-2) & \cdots & s(n-k) \\ s(n-2) & s(n-3) & \cdots & s(n-k-1) \\ \vdots & \vdots & & \vdots \\ s(n-N) & s(n-N-1) & \cdots & s(n-k-N+1) \end{bmatrix}_{N \times k}$$

and

$$a = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \end{bmatrix}_{k \times 1} \qquad s(n) = \begin{bmatrix} s(n) \\ s(n-1) \\ \vdots \\ s(n-N+1) \end{bmatrix}_{N \times 1} \qquad e(n) = \begin{bmatrix} e(n) \\ e(n-1) \\ \vdots \\ e(n-N+1) \end{bmatrix}_{N \times 1}$$

As will be apparent from the following discussion, it is also convenient to rewrite equation (3) in terms of the random error component (often referred to as the residual) e(n). This gives:

$$\begin{aligned} e(n) &= s(n) - a_1 s(n-1) - a_2 s(n-2) - \dots - a_k s(n-k) \\ e(n-1) &= s(n-1) - a_1 s(n-2) - a_2 s(n-3) - \dots - a_k s(n-k-1) \\ &\;\;\vdots \\ e(n-N+1) &= s(n-N+1) - a_1 s(n-N) - a_2 s(n-N-1) - \dots - a_k s(n-k-N+1) \end{aligned} \tag{5}$$

which can be written in vector notation as:

$$e(n) = \tilde{A}\,s(n) \tag{6}$$

where

$$\tilde{A} = \begin{bmatrix} 1 & -a_1 & -a_2 & \cdots & -a_k & 0 & 0 & \cdots & 0 \\ 0 & 1 & -a_1 & \cdots & -a_{k-1} & -a_k & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & -a_{k-2} & -a_{k-1} & -a_k & \cdots & 0 \\ \vdots & & & \ddots & & & & & \vdots \end{bmatrix}$$

Similarly, considering the channel model defined by equation (2), with $h_0 = 1$ (since this provides a more stable solution), gives:

$$\begin{aligned} q(n) &= h_1 s(n-1) + h_2 s(n-2) + \dots + h_r s(n-r) + \varepsilon(n) \\ q(n-1) &= h_1 s(n-2) + h_2 s(n-3) + \dots + h_r s(n-r-1) + \varepsilon(n-1) \\ &\;\;\vdots \\ q(n-N+1) &= h_1 s(n-N) + h_2 s(n-N-1) + \dots + h_r s(n-r-N+1) + \varepsilon(n-N+1) \end{aligned} \tag{7}$$

(where q(n) = y(n) - s(n)), which can be written in vector form as:

$$q(n) = Y\,h + \varepsilon(n) \tag{8}$$

where

$$Y = \begin{bmatrix} s(n-1) & s(n-2) & \cdots & s(n-r) \\ s(n-2) & s(n-3) & \cdots & s(n-r-1) \\ \vdots & \vdots & & \vdots \\ s(n-N) & s(n-N-1) & \cdots & s(n-r-N+1) \end{bmatrix}_{N \times r}$$

and

$$h = \begin{bmatrix} h_1 \\ h_2 \\ \vdots \\ h_r \end{bmatrix}_{r \times 1} \qquad q(n) = \begin{bmatrix} q(n) \\ q(n-1) \\ \vdots \\ q(n-N+1) \end{bmatrix}_{N \times 1} \qquad \varepsilon(n) = \begin{bmatrix} \varepsilon(n) \\ \varepsilon(n-1) \\ \vdots \\ \varepsilon(n-N+1) \end{bmatrix}_{N \times 1}$$
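A generative sketch of the full source-plus-channel model of equations (1) and (2) is given below, under the stated Gaussian noise assumptions. All numeric values are arbitrary illustrations, and the stand-in input simply replaces the AR-source output of the earlier sketch.

```python
import numpy as np

def observe_through_channel(raw, h, sigma_eps, rng=None):
    """Distort raw speech with the MA channel of equation (2), taking
    h0 = 1, plus additive zero-mean Gaussian measurement noise eps(n)."""
    rng = rng or np.random.default_rng(1)
    taps = np.concatenate(([1.0], h))         # h0 = 1 for a more stable solution
    y = np.convolve(raw, taps)[:len(raw)]     # y(n) = sum_i h_i * s(n - i)
    return y + rng.normal(0.0, sigma_eps, size=len(raw))

raw = np.random.default_rng(0).normal(size=320)   # stand-in for AR source output
y = observe_through_channel(raw, np.array([0.5, 0.25]), 0.05)
```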
max 1 2 '5(n) a I p(a,k,h,r,CYe Ge. I y- (n) (9) 15 where Ge 2 and a,' represent the process and measurement noise statistics respectively. As those skilled in the art will appreciate, this function defines the probability that a particular speech model, channel model, raw speech samples and noise statistics generated 20 the observed frame of speech samples (y(n)) from the analogue to digital converter. To do this, the statistical analysis unit 21 must determine what this function looks like. This problem can be simplified by rearranging this probability density function using Bayes 25 law to give:
2 2 P( 2)p((F2 p(y-(n)J&(n),h,r,u,)p(s(n)Ja,k G')p(aJk)p(&Jr) 0,)p(k)p(r) (10) p(y(n)) 5 As those skilled in the art will appreciate, the denominator of equation (10) can be ignored since the probability of the signals from the analogue to digital converter is constant for all choices of model. Therefore, the AR filter coefficients that maximise the 10 function defined by equation (9) will also maximise the numerator of equation (10).
Each of the terms on the numerator of equation (10) will now be considered in turn.
p(s(n) | a, k, σe²)

This term represents the joint probability density function for generating the vector of raw speech samples (s(n)) during a frame, given the AR filter coefficients (a), the AR filter model order (k) and the process noise statistics (σe²). From equation (6) above, this joint probability density function for the raw speech samples can be determined from the joint probability density function for the process noise. In particular, p(s(n) | a, k, σe²) is given by:

$$p(s(n) \mid a, k, \sigma_e^2) = p(e(n)) \left| \frac{\partial e(n)}{\partial s(n)} \right|_{e(n) = s(n) - Sa} \tag{11}$$

where p(e(n)) is the joint probability density function for the process noise during a frame of the input speech, and the second term on the right-hand side is known as the Jacobean of the transformation. In this case, the Jacobean is unity because of the triangular form of the matrix $\tilde{A}$ (see equation (6) above).

In this embodiment, the statistical analysis unit 21 assumes that the process noise associated with the speech source 31 is Gaussian, having zero mean and some unknown variance σe². The statistical analysis unit 21 also assumes that the process noise at one time point is independent of the process noise at another time point. Therefore, the joint probability density function for the process noise during a frame of the input speech (which defines the probability of any given vector of process noise e(n) occurring) is given by:

$$p(e(n)) = (2\pi\sigma_e^2)^{-\frac{N}{2}} \exp\left[-\frac{e(n)^T e(n)}{2\sigma_e^2}\right] \tag{12}$$

Therefore, the joint probability density function for a vector of raw speech samples, given the AR filter coefficients (a), the AR filter model order (k) and the process noise variance (σe²), is given by:

$$p(s(n) \mid a, k, \sigma_e^2) = (2\pi\sigma_e^2)^{-\frac{N}{2}} \exp\left[-\frac{1}{2\sigma_e^2}\left(s(n)^T s(n) - 2a^T S^T s(n) + a^T S^T S\,a\right)\right] \tag{13}$$

p(y(n) | s(n), h, r, σε²)

This term represents the joint probability density function for generating the vector of speech samples (y(n)) output from the analogue to digital converter 17, given the vector of raw speech samples (s(n)), the channel filter coefficients (h), the channel filter model order (r) and the measurement noise statistics (σε²). From equation (8), this joint probability density function can be determined from the joint probability density function for the measurement noise. In particular, p(y(n) | s(n), h, r, σε²) is given by:

$$p(y(n) \mid s(n), h, r, \sigma_\varepsilon^2) = p(\varepsilon(n)) \left| \frac{\partial \varepsilon(n)}{\partial y(n)} \right|_{\varepsilon(n) = q(n) - Yh} \tag{14}$$

where p(ε(n)) is the joint probability density function for the measurement noise during a frame of the input speech, and the second term on the right hand side is the Jacobean of the transformation, which again has a value of one.

In this embodiment, the statistical analysis unit 21 assumes that the measurement noise is Gaussian, having zero mean and some unknown variance σε². It also assumes that the measurement noise at one time point is independent of the measurement noise at another time point. Therefore, the joint probability density function for the measurement noise in a frame of the input speech will have the same form as that for the process noise defined in equation (12). Therefore, the joint probability density function for a vector of speech samples (y(n)) output from the analogue to digital converter 17, given the channel filter coefficients (h), the channel filter model order (r), the measurement noise statistics (σε²) and the raw speech samples (s(n)), will have the following form:

$$p(y(n) \mid s(n), h, r, \sigma_\varepsilon^2) = (2\pi\sigma_\varepsilon^2)^{-\frac{N}{2}} \exp\left[-\frac{1}{2\sigma_\varepsilon^2}\left(q(n)^T q(n) - 2h^T Y^T q(n) + h^T Y^T Y\,h\right)\right] \tag{15}$$

As those skilled in the art will appreciate, although this joint probability density function for the vector of speech samples (y(n)) is in terms of the variable q(n), this does not matter, since q(n) is a function of y(n) and s(n), and s(n) is a given variable (i.e. known) for this probability density function.

p(a | k)

This term defines the prior probability density function for the AR filter coefficients (a), and it allows the statistical analysis unit 21 to introduce knowledge about what values it expects these coefficients will take. In this embodiment, the statistical analysis unit 21 models this prior probability density function by a Gaussian having an unknown variance (σa²) and mean vector (μa), i.e.:

$$p(a \mid k, \sigma_a^2, \mu_a) = (2\pi\sigma_a^2)^{-\frac{k}{2}} \exp\left[-\frac{(a - \mu_a)^T (a - \mu_a)}{2\sigma_a^2}\right] \tag{16}$$

By introducing the new variables σa² and μa, the prior density functions (p(σa²) and p(μa)) for these variables must be added to the numerator of equation (10) above. Initially, for the first frame of speech being processed, the mean vector (μa) can be set to zero and, for the second and subsequent frames of speech being processed, it can be set to the mean vector obtained during the processing of the previous frame. In this case, p(μa) is just a Dirac delta function located at the current value of μa and can therefore be ignored.

With regard to the prior probability density function for the variance of the AR filter coefficients, the statistical analysis unit 21 could set this equal to some constant to imply that all variances are equally probable. However, this term can be used to introduce knowledge about what the variance of the AR filter coefficients is expected to be. In this embodiment, since variances are always positive, the statistical analysis unit 21 models this variance prior probability density function by an Inverse Gamma function having parameters αa and βa, i.e.:

$$p(\sigma_a^2 \mid \alpha_a, \beta_a) = \frac{\beta_a^{-\alpha_a}}{\Gamma(\alpha_a)}\,(\sigma_a^2)^{-(\alpha_a + 1)} \exp\left[-\frac{1}{\sigma_a^2 \beta_a}\right] \tag{17}$$

At the beginning of the speech being processed, the statistical analysis unit 21 will not have much knowledge about the variance of the AR filter coefficients. Therefore, initially, the statistical analysis unit 21 sets the variance σa² and the αa and βa parameters of the Inverse Gamma function to ensure that this probability density function is fairly flat and therefore non-informative. However, after the first frame of speech has been processed, these parameters can be set more accurately during the processing of the next frame of speech by using the parameter values calculated during the processing of the previous frame of speech.

p(h | r)

This term represents the prior probability density function for the channel model coefficients (h), and it allows the statistical analysis unit 21 to introduce knowledge about what values it expects these coefficients to take. As with the prior probability density function for the AR filter coefficients, in this embodiment this probability density function is modelled by a Gaussian having an unknown variance (σh²) and mean vector (μh), i.e.:

$$p(h \mid r, \sigma_h^2, \mu_h) = (2\pi\sigma_h^2)^{-\frac{r}{2}} \exp\left[-\frac{(h - \mu_h)^T (h - \mu_h)}{2\sigma_h^2}\right] \tag{18}$$

Again, by introducing these new variables, the prior density functions (p(σh²) and p(μh)) must be added to the numerator of equation (10). Again, the mean vector can initially be set to zero and, after the first frame of speech has been processed and for all subsequent frames of speech being processed, the mean vector can be set to equal the mean vector obtained during the processing of the previous frame. Therefore, p(μh) is also just a Dirac delta function located at the current value of μh and can be ignored.

With regard to the prior probability density function for the variance of the channel filter coefficients, again, in this embodiment, this is modelled by an Inverse Gamma function having parameters αh and βh. Again, the variance (σh²) and the α and β parameters of the Inverse Gamma function can be chosen initially so that these densities are non-informative, so that they will have little effect on the subsequent processing of the initial frame.

p(σe²) and p(σε²)

These terms are the prior probability density functions for the process and measurement noise variances and, again, these allow the statistical analysis unit 21 to introduce knowledge about what values it expects these noise variances will take. As with the other variances, in this embodiment the statistical analysis unit 21 models these by an Inverse Gamma function having parameters αe, βe and αε, βε respectively. Again, these variances and these Gamma function parameters can be set initially so that they are non-informative and will not appreciably affect the subsequent calculations for the initial frame.

p(k) and p(r)

These terms are the prior probability density functions for the AR filter model order (k) and the channel model order (r) respectively. In this embodiment, these are modelled by a uniform distribution up to some maximum order. In this way, there is no prior bias on the number of coefficients in the models, except that they cannot exceed these predefined maximums. In this embodiment, the maximum AR filter model order (k) is thirty and the maximum channel model order (r) is one hundred and fifty.

Therefore, inserting the relevant equations into the numerator of equation (10) gives the following joint probability density function, which is proportional to p(a, k, h, r, σa², σh², σe², σε², s(n) | y(n)):

$$\begin{aligned}
&(2\pi\sigma_\varepsilon^2)^{-\frac{N}{2}} \exp\left[-\frac{1}{2\sigma_\varepsilon^2}\left(q(n)^T q(n) - 2h^T Y^T q(n) + h^T Y^T Y h\right)\right] \\
&\times (2\pi\sigma_e^2)^{-\frac{N}{2}} \exp\left[-\frac{1}{2\sigma_e^2}\left(s(n)^T s(n) - 2a^T S^T s(n) + a^T S^T S a\right)\right] \\
&\times (2\pi\sigma_a^2)^{-\frac{k}{2}} \exp\left[-\frac{(a - \mu_a)^T (a - \mu_a)}{2\sigma_a^2}\right] \times (2\pi\sigma_h^2)^{-\frac{r}{2}} \exp\left[-\frac{(h - \mu_h)^T (h - \mu_h)}{2\sigma_h^2}\right] \\
&\times \frac{\beta_a^{-\alpha_a}}{\Gamma(\alpha_a)} (\sigma_a^2)^{-(\alpha_a+1)} \exp\left[-\frac{1}{\sigma_a^2 \beta_a}\right] \times \frac{\beta_h^{-\alpha_h}}{\Gamma(\alpha_h)} (\sigma_h^2)^{-(\alpha_h+1)} \exp\left[-\frac{1}{\sigma_h^2 \beta_h}\right] \\
&\times \frac{\beta_e^{-\alpha_e}}{\Gamma(\alpha_e)} (\sigma_e^2)^{-(\alpha_e+1)} \exp\left[-\frac{1}{\sigma_e^2 \beta_e}\right] \times \frac{\beta_\varepsilon^{-\alpha_\varepsilon}}{\Gamma(\alpha_\varepsilon)} (\sigma_\varepsilon^2)^{-(\alpha_\varepsilon+1)} \exp\left[-\frac{1}{\sigma_\varepsilon^2 \beta_\varepsilon}\right]
\end{aligned} \tag{19}$$

Gibbs Sampler

In order to determine the form of this joint probability density function, the statistical analysis unit 21 "draws samples" from it. In this embodiment, since the joint probability density function to be sampled is a complex multivariate function, a Gibbs sampler is used which breaks the problem down into one of drawing samples from probability density functions of smaller dimensionality.
In particular, the Gibbs sampler proceeds by drawing random variates from conditional densities as follows:

first iteration

$$\begin{aligned}
p(a, k \mid h^0, r^0, (\sigma_e^2)^0, (\sigma_\varepsilon^2)^0, (\sigma_a^2)^0, (\sigma_h^2)^0, s(n)^0, y(n)) &\rightarrow a^1, k^1 \\
p(h, r \mid a^1, k^1, (\sigma_e^2)^0, (\sigma_\varepsilon^2)^0, (\sigma_a^2)^0, (\sigma_h^2)^0, s(n)^0, y(n)) &\rightarrow h^1, r^1 \\
p(\sigma_e^2 \mid a^1, k^1, h^1, r^1, (\sigma_\varepsilon^2)^0, (\sigma_a^2)^0, (\sigma_h^2)^0, s(n)^0, y(n)) &\rightarrow (\sigma_e^2)^1 \\
&\;\;\vdots \\
p(\sigma_h^2 \mid a^1, k^1, h^1, r^1, (\sigma_e^2)^1, (\sigma_\varepsilon^2)^1, (\sigma_a^2)^1, s(n)^0, y(n)) &\rightarrow (\sigma_h^2)^1
\end{aligned}$$

second iteration

$$\begin{aligned}
p(a, k \mid h^1, r^1, (\sigma_e^2)^1, (\sigma_\varepsilon^2)^1, (\sigma_a^2)^1, (\sigma_h^2)^1, s(n)^1, y(n)) &\rightarrow a^2, k^2 \\
p(h, r \mid a^2, k^2, (\sigma_e^2)^1, (\sigma_\varepsilon^2)^1, (\sigma_a^2)^1, (\sigma_h^2)^1, s(n)^1, y(n)) &\rightarrow h^2, r^2
\end{aligned}$$

etc.

where (h⁰, r⁰, (σe²)⁰, (σε²)⁰, (σa²)⁰, (σh²)⁰, s(n)⁰) are initial values which may be obtained from the results of the statistical analysis of the previous frame of speech or, where there are no previous frames, can be set to appropriate values that will be known to those skilled in the art of speech processing.

As those skilled in the art will appreciate, these conditional densities are obtained by inserting the current values for the given (or known) variables into the terms of the density function of equation (19). For the conditional density p(a, k | ...) this results in:

$$p(a, k \mid \ldots) \propto \exp\left[-\frac{1}{2\sigma_e^2}\left(s(n)^T s(n) - 2a^T S^T s(n) + a^T S^T S a\right)\right] \times \exp\left[-\frac{(a - \mu_a)^T (a - \mu_a)}{2\sigma_a^2}\right] \tag{20}$$

which can be simplified to give:

$$p(a, k \mid \ldots) \propto \exp\left[-\frac{1}{2}\left(\frac{s(n)^T s(n)}{\sigma_e^2} + \frac{\mu_a^T \mu_a}{\sigma_a^2} - 2a^T\left(\frac{S^T s(n)}{\sigma_e^2} + \frac{\mu_a}{\sigma_a^2}\right) + a^T\left(\frac{S^T S}{\sigma_e^2} + \frac{I}{\sigma_a^2}\right)a\right)\right] \tag{21}$$

which is in the form of a standard Gaussian distribution having the following covariance matrix:

$$\Sigma_a = \left(\frac{S^T S}{\sigma_e^2} + \frac{I}{\sigma_a^2}\right)^{-1} \tag{22}$$

The mean value of this Gaussian distribution can be determined by differentiating the exponent of equation (21) with respect to a and determining the value of a which makes the differential of the exponent equal to zero. This yields a mean value of:

$$\hat{\mu}_a = \left(\frac{S^T S}{\sigma_e^2} + \frac{I}{\sigma_a^2}\right)^{-1}\left(\frac{S^T s(n)}{\sigma_e^2} + \frac{\mu_a}{\sigma_a^2}\right) \tag{23}$$

A sample can then be drawn from this standard Gaussian distribution to give $a^g$ (where g is the gth iteration of the Gibbs sampler), with the model order ($k^g$) being determined by a model order selection routine which will be described later. The drawing of a sample from this Gaussian distribution may be done by using a random number generator which generates a vector of random values which are uniformly distributed, and then using a transformation of random variables, using the covariance matrix and the mean value given in equations (22) and (23), to generate the sample. In this embodiment, however, a random number generator is used which generates random numbers from a Gaussian distribution having zero mean and a variance of one. This simplifies the transformation process to one of a simple scaling using the covariance matrix given in equation (22) and shifting using the mean value given in equation (23). Since the techniques for drawing samples from Gaussian distributions are well known in the art of statistical analysis, a further description of them will not be given here. A more detailed description and explanation can be found in the book entitled "Numerical Recipes in C" by W. Press et al., Cambridge University Press, 1992, and in particular at chapter 7.

As those skilled in the art will appreciate, however, before a sample can be drawn from this Gaussian distribution, estimates of the raw speech samples must be available so that the matrix S and the vector s(n) are known. The way in which these estimates of the raw speech samples are obtained in this embodiment will be described later.

A similar analysis for the conditional density p(h, r | ...) reveals that it also is a standard Gaussian distribution, but having the following covariance matrix and mean value:

$$\Sigma_h = \left(\frac{Y^T Y}{\sigma_\varepsilon^2} + \frac{I}{\sigma_h^2}\right)^{-1} \qquad \hat{\mu}_h = \left(\frac{Y^T Y}{\sigma_\varepsilon^2} + \frac{I}{\sigma_h^2}\right)^{-1}\left(\frac{Y^T q(n)}{\sigma_\varepsilon^2} + \frac{\mu_h}{\sigma_h^2}\right) \tag{24}$$

from which a sample for $h^g$ can be drawn in the manner described above, with the channel model order ($r^g$) being determined using the model order selection routine which will be described later.

p(σe² | ...)

A similar analysis for this conditional density shows that:

$$p(\sigma_e^2 \mid \ldots) \propto (\sigma_e^2)^{-\frac{N}{2}} \exp\left[-\frac{E}{2\sigma_e^2}\right] \times (\sigma_e^2)^{-(\alpha_e + 1)} \exp\left[-\frac{1}{\sigma_e^2 \beta_e}\right] \tag{25}$$

where:

$$E = s(n)^T s(n) - 2a^T S^T s(n) + a^T S^T S a$$

which can be simplified to give:

$$p(\sigma_e^2 \mid \ldots) \propto (\sigma_e^2)^{-\left(\frac{N}{2} + \alpha_e + 1\right)} \exp\left[-\frac{1}{\sigma_e^2}\left(\frac{E}{2} + \frac{1}{\beta_e}\right)\right] \tag{26}$$

which is also an Inverse Gamma distribution, having the following parameters:

$$\hat{\alpha}_e = \frac{N}{2} + \alpha_e \quad \text{and} \quad \hat{\beta}_e = \frac{2\beta_e}{2 + \beta_e E} \tag{27}$$

A sample is then drawn from this Inverse Gamma distribution by firstly generating a random number from a uniform distribution and then performing a transformation of random variables using the alpha and beta parameters given in equation (27), to give $(\sigma_e^2)^g$.

A similar analysis for the conditional density p(σε² | ...) reveals that it also is an Inverse Gamma distribution, having the following parameters:

$$\hat{\alpha}_\varepsilon = \frac{N}{2} + \alpha_\varepsilon \quad \text{and} \quad \hat{\beta}_\varepsilon = \frac{2\beta_\varepsilon}{2 + \beta_\varepsilon E^*} \tag{28}$$

where:

$$E^* = q(n)^T q(n) - 2h^T Y^T q(n) + h^T Y^T Y h$$

A sample is then drawn from this Inverse Gamma distribution in the manner described above to give $(\sigma_\varepsilon^2)^g$.

A similar analysis for the conditional density p(σa² | ...) reveals that it too is an Inverse Gamma distribution, having the following parameters:

$$\hat{\alpha}_a = \frac{k}{2} + \alpha_a \quad \text{and} \quad \hat{\beta}_a = \frac{2\beta_a}{2 + \beta_a (a - \mu_a)^T (a - \mu_a)} \tag{29}$$

A sample is then drawn from this Inverse Gamma distribution in the manner described above to give $(\sigma_a^2)^g$.

Similarly, the conditional density p(σh² | ...) is also an Inverse Gamma distribution, but having the following parameters:

$$\hat{\alpha}_h = \frac{r}{2} + \alpha_h \quad \text{and} \quad \hat{\beta}_h = \frac{2\beta_h}{2 + \beta_h (h - \mu_h)^T (h - \mu_h)} \tag{30}$$

A sample is then drawn from this Inverse Gamma distribution in the manner described above to give $(\sigma_h^2)^g$.
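To make these sampling steps concrete, the sketch below draws one Gibbs sample of the AR coefficients from the Gaussian conditional of equations (22) and (23), and one noise variance sample from an Inverse Gamma conditional such as (27). It is a minimal numpy sketch of the equations as reconstructed above, not the patent's implementation; in particular, the inverse-gamma draw uses the standard fact that the reciprocal of a Gamma variate is Inverse Gamma distributed, rather than the uniform-variate transformation the text mentions, and the regressor matrix shown in the usage example is a random stand-in.

```python
import numpy as np

def draw_ar_coefficients(S, s_vec, sigma_e2, sigma_a2, mu_a, rng):
    """Draw a ~ N(mean, cov), with cov from equation (22), mean from (23)."""
    k = S.shape[1]
    precision = S.T @ S / sigma_e2 + np.eye(k) / sigma_a2   # inverse of (22)
    cov = np.linalg.inv(precision)
    mean = cov @ (S.T @ s_vec / sigma_e2 + mu_a / sigma_a2) # equation (23)
    # Scale zero-mean, unit-variance Gaussian variates and shift by the mean.
    return mean + np.linalg.cholesky(cov) @ rng.standard_normal(k)

def draw_inverse_gamma(alpha_hat, beta_hat, rng):
    """Draw sigma^2 from an Inverse Gamma with parameters as in (27)-(30):
    if 1/x ~ Gamma(shape=alpha, scale=beta), then x is Inverse Gamma."""
    return 1.0 / rng.gamma(shape=alpha_hat, scale=beta_hat)

rng = np.random.default_rng(0)
S = rng.standard_normal((320, 5))      # stand-in regressor matrix
s_vec = rng.standard_normal(320)       # stand-in raw speech vector
a_g = draw_ar_coefficients(S, s_vec, 0.1, 1.0, np.zeros(5), rng)
sigma_e2_g = draw_inverse_gamma(320 / 2 + 1.0, 0.5, rng)
```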
As those skilled in the art will appreciate, the Gibbs sampler requires an initial transient period to converge to equilibrium (known as burn-in). Eventually, after L iterations, the sample (a^L, k^L, h^L, r^L, (σe²)^L, (σε²)^L, (σa²)^L, (σh²)^L, s(n)^L) is considered to be a sample from the joint probability density function defined in equation (19). In this embodiment, the Gibbs sampler performs approximately one hundred and fifty (150) iterations on each frame of input speech and discards the samples from the first fifty iterations, using the rest to give a picture (a set of histograms) of what the joint probability density function defined in equation (19) looks like. From these histograms, the set of AR coefficients (a) which best represents the observed speech samples (y(n)) from the analogue to digital converter 17 is determined. The histograms are also used to determine appropriate values for the variances and channel model coefficients (h), which can be used as the initial values for the Gibbs sampler when it processes the next frame of speech.
Model Order Selection

As mentioned above, during the Gibbs iterations, the model order (k) of the AR filter and the model order (r) of the channel filter are updated using a model order selection routine. In this embodiment, this is performed using a technique called "reversible jump Markov chain Monte Carlo computation". This technique is described in the paper entitled "Reversible jump Markov chain Monte Carlo computation and Bayesian model determination" by Peter Green, Biometrika, vol 82, pp 711 to 732, 1995.

Figure 4 is a flow chart which illustrates the processing steps performed during this model order selection routine for the AR filter model order (k). As shown, in step s1, a new model order (k2) is proposed. In this embodiment, the new model order will normally be proposed as k2 = k1 ± 1, but occasionally it will be proposed as k2 = k1 ± 2 and very occasionally as k2 = k1 ± 3, etc. To achieve this, a sample is drawn from a discretised Laplacian density function centred on the current model order (k1), with the variance of this Laplacian density function being chosen a priori in accordance with the degree of sampling of the model order space that is required.

The processing then proceeds to step s3 where a model order variable (MO) is set equal to:

$$MO = \min\left\{\frac{p(a_{<1:k_2>} \mid \ldots)}{p(a_{<1:k_1>} \mid \ldots)},\; 1\right\} \tag{31}$$

where the ratio term is the ratio of the conditional probability given in equation (21) evaluated for the current AR filter coefficients (a) drawn by the Gibbs sampler for the current model order (k1) and for the proposed new model order (k2). If k2 > k1, then the matrix S must first be resized, and then a new sample must be drawn from the Gaussian distribution having the mean vector and covariance matrix defined by equations (22) and (23) (determined for the resized matrix S), to provide the AR filter coefficients (a<1:k2>) for the new model order (k2). If k2 < k1, then all that is required is to delete the last (k1 - k2) samples of the a vector. If the ratio in equation (31) is greater than one, then this implies that the proposed model order (k2) is better than the current model order, whereas if it is less than one then this implies that the current model order is better than the proposed model order. However, since occasionally this will not be the case, rather than deciding whether or not to accept the proposed model order by comparing the model order variable (MO) with a fixed threshold of one, in this embodiment the model order variable (MO) is compared, in step s5, with a random number which lies between zero and one. If the model order variable (MO) is greater than this random number, then the processing proceeds to step s7 where the model order is set to the proposed model order (k2) and a count associated with the value of k2 is incremented. If, on the other hand, the model order variable (MO) is smaller than the random number, then the processing proceeds to step s9 where the current model order is maintained and a count associated with the value of the current model order (k1) is incremented. The processing then ends.

This model order selection routine is carried out both for the model order of the AR filter model and for the model order of the channel filter model. This routine may be carried out at each Gibbs iteration. However, this is not essential. Therefore, in this embodiment, this model order updating routine is only carried out every third Gibbs iteration.
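The accept/reject step of this reversible-jump routine might be sketched as follows. This is a minimal sketch: the conditional density and coefficient-drawing functions are passed in as callables, and the Laplacian scale of 0.7 is an arbitrary illustrative value.

```python
import numpy as np

def propose_model_order(k1: int, scale: float, rng) -> int:
    """Propose k2 near k1 by sampling a discretised Laplacian step."""
    step = int(round(rng.laplace(0.0, scale)))
    return max(1, k1 + step)

def update_model_order(k1, cond_density, draw_coeffs, counts, rng):
    """One reversible-jump model order update (cf. steps s1 to s9)."""
    k2 = propose_model_order(k1, 0.7, rng)
    a1, a2 = draw_coeffs(k1), draw_coeffs(k2)   # coefficients at each order
    mo = min(cond_density(a2, k2) / cond_density(a1, k1), 1.0)
    if mo > rng.uniform():                      # accept with probability MO
        counts[k2] = counts.get(k2, 0) + 1
        return k2
    counts[k1] = counts.get(k1, 0) + 1          # otherwise keep current order
    return k1
```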
Simulation Smoother

As mentioned above, in order to be able to draw samples using the Gibbs sampler, estimates of the raw speech samples are required to generate s(n), S and Y, which are used in the Gibbs calculations. These could be obtained from the conditional probability density function p(s(n) | ...). However, this is not done in this embodiment because of the high dimensionality of s(n).

Therefore, in this embodiment, a different technique is used to provide the necessary estimates of the raw speech samples. In particular, in this embodiment, a "Simulation Smoother" is used to provide these estimates.

This Simulation Smoother was proposed by Piet de Jong in the paper entitled "The Simulation Smoother for Time Series Models", Biometrika (1995), vol 82, 2, pages 339 to 350. As those skilled in the art will appreciate, the Simulation Smoother is run before the Gibbs sampler. It is also run again during the Gibbs iterations in order to update the estimates of the raw speech samples. In this embodiment, the Simulation Smoother is run every fourth Gibbs iteration.
In order to run the Simulation Smoother, the model equations defined above in equations (4) and (6) must be written in "state space" format, as follows:

$$\hat{s}(n) = \tilde{A}\,\hat{s}(n-1) + \hat{e}(n)$$
$$y(n) = \tilde{h}^T \hat{s}(n) + \varepsilon(n) \tag{32}$$

where

$$\tilde{A} = \begin{bmatrix} a_1 & a_2 & a_3 & \cdots & a_k & 0 & \cdots & 0 \\ 1 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & & \ddots & & & & & \vdots \\ 0 & 0 & \cdots & & & & 1 & 0 \end{bmatrix}_{r \times r}$$

and

$$\hat{s}(n) = \begin{bmatrix} s(n) \\ s(n-1) \\ \vdots \\ s(n-r+1) \end{bmatrix} \qquad \hat{e}(n) = \begin{bmatrix} e(n) \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

With this state space representation, the dimensionality of the raw speech vectors (ŝ(n)) and the process noise vectors (ê(n)) do not need to be N×1, but only have to be as large as the greater of the model orders, k and r. Typically, the channel model order (r) will be larger than the AR filter model order (k). Hence, the vector of raw speech samples (ŝ(n)) and the vector of process noise (ê(n)) only need to be r×1 and, hence, the dimensionality of the matrix Ã only needs to be r×r.

The Simulation Smoother involves two stages: a first stage in which a Kalman filter is run on the speech samples in the current frame, and then a second stage in which a "smoothing" filter is run on the speech samples in the current frame using data obtained from the Kalman filter stage. Figure 5 is a flow chart illustrating the processing steps performed by the Simulation Smoother.

As shown, in step s21, the system initialises a time variable t to equal one. During the Kalman filter stage, this time variable is run from t = 1 to N in order to process the N speech samples in the current frame being processed in time sequential order. After step s21, the processing then proceeds to step s23, where the following Kalman filter equations are computed for the current speech sample (y(t)) being processed:

$$\begin{aligned}
w(t) &= y(t) - \tilde{h}^T \hat{s}(t) \\
d(t) &= \tilde{h}^T P(t)\,\tilde{h} + \sigma_\varepsilon^2 \\
k_f(t) &= \left(\tilde{A}\,P(t)\,\tilde{h}\right) d(t)^{-1} \\
\hat{s}(t+1) &= \tilde{A}\,\hat{s}(t) + k_f(t)\,w(t) \\
L(t) &= \tilde{A} - k_f(t)\,\tilde{h}^T \\
P(t+1) &= \tilde{A}\,P(t)\,L(t)^T + \sigma_e^2 I
\end{aligned} \tag{33}$$

where the initial vector of raw speech samples (ŝ(1)) includes raw speech samples obtained from the processing of the previous frame (or, if there are no previous frames, then s(i) is set equal to zero for i < 1); P(1) is the variance of ŝ(1) (which can be obtained from the previous frame or initially can be set to σe²); h is the current set of channel model coefficients, which can be obtained from the processing of the previous frame (or, if there are no previous frames, then the elements of h can be set to their expected values - zero); y(t) is the current speech sample of the current frame being processed; and I is the identity matrix. The processing then proceeds to step s25, where the scalar values w(t) and d(t) are stored together with the r×r matrix L(t) (or alternatively the Kalman filter gain vector k_f(t) could be stored, from which L(t) can be generated). The processing then proceeds to step s27, where the system determines whether or not all the speech samples in the current frame have been processed. If they have not, then the processing proceeds to step s29, where the time variable t is incremented by one so that the next sample in the current frame will be processed in the same way. Once all N samples in the current frame have been processed in this way and the corresponding values stored, the first stage of the Simulation Smoother is complete.
10 2 2 Q0 0" V - a, UW) :4(t) - N(O, Qt)) 2 VQ) = a, U(t)L(t) r(t-1) = h d(t)-'w(t) + L(t)TE(t) - V(t)TC(t)-I]I(t) (34) U(t-j) = hd(t)-IhT + LQ)I U(t) LQ) + V(t)'CQ)-I V(t) 2 +I)IT JEW = Ge f(t) + il(t) where i(t) = [j(t) jQ-1) j(t-2)... j(t-r 9(t) =,!9Q-1) + j(t) where 1(t) = [ (t) (t-1) (t-2)... f(t-r+ I)IT and j(t) = [j(t) 0 0... 0]T where a (t) is a sample drawn f rom a Gaussian distribution 20 having zero mean and covariance matrix C(t); the initial vector r(t=N) and the initial matrix U(t=N) are both set to zero; and s(O) is obtained from the processing of the previous f rame (or if there are no previous frames can be set equal to zero). The processing then proceeds to step 25 s33 where the estimate of the process noise (6(t)) for the current speech sample being processed and the estimate of the raw speech sample ( (t)) for the current speech sample being processed are stored. The processing then proceeds to step s35 where the system determines 5 whether or not all the speech samples in the current frame have been processed. If they have not, then the processing proceeds to step s37 where the time variable t is decremented by one so that the previous sample in the current frame will be processed in the same way.
10 once all V samples in the current frame have been processed in this way and the corresponding process noise and raw speech samples have been stored, the second stage of the Simulation Smoother is complete and an estimate of s(n) will have been generated.
As shown in equations (4) and (8), the matrix S and the matrix Y require raw speech samples s(n-N-1) to s(n-U k+l) and s(n-N-1) to s(n-N-r+l) respectively in addition to those in s(n). These additional raw speech samples 20 can be obtained either from the processing of the previous frame of speech or if there are no previous frames, they can be set to zero. With these estimates of raw speech samples, the Gibbs sampler can be run to draw samples from the above described probability density 25 functions.
41 Statistical Analysis Unit - Operation A description has been given above of the theory underlying the statistical analysis unit 21. A description will now be given with reference to Figures
5 6 to 8 of the operation of the statistical analysis unit 21 that is used in the embodiment.
Figure 6 is a block diagram illustrating the principal components of the statistical analysis unit 21 of this 10 embodiment. As shown, it comprises the above described Gibbs sampler 41, Simulation Smoother 43 (including the Kalman filter 43-1 and smoothing filter 43-2) and model order selector 45. It also comprises a memory 47 which receives the speech samples of the current frame to be 15 processed, a data analysis unit 49 which processes the data generated by the Gibbs sampler 41 and the model order selector 45 and a controller 50 which controls the operation of the statistical analysis unit 21.
20 As shown in Figure 6, the memory 47 includes a non volatile memory area 47-1 and a working memory area 47-2.
The non volatile memory 47-1 is used to store the joint probability density function given in equation (19) above and the equations for the variances and mean values and 25 the equations for the Inverse Gamma parameters given 42 above in equations (22) to (24) and (27) to (30) for the above mentioned conditional probability density functions for use by the Gibbs sampler 41. The non volatile memory 47-1 also stores the Kalman filter equations given above 5 in equation (33) and the smoothing filter equations given above in equation 34 for use by the Simulation Smoother 43.
Figure 7 is a schematic diagram illustrating the 10 parameter values that are stored in the working memory area (RAM) 47-2. As shown, the RAM includes a store 51 for storing the speech samples yf (1) to yf (N) output by the analogue to digital converter 17 for the current frame (f) being processed. As mentioned above, these 15 speech samples are used in both the Gibbs sampler 41 and the Simulation Smoother 43. The RAM 47-2 also includes a store 53 for storing the initial estimates of the model parameters (g=O) and the M samples (g = 1 to M) of each parameter drawn from the above described conditional 20 probability density functions by the Gibbs sampler 41 for the current frame being processed. As mentioned above, in this embodiment, M is 100 since the Gibbs sampler 41 performs 150 iterations on each frame of input speech with the first fifty samples being discarded. The RAM 25 47-2 also includes a store 55 for storing W(t), d(t) and 43 L(t) for t = 1 to N which are calculated during the processing of the speech samples in the current frame of speech by the above described Kalman filter 43-1. The RAM 47-2 also includes a store 57 for storing the 5 estimates of the raw speech samples ( f(t)) and the estimates of the process noise (df(t)) generated by the smoothing filter 43-2, as discussed above. The RAM 47-2 also includes a store 59 for storing the model order counts which are generated by the model order selector 45 10 when the model orders for the AR f ilter model and the channel model are updated.
Figure 8 is a f low diagram illustrating the control program used by the controller 50, in this embodiment, to 15 control the processing operations of the statistical analysis unit 21. As shown, in step s4l, the controller retrieves the next frame of speech samples to be processed from the buffer 19 and stores them in the memory store 51. The processing then proceeds to step 20 s43 where initial estimates for the channel model, raw speech samples and the process noise and measurement noise statistics are set and stored in the store 53.
These initial estimates are either set to be the values obtained during the processing of the previous frame of 25 speech or, where there are no previous frames of speech, 44 are set to their expected values (which may be zero).
The processing then proceeds to step s45 where the simulation smoother 43 is activated so as to provide an estimate of the raw speech samples in the manner 5 described above. The processing then proceeds to step s47 where one iteration of the Gibbs sampler 41 is run in order to update the channel model, speech model and the process and measurement noise statistics using the raw speech samples obtained in step s45. These updated 10 parameter values are then stored in the memory store 53.
The processing then proceeds to step s49 where the controller 50 determines whether or not to update the model orders of the AR filter model and the channel 15 model. As mentioned above, in this embodiment, these model orders are updated every third Gibbs iteration. If the model orders are to be updated, then the processing proceeds to step s5l where the model order selector 45 is used to update the model orders of the AR filter model 20 and the channel model in the manner described above. If at step s49 the controller 50 determines that the model orders are not to be updated, then the processing skips step s5l and the processing proceeds to step s53. At step s53,-the controller 50 determines whether or not to 25 perform another Gibbs iteration. If another iteration is to be performed, then the processing proceeds to decision block s55 where the controller 50 decides whether or not to update the estimates of the raw speech samples (s (t)).
If the raw speech samples are not to be updated, then the 5 processing returns to step s47 where the next Gibbs iteration is run.
As mentioned above, in this embodiment, the Simulation Smoother 43 is run every fourth Gibbs iteration in order 10 to update the raw speech samples. Therefore, if the controller 50 determines, in step s55 that there has been four Gibbs iterations since the last time the speech samples were updated, then the processing returns to step s45 where the Simulation Smoother is run again to provide 15 new estimates of the raw speech samples (s(t)). once the controller 50 has determined that the required 150 Gibbs iterations have been performed, the controller 50 causes the processing to proceed to step s57 where the data analysis unit 49 analyses the model order counts 20 generated by the model order selector 45 to determine the model orders for the AR filter model and the channel model which best represents the current frame of speech being processed. The processing then proceeds to step s59 where the data analysis unit 49 analyses the samples 25 drawn from the conditional densities by the Gibbs sampler 46 41 to determine the AR filter coefficients (a), the channel model coefficients (h), the variances of these coefficients and the process and measurement noise variances which best represent the current frame of 5 speech being processed. The processing then proceeds to step s6l where the controller 50 determines,whether or not there is any further speech to be processed. if there is more speech to be processed, then processing returns to step S41 and the above process is repeated for 10 the next frame of speech. once all the speech has been processed in this way, the processing ends.
Data Analysis Unit

A more detailed description of the data analysis unit 49
will now be given with reference to Figure 9. As mentioned above, the data analysis unit 49 initially determines, in step s57, the model orders for both the AR filter model and the channel model which best represent the current frame of speech being processed. It does this using the counts that have been generated by the model order selector 45 when it was run in step s51.
These counts are stored in the store 59 of the RAM 47-2.
In this embodiment, in determining the best model orders, the data analysis unit 49 identifies the model order having the highest count. Figure 9a is an exemplary histogram which illustrates the distribution of counts that is generated for the model order (k) of the AR filter model. Therefore, in this example, the data analysis unit 49 would set the best model order of the AR filter model as five. The data analysis unit 49 performs a similar analysis of the counts generated for the model order (r) of the channel model to determine the best model order for the channel model.
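In code, this selection reduces to an arg-max over the count array. The sketch below uses purely illustrative count values, chosen so that, as in the Figure 9a example, order five wins.

```python
import numpy as np

# Illustrative only: counts[k] holds how often model order k was selected
# by the model order selector over the Gibbs iterations.
counts = np.array([0, 1, 3, 9, 22, 41, 18, 6])
best_order = int(np.argmax(counts))   # -> 5 for these illustrative counts
```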
Once the data analysis unit 49 has determined the best model orders (k and r), it then analyses the samples generated by the Gibbs sampler 41 which are stored in the store 53 of the RAM 47-2, in order to determine parameter values that are most representative of those samples.
It does this by determining a histogram for each of the parameters from which it determines the most representative parameter value. To generate the histogram, the data analysis unit 49 determines the maximum and minimum sample value which was drawn by the Gibbs sampler and then divides the range of parameter values between this minimum and maximum value into a predetermined number of sub-ranges or bins. The data analysis unit 49 then assigns each of the sample values into the appropriate bins and counts how many samples are allocated to each bin. It then uses these counts to calculate a weighted average of the samples (with the weighting used for each sample depending on the count for the corresponding bin), to determine the most representative parameter value (known as the minimum mean square estimate (MMSE)). Figure 9b illustrates an example histogram which is generated for the variance (σe²) of the process noise, from which the data analysis unit 49 determines that the variance representative of the samples is 0.3149.
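A minimal sketch of this bin-weighted average is given below; the bin count and variable names are illustrative rather than taken from the embodiment.

```python
import numpy as np

# Sketch of the histogram-weighted (MMSE-style) estimate described above:
# bin the Gibbs samples, then average them, weighting each sample by the
# occupancy of the bin it falls into.
def mmse_from_histogram(samples, n_bins=50):
    samples = np.asarray(samples, dtype=float)
    counts, edges = np.histogram(samples, bins=n_bins)
    # index of the bin each sample falls into (the maximum lands in the last bin)
    idx = np.clip(np.digitize(samples, edges) - 1, 0, n_bins - 1)
    return float(np.average(samples, weights=counts[idx].astype(float)))
```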
In determining the AR filter coefficients (ai for i = 1 to k), the data analysis unit 49 determines and analyses a histogram of the samples for each coefficient independently. Figure 9c shows an exemplary histogram obtained for the third AR filter coefficient (a3), from which the data analysis unit 49 determines that the coefficient representative of the samples is -0.4977.
In this embodiment, the data analysis unit 49 only outputs the AR filter coefficients, which are passed to the channel encoder 71 shown in Figure 1. The AR filter coefficients (and the remaining parameter values determined by the data analysis unit 49) are also stored in the RAM 47-2 for use during the processing of the next frame of speech.
As the skilled reader will appreciate, a speech processing technique has been described above which uses statistical analysis techniques to determine sets of AR filter coefficients representative of an input speech signal. The technique is more robust and accurate than prior art techniques which employ maximum likelihood estimators to determine the AR filter coefficients. This is because the statistical analysis of each frame uses knowledge obtained from the processing of the previous frame.
In addition, with the analysis performed above, the model order for the AR filter model is not assumed to be constant and can vary from frame to frame. In this way, the optimum number of AR filter coefficients can be used to represent the speech within each frame. As a result, the AR filter coefficients output by the statistical analysis unit 21 will more accurately represent the corresponding input speech. In contrast, with the prior art linear prediction systems, the number of AR coefficients is assumed to be constant and hence these prior art techniques tend to over-parameterise the speech in order to ensure that information is not lost. As a result, with the statistical analysis described above, the amount of data which has to be transmitted from the transmitter to the receiver will be less than with the prior art systems which assume a fixed size of AR filter model.
Further still, since the underlying process model that is used separates the speech source from the channel, the AR filter coefficients that are determined will be more representative of the actual speech and will be less likely to include distortive effects of the channel.
Further still, since variance information is available for each of the parameters, this provides an indication of the confidence of each of the parameter estimates.
This is in contrast to maximum likelihood and least squares approaches, such as linear prediction analysis, where point estimates of the parameter values are determined.
Alternative Embodiments

In the above embodiment, the statistical analysis unit was used in order to parameterise the input speech signal for onward transmission to a remote receiver. It also generated a number of other parameter values (such as the process noise variances and the channel model coefficients), but these were not output by the statistical analysis unit. As those skilled in the art will appreciate, the AR coefficients and some of the other parameters which are calculated by the statistical analysis unit can also be used in the transmission system. For example, the variance information of the AR coefficients can be used to control the type of channel encoding which is employed by the channel encoder 71, since this variance information is indicative of the quality of the input speech signal. Different channel encoding techniques may thus be used in dependence upon the quality of the speech within each frame.
In the above embodiment, the speech generation process was modelled as an auto-regressive (AR) process and the channel was modelled as a moving average (MA) process. As those skilled in the art will appreciate, other signal models may be used. However, these models are preferred because it has been found that they suitably represent the speech source and the channel they are intended to model.
In the above embodiment, a speech quality indicator was determined for each frame of speech that was processed, and the channel encoder 71 encoded each frame of speech in dependence upon the quality measure determined for that frame. As those skilled in the art will appreciate, this is not essential. For example, a moving average quality indicator may be determined and the channel encoder may be arranged to change its encoding technique when the moving average goes above or below a predetermined threshold value.
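A possible sketch of this moving-average variant is given below; the window size, threshold and technique labels are assumptions for illustration, not values from the embodiment.

```python
from collections import deque

# Illustrative only: switch channel-encoding technique when a moving average
# of the per-frame quality indicator crosses a threshold.
class EncoderSelector:
    def __init__(self, window=10, threshold=0.5):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def select(self, frame_quality):
        self.history.append(frame_quality)
        moving_avg = sum(self.history) / len(self.history)
        # high quality -> minimise data; low quality -> minimise information loss
        return "low_rate_encoding" if moving_avg > self.threshold else "robust_encoding"
```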
In the above embodiments, Gaussian and Inverse Gamma distributions were used to model the various prior probability density functions of equation (19). As those skilled in the art of statistical analysis will appreciate, the reason these distributions were chosen is that they are conjugate to one another. This means that each of the conditional probability density functions which are used in the Gibbs sampler will also either be Gaussian or Inverse Gamma. This therefore simplifies the task of drawing samples from the conditional probability densities. However, this is not essential. The noise probability density functions could be modelled by Laplacian or student-t distributions rather than Gaussian distributions. Similarly, the probability density functions for the variances may be modelled by a distribution other than the Inverse Gamma distribution.
For example, they can be modelled by a Rayleigh distribution or some other distribution which is always positive. However, the use of probability density functions that are not conjugate will result in increased complexity in drawing samples from the conditional densities by the Gibbs sampler.
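To illustrate the convenience of conjugacy, the sketch below shows one such conditional draw under standard textbook assumptions (zero-mean Gaussian residuals with an Inverse-Gamma prior on their variance); the alpha and beta values are illustrative and are not taken from the embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of one conjugate Gibbs draw: with zero-mean Gaussian residuals e(t)
# and an Inverse-Gamma(alpha, beta) prior on their variance, the conditional
# for the variance is Inverse-Gamma(alpha + N/2, beta + sum(e^2)/2).
def draw_variance(residuals, alpha=2.0, beta=0.1):
    n = len(residuals)
    shape = alpha + n / 2.0
    scale = beta + 0.5 * float(np.sum(np.square(residuals)))
    # an Inverse-Gamma(shape, scale) draw is scale divided by a Gamma(shape, 1) draw
    return scale / rng.gamma(shape)
```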
Additionally, whilst the Gibbs sampler was used to draw samples from the probability density function given in equation (19), other sampling algorithms could be used.
For example, the Metropolis-Hastings algorithm (which is reviewed together with other techniques in a paper entitled "Probabilistic inference using Markov chain Monte Carlo methods" by R. Neal, Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto, 1993) may be used to sample this probability density.
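A generic random-walk Metropolis-Hastings step is sketched below for illustration; it is not the embodiment's sampler, and log_target stands for any unnormalised log-density of interest.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic Metropolis-Hastings update with a symmetric random-walk proposal.
def mh_step(x, log_target, step=0.1):
    proposal = x + rng.normal(scale=step, size=np.shape(x))
    # symmetric proposal, so the acceptance ratio is just the density ratio
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        return proposal
    return x

# e.g. drawing from a standard normal: x = mh_step(x, lambda v: -0.5 * v * v)
```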
In the above embodiment, a Simulation Smoother was used to generate estimates for the raw speech samples. This Simulation Smoother included a Kalman filter stage and a smoothing filter stage in order to generate the estimates of the raw speech samples. In an alternative embodiment, the smoothing filter stage may be omitted, since the Kalman filter stage generates estimates of the raw speech (see equation (33)). However, these raw speech samples were ignored, since the speech samples generated by the smoothing filter are considered to be more accurate and robust. This is because the Kalman filter essentially generates a point estimate of the speech samples from the joint probability density function p(s(n)|a, k, σe²), whereas the Simulation Smoother draws a sample from this probability density function.
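As a toy illustration of the point-estimate character of the Kalman stage, the sketch below filters a scalar AR(1) state observed in noise; the embodiment's actual state-space model is higher-dimensional, and all parameter values here are illustrative assumptions.

```python
import numpy as np

# Toy illustration only: a scalar Kalman filter for an AR(1) "speech" state
# s(t) = a*s(t-1) + e(t), observed as y(t) = s(t) + w(t). It shows how the
# Kalman stage yields point estimates (means) rather than samples.
def kalman_ar1(y, a=0.9, q=0.1, r=0.5):
    s_est, p = 0.0, 1.0                               # state mean and variance
    estimates = []
    for obs in y:
        s_pred, p_pred = a * s_est, a * a * p + q     # predict
        k = p_pred / (p_pred + r)                     # Kalman gain
        s_est = s_pred + k * (obs - s_pred)           # update (point estimate)
        p = (1.0 - k) * p_pred
        estimates.append(s_est)
    return np.array(estimates)
```

The smoothing stage omitted from this sketch would add a backward pass over these filtered estimates.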
In the above embodiment, a Simulation Smoother was used in order to generate estimates of the raw speech samples.
It is possible to avoid having to estimate the raw speech samples by treating them as "nuisance parameters" and integrating them out of equation (19). However, this is not preferred, since the resulting integral will have a much more complex form than the Gaussian and Inverse Gamma mixture defined in equation (19). This in turn will result in more complex conditional probabilities corresponding to equations (20) to (30). In a similar way, the other nuisance parameters (such as the coefficient variances or any of the Inverse Gamma, alpha and beta parameters) may be integrated out as well.
However, again this is not preferred, since it increases the complexity of the density function to be sampled using the Gibbs sampler. The technique of integrating out nuisance parameters is well known in the field of statistical analysis and will not be described further here.
In the above embodiment, the data analysis unit analysed the samples drawn by the Gibbs sampler by determining a histogram for each of the model parameters and then determining the value of the model parameter using a weighted average of the samples drawn by the Gibbs sampler, with the weighting being dependent upon the number of samples in the corresponding bin. In an alternative embodiment, the value of the model parameter may be determined from the histogram as being the value of the model parameter having the highest count.
Alternatively, a predetermined curve (such as a bell curve) could be fitted to the histogram in order to identify the maximum which best fits the histogram.
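A possible sketch of this curve-fitting variant is shown below, using a Gaussian bell curve and synthetic sample data for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the curve-fitting alternative: fit a bell curve to the histogram
# of Gibbs samples and take its peak position as the parameter estimate.
def bell(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

samples = np.random.default_rng(2).normal(-0.5, 0.1, 1000)   # synthetic data
counts, edges = np.histogram(samples, bins=30)
centres = 0.5 * (edges[:-1] + edges[1:])
(amp, mu, sigma), _ = curve_fit(bell, centres, counts,
                                p0=[counts.max(), centres[counts.argmax()], 0.1])
# 'mu' is the fitted peak, i.e. the most representative parameter value
```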
In the above embodiment, the statistical analysis unit modelled the underlying speech production process with a separate speech source model (AR filter) and a channel model. Whilst this is the preferred model structure, the underlying speech production process may be modelled without the channel model. In this case, there is no need to estimate the values of the raw speech samples using a Kalman filter or the like, although this can still be done. However, such a model of the underlying speech production process is not preferred, since the speech model will inevitably represent aspects of the channel as well as the speech. Further, although the statistical analysis unit described above ran a model order selection routine in order to allow the model orders of the AR filter model and the channel model to vary, this is not essential. In particular, the model order of the AR filter model and the channel model may be fixed in advance, although this is not preferred since it will inevitably introduce errors into the representation.
In the above embodiments, the speech that was processed was received from a user via a microphone. As those skilled in the art will appreciate, the speech may be received from a telephone line or may have been stored on a recording medium. In this case, the channel model will compensate for this, so that the AR filter coefficients representative of the actual speech that has been spoken should not be significantly affected.
In the above embodiments, during the running of the model order selection routine, a new model order was proposed by drawing a random variable from a predetermined Laplacian distribution function. As those skilled in the art will appreciate, other techniques may be used. For example, the new model order may be proposed in a deterministic way (i.e. under predetermined rules), provided that the model order space is sufficiently sampled.
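For illustration, one way to realise the Laplacian proposal is sketched below; the scale and the order bounds are assumptions for the sketch, not values from the embodiment.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch: propose a new model order by perturbing the current order with an
# integer-rounded draw from a Laplacian, clipped to a plausible range.
def propose_order(current_order, scale=1.0, k_min=1, k_max=30):
    step = int(np.rint(rng.laplace(loc=0.0, scale=scale)))
    return int(np.clip(current_order + step, k_min, k_max))
```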
Claims (68)
1. An audio encoding system comprising:
a memory for storing a predetermined function which gives, for a given set of audio signal values, a probability density for parameters of a predetermined audio model which is assumed to have generated the set of audio signal values, the probability density defining, for a given set of model parameter values, the probability that the predetermined audio model has those parameter values, given that the model is assumed to have generated the set of audio signal values; means for receiving a set of audio signal values representative of an input audio signal; means for applying the set of received audio signal values to said stored function to give the probability density for said model parameters for the set of received audio signal values; means for processing said function with said set of received audio signal values applied, to derive samples of parameter values from said probability density; means for analysing at least some of said derived samples of parameter values to determine parameter values that are representative of the set of received audio signal values; and means for encoding said determined parameter values to generate encoded data representative of the received audio signal values.
2. A system according to claim 1, wherein said processing means is operable to draw samples iteratively from said probability density function.
3. A system according to claim 2, wherein said processing means comprises a Gibbs sampler.
4. A system according to claim 2 or 3, wherein said analysing means is operable to determine a histogram of said drawn samples and wherein said values of said parameters are determined from said histogram.
5. A system according to claim 4, wherein said processing means is operable to determine said values of said first parameters using a weighted sum of said drawn samples and wherein the weighting for each sample is determined from said histogram.
6. A system according to any preceding claim, wherein said receiving means is operable to receive a sequence of sets of signal values representative of an input audio signal and wherein said applying means, processing means and analysing means are operable to perform their function with respect to each set of received audio signal values to determine parameter values that are representative of each set of received audio signal values.
7. A system according to claim 6, wherein said processing means is operable to use the values of parameters obtained during the processing of a preceding set of signal values as initial estimates for the values of the corresponding parameters for a current set of signal values being processed.
8. A system according to claim 6 or 7, wherein said sets of signal values in said sequence are non-overlapping.
9. A system according to any of claims 6 to 8, wherein said processing means comprises means for varying the number of parameters used to represent the audio signal within each set of audio signal values.
10. A system according to any preceding claim, wherein said audio model comprises an auto-regressive process model and wherein said parameters include auto-regressive model coefficients.
11. A system according to any preceding claim, wherein said received set of audio signal values are representative of an input speech signal.
12. A system according to claim 11, wherein said received set of speech signal values are representative of a speech signal generated by a speech source as distorted by a transmission channel between the speech source and the receiving means; wherein said predetermined function includes a first part having first parameters which models said source and a second part having second parameters which models said channel; wherein said processing means is operable to derive samples of at least said first parameters; and wherein said analysing means is operable to determine values of said first parameters that are representative of said speech generated by said speech source before it was distorted by said transmission channel.
13. A system according to claim 12, wherein said function is in terms of a set of raw speech signal values representative of speech generated by said source before being distorted by said transmission channel, wherein the system further comprises second processing means for processing the received set of signal values with initial estimates of said first and second parameters, to generate an estimate of the raw speech signal values corresponding to the received set of signal values and wherein said applying means is operable to apply said estimated set of raw speech signal values to said function in addition to said set of received signal values.
14. A system according to claim 13, wherein said second processing means comprises a simulation smoother.
15. A system according to claim 13 or 14, wherein said second processing means comprises a Kalman filter.
16. A system according to any of claims 12 to 15, wherein said second part is a moving average model and said second parameters comprise moving average model coefficients.
17. A system according to any preceding claim, further comprising means for evaluating said probability density function for the set of received signal values using one or more of said drawn samples of parameter values for different numbers of parameter values, to determine respective probabilities that the predetermined signal model has those parameter values and wherein said processing means is operable to process at least some of said drawn samples of parameter values and said evaluated probabilities to determine said values of said parameters that are representative of the received audio signal.
18. A system according to any preceding claim, wherein said analysing means is operable to analyse at least some of said derived samples of parameter values to determine a measure of the variance of at least some of said samples of parameter values; wherein said system further comprises means for determining an indication of the quality of the received audio signal using said variance measure; and wherein said encoding means is operable to encode said determined parameter values in dependence upon the determined quality indication.
19. A system according to claim 18, wherein said encoding means is operable to encode said parameter values using a first encoding technique if said quality indication is above a predetermined value and is operable to encode said parameter values using a second encoding technique if said quality indication is below said value.
20. A system according to claim 19, wherein said first encoding technique is operable to minimise the data to be transmitted and wherein said second encoding technique is operable to minimise information lost in the encoding.
21. An audio transmission system comprising:
a transmission unit comprising: means for receiving an audio signal; an audio encoding system according to any preceding claim for generating encoded parameter values representative of received audio signal values; and means for transmitting the encoded parameter values; and a receiver unit comprising means for receiving the transmitted parameter values; and means for processing the received parameter values to generate an output signal in dependence thereon.
22. A system according to claim 21, wherein said processing means of said receiving unit comprises speech synthesis means for generating a synthesised speech signal in dependence upon the received parameter values.
23. A system according to claim 21 or 22, wherein said processing means of said receiving unit comprises a speech recognition system which is operable to compare the received parameter values with stored reference models to generate a recognition result.
24. A system according to claim 21 when dependent upon claim 19, wherein said transmission unit is operable to transmit said quality indication to said receiving unit and wherein said receiving unit is operable to receive said quality indication and to decode said encoded parameters in dependence upon the received quality indication.
25. A system according to claim 24, wherein said receiving unit is operable to decode said encoded parameter values in accordance with a first decoding technique if said quality indication has a value above a predetermined threshold value and is operable to decode said encoded parameter values in accordance with a second decoding technique if said quality indication is below said predetermined value.
26. An audio transmission system comprising a transmitter and receiver, wherein the transmitter comprises:
means for receiving an input audio signal; means for determining a measure of the quality of the input audio signal; means for encoding data representative of the audio signal in dependence upon the determined quality measure; and means for transmitting the encoded audio data; and wherein said receiver comprises:
means for receiving the encoded audio data; means for decoding the transmitted audio data; and means for outputting the decoded audio data.
27. A system according to claim 26, wherein said transmitter is operable to transmit said quality measure and wherein said decoder is operable to decode said encoded audio data in dependence upon the received quality measure.
28. A system according to claim 26 or 27, wherein said encoder is operable to encode said audio signal in accordance with a first encoding technique if said quality measure is above a predetermined threshold and is operable to encode said audio signal in accordance with a second encoding technique if said quality measure is below a predetermined threshold.
29. A system according to claim 28, wherein said first encoding technique is operable to minimise the data to be transmitted and wherein said second encoding technique is operable to minimise information lost in the encoding.
30. An audio transmission system comprising a transmission unit and a receiving unit, wherein the transmission unit comprises:
a memory for storing a predetermined function which gives, for a given set of audio signal values, a probability density for parameters of a predetermined audio model which is assumed to have generated the set of audio signal values, the probability density defining, for a given set of model parameter values, the probability that the predetermined audio model has those parameter values, given that the model is assumed to have generated the set of audio signal values; means for receiving a set of audio signal values representative of an input audio signal; means for applying the set of received audio signal values to said stored function to give the probability density for said model parameters for the set of received audio signal values; means for processing said function with said set of received audio signal values applied, to derive samples of parameter values from said probability density; means for analysing at least some of said derived samples of parameter values to determine parameter values that are representative of the set of received audio signal values; and means for transmitting said determined parameter values; and wherein the receiver unit comprises:
means for receiving said transmitted parameter values; and means for processing the received parameter values to generate an output signal in dependence thereon.
31. A system according to claim 30, wherein said transmission unit further comprises means for encoding the determined parameter values and wherein said receiving unit comprises means for decoding the encoded parameter values.
32. A transmitter apparatus comprising:
a memory for storing a predetermined function which gives, for a given set of audio signal values, a probability density for parameters of a predetermined audio model which is assumed to have generated the set of audio signal values, the probability density defining, for a given set of model parameter values, the probability that the predetermined audio model has those parameter values, given that the model is assumed to have generated the set of audio signal values; means for receiving a set of audio signal values representative of an input audio signal; means for applying the set of received audio signal values to said stored function to give the probability density for said model parameters for the set of received audio signal values; means for processing said function with said set of received audio signal values applied, to derive samples of parameter values from said probability density; means for analysing at least some of said derived samples of parameter values to determine parameter values that are representative of the set of received audio signal values; and means for transmitting said determined parameter values.
33. A transmitter apparatus comprising:
means for receiving an input audio signal; means for determining a measure of the quality of the input audio signal; means for encoding data representative of the audio signal in dependence upon the determined quality measure; and means for transmitting the encoded data.
34. A transmitter according to claim 33, wherein said transmitting means is operable to transmit said quality measure in addition to said encoded data.
35. A transmitter according to claim 33 or 34, wherein said encoder is operable to encode said audio signal in accordance with a first encoding technique if said quality measure is above a predetermined threshold and is operable to encode said audio signal in accordance with a second encoding technique if said quality measure is below a predetermined threshold.
36. A transmitter according to claim 35, wherein said first encoding technique is operable to minimise the data to be transmitted and wherein said second encoding technique is operable to minimise information lost in the encoding.
37. An audio encoding method comprising the steps of:
storing a predetermined function which gives, for a given set of audio signal values, a probability density for parameters of a predetermined audio model which is assumed to have generated the set of audio signal values, the probability density defining, for a given set of model parameter values, the probability that the predetermined audio model has those parameter values, given that the model is assumed to have generated the set of audio signal values; receiving a set of audio signal values representative of an input audio signal at a receiver; applying the set of received audio signal values to said stored function to give the probability density for said model parameters for the set of received audio signal values; processing said function with said set of received audio signal values applied, to derive samples of parameter values from said probability density; analysing at least some of said derived samples of parameter values to determine parameter values that are representative of the set of received audio signal values; and encoding said determined parameter values to generate encoded data representative of the received audio signal values.
38. A method according to claim 37, wherein said processing step draws samples iteratively from said probability density function.
39. A method according to claim 38, wherein said processing step uses a Gibbs sampler.
40. A method according to claim 38 or 39, wherein said analysing step determines a histogram of said drawn samples and wherein said values of said parameters are determined from said histogram.
41. A method according to claim 40, wherein said processing step determines said values of said first parameters using a weighted sum of said drawn samples and wherein the weighting for each sample is determined from said histogram.
42. A method according to any of claims 37 to 41, wherein said receiving step receives a sequence of sets of signal values representative of an input audio signal and wherein said applying step, processing step and analysing step are performed for each set of received audio signal values to determine parameter values that are representative of each set of received audio signal values.
43. A method according to claim 42, wherein said processing step uses the values of parameters obtained during the processing of a preceding set of signal values as initial estimates for the values of the corresponding parameters for a current set of signal values being processed.
44. A method according to claim 42 or 43, wherein said sets of signal values in said sequence are non-overlapping.
45. A method according to any of claims 42 to 44, wherein said processing step comprises the step of varying the number of parameters used to represent the audio signal within each set of audio signal values.
46. A method according to any of claims 37 to 45, wherein said audio model comprises an auto-regressive process model and wherein said parameters include auto-regressive model coefficients.
47. A method according to any of claims 37 to 46, wherein said received set of audio signal values are representative of an input speech signal.
48. A method according to claim 47, wherein said received set of speech signal values are representative of a speech signal generated by a speech source as distorted by a transmission channel between the speech source and the receiver; wherein said predetermined function includes a first part having first parameters which models said source and a second part having second parameters which models said channel; wherein said processing step derives samples of at least said first parameters; and wherein said analysing step determines values of said first parameters that are representative of said speech generated by said speech source before it was distorted by said transmission channel.
49. A method according to claim 48, wherein said function is in terms of a set of raw speech signal values representative of speech generated by said source before being distorted by said transmission channel, further comprising a second processing step for processing the received set of signal values with initial estimates of said first and second parameters, to generate an estimate of the raw speech signal values corresponding to the received set of signal values and wherein said applying step applies said estimated set of raw speech signal values to said function in addition to said set of received signal values.
50. A method according to claim 49, wherein said second processing step uses a simulation smoother.
51. A method according to claim 49 or 50, wherein said second processing step uses a Kalman filter.
52. A method according to any of claims 48 to 50, wherein said second part is a moving average model and said second parameters comprise moving average model coefficients.
53. A method according to any of claims 37 to 52, further comprising the step of evaluating said probability density function for the set of received signal values using one or more of said drawn samples of parameter values for different numbers of parameter values, to determine respective probabilities that the predetermined signal model has those parameter values and wherein said processing step processes at least some of said drawn samples of parameter values and said evaluated probabilities to determine said values of said parameters that are representative of the received audio signal.
54. A method according to any of claims 37 to 53, wherein said analysing step analyses at least some of said derived samples of parameter values to determine a measure of the variance of at least some of said samples of parameter values; further comprising the step of determining an indication of the quality of the received audio signal; and wherein said encoding step encodes said determined parameter values in dependence upon the determined quality indication.
55. A method according to claim 54, wherein said encoding step encodes said parameter values using a first encoding technique if said quality indication is above a predetermined value and encodes said parameter values using a second encoding technique if said quality indication is below said value.
56. A method according to claim 55, wherein said first encoding technique is operable to minimise the data to be transmitted and wherein said second encoding technique is operable to minimise information lost in the encoding.
57. An audio transmission method comprising the steps of:
receiving an audio signal at a transmission unit; encoding the audio signal using a method according to any of claims 37 to 56 to generate encoded parameter values representative of the audio signal; transmitting the encoded parameter values; receiving the transmitted encoded parameter values at a receiver unit; decoding the received encoded parameter values; and processing the decoded parameter values to generate an output signal in dependence thereon.
58. A method according to claim 57, wherein said processing step at said receiving unit uses speech synthesis means for generating a synthesised speech signal in dependence upon the received parameter values.
59. A method according to claim 57 or 58, wherein said processing step at said receiving unit uses a speech recognition system to compare the received parameter values with stored reference models and to generate a recognition result.
60. A method according to claim 57 when dependent upon claim 54, further comprising the step of transmitting said quality indication to said receiving unit and, at said receiving unit, the steps of receiving said quality indication and decoding said encoded parameters in dependence upon the received quality indication.
61. A method according to claim 60, comprising the step of, at said receiving unit, decoding said encoded parameter values in accordance with a first decoding technique if said quality indication has a value above a predetermined threshold value and decoding said encoded parameter values in accordance with a second decoding technique if said quality indication is below said predetermined value.
62. An audio transmission method using a transmitter and receiver, the method comprising the steps of:
at the transmitter:
receiving an audio signal; determining a measure of the quality of the received audio signal; encoding data representative of the audio signal in dependence upon the determined quality measure; and transmitting the encoded data; and at said receiver:
receiving the encoded audio data; decoding the transmitted audio data; and outputting the decoded audio data.
63. A method according to claim 62, further comprising the step of, at said transmitter, transmitting said quality measure and wherein said decoding step decodes said encoded audio data in dependence upon the received quality measure.
64. A method according to claim 62 or 63, wherein said encoding step encodes said audio signal in accordance with a first encoding technique if said quality measure is above a predetermined threshold and encodes said audio signal in accordance with a second encoding technique if said quality measure is below a predetermined threshold.
65. A method according to claim 64, wherein said first encoding technique is operable to minimise the data to be transmitted and wherein said second encoding technique is operable to minimise information lost in the encoding.
66. An audio transmission method using a transmission unit and a receiving unit, wherein the method comprises the steps of:
at the transmission unit:
storing a predetermined function which gives, for a given set of audio signal values, a probability density for parameters of a predetermined audio model which is assumed to have generated the set of audio signal values, the probability density defining, for a given set of model parameter values, the probability that the predetermined audio model has those parameter values, given that the model is assumed to have generated the set of audio signal values; receiving a set of audio signal values representative of an input audio signal; applying the set of received audio signal values to said stored function to give the probability density for said model parameters for the set of received audio signal values; processing said function with said set of received audio signal values applied, to derive samples of parameter values from said probability density; analysing at least some of said derived samples of parameter values to determine parameter values that are representative of the set of received audio signal values; and transmitting said determined parameter values; and at the receiver unit:
receiving said transmitted parameter values; and processing the received parameter values to generate an output signal in dependence thereon.
67. A computer readable medium storing computer executable process steps to cause a programmable computer apparatus to perform the method of any of claims 37 to 66.
68. Processor implementable process steps for causing a programmable computing device to perform the method according to any of claims 37 to 66.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/866,585 US20020026253A1 (en) | 2000-06-02 | 2001-05-30 | Speech processing apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0013541A GB0013541D0 (en) | 2000-06-02 | 2000-06-02 | Speech processing system |
Publications (2)
Publication Number | Publication Date |
---|---|
GB0020310D0 GB0020310D0 (en) | 2000-10-04 |
GB2367990A true GB2367990A (en) | 2002-04-17 |
Family
ID=9892938
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0013541A Ceased GB0013541D0 (en) | 2000-06-02 | 2000-06-02 | Speech processing system |
GB0020309A Withdrawn GB2367466A (en) | 2000-06-02 | 2000-08-17 | Speech processing system |
GB0020314A Withdrawn GB2367729A (en) | 2000-06-02 | 2000-08-17 | Speech processing system |
GB0020310A Withdrawn GB2367990A (en) | 2000-06-02 | 2000-08-17 | Speech processing system |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0013541A Ceased GB0013541D0 (en) | 2000-06-02 | 2000-06-02 | Speech processing system |
GB0020309A Withdrawn GB2367466A (en) | 2000-06-02 | 2000-08-17 | Speech processing system |
GB0020314A Withdrawn GB2367729A (en) | 2000-06-02 | 2000-08-17 | Speech processing system |
Country Status (1)
Country | Link |
---|---|
GB (4) | GB0013541D0 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2380644A (en) * | 2001-06-07 | 2003-04-09 | Canon Kk | Speech detection |
US8775168B2 (en) | 2006-08-10 | 2014-07-08 | Stmicroelectronics Asia Pacific Pte, Ltd. | Yule walker based low-complexity voice activity detector in noise suppression systems |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4386237A (en) * | 1980-12-22 | 1983-05-31 | Intelsat | NIC Processor using variable precision block quantization |
EP0554083A2 (en) * | 1992-01-30 | 1993-08-04 | Ricoh Company, Ltd | Neural network learning system |
US5884269A (en) * | 1995-04-17 | 1999-03-16 | Merging Technologies | Lossless compression/decompression of digital audio data |
US6226613B1 (en) * | 1998-10-30 | 2001-05-01 | At&T Corporation | Decoding input symbols to input/output hidden markoff models |
EP1160768A2 (en) * | 2000-06-02 | 2001-12-05 | Canon Kabushiki Kaisha | Robust features extraction for speech processing |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2137052B (en) * | 1983-02-14 | 1986-07-23 | Stowbell | Improvements in or relating to the control of mobile radio communication systems |
GB8608289D0 (en) * | 1986-04-04 | 1986-05-08 | Pa Consulting Services | Noise compensation in speech recognition |
KR100609128B1 (en) * | 1999-07-12 | 2006-08-04 | 에스케이 텔레콤주식회사 | Apparatus and method for measuring quality of reverse link in CDMA system |
- 2000-06-02 GB GB0013541A patent/GB0013541D0/en not_active Ceased
- 2000-08-17 GB GB0020309A patent/GB2367466A/en not_active Withdrawn
- 2000-08-17 GB GB0020314A patent/GB2367729A/en not_active Withdrawn
- 2000-08-17 GB GB0020310A patent/GB2367990A/en not_active Withdrawn
Also Published As
Publication number | Publication date |
---|---|
GB0013541D0 (en) | 2000-07-26 |
GB0020310D0 (en) | 2000-10-04 |
GB2367466A (en) | 2002-04-03 |
GB0020309D0 (en) | 2000-10-04 |
GB0020314D0 (en) | 2000-10-04 |
GB2367729A (en) | 2002-04-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |