GB2480108A - Speech Synthesis using jointly estimated acoustic and excitation models - Google Patents

Speech Synthesis using jointly estimated acoustic and excitation models

Info

Publication number
GB2480108A
GB2480108A GB1007705A GB201007705A
Authority
GB
United Kingdom
Prior art keywords
model
parameters
excitation
speech
acoustic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1007705A
Other versions
GB2480108B (en)
GB201007705D0 (en)
Inventor
Ranniery Maia
Byung Ha Chun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Europe Ltd
Original Assignee
Toshiba Research Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Research Europe Ltd filed Critical Toshiba Research Europe Ltd
Priority to GB1007705.5A priority Critical patent/GB2480108B/en
Publication of GB201007705D0 publication Critical patent/GB201007705D0/en
Priority to JP2011100487A priority patent/JP2011237795A/en
Priority to US13/102,372 priority patent/US20110276332A1/en
Publication of GB2480108A publication Critical patent/GB2480108A/en
Application granted granted Critical
Publication of GB2480108B publication Critical patent/GB2480108B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L13/04 Details of speech synthesis systems, e.g. synthesiser structure or memory management
    • G10L13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination

Abstract

A speech synthesis method comprising: receiving a text input and outputting speech corresponding to said text input using a stochastic model, said stochastic model comprising an acoustic model and an excitation model, said acoustic model having a plurality of model parameters describing probability distributions which relate a word or part thereof to a feature, said excitation model comprising excitation model parameters which are used to model the vocal cords and lungs to output the speech using said features; wherein said acoustic parameters and excitation parameters have been jointly estimated; and outputting said speech. A corresponding method of training a speech synthesis statistical model and a speech synthesis apparatus are also disclosed.

Description

A Speech Processing Method and Apparatus

Embodiments of the present invention described herein generally relate to the field of speech synthesis.
An acoustic model is used as the backbone of speech synthesis: it relates a sequence of words or parts of words to a sequence of feature vectors.
In statistical parametric speech synthesis, an excitation model is used in combination with the acoustic model. The excitation model is used to model the action of the lungs and vocal cords in order to output speech which is more natural.
In known statistical speech synthesis, features such as cepstral coefficients are extracted from speech waveforms and their trajectories are modelled by a statistical model, such as a Hidden Markov Model (HMM). The parameters of the statistical model are estimated so as to maximize its likelihood on the training data or to minimize an error between the training data and the generated features. At the synthesis stage, a sentence-level model is composed from the estimated statistical model according to an input text, and features are then generated from this sentence model so as to maximize their output probabilities or minimize an objective function.
The present invention will now be described with reference to the following non-limiting embodiments in which: Figure 1 is a schematic of a very basic speech synthesis system; Figure 2 is a schematic of the architecture of a processor configured for text-to-speech synthesis; Figure 3 is a block diagram of a speech synthesis system, the parameters of which are estimated in accordance with an embodiment of the present invention; Figure 4 is a plot of a Gaussian distribution relating a particular word or part thereof to an observation; Figure 5 is a flow diagram showing the initialisation steps in a method of training a speech synthesis model in accordance with an embodiment of the present invention; Figure 6 is a flow diagram showing the recursion steps in a method of training a speech synthesis model in accordance with an embodiment of the present invention; and Figure 7 is a flow diagram showing a method of speech synthesis in accordance with an embodiment of the present invention.
Current speech synthesis systems often use a source filter model. In this model, an excitation signal is generated and filtered. A spectral feature sequence is extracted from speech and utilized to separately estimate acoustic model and excitation model parameters. Therefore, spectral features are not optimized by taking into account the excitation model and vice versa.
The inventors of the present invention have taken a completely different approach to the problem of estimating the acoustic and excitation model parameters and, in an embodiment, provide a method in which acoustic model parameters are jointly estimated with excitation model parameters in a way that maximizes the likelihood of the speech waveform.
According to an embodiment, it is presumed that speech is represented by the convolution of a slowly varying vocal tract impulse response filter, derived from spectral envelope features, and an excitation source. In the proposed approach, the extraction of spectral features is integrated into the interlaced training of the acoustic and excitation models. Estimation of the parameters of the models in question based on the maximum likelihood (ML) criterion can be viewed as full-fledged waveform-level closed-loop training with the implicit minimization of the distance between natural and synthesized speech waveforms.
In an embodiment, a joint estimation of acoustic and excitation models for statistical parametric speech synthesis is based on maximum likelihood. The resulting system can be interpreted as a factor analyzed trajectory HMM. The approximations made for the estimation of the parameters of the joint acoustic and excitation model comprise keeping the state sequence fixed during training and deriving a single best spectral coefficient vector.
In an embodiment, parameters of the acoustic model are updated by taking into account the excitation model, and parameters of the latter are calculated assuming a spectrum generated from the acoustic model. The resulting system connects spectral envelope parameter extraction and excitation signal modelling in a fashion similar to a factor analyzed trajectory HMM. The proposed approach can be interpreted as waveform-level closed-loop training to minimize the distance between natural and synthesized speech.
In an embodiment, acoustic and excitation models are jointly optimized from the speech waveform directly in a statistical framework.
Thus, the parameters are jointly estimated as:

λ̂ = argmax_λ p(s | ℓ, λ),

where λ represents the parameters of the excitation model and acoustic model to be optimised, s is the natural speech waveform and ℓ is a transcription of the speech waveform.
In an embodiment, the above training method can be applied to text-to-speech (TTS) synthesizers constructed according to the statistical parametric principle. Consequently, it can also be applied to any task in which such TTS systems are embedded, such as speech-to-speech translation and spoken dialog systems.
In one embodiment a source filter model is used where said text input is processed by said acoustic model to output F0 and spectral features, the method further comprising: processing said F0 features to form a pulse train and filtering said pulse train using excitation parameters derived from said excitation model to produce an excitation signal, and filtering said excitation signal using filter parameters derived from said spectral features.
The acoustic model parameters may comprise means and variances of said probability distributions. Examples of the features output by said acoustic model are F0 features and spectral features.
The excitation model parameters may comprise filter coefficients which are configured to filter a pulse signal derived from F0 features and white noise.
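By way of illustration only, the following Python sketch shows the source-filter arrangement described above: a pulse train derived from F0 is filtered by a voiced filter, white noise is filtered by an unvoiced filter, the two excitations are mixed, and the result is passed through the vocal tract filter. The filter coefficients, sample rate and pitch values are arbitrary placeholders chosen for the sketch, not values taken from a trained model.

```python
import numpy as np
from scipy.signal import lfilter

def source_filter_sketch(pulse_train, noise, hv, hu_num, hu_den, hc):
    """Source-filter sketch: voiced + unvoiced excitation, then vocal tract filter.

    pulse_train : impulses derived from F0 (1 at pitch marks, 0 elsewhere)
    noise       : zero-mean, unit-variance white Gaussian noise of the same length
    hv          : FIR coefficients of the voiced ("glottis") filter Hv(z)
    hu_num, hu_den : numerator/denominator of the unvoiced filter Hu(z)
    hc          : FIR approximation of the vocal tract filter Hc(z)
    All filter values here are illustrative placeholders, not trained parameters.
    """
    v = lfilter(hv, [1.0], pulse_train)       # voiced excitation v(n)
    u = lfilter(hu_num, hu_den, noise)        # unvoiced excitation u(n)
    e = v + u                                 # mixed excitation e(n)
    return lfilter(hc, [1.0], e)              # speech s(n) = hc(n) * e(n)

# Toy usage with made-up filters.
rng = np.random.default_rng(0)
t = np.zeros(800); t[::80] = 1.0              # ~100 Hz pitch at an 8 kHz rate
w = rng.standard_normal(800)
s = source_filter_sketch(t, w, hv=[1.0, 0.5, 0.25],
                         hu_num=[0.1], hu_den=[1.0, -0.7], hc=[1.0, -0.9])
```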
In an embodiment, said joint estimation process comprises a recursive process where in one step excitation parameters are updated using the latest estimate of acoustic parameters and in another step acoustic model parameters are updated using the latest estimate of excitation parameters. Preferably, said joint estimation process uses a maximum likelihood technique.
In a further embodiment, said stochastic model further comprises a mapping model and said mapping model comprises mapping model parameters, said mapping model being configured to map spectral features to filter coefficients which represent the human vocal tract. Preferably the relationship between the spectral features and filter coefficients is modelled as a Gaussian process.
Embodiments of the present invention can be implemented either in hardware or in software on a general purpose computer. Further, the present invention can be implemented as a combination of hardware and software. The present invention can also be implemented by a single processing apparatus or a distributed network of processing apparatuses.
Since the present invention can be implemented by software, the present invention encompasses computer code provided to a general purpose computer on any suitable carrier medium. The carrier medium can comprise any storage medium such as a floppy disk, a CD ROM, a magnetic device or a programmable memory device, or any transient medium such as any signal e.g. an electrical, optical or microwave signal.
Figure 1 is a schematic of a very basic speech processing system; the system of figure 1 has been configured for speech synthesis. Text is received via unit 1. Unit 1 may be a connection to the internet, a connection to a text output from a processor, an input from a speech-to-speech language processing module, a mobile phone etc. The unit 1 could be substituted by a memory which contains text data previously saved.
The text signal is then directed into a speech processor 3 which will be described in more detail with reference to figure 2.
The speech processor 3 takes the text signal and turns it into speech corresponding to the text signal. Many different forms of output are available. For example, the output may be in the form of a direct audio output 5 which outputs to a speaker. This could be implemented on a mobile telephone, satellite navigation system etc. Alternatively, the output could be saved as an audio file and directed to a memory. Also, the output could be in the form of an electronic audio signal which is provided to a further system 9.
Figure 2 shows the basic architecture of a text to speech system 51. The text to speech system 51 comprises a processor 53 which executes a program 55. Text to speech system 51 further comprises storage 57. The storage 57 stores data which is used by program 55 to convert text to speech. The text to speech system 51 further comprises an input module 61 and an output module 63. The input module 61 is connected to a text input 65. Text input 65 receives text. The text input 65 may be for example a keyboard. Alternatively, text input 65 may be a means for receiving text data from an external storage medium or a network.
Connected to the output module 63 is an audio output 67. The audio output 67 is used for outputting a speech signal converted from the text received at text input 65. The audio output 67 may be, for example, a direct audio output, e.g. a speaker, or an output for an audio data file which may be sent to a storage medium, networked etc. In use, the text to speech system 51 receives text through text input 65. The program 55 executed on processor 53 converts the text into speech data using data stored in the storage 57. The speech is output via the output module 63 to audio output 67.
Figure 3 is a schematic of a model of speech generation. The model has two sub-models: an acoustic model 101, and an excitation model 103.
Acoustic models where a word or part thereof are converted to features or feature vectors are well known in the art of speech synthesis. In this embodiment, an acoustic model is used which is based on a Hidden Markov Model (HMM). However, other models could also be used.
The actual model used in this embodiment is a standard model, the details of which are outside the scope of this patent application. However, the model will require the provision of probability density functions (pdfs) which relate to the probability of an observation represented by a feature vector being related to a word or part thereof.
Generally, this probability distribution will be a Gaussian distribution in n-dimensional space.
A schematic example of a generic Gaussian distribution is shown in figure 4. Here, the horizontal axis corresponds to a parameter of the input vector in one dimension, and the probability distribution shown is that of a particular word or part thereof relating to the observation. For example, in figure 4, an observation corresponding to a feature vector x has a probability p1 of corresponding to the word whose probability distribution is shown in figure 4. The shape and position of the Gaussian is defined by its mean and variance. These parameters are determined during training for the vocabulary which the acoustic model covers; they will be referred to as the "model parameters" of the acoustic model.
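As an illustration only, the short sketch below evaluates the log probability of an observation vector under a diagonal-covariance Gaussian defined by such mean and variance parameters; the dimensionality and the numerical values are arbitrary examples, not trained model parameters.

```python
import numpy as np

def log_gaussian_diag(x, mean, var):
    """Log N(x; mean, diag(var)) for a feature vector x (illustrative helper)."""
    x, mean, var = map(np.asarray, (x, mean, var))
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

# e.g. log likelihood of a 3-dimensional observation under one distribution
print(log_gaussian_diag([0.2, -0.1, 0.05], mean=[0.0, 0.0, 0.0], var=[0.1, 0.2, 0.05]))
```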
The text which is to be output into speech is first converted into phone labels. A phone label comprises a phoneme with contextual information about that phoneme. Examples of contextual information are the preceding and succeeding phonemes, the position within a word of the phoneme, the position of the word in a sentence etc. The phoneme labels are then input into the acoustic model.
Once the model parameters have been determined, the acoustic model HMM can be used to determine the likelihood of a sequence of observations corresponding to a sequence of words or parts of words.
In this particular embodiment, the features which are the output of acoustic model 101 are F0 features and spectral features. In this embodiment, the spectral features are cepstral coefficients. However, in other embodiments other spectral features could be used such as linear prediction coefficients (LPC), line spectral pairs (LSPs) and their frequency warped versions.
The spectral features are converted to form vocal tract filter coefficients, expressed as h_c(n).
The generated F0 features are converted into a pulse train sequence t(n); the periods between pulses are determined according to the F0 values.
The pulse train is a sequence of signals in the time domain, for example: 0 1 0 0 0 1 0 0 0 0 1 0 0, where 1 denotes a pulse. The human vocal cords vibrate to generate periodic signals for voiced speech. The pulse train sequence is used to approximate these periodic signals.
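The following sketch illustrates one plausible way of turning a per-frame F0 contour into such a pulse train by accumulating pitch phase; the sample rate, frame shift and input format are assumptions of the sketch rather than details taken from the embodiment.

```python
import numpy as np

def f0_to_pulse_train(f0, fs=16000, frame_shift=0.005):
    """Place a unit pulse each time the accumulated pitch phase passes 1.

    f0 : per-frame F0 values in Hz, 0 for unvoiced frames (assumed format).
    """
    samples_per_frame = int(round(fs * frame_shift))
    t = np.zeros(len(f0) * samples_per_frame)
    phase = 0.0
    for i, f in enumerate(f0):
        for n in range(samples_per_frame):
            if f > 0:
                phase += f / fs
                if phase >= 1.0:          # one pitch period completed -> emit pulse
                    phase -= 1.0
                    t[i * samples_per_frame + n] = 1.0
            else:
                phase = 0.0               # reset the phase in unvoiced regions
    return t

pulses = f0_to_pulse_train([120, 120, 0, 0, 100, 100])
```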
A white noise excitation sequence w(n) is generated by a white noise generator (not shown).
The pulse train t(n) and the white noise sequence w(n) are filtered using the excitation model parameters H_v(z) and H_u(z) respectively. The excitation model parameters are produced from excitation model 103. H_v(z) represents the voiced impulse response filter coefficients and is sometimes referred to as the "glottis filter" since it represents the action of the glottis. H_u(z) represents the unvoiced filter response coefficients. H_v(z) and H_u(z) together are excitation parameters which model the lungs and vocal cords.
A voiced excitation signal v(n), which is a time domain signal, is produced from the filtered pulse train, and an unvoiced excitation signal u(n), which is also a time domain signal, is produced from the white noise w(n). These signals v(n) and u(n) are mixed (added) to compose the mixed excitation signal in the time domain, e(n).
Finally, the excitation signal e(n) is filtered by the vocal tract impulse response filter H_c(z), derived from the spectral features as explained above, to obtain the speech waveform s(n).
In a speech synthesis software product, the product comprises a memory which contains the coefficients of H_v(z) and H_u(z) along with the acoustic model parameters such as means and variances. The product will also contain data which allows the spectral features output from the acoustic model to be converted to H_c(z). When the spectral features are cepstral coefficients, the conversion of the spectral features to H_c(z) is deterministic and not dependent on the nature of the data used to train the stochastic model.
However, if the spectral features comprise other features such as linear prediction coefficients (LPC), line spectral pairs (LSPs) or their frequency warped versions, then the mapping between the spectral features and H_c(z) is not deterministic and needs to be estimated when the acoustic and excitation parameters are estimated. However, regardless of whether the mapping between the spectral features and H_c(z) is deterministic or estimated using a mapping model, in a preferred embodiment a software synthesis product will just comprise the information needed to convert spectral features to H_c(z).
Training of a speech synthesis system involves estimating the parameters of the models.
In the above system, the acoustic, excitation and mapping model parameters are to be estimated. However, it should be noted that the mapping model parameters can be removed and this will be described later.
In a training method in accordance with an embodiment of the present invention, the acoustic model parameters and the excitation model parameters are estimated at the same time in the same process.
To understand the differences, first a conventional framework for estimating these parameters will be described.
In known statistical parametric speech synthesis, first a "super-vector" of speech features c = [c_0^T ... c_{T-1}^T]^T is extracted from the speech waveform, where c_t = [c_t(0) ... c_t(C)]^T is a C-th order speech parameter vector at frame t, and T is the total number of frames. Estimation of the acoustic model parameters is usually done through the ML criterion:

λ̂_c = argmax_{λ_c} p(c | ℓ, λ_c),    (1)

where ℓ is a transcription of the speech waveform and λ_c denotes a set of acoustic model parameters.
During synthesis, a speech feature vector c* is generated for a given text to be synthesized ℓ so as to maximize its output probability:

c* = argmax_{c'} p(c' | ℓ, λ̂_c).    (2)

These features, together with F0 and possibly duration, are utilized to generate the speech waveform by using the source-filter production approach described with reference to figure 3.
A training method in accordance with an embodiment of the present invention uses a different approach. Since the intention of any speech synthesizer is to mimic the speech waveform as well as possible, in an embodiment of the present invention a statistical model defined at the waveform level is proposed. The parameters of the proposed model are estimated so as to maximize the likelihood of the waveform itself, i.e.,

λ̂ = argmax_λ p(s | ℓ, λ),    (3)

where s = [s(0) ... s(N-1)]^T is a vector containing the entire speech waveform, s(n) is the waveform value at sample n, N is the number of samples, and λ denotes the set of parameters of the joint acoustic-excitation models.
By introducing two hidden variables, the state sequence q = {q_0, ..., q_{T-1}} (discrete) and the spectral parameter vector c = [c_0^T ... c_{T-1}^T]^T (continuous), Eq. (3) can be rewritten as:

λ̂ = argmax_λ Σ_q ∫ p(s | c, q, λ) p(c | q, λ) p(q | ℓ, λ) dc,    (4), (5)

where q_t is the state at frame t.
The terms p(s | c, q, λ), p(c | q, λ) and p(q | ℓ, λ) of Eq. (5) can be analysed separately as follows:
* p(s | c, q, λ): This probability concerns the generation of the speech waveform from the spectral features and a given state sequence. The maximization of this probability with respect to λ is closely related to the ML estimation of spectral model parameters. This probability is related to the assumed speech signal generative model.
* p(c | q, λ): This probability is given as the product of state-output probabilities of the speech parameter vectors if HMMs or hidden semi-Markov models (HSMMs) are used as the acoustic model. If trajectory HMMs are used, this probability is given as a state-sequence-output probability of the entire speech parameter vector.
* p(q | ℓ, λ): This probability gives the probability of state sequence q for a transcription ℓ. If an HMM or trajectory HMM is used as the acoustic model, this probability is given as a product of state-transition probabilities. If an HSMM or trajectory HSMM is used, it includes both state-transition and state-duration probabilities.
It is possible to model p(c | q, λ) and p(q | ℓ, λ) using existing acoustic models, such as HMMs, HSMMs or trajectory HMMs; the problem is how to model p(s | c, q, λ).
It is assumed that the speech signal is generated according to the diagram of Figure 3, i.e.,

s(n) = h_c(n) * [h_v(n) * t(n) + h_u(n) * w(n)],    (6)

where * denotes linear convolution and
* h_c(n) is the vocal tract filter impulse response;
* t(n) is a pulse train;
* w(n) is a Gaussian white noise sequence with mean zero and variance one;
* h_v(n) is the voiced filter impulse response;
* h_u(n) is the unvoiced filter impulse response.
Here the vocal tract, voiced and unvoiced filters are assumed to have respectively the following shapes in the z-transform domain:

H_c(z) = Σ_{p=0}^{P} h_c(p) z^{-p},    (7)

H_v(z) = Σ_{m=-M/2}^{M/2} h_v(m) z^{-m},    (8)

H_u(z) = K / (1 - Σ_{l=1}^{L} g(l) z^{-l}),    (9)

where P, M and L are respectively the orders of H_c(z), H_v(z) and H_u(z). Filter H_c(z) is considered to have a minimum-phase response because it represents the impulse response of the vocal tract filter. In addition, if the coefficients of H_u(z) are calculated according to the approach described in R. Maia, T. Toda, H. Zen, Y. Nankaku, and K. Tokuda, "An excitation model for HMM-based speech synthesis based on residual modelling," in Proc. of the 6th ISCA Workshop on Speech Synthesis, 2007, then H_u(z) also has a minimum-phase response. The parameters of the generative model above comprise the vocal tract, voiced and unvoiced filters H_c(z), H_v(z) and H_u(z), and the positions and amplitudes of t(n), {p_0, ..., p_{Z-1}} and {a_0, ..., a_{Z-1}}, with Z being the number of pulses. Although there are several ways to estimate H_v(z) and H_u(z), this description is based on the method of the above reference.
Using matrix notation, with uppercase and lowercase letters denoting respectively matrices and vectors, Eq. (6) can be written as:

s = H_c H_v t + s̄,    (10)

where

s = [s(0) ... s(N+P-1)]^T,    (11)

H_c = [h_c^{(0)} ... h_c^{(N-1)}],    (12)

h_c^{(n)} = [0 ... 0  h_c(0) ... h_c(P)  0 ... 0]^T,    (13)

H_v = [h_v^{(0)} ... h_v^{(N-1)}],    (14)

h_v^{(n)} = [0 ... 0  h_v(-M/2) ... h_v(M/2)  0 ... 0]^T,    (15)

t = [t(0) ... t(N-1)]^T,    (16)

s̄ = [s̄(0) ... s̄(N+L-1)]^T,    (17)

with the columns h_c^{(n)} and h_v^{(n)} containing the filter coefficients shifted down by n samples and padded with zeros.
The vector s̄ contains samples of

s̄(n) = h_c(n) * h_u(n) * w(n),    (18)

and can be interpreted as the error of the model for the voiced regions of the speech signal, with covariance matrix

H_c (G^T G)^{-1} H_c^T,    (19)

where

G = [g^{(0)} ... g^{(N+M-1)}],    (20)

g^{(n)} = [0 ... 0  1/K  -g(1)/K ... -g(L)/K  0 ... 0]^T.    (21)

As w(n) is Gaussian white noise, u(n) = h_u(n) * w(n) becomes a normally distributed stochastic process. Using vector notation, the probability of u is

p(u | G) = N(u; 0, (G^T G)^{-1}),    (22)

where N(x; μ, Σ) is the Gaussian distribution of x with mean vector μ and covariance matrix Σ. Thus, since

u(n) = H_c^{-1}(z) [s(n) - h_c(n) * h_v(n) * t(n)],    (23)

the probability of the speech vector s becomes

p(s | H_c, H_v, G, t) = N(s; H_c H_v t, H_c (G^T G)^{-1} H_c^T).    (24)

If the last P rows of H_c are neglected, which means neglecting the part of the impulse response of H_c(z) which produces the samples {s(N), ..., s(N+P-1)}, then H_c becomes square with dimensions (N+M) × (N+M) and equation (24) can be re-written as:

p(s | H_c, λ_e) = |H_c^{-1}| N(H_c^{-1} s; H_v t, (G^T G)^{-1}),    (25)

where λ_e = {H_v, G, t} are the parameters of the excitation modelling part of the speech generative model. It is interesting to note that the term H_c^{-1} s corresponds to the residual sequence, extracted from the speech signal s(n) through inverse filtering by H_c(z).
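To make the matrix formulation above concrete, the sketch below builds a convolution matrix for a single, time-invariant impulse response and checks that multiplying by it is equivalent to linear convolution; it also shows that the residual term H_c^{-1} s of Eq. (25) corresponds to inverse filtering. The real H_c of the embodiment is frame-varying, so this is only a simplified illustration with made-up coefficients.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

def convolution_matrix(h, n_in):
    """(len(h)+n_in-1) x n_in matrix H such that H @ x == np.convolve(h, x)."""
    h = np.asarray(h, dtype=float)
    col = np.concatenate([h, np.zeros(n_in - 1)])
    row = np.concatenate([[h[0]], np.zeros(n_in - 1)])
    return toeplitz(col, row)

# Toy check with a single, time-invariant vocal tract filter.
hc = np.array([1.0, -0.9, 0.4])
x = np.array([1.0, 0.0, 0.5, -0.2, 0.3])
Hc = convolution_matrix(hc, len(x))
assert np.allclose(Hc @ x, np.convolve(hc, x))

# The residual e = Hc^{-1} s of Eq. (25) corresponds to inverse filtering by Hc(z):
s = np.convolve(hc, x)[:len(x)]               # square (truncated) version of Hc @ x
e = lfilter([1.0], hc, s)                     # recovers x for this minimum-phase hc
```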
By assuming that H_v and H_u have a state-dependent parameter tying structure as that proposed in R. Maia, T. Toda, H. Zen, Y. Nankaku, and K. Tokuda, "An excitation model for HMM-based speech synthesis based on residual modelling," in Proc. of the 6th ISCA Workshop on Speech Synthesis, 2007, Eq. (25) can be re-written as

p(s | H_c, q, λ_e) = |H_c^{-1}| N(H_c^{-1} s; H_{v,q} t, (G_q^T G_q)^{-1}),    (26)

where H_{v,q} and G_q are respectively the voiced filter and inverse unvoiced filter impulse response matrices for state sequence q.
There is usually a one-to-one relationship between the vocal tract impulse response H_c (or the coefficients of H_c(z)) and the spectral features c. However, it is difficult to compute H_c from c in a closed form for some spectral feature representations. To address this problem, a stochastic approximation is introduced to model the relationship between c and H_c.
If the mapping between c and H_c is considered to be represented by a Gaussian process with probability p(H_c | c, q, λ_h), where λ_h is the parameter set of the model that maps spectral features onto the vocal tract filter impulse response, then p(s | c, q, λ_e, λ_h) becomes:

p(s | c, q, λ_e, λ_h) = ∫ p(s | H_c, q, λ_e) p(H_c | c, q, λ_h) dH_c    (27)
                      = ∫ |H_c^{-1}| N(H_c^{-1} s; H_{v,q} t, (G_q^T G_q)^{-1}) N(H_c; f_q(c), Λ_q) dH_c,    (28)

where f_q(c) is an approximated function to convert c to H_c and Λ_q is the covariance matrix of the Gaussian distribution in question. This representation includes, as a special case, the case in which H_c can be computed from c in a closed form, i.e. f_q(c) becomes the closed-form mapping function and Λ_q becomes a zero matrix. It is interesting to note that the resultant model becomes very similar to a shared factor analysis model if a linear function is utilized for f_q(c) with a parameter sharing structure dependent on q.
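A minimal numerical illustration of the mapping term p(H_c | c, q, λ_h) is given below, assuming a linear function f_q(c) = A_q c + b_q and a diagonal covariance; the matrices and dimensions are invented for the example and are not trained mapping model parameters.

```python
import numpy as np

def log_mapping_prob(h, c, A, b, sigma2):
    """log N(h; A @ c + b, diag(sigma2)): a linear-Gaussian stand-in for f_q(c).

    A, b and sigma2 are per-state mapping parameters (illustrative, not trained);
    a diagonal covariance is assumed to keep the sketch simple.
    """
    mean = A @ c + b
    return -0.5 * np.sum(np.log(2.0 * np.pi * sigma2) + (h - mean) ** 2 / sigma2)

rng = np.random.default_rng(1)
C, P = 24, 32                                  # cepstral order, impulse-response order
A = 0.01 * rng.standard_normal((P + 1, C + 1)) # hypothetical state-dependent loading
b = np.zeros(P + 1)
c = rng.standard_normal(C + 1)
h = A @ c + b + 0.01 * rng.standard_normal(P + 1)
print(log_mapping_prob(h, c, A, b, sigma2=np.full(P + 1, 1e-4)))
```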
If a trajectory HMM is used as the acoustic model, then p(c | q, λ_c) and p(q | ℓ, λ_c) can be defined as:

p(c | q, λ_c) = N(c; c̄_q, P_q),    (29)

p(q | ℓ, λ_c) = π_{q_0} Π_{t=0}^{T-2} a_{q_t q_{t+1}},    (30)

where π_i is the initial state probability of state i, a_{ij} is the state transition probability from state i to state j, and c̄_q and P_q correspond to the mean vector and covariance matrix of the trajectory HMM for state sequence q. In Eq. (29), c̄_q and P_q are given by

R_q c̄_q = r_q,    (31)

R_q = W^T Σ_q^{-1} W,    (32)

r_q = W^T Σ_q^{-1} μ_q,    (33)

where W is typically a 3T(C+1) × T(C+1) window matrix that appends dynamic features (velocity and acceleration features) to c. For example, if the static, velocity and acceleration features of c_t are calculated as

Δ^{(0)} c_t = c_t,    (34)

Δ^{(1)} c_t = (c_{t+1} - c_{t-1}) / 2,    (35)

Δ^{(2)} c_t = c_{t-1} - 2 c_t + c_{t+1},    (36)

then W is the band matrix whose t-th block row contains the block I for the static feature, the blocks -I/2 and I/2 at frame positions t-1 and t+1 for the velocity feature, and the blocks I, -2I and I at frame positions t-1, t and t+1 for the acceleration feature, where I and 0 correspond to the (C+1) × (C+1) identity and zero matrices.    (37)

μ_q and Σ_q^{-1} in Eqs. (32) and (33) correspond to the 3T(C+1) × 1 mean parameter vector and the 3T(C+1) × 3T(C+1) precision parameter matrix for the state sequence q, given as

μ_q = [μ_{q_0}^T ... μ_{q_{T-1}}^T]^T,    (38)

Σ_q^{-1} = diag{Σ_{q_0}^{-1}, ..., Σ_{q_{T-1}}^{-1}},    (39)

where μ_i and Σ_i^{-1} correspond to the 3(C+1) × 1 mean parameter vector and the 3(C+1) × 3(C+1) precision parameter matrix associated with state i, and Y = diag{X_1, ..., X_D} means that the matrices {X_1, ..., X_D} are the diagonal sub-matrices of Y. The mean parameter vectors and precision parameter matrices are defined as

μ_i = [μ_i^{(0)T} μ_i^{(1)T} μ_i^{(2)T}]^T,    (40)

Σ_i^{-1} = diag{Σ_i^{(0)-1}, Σ_i^{(1)-1}, Σ_i^{(2)-1}},    (41)

where μ_i^{(d)} and Σ_i^{(d)-1} correspond to the (C+1) × 1 mean parameter vector and (C+1) × (C+1) precision parameter matrix associated with state i and the d-th order dynamic feature.
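The sketch below constructs a window matrix W for the static, velocity and acceleration windows of Eqs. (34)-(36) and solves R_q c̄_q = r_q of Eqs. (31)-(33) with numpy; the state means and precisions are toy values, and the boundary handling at the first and last frames is a simplifying assumption of the sketch.

```python
import numpy as np

def build_window_matrix(T, dim):
    """3*T*dim x T*dim window matrix W in the spirit of Eq. (37)."""
    I = np.eye(dim)
    W = np.zeros((3 * T * dim, T * dim))
    windows = [ {0: 1.0},                       # static, Eq. (34)
                {-1: -0.5, 1: 0.5},             # velocity, Eq. (35)
                {-1: 1.0, 0: -2.0, 1: 1.0} ]    # acceleration, Eq. (36)
    for t in range(T):
        for d, win in enumerate(windows):
            row = (3 * t + d) * dim
            for offset, w in win.items():
                tau = t + offset
                if 0 <= tau < T:                # simple boundary handling (assumption)
                    W[row:row + dim, tau * dim:(tau + 1) * dim] += w * I
    return W

# Solve R cbar = r (Eqs. (31)-(33)) for toy state statistics.
T, dim = 5, 2
W = build_window_matrix(T, dim)
mu = np.tile([1.0, -0.5], 3 * T)                # hypothetical stacked state means
prec = np.ones(3 * T * dim)                     # hypothetical diagonal precisions
R = W.T @ (prec[:, None] * W)
r = W.T @ (prec * mu)
cbar = np.linalg.solve(R, r)                    # smooth ML trajectory, cf. c̄_q
```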
The final parameter model is obtained by combining the acoustic and excitation models via the mapping model as:

p(s | ℓ, λ) = Σ_q ∫∫ p(s | H_c, q, λ_e) p(H_c | c, q, λ_h) p(c | q, λ_c) p(q | ℓ, λ_c) dH_c dc,    (42)

where

p(s | H_c, q, λ_e) = |H_c^{-1}| N(H_c^{-1} s; H_{v,q} t, (G_q^T G_q)^{-1}),    (43)

p(H_c | c, q, λ_h) = N(H_c; f_q(c), Λ_q),    (44)

p(c | q, λ_c) = N(c; c̄_q, P_q),    (45)

p(q | ℓ, λ_c) = π_{q_0} Π_{t=0}^{T-2} a_{q_t q_{t+1}},    (46)

and λ = {λ_e, λ_h, λ_c}. There are various possible spectral features, such as cepstral coefficients, linear prediction coefficients (LPC), line spectral pairs (LSPs) and their frequency warped versions. In this embodiment cepstral coefficients are considered as a special case. The mapping from a cepstral coefficient vector c_t = [c_t(0) ... c_t(C)]^T to its corresponding vocal tract filter impulse response vector h_t = [h_t(0) ... h_t(P)]^T can be written in a closed form as

h_t = D_s^* EXP[D_s c_t],    (47)

where EXP[.] means a vector obtained by taking the exponential of the elements of [.], and D_s is a (P+1) × (C+1) DFT (Discrete Fourier Transform) matrix with elements

[D_s]_{p,c} = W_{P+1}^{-pc},  p = 0, ..., P,  c = 0, ..., C,    (48)

with

W_{P+1} = e^{j 2π / (P+1)},    (49)

and D_s^* is a (P+1) × (P+1) IDFT (Inverse DFT) matrix with elements

[D_s^*]_{n,p} = (1 / (P+1)) W_{P+1}^{np}.    (50)

As the mapping between cepstral coefficients and the vocal tract filter impulse response can be computed in a closed form, there is no need to use a stochastic approximation between c and H_c.
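The closed-form mapping of Eq. (47) can be sketched with FFTs in place of the explicit DFT matrices of Eqs. (48)-(50); the FFT length and the truncation to P+1 samples below are choices made for the sketch, introduced to reduce aliasing, rather than details taken from the patent.

```python
import numpy as np

def cepstrum_to_impulse_response(c, P, n_fft=512):
    """Impulse response h_t = IDFT( exp( DFT(c_t) ) ), cf. Eq. (47).

    c     : cepstral coefficients c_t(0..C)
    P     : order of the vocal tract filter (keep samples 0..P)
    n_fft : FFT length used in place of the (P+1)-point DFT matrices of
            Eqs. (48)-(50); using a longer transform is an assumption of
            this sketch, made to reduce aliasing.
    """
    spectrum = np.exp(np.fft.fft(c, n_fft))     # exp of the cepstral "log spectrum"
    h = np.real(np.fft.ifft(spectrum))          # back to the time domain
    return h[:P + 1]

h = cepstrum_to_impulse_response(np.array([0.0, 0.3, -0.1, 0.05]), P=16)
```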
The vocal tract filter impulse response-related term that appears in the generative model of Eq. (10) is H_c, not h. The relationship between H_c, given by Eqs. (12) and (13), and

h = [h_0^T ... h_{T-1}^T]^T,    (51)

h_t = [h_t(0) ... h_t(P)]^T,    (52)

with h_t being the synthesis filter impulse response of the t-th frame and T the total number of frames, can be written as

H_c = Σ_{n=0}^{N-1} J_n B h ι_n^T,    (53)

where N is the number of samples (of the database),

ι_n = [0 ... 0 1 0 ... 0]^T    (54)

is a selector vector with a single 1 in position n, and B is an N(P+1) × T(P+1) matrix which maps the frame-basis vector h onto a sample basis; B is a block matrix built from (P+1) × (P+1) identity blocks I_{P+1} and zero blocks 0_{P+1,P+1}, with the identity block of each frame repeated for every sample of that frame.    (55)

It should be noted that the square version of H_c is considered here by neglecting its last P rows. The N × N(P+1) matrices J_n are selection-and-shift matrices constructed from identity and zero blocks, which place the P+1 impulse response samples associated with sample n at rows n, ..., n+P (truncated at the matrix boundary),    (56)-(58)

where 0_{X,Y} means a matrix of zero elements with X rows and Y columns, and I_X is an X-size identity matrix.
The training of the model will now be described with reference to figures 5 and 6.
The training allows the parameters of the joint model λ to be estimated such that:

λ̂ = argmax_λ p(s | ℓ, λ),    (59)

where λ = {λ_e, λ_h, λ_c}, with λ_e = {H_v, G, t} corresponding to the parameters of the excitation model, and λ_c = {m, σ} consisting of the parameters of the acoustic model:

m = [μ_1^T ... μ_S^T]^T,    (60)

σ = [vdiag{Σ_1^{-1}}^T ... vdiag{Σ_S^{-1}}^T]^T,    (61)

where S is the number of states. m and σ are respectively vectors formed by concatenating all the means and the diagonals of the inverse covariance matrices of all states, with vdiag{[.]} meaning a vector formed by the diagonal elements of [.].
The likelihood function p(s | ℓ, λ), assuming cepstral coefficients as spectral features, is

p(s | ℓ, λ) = Σ_q ∫∫ p(s | H_c, q, λ_e) p(H_c | c, q, λ_h) p(c | q, λ_c) p(q | ℓ, λ_c) dH_c dc.    (62)

Unfortunately, estimation of this model through the expectation-maximization (EM) algorithm is intractable. Therefore, an approximate recursive approach is adopted.
If the summation over all possible q in Eq. (62) is approximated by a fixed state sequence q̂, the likelihood function above becomes

p(s | ℓ, λ) ≈ ∫∫ p(s | H_c, q̂, λ_e) p(H_c | c, q̂, λ_h) p(c | q̂, λ_c) p(q̂ | ℓ, λ_c) dH_c dc,    (63)

where q̂ = {q̂_0, ..., q̂_{T-1}} is the fixed state sequence. Further, if the integration over all possible H_c and c is approximated by a single impulse response matrix and a single spectral vector, then Eq. (63) becomes

p(s | ℓ, λ) ≈ p(s | Ĥ_c, q̂, λ_e) p(Ĥ_c | ĉ, q̂, λ_h) p(ĉ | q̂, λ_c) p(q̂ | ℓ, λ_c),    (64)

where ĉ = [ĉ_0^T ... ĉ_{T-1}^T]^T is the fixed spectral vector and Ĥ_c is the fixed impulse response matrix.
By taking the logarithm of Eq. (64), the cost function to be maximized through updates of the acoustic, excitation and mapping model parameters is obtained:

ℒ = log p(s | Ĥ_c, q̂, λ_e) + log p(Ĥ_c | ĉ, q̂, λ_h) + log p(ĉ | q̂, λ_c) + log p(q̂ | ℓ, λ_c).    (65)

The optimization problem can be split into two parts: initialisation and recursion. The following explains the calculations performed in each part. Initialisation will be described with reference to figure 5 and recursion with reference to figure 6.
The model is trained using training data, which is speech data with corresponding text data, and which is input in step S201.

Part 1 - Initialisation

1. In step S203 an initial cepstral coefficient vector is extracted from the speech data:

c = [c_0^T ... c_{T-1}^T]^T,    (66)

c_t = [c_t(0) ... c_t(C)]^T.    (67)

2. In step S205 the trajectory HMM parameters λ_c are trained using c:

λ̂_c = argmax_{λ_c} p(c | ℓ, λ_c).    (68)

3. In step S207 the best state sequence is determined as the Viterbi path from the trained models by using the algorithm of H. Zen, K. Tokuda, and T. Kitamura, "Reformulating the HMM as a trajectory model by imposing explicit relationships between static and dynamic feature vector sequences," Computer Speech and Language, vol. 21, pp. 153-173, Jan. 2007:

q̂ = argmax_q p(c, q | ℓ, λ̂_c).    (69)

4. In step S209, the mapping model parameters λ_h are estimated assuming q̂ and c:

λ̂_h = argmax_{λ_h} p(H_c | c, q̂, λ_h).    (70)

5. In step S211, the excitation parameters λ_e are estimated assuming q̂ and c, by using one iteration of the algorithm described in R. Maia, T. Toda, H. Zen, Y. Nankaku, and K. Tokuda, "An excitation model for HMM-based speech synthesis based on residual modelling," in Proc. of the 6th ISCA Workshop on Speech Synthesis, 2007:

λ̂_e = argmax_{λ_e} p(s | H_c, q̂, λ_e).    (71)

Part 2 - Recursion

1. In step S213 of figure 6 the best cepstral coefficient vector is estimated using the log likelihood function of Eq. (65):

ĉ = argmax_c ℒ.    (72)

2. In step S215 the vocal tract filter impulse responses H_c are estimated assuming q̂ and ĉ:

Ĥ_c = argmax_{H_c} ℒ.    (73)

3. In step S217 the excitation model parameters λ_e are updated assuming q̂ and Ĥ_c:

λ̂_e = argmax_{λ_e} ℒ.    (74)

4. In step S219 the acoustic model parameters are updated:

λ̂_c = argmax_{λ_c} ℒ.    (75)

5. In step S221 the mapping model parameters are updated:

λ̂_h = argmax_{λ_h} ℒ.    (76)

The recursive steps may be repeated several times; each of them is explained in detail below. The recursion terminates upon convergence. Convergence may be determined in many different ways; in one embodiment, convergence will be deemed to have occurred when the change in the likelihood is less than a predefined minimum value, for example when the change in the likelihood ℒ is less than 5%.
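Before each step is described in detail, a skeleton of the overall initialisation and recursion procedure of figures 5 and 6 is sketched below; the individual estimators are left as user-supplied callables, and the shared state dictionary and the relative-likelihood stopping rule are assumptions of the sketch (the text above only requires the change in likelihood to fall below a predefined threshold, e.g. 5%).

```python
def joint_training(speech, labels, update_steps, max_iters=20, tol=0.05):
    """Skeleton of the recursion of Figure 6 (steps S213-S221).

    `update_steps` is a dict of callables standing in for the real estimators
    (cepstrum, impulse-response, excitation, acoustic and mapping updates); the
    callables and the state they share are assumptions of this sketch.
    """
    state = update_steps["initialise"](speech, labels)   # Figure 5, S201-S211
    prev_L = None
    for _ in range(max_iters):
        state = update_steps["cepstrum"](state)          # S213: c-hat
        state = update_steps["impulse_response"](state)  # S215: Hc-hat
        state = update_steps["excitation"](state)        # S217: lambda_e
        state = update_steps["acoustic"](state)          # S219: lambda_c
        state = update_steps["mapping"](state)           # S221: lambda_h
        L = update_steps["log_likelihood"](state)
        if prev_L is not None and abs(L - prev_L) < tol * abs(prev_L):
            break                                        # <5% change => converged
        prev_L = L
    return state
```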
In step 1 (S213) of the recursion, if cepstral coefficients are used as the spectral features then the likelihood function of Eq. (65) can be written as

ℒ = -1/2 { (N+M) log(2π) + log|(G_q^T G_q)^{-1}| - 2 log|H_c^{-1}| + s^T H_c^{-T} G_q^T G_q H_c^{-1} s - 2 s^T H_c^{-T} G_q^T G_q H_{v,q} t + t^T H_{v,q}^T G_q^T G_q H_{v,q} t + T(C+1) log(2π) - log|R_q| + c^T R_q c - 2 r_q^T c + r_q^T R_q^{-1} r_q },    (77)

from which the terms that depend on c can be selected to compose the cost function ℒ_c given by:

ℒ_c = -s^T H_c^{-T} G_q^T G_q H_c^{-1} s + log|H_c^{-1}| + s^T H_c^{-T} G_q^T G_q H_{v,q} t - c^T R_q c + r_q^T c.    (78)

The best cepstral coefficient vector ĉ can be defined as the one which maximizes the cost function ℒ_c. By utilizing the steepest gradient ascent algorithm (see for example J. Nocedal and S. J. Wright, Numerical Optimization, Springer, 1999) or another optimization method such as the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, each update for ĉ can be calculated as

ĉ^{(i+1)} = ĉ^{(i)} + γ ∂ℒ_c/∂c,    (79)

where γ is the convergence factor (a constant), i is the iteration index, and

∂ℒ_c/∂c = D_s^T Diag(EXP[D_s c]) D_s^{*T} B^T Σ_{n=0}^{N-1} J_n^T H_c^{-T} [G_q^T G_q (e - v) e^T - I] ι_n - R_q c + r_q,    (80)

with Diag([.]) meaning a diagonal matrix formed with the elements of the vector [.], and

e = H_c^{-1} s,    (81)

v = H_{v,q} t,    (82)

D_s = diag{D_s, ..., D_s} (block-diagonal over the T frames),    (83)

D_s^* = diag{D_s^*, ..., D_s^*}.    (84)

In the above, the iterative process continues until convergence. In a preferred embodiment, convergence is deemed to have occurred when the difference between successive iterations is less than 5%.
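A generic steepest-ascent loop corresponding to the update of Eq. (79) is sketched below; the gradient of Eq. (80) is supplied as a callable, and the constant step size and the stopping rule are illustrative choices, not values taken from the patent.

```python
import numpy as np

def gradient_ascent(c0, grad_fn, step=1e-3, max_iters=200, tol=0.05):
    """c^{(i+1)} = c^{(i)} + step * grad(c), cf. Eq. (79); grad_fn supplies Eq. (80)."""
    c = np.asarray(c0, dtype=float).copy()
    for _ in range(max_iters):
        update = step * grad_fn(c)
        c += update
        if np.linalg.norm(update) < tol * max(np.linalg.norm(c), 1e-12):
            break                               # update is small relative to c
    return c

# Toy usage: maximize -||c - target||^2, whose gradient is 2*(target - c).
target = np.array([0.5, -0.2, 0.1])
c_hat = gradient_ascent(np.zeros(3), lambda c: 2.0 * (target - c), step=0.1)
```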
In step 3 (S217) of the recursive procedure, the excitation parameters λ_e = {H_v, G, t} are calculated by using one iteration of the algorithm described in R. Maia, T. Toda, H. Zen, Y. Nankaku, and K. Tokuda, "An excitation model for HMM-based speech synthesis based on residual modelling," in Proc. of the 6th ISCA Workshop on Speech Synthesis, 2007. In this case the estimated cepstral vector ĉ is used to extract the residual vector e = Ĥ_c^{-1} s through inverse filtering.
In step 4 (S219), estimation of the acoustic model parameters λ_c = {m, σ} is done as described in H. Zen, K. Tokuda, and T. Kitamura, "Reformulating the HMM as a trajectory model by imposing explicit relationships between static and dynamic feature vector sequences," Computer Speech and Language, vol. 21, pp. 153-173, Jan. 2007, by utilizing the best estimated cepstral vector ĉ as the observation.
The above training method uses a set of model parameters λ_h of a mapping model to describe the uncertainty of H_c predicted by f_q(c).
However, in an alternative embodiment, a deterministic case is assumed where f_q(c) perfectly predicts H_c. In this embodiment, there is no uncertainty between H_c and f_q(c) and thus λ_h is no longer required.
In such a scenario, the mapping model parameters are set to zero in step S209 of figure 5 and are not re-estimated in step S221 of figure 6.
Figure 7 is a flow chart of a speech synthesis method in accordance with an embodiment of the present invention.
Text is input at step S251. An acoustic model is run on this text and features including spectral features and F0 features are extracted in step S253.
An impulse response filter function is generated in step S255 from the spectral features extracted in step S253.
The input text is also input into the excitation model and excitation model parameters are generated from the input text in step S257.
Returning to the features extracted in step S253, the F0 features extracted at this stage are converted into a pulse train in step S259. The pulse train is filtered in step S261 using the voiced filter function which was generated in step S257.
White noise is generated by a white noise generator. The white noise is then filtered in step S263 using the unvoiced filter function which was generated in step S257. The voiced excitation signal which has been produced in step S261 and the unvoiced excitation signal which has been produced in step S263 are then mixed to produce the mixed excitation signal in step S265.
The mixed excitation signal is then filtered in step S267 using the impulse response filter which was generated in step S255, and the speech signal is output.
By training acoustic and excitation models through joint optimization, the information which is lost during speech parameter extraction, such as phase information, may be recovered at run-time, resulting in synthesized speech which sounds closer to natural speech. Thus statistical parametric text-to-speech systems can be produced with the capability of producing synthesized speech which may sound very similar to natural speech.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms of modifications as would fall within the scope and spirit of the inventions.

Claims (19)

CLAIMS:
1. A speech processing method comprising: receiving a text input and outputting speech corresponding to said text input using a stochastic model, said stochastic model comprising an acoustic model and an excitation model, said acoustic model having a plurality of model parameters describing probability distributions which relate a word or part thereof to a feature, said excitation model comprising excitation model parameters which are used to model the vocal cords and lungs to output the speech using said features; wherein said acoustic parameters and excitation parameters have been jointly estimated; and outputting said speech.
2. A speech synthesis method according to claim 1, wherein said text input is processed by said acoustic model to output F0 and spectral features, the method further comprising: processing said F0 features to form a pulse train and filtering said pulse train using excitation parameters derived from said excitation model to produce an excitation signal and filtering said excitation signal using filter parameters derived from said spectral features.
3. A method of training a statistical model for speech synthesis, the method comprising: receiving training data, said training data comprising speech and text corresponding to said speech; training a stochastic model, said stochastic model comprising an acoustic model and an excitation model, said acoustic model having a plurality of model parameters describing probability distributions which relate a word or part thereof to a feature vector, said excitation model comprising excitation model parameters which model the vocal cords and lungs to output the speech; wherein said acoustic parameters and excitation parameters are jointly estimated during said training process.
4. A method according to claim 3, wherein said acoustic model parameters comprise means and variances of said probability distributions.
5. A method according to claim 3, wherein the features output by said acoustic model comprise F0 features and spectral features.
6. A method according to claim 5, wherein said excitation model parameters comprise filter coefficients which are configured to filter a pulse signal derived from F0 features.
7. A method according to claim 3, wherein said joint estimation process comprises a recursive process where in one step excitation parameters are updated using the latest estimate of acoustic parameters and in another step acoustic model parameters are updated using the latest estimate of excitation parameters.
8. A method according to claim 3, wherein said joint estimation process uses a maximum likelihood technique.
9. A method according to claim 5, wherein said stochastic model further comprises a mapping model and said mapping model comprises mapping model parameters, said mapping model being configured to map spectral features to filter coefficients which represent the human vocal tract.
10. A method according to claim 3, wherein the parameters are jointly estimated as: λ̂ = argmax_λ p(s | ℓ, λ), where λ represents the parameters of the excitation model and acoustic model to be optimised, s is the natural speech waveform and ℓ is a transcription of the speech waveform.
11. A method according to claim 10, wherein λ further comprises parameters of a mapping model configured to map spectral parameters to a filter function to represent the human vocal tract.
12. A method according to claim 11, wherein the relationship between the spectral features and filter coefficients is modelled as a Gaussian process.
13. A method according to claim 11, wherein p(s | ℓ, λ) is expressed as: p(s | ℓ, λ) = Σ_q ∫∫ p(s | H_c, q, λ_e) p(H_c | c, q, λ_h) p(c | q, λ_c) p(q | ℓ, λ_c) dH_c dc (62), where H_c is the filter function used to model the human vocal tract, q is the state, λ_e are the excitation parameters, λ_c the acoustic model parameters, λ_h the mapping model parameters and c are the spectral features.
14. A method according to claim 13, wherein the summation over q is approximated by a fixed state sequence q̂ to give: p(s | ℓ, λ) ≈ ∫∫ p(s | H_c, q̂, λ_e) p(H_c | c, q̂, λ_h) p(c | q̂, λ_c) p(q̂ | ℓ, λ_c) dH_c dc (63), where q̂ = {q̂_0, ..., q̂_{T-1}} is the fixed state sequence.
15. A method according to claim 14, wherein the integration over all possible H_c and c is approximated by fixed impulse response and spectral vectors to give: p(s | ℓ, λ) ≈ p(s | Ĥ_c, q̂, λ_e) p(Ĥ_c | ĉ, q̂, λ_h) p(ĉ | q̂, λ_c) p(q̂ | ℓ, λ_c) (64), where ĉ = [ĉ_0^T ... ĉ_{T-1}^T]^T is the fixed spectral vector and Ĥ_c is the fixed impulse response matrix.
16. A method according to claim 15, wherein the log likelihood function of p(s | ℓ, λ) is given by: ℒ = log p(s | Ĥ_c, q̂, λ_e) + log p(Ĥ_c | ĉ, q̂, λ_h) + log p(ĉ | q̂, λ_c) + log p(q̂ | ℓ, λ_c) (65).
17. A carrier medium carrying computer readable instructions for controlling the computer to carry out the method of any preceding claim.
18. A speech processing apparatus comprising: a receiver for receiving a text input which comprises a sequence of words; and a processor, said processor being configured to determine the likelihood of output speech corresponding to said input text using a stochastic model, said stochastic model comprising an acoustic model and an excitation model, said acoustic model having a plurality of model parameters describing probability distributions which relate a word or part thereof to a feature, said excitation model comprising excitation model parameters which are used to model the vocal cords and lungs to output the speech using said features; wherein said acoustic parameters and excitation parameters have been jointly estimated, wherein said apparatus further comprises an output for said speech.
19. A speech to speech translation system comprising an input speech recognition unit, a translation unit and a speech synthesis apparatus according to claim 18.
GB1007705.5A 2010-05-07 2010-05-07 A speech processing method an apparatus Expired - Fee Related GB2480108B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB1007705.5A GB2480108B (en) 2010-05-07 2010-05-07 A speech processing method an apparatus
JP2011100487A JP2011237795A (en) 2010-05-07 2011-04-28 Voice processing method and device
US13/102,372 US20110276332A1 (en) 2010-05-07 2011-05-06 Speech processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1007705.5A GB2480108B (en) 2010-05-07 2010-05-07 A speech processing method an apparatus

Publications (3)

Publication Number Publication Date
GB201007705D0 GB201007705D0 (en) 2010-06-23
GB2480108A true GB2480108A (en) 2011-11-09
GB2480108B GB2480108B (en) 2012-08-29

Family

ID=42315018

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1007705.5A Expired - Fee Related GB2480108B (en) 2010-05-07 2010-05-07 A speech processing method an apparatus

Country Status (3)

Country Link
US (1) US20110276332A1 (en)
JP (1) JP2011237795A (en)
GB (1) GB2480108B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210383816A1 (en) * 2019-02-20 2021-12-09 Yamaha Corporation Sound signal generation method, generative model training method, sound signal generation system, and recording medium

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2505400B (en) * 2012-07-18 2015-01-07 Toshiba Res Europ Ltd A speech processing system
US9865247B2 (en) * 2014-07-03 2018-01-09 Google Inc. Devices and methods for use of phase information in speech synthesis systems
KR101587625B1 (en) * 2014-11-18 2016-01-21 박남태 The method of voice control for display device, and voice control display device
WO2017046887A1 (en) * 2015-09-16 2017-03-23 株式会社東芝 Speech synthesis device, speech synthesis method, speech synthesis program, speech synthesis model learning device, speech synthesis model learning method, and speech synthesis model learning program
US9972310B2 (en) * 2015-12-31 2018-05-15 Interactive Intelligence Group, Inc. System and method for neural network based feature extraction for acoustic model development
CN110298906B (en) 2019-06-28 2023-08-11 北京百度网讯科技有限公司 Method and device for generating information
CN113823257B (en) * 2021-06-18 2024-02-09 腾讯科技(深圳)有限公司 Speech synthesizer construction method, speech synthesis method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2291571A (en) * 1994-07-19 1996-01-24 Ibm Text to speech system; acoustic processor requests linguistic processor output
EP1220195A2 (en) * 2000-12-28 2002-07-03 Yamaha Corporation Singing voice synthesizing apparatus, singing voice synthesizing method, and program for realizing singing voice synthesizing method

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6054680B2 (en) * 1981-07-16 1985-11-30 Casio Computer Co Ltd LSP speech synthesizer
US5060269A (en) * 1989-05-18 1991-10-22 General Electric Company Hybrid switched multi-pulse/stochastic speech coding technique
DE4111995A1 (en) * 1991-04-12 1992-10-15 Philips Patentverwaltung CIRCUIT ARRANGEMENT FOR VOICE RECOGNITION
US5708757A (en) * 1996-04-22 1998-01-13 France Telecom Method of determining parameters of a pitch synthesis filter in a speech coder, and speech coder implementing such method
US5940791A (en) * 1997-05-09 1999-08-17 Washington University Method and apparatus for speech analysis and synthesis using lattice ladder notch filters
US6487531B1 (en) * 1999-07-06 2002-11-26 Carol A. Tosaya Signal injection coupling into the human vocal tract for robust audible and inaudible voice recognition
JPWO2003042648A1 (en) * 2001-11-16 2005-03-10 Matsushita Electric Industrial Co Ltd Speech coding apparatus, speech decoding apparatus, speech coding method, and speech decoding method
US20030191645A1 (en) * 2002-04-05 2003-10-09 Guojun Zhou Statistical pronunciation model for text to speech
JP4539537B2 (en) * 2005-11-17 2010-09-08 Oki Electric Industry Co Ltd Speech synthesis apparatus, speech synthesis method, and computer program
JP4353174B2 (en) * 2005-11-21 2009-10-28 Yamaha Corporation Speech synthesizer
US7584104B2 (en) * 2006-09-08 2009-09-01 At&T Intellectual Property Ii, L.P. Method and system for training a text-to-speech synthesis system using a domain-specific speech database
US8321222B2 (en) * 2007-08-14 2012-11-27 Nuance Communications, Inc. Synthesis by generation and concatenation of multi-form segments
US8224648B2 (en) * 2007-12-28 2012-07-17 Nokia Corporation Hybrid approach in voice conversion
WO2009144368A1 (en) * 2008-05-30 2009-12-03 Nokia Corporation Method, apparatus and computer program product for providing improved speech synthesis
WO2010025460A1 (en) * 2008-08-29 2010-03-04 O3 Technologies, Llc System and method for speech-to-speech translation
US8315871B2 (en) * 2009-06-04 2012-11-20 Microsoft Corporation Hidden Markov model based text to speech systems employing rope-jumping algorithm
US8332225B2 (en) * 2009-06-04 2012-12-11 Microsoft Corporation Techniques to create a custom voice font

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2291571A (en) * 1994-07-19 1996-01-24 Ibm Text to speech system; acoustic processor requests linguistic processor output
EP1220195A2 (en) * 2000-12-28 2002-07-03 Yamaha Corporation Singing voice synthesizing apparatus, singing voice synthesizing method, and program for realizing singing voice synthesizing method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210383816A1 (en) * 2019-02-20 2021-12-09 Yamaha Corporation Sound signal generation method, generative model training method, sound signal generation system, and recording medium
US11756558B2 (en) * 2019-02-20 2023-09-12 Yamaha Corporation Sound signal generation method, generative model training method, sound signal generation system, and recording medium

Also Published As

Publication number Publication date
GB2480108B (en) 2012-08-29
GB201007705D0 (en) 2010-06-23
JP2011237795A (en) 2011-11-24
US20110276332A1 (en) 2011-11-10

Similar Documents

Publication Publication Date Title
GB2480108A (en) Speech Synthesis using jointly estimated acoustic and excitation models
JP5242724B2 (en) Speech processor, speech processing method, and speech processor learning method
JP6092293B2 (en) Text-to-speech system
EP2846327B1 (en) Acoustic model training method and system
US6154722A (en) Method and apparatus for a speech recognition system language model that integrates a finite state grammar probability and an N-gram probability
US8825485B2 (en) Text to speech method and system converting acoustic units to speech vectors using language dependent weights for a selected language
US8126717B1 (en) System and method for predicting prosodic parameters
Yoshimura Simultaneous modeling of phonetic and prosodic parameters, and characteristic conversion for HMM-based text-to-speech systems
Tokuda et al. Directly modeling voiced and unvoiced components in speech waveforms by neural networks
US9466285B2 (en) Speech processing system
Maia et al. Statistical parametric speech synthesis with joint estimation of acoustic and excitation model parameters.
Boruah et al. A study on HMM based speech recognition system
EP3038103A1 (en) Quantitative f0 pattern generation device and method, and model learning device and method for generating f0 pattern
JP5474713B2 (en) Speech synthesis apparatus, speech synthesis method, and speech synthesis program
US20220172703A1 (en) Acoustic model learning apparatus, method and program and speech synthesis apparatus, method and program
JP6553584B2 (en) Basic frequency model parameter estimation apparatus, method, and program
JP6468519B2 (en) Basic frequency pattern prediction apparatus, method, and program
JP2008064849A (en) Sound model creation device, speech recognition device using the same, method, program and recording medium therefore
Li et al. Graphical model approach to pitch tracking.
Vargas et al. Cascade prediction filters with adaptive zeros to track the time-varying resonances of the vocal tract
JP6665079B2 (en) Fundamental frequency model parameter estimation device, method, and program
Aroon et al. Statistical parametric speech synthesis: A review
JP2015203766A (en) Utterance rhythm conversion matrix generation device, utterance rhythm conversion device, utterance rhythm conversion matrix generation method, and program for the same
Hwang et al. A Unified Framework for the Generation of Glottal Signals in Deep Learning-based Parametric Speech Synthesis Systems.
WO2023157066A1 (en) Speech synthesis learning method, speech synthesis method, speech synthesis learning device, speech synthesis device, and program

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20230507