GB2478314A - Incorporating context dependency in an acoustic model for both speech recognition and synthesis - Google Patents

Incorporating context dependency in an acoustic model for both speech recognition and synthesis

Info

Publication number
GB2478314A
Authority
GB
United Kingdom
Prior art keywords
model
training data
speech
acoustic model
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1003496A
Other versions
GB2478314B (en)
GB201003496D0 (en)
Inventor
Heiga Zen
Byung Ha Chun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Europe Ltd
Original Assignee
Toshiba Research Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Research Europe Ltd filed Critical Toshiba Research Europe Ltd
Priority to GB1003496.5A priority Critical patent/GB2478314B/en
Publication of GB201003496D0 publication Critical patent/GB201003496D0/en
Priority to US13/014,185 priority patent/US9043213B2/en
Priority to JP2011045161A priority patent/JP5242724B2/en
Publication of GB2478314A publication Critical patent/GB2478314A/en
Application granted granted Critical
Publication of GB2478314B publication Critical patent/GB2478314B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G10L15/183 Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G10L15/187 Phonemic context, e.g. pronunciation rules, phonotactical constraints or phoneme n-grams
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G06F40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/005 Language recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/065 Adaptation
    • G10L15/07 Adaptation to the speaker
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/14 Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/14 Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
    • G10L15/142 Hidden Markov Models [HMMs]
    • G10L15/144 Training of HMMs

Abstract

A speech recognition method comprises receiving a speech input from a known speaker which comprises a sequence of observations and determining the likelihood of a sequence of words arising from the sequence of observations using an acoustic model, said acoustic model having a plurality of model parameters describing probability distributions which relate a word or part thereof to an observation. The acoustic model is trained using first training data and adapted using second training data to said speaker, the speech recognition method further comprising determining the likelihood of a sequence of observations occurring in a given language using a language model. The likelihoods determined by the acoustic model and the language model are combined and a sequence of words identified from said speech input signal is output. The acoustic model is context based for said speaker, the context based information being contained in said model using a plurality of decision trees, wherein the structure of said decision trees is based on the second training data. Therefore, adapting the decision trees based on adaptation data makes it possible to model contexts which were not present in the original training data.

Description

A Speech Processor, a Speech Processing Method and a Method of Training a Speech Processor

The present invention is concerned with the field of speech processing, both speech recognition and text-to-speech synthesis. The present invention is particularly concerned with the incorporation of context dependency in an acoustic model for both speech recognition and speech synthesis.
An inherent problem with speech recognition or speech synthesis in many languages is the fact that a given phoneme may be pronounced differently depending on its context.
For example, the plosive phoneme "g" is pronounced differently in the word "gauge".
To address this problem context dependent acoustic models have been widely used.
As the number of contexts increases, the number of combinations of contexts also increases exponentially. It is almost impossible to have all possible combinations of contexts in a limited amount of training or adaptation data. To address this problem, the decision tree based context clustering technique has been used. Here, similar states of HMMs are clustered into a small number of clusters using decision trees. The decision trees are usually built on maximum likelihood (ML) criteria. By traversing the constructed decision trees, combinations of contexts unseen in the training data can be assigned to a leaf node of a decision tree. Model parameters are also estimated in the decision tree clustering process based on the ML criteria.
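As an illustration of how tree traversal handles unseen contexts, the following minimal Python sketch walks a binary tree of yes/no context questions until a leaf, i.e. a tied state cluster, is reached; the tiny tree, the question set and the triphone naming are hypothetical and are not taken from this patent.

```python
# Minimal sketch: assigning a (possibly unseen) triphone context to a leaf of
# a phonetic decision tree. The questions and the tree are illustrative only.

class Node:
    def __init__(self, question=None, yes=None, no=None, leaf_id=None):
        self.question = question  # callable(context) -> bool; None for a leaf
        self.yes = yes
        self.no = no
        self.leaf_id = leaf_id    # identifier of the tied state cluster

PLOSIVES = {"b", "d", "g", "k", "p", "t"}
NASALS = {"m", "n", "ng"}

def left_is_plosive(ctx):
    return ctx["left"] in PLOSIVES

def right_is_nasal(ctx):
    return ctx["right"] in NASALS

# A toy tree: the root asks about the preceding phoneme, one branch asks
# about the succeeding phoneme.
tree = Node(
    question=left_is_plosive,
    yes=Node(leaf_id="cluster_A"),
    no=Node(question=right_is_nasal,
            yes=Node(leaf_id="cluster_B"),
            no=Node(leaf_id="cluster_C")),
)

def assign_leaf(node, ctx):
    """Traverse the tree; any context, seen or unseen in training, reaches a leaf."""
    while node.leaf_id is None:
        node = node.yes if node.question(ctx) else node.no
    return node.leaf_id

# A triphone context never observed in training is still assigned a tied state.
print(assign_leaf(tree, {"left": "k", "centre": "ae", "right": "n"}))  # cluster_A
```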
When the model is adapted to a speaker, model parameters are transformed or updated based on a criterion; maximum likelihood linear regression or the maximum a posteriori criterion is often used. To adapt the general acoustic model of hidden Markov model-based statistical parametric speech synthesis systems to target voice characteristics, speaking styles, and/or emotions, linear transformations of model parameters (e.g. various variants of maximum-likelihood linear regression) are used.
These techniques linearly transform the mean vectors and covariance matrices associated with states of hidden Markov models based on some criterion such as the maximum likelihood criterion.
In the adaptation stage, the constructed decision trees are fixed and they are built from the original training data, which is different from the adaptation data. If the training data and adaptation data have very different context-dependency, it is not possible to model the context-dependency of the adaptation data. For example, if the general model is trained on neutral voices and the adaptation data is an expressive voice, expressiveness may be modelled as a context in order to control it. However, if the general acoustic model has no expressiveness contexts, the model cannot be properly adapted to the expressive voice.
The present invention attempts to at least partially address the above problem and in a first aspect provides a speech recognition method, said method comprising: receiving a speech input from a known speaker which comprises a sequence of observations; and determining the likelihood of a sequence of words arising from the sequence of observations using an acoustic model, said acoustic model having a plurality of model parameters describing probability distributions which relate a word or part thereof to an observation, said acoustic model having been trained using first training data and adapted using second training data to said speaker, the speech recognition method further comprising determining the likelihood of a sequence of observations occurring in a given language using a language model; and combining the likelihoods determined by the acoustic model and the language model and outputting a sequence of words identified from said speech input signal, wherein said acoustic model is context based for said speaker, said context based information being contained in said model using a plurality of decision trees, wherein the structure of said decision trees is based on second training data.
The present invention may also be applied to text to speech systems. Therefore, in a second aspect, the present invention provides a text to speech processing method, said method comprising: receiving a text input which comprises a sequence of words; and determining the likelihood of a sequence of speech vectors arising from the sequence of words using an acoustic model, said acoustic model having a plurality of model parameters describing probability distributions which relate a word or part thereof to an observation, said acoustic model having been trained using first training data and adapted using second training data to said speaker, wherein said acoustic model is context based for said speaker, said context based information being contained in said model using a plurality of decision trees, wherein the structure of said decision trees is based on second training data.
For both of the above aspects, the decision trees themselves are adapted based on the adaptation data; therefore their structure is influenced by the adaptation data and it is possible to model contexts which were not present in the original training data. For the avoidance of doubt, the structure of the decision trees comprises both the order of the nodes and the size of the splitting at the nodes. A decision tree is constructed so that the division of a node which gives the largest splitting is provided at the root of the tree and branches are arranged so that they give smaller and smaller splittings towards the leaf nodes.
In a preferred embodiment, the structure of the decision trees is based on both the first and second training data.
In a further embodiment, the structure is determined from the splitting of the nodes of the trees and has been calculated using maximum a posteriori criteria. Here, both decision trees and model parameters are jointly adapted to the adaptation data based on the maximum a posteriori criterion. This allows re-building of decision trees for the adaptation data. Furthermore, because the statistics of both general and adaptation data are used, a better estimate of model parameters can be obtained. This produces statistically reliable estimates of model parameters and decision trees for given adaptation data. The use of this technique will give a better model to synthesize speech with various voice characteristics, speaking styles, and emotions with a limited amount of adaptation data.
The method achieves high-quality statistical parametric text-to-speech synthesis with various voice characteristics, speaking styles and/or emotions using a limited amount of adaptation data. It jointly estimates model parameters and decision trees, which are the essential parts of hidden Markov model-based statistical parametric speech synthesis systems, based on the maximum a posteriori criterion. It finds decision trees suitable for the given adaptation data using the statistics of both general and adaptation data. It also re-estimates model parameters from the statistics of both general and adaptation data. The method can estimate statistically reliable decision trees and model parameters from the limited amount of adaptation data.
The splitting may be calculated using maximum a posteriori criteria implemented as:

(\hat{m}_{MAP}, \hat{\lambda}_{MAP}) = \arg\max_{m, \lambda} \{ \log p(O \mid m, \lambda) + \alpha \log p(O' \mid m, \lambda) \}

where O' is the first training data, O is the second training data, m denotes a parameter tying structure, λ is a set of HMM parameters, m_MAP denotes the parameter tying structure under maximum a posteriori criteria, λ_MAP are the HMM parameters under maximum a posteriori criteria and α is a parameter to be set.

Although the preferred criteria are based on MAP, it is also possible to use other techniques, for example discriminative adaptation methods such as minimum phoneme error criteria, maximum mutual information criteria etc. In practice, any adaptation technique could be used, providing that it constructs a decision tree.
The context dependency may be implemented as tri-phones, but higher or lower order phoneme contexts are also possible.
The acoustic model comprises probability distributions which are represented by means and variances; in a preferred embodiment, decision trees are provided for both means and variances. However, in some implementations, only decision trees for means may be constructed from the adaptation data.
The context based information may be selected from phonetic, linguistic and prosodic contexts.
The decision trees may be used to model expressive contexts, or other contexts, for example gender, age, voice characteristics, etc.

In a third aspect, the present invention provides a method of training an acoustic model for a speech processing system, the method comprising: receiving first training data, said first training data comprising speech and text corresponding to said speech; training a first acoustic model using said first training data; receiving second training data from a known speaker; adapting said first acoustic model to form a second acoustic model using said second training data, wherein adapting said first model to form said second model comprises constructing decision trees to model context dependency, and wherein the structure of the decision trees is based on the second training data.
Training of the first and second acoustic model is preferably performed such that the end user receives a product which has been trained using both first and second training data. However, it is also possible for a product to be given to the end user which has been trained just using first training data and where the end user or other intermediary trains the product using second training data. Thus, the method may further comprise storing the first acoustic model such that adaptation to the second acoustic model can be performed at a different location.
In an embodiment, training said first acoustic model comprises: initialising a plurality of Hidden Markov Models; re-estimating the HMMs on the basis of the first training data; and constructing decision trees to model contexts in said first training data.
The training of said first model may further comprise re-estimating the HMMs clustered by the decision trees. However, this step may be omitted, especially if the model is being trained for a text to speech system.
Training the second model may comprise: deriving HMMs parameters for said second model by running the forward-backward algorithm on said second training data and said first training data; scaling the statistics obtained from the first training data using a parameter; and constructing decision trees using said first and second training data.
The training of said second model may further comprise re-estimating the HMMs clustered by the decision trees. However, this step may be omitted, especially if the model is being trained for a text to speech system.
The parameter may be determined by trial and error.
In a fourth aspect, the present invention provides a speech recognition apparatus comprising: a receiver for receiving a speech input from a known speaker which comprises a sequence of observations; and a processor configured to: determine the likelihood of a sequence of words arising from the sequence of observations using an acoustic model, said acoustic model having a plurality of model parameters describing probability distributions which relate a word or part thereof to an observation, said acoustic model having been trained using first training data and adapted using second training data to said speaker, determine the likelihood of a sequence of observations occurring in a given language using a language model; and combine the likelihoods determined by the acoustic model and the language model and outputting a sequence of words identified from said speech input signal, wherein said acoustic model is context based for said speaker, said context based information being contained in said model using a plurality of decision trees, wherein the structure of said decision trees is based on second training data.
In a fifth aspect, the present invention provides a text to speech system comprising: A receiver for receiving a text input which comprises a sequence of words; and a processor, said processor being configured to: determine the likelihood of a sequence of speech vectors arising from the sequence of words using an acoustic model, said acoustic model having a plurality of model parameters describing probability distributions which relate a word or part thereof to an observation, said acoustic model having been trained using first training data and adapted using second training data to said speaker, wherein said acoustic model is context based for said speaker, said context based information being contained in said model using a plurality of decision trees, wherein the structure of said decision trees is based on second training data.
In an embodiment, the present invention is applied to a speech to speech translation system, said system comprising a speech recognition system according to the above fourth aspect configured to recognise speech in a first language, a translation module configured to translate text received in a first language into text of a second language and a text to speech system according to above fifth aspect configured to output speech in said second language.
The translation module could be any of the well known automatic translation or machine translation systems.
The present invention can be implemented either in hardware or on software in a general purpose computer. Further the present invention can be implemented in a combination of hardware and software. The present invention can also be implemented by a single processing apparatus or a distributed network of processing apparatuses.
Since the present invention can be implemented by software, the present invention encompasses computer code provided to a general purpose computer on any suitable carrier medium. The carrier medium can comprise any storage medium such as a floppy disk, a CD ROM, a magnetic device or a programmable memory device, or any transient medium such as any signal e.g. an electrical, optical or microwave signal.
The above acoustic models will preferably be HMM based models, but other models may also be used.
The present invention will now be described with reference to the following preferred, non-limiting embodiments in which: Figure 1 is a schematic of a very basic speech recognition system; Figure 2 is a schematic of the architecture of a speech recognition processor for implementing the model of figure 1; Figure 3 is a schematic of the architecture of a processor configured for text-to-speech synthesis; Figure 4 is a block diagram of the standard components of a speech recognition processor of the type shown in figure 1; Figure 5 is a plot of a Gaussian distribution relating a particular word or part thereof to an observation; Figure 6 is a schematic plot of acoustic space; Figure 7 is a flow diagram showing how decision trees are constructed in accordance with a known method; Figure 8 is a flow diagram showing how decision trees are constructed in accordance with an embodiment useful for understanding the invention; Figure 9 is a flow diagram showing the basic steps for training an acoustic model; Figure 10 is a schematic of the additional training steps required in accordance with an embodiment of the present invention for training a model for a specific speaker using both original data 0' and new data 0; Figure 11 is a schematic flow diagram showing the method in accordance with an embodiment of the present invention for speech recognition; and Figure 12 is a flow diagram in accordance with an embodiment of the present invention for speech synthesis.
Figure 1 is a schematic of a very basic speech processing system; the system of figure 1 has been configured for speech recognition. A user (not shown) speaks into microphone 1 or other collection device for an audio signal. The device 1 could be substituted by a memory which contains audio data previously recorded or the device 1 may be a network connection for receiving audio data from a remote location.
The speech signal is then directed into a speech processor 3 which will be described in more detail with reference to figure 2.
The speech processor 3 takes the speech signal and turns it into text corresponding to the speech signal. Many different forms of output are available. For example, the output may be in the form of a display 5 which outputs to a screen. Alternatively, the output could be directed to a printer or the like. Also, the output could be in the form of an electronic signal which is provided to a further system 9. For example, the further system 9 could be part of a speech translation system which takes the outputted text from processor 3 and then converts it into a different language. The converted text is then outputted via a further text or speech system.
Alternatively, the text outputted by the processor 3 could be used to operate different types of equipment, for example, it could be part of a mobile phone, car, etc. where the user controls various functions via speech.
Figure 2 shows a possible basic architecture for a speech recognition system 21. The speech recognition system 21 comprises a processor 23 which executes a program 25. The speech recognition system 21 further comprises storage 27. The storage 27 stores data which is used by program 25 to convert speech to text. The speech recognition system 21 further comprises an input module 31 and an output module 33. The input module 31 is connected to a speech input 35. Speech input 35 receives speech. The speech input may be for example a microphone. Alternatively, speech input 35 may be a means for receiving speech data from an external storage medium or a network.
Connected to the output module 33 is an output for text 37. The text output 37 is used for outputting text converted from speech received at speech input 35. The text output 37 may be for example a direct text output, e.g. a monitor or printer, or an output for a data file which may be sent to a storage medium, networked etc. In use, the speech recognition system 21 receives speech through speech input 35.
The program 25 executed on processor 23 converts the speech into text data using data stored in the storage 27. The text is output via the output module 33 to text output 37.
The present invention may also be applied to speech synthesis as well as speech recognition. Figure 3 shows the basic architecture of a text to speech system 51. The text to speech system 51 comprises a processor 53 which executes a program 55.
Text to speech system 51 further comprises storage 57. The storage 57 stores data which is used by program 55 to convert text to speech. The text to speech system 51 further comprises an input module 61 and an output module 63. The input module 61 is connected to a text input 65. Text input 65 receives text. The text input 65 may be for example a keyboard. Alternatively, text input 65 may be a means for receiving text data from an external storage medium or a network.
Connected to the output module 63 is an output for audio 67. The audio output 67 is used for outputting a speech signal converted from text received at the text input 65. The audio output 67 may be for example a direct audio output, e.g. a speaker, or an output for an audio data file which may be sent to a storage medium, networked etc. In use, the text to speech system 51 receives text through text input 65. The program executed on processor 53 converts the text into speech data using data stored in the storage 57. The speech is output via the output module 63 to audio output 67.
Figure 4 is a block diagram of the standard components of a speech recognition processor 3 of the type shown in figure 1. The speech signal received from the microphone 1, through a network or from a recording medium, is directed into front-end unit 11.
The front end unit 11 digitises the received speech signal and splits it into frames of equal lengths. The speech signals are then subjected to a spectral analysis to determine various parameters which are plotted in an "acoustic space". The parameters which are derived will be discussed in more detail later.
The front end unit 11 also removes signals which are believed not to be speech signals and other irrelevant information. Popular front end units comprise apparatus which use filter bank (FBANK) parameters, Mel Frequency Cepstral Coefficients (MFCC) and Perceptual Linear Predictive (PLP) parameters. The output of the front end unit is in the form of an input vector which is in n-dimensional acoustic space.
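As a rough illustration of this frame-by-frame mapping into acoustic space, the sketch below computes crude band-energy features with numpy; the frame sizes and the band-energy features are illustrative assumptions, and a real front end would use FBANK, MFCC or PLP parameters as described above.

```python
import numpy as np

def frame_signal(signal, frame_len=400, hop=160):
    """Split a waveform into overlapping fixed-length frames (25 ms / 10 ms at 16 kHz)."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop: i * hop + frame_len] for i in range(n_frames)])

def simple_features(signal, n_bands=26):
    """Very rough spectral features: log energies of linearly spaced bands.

    A real front end would apply a mel filterbank and a DCT to obtain MFCCs;
    this simplified version only illustrates turning speech into a sequence
    of vectors in an n-dimensional acoustic space."""
    frames = frame_signal(signal)
    window = np.hamming(frames.shape[1])
    power = np.abs(np.fft.rfft(frames * window, axis=1)) ** 2
    bands = np.array_split(power, n_bands, axis=1)
    return np.stack([np.log(b.sum(axis=1) + 1e-10) for b in bands], axis=1)

# One second of noise at 16 kHz becomes a sequence of 98 feature vectors.
feats = simple_features(np.random.randn(16000))
print(feats.shape)  # (98, 26)
```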
The input vector is then fed into a decoder 13 which cooperates with both an acoustic model section 15 and a language model section 17. The acoustic model section 15 will generally operate using Hidden Markov Models. However, it is also possible to use acoustic models based on connectionist models and hybrid models.
The acoustic model unit 15 derives the likelihood of a sequence of observations corresponding to a word or part thereof on the basis of the acoustic input alone.
The language model section 17 contains information concerning probabilities of a certain sequence of words or parts of words following each other in a given language.
Generally a static model is used. The most popular method is the N-gram model.
The decoder 13 then traditionally uses a dynamic programming (DP) approach to find the best transcription for a given speech utterance using the results from the acoustic model 15 and the language model 17.
This is then output via the output device 19 which allows the text to be displayed, presented or converted for further use e.g. in speech to speech translation or to control a voice activated device.
This description will be mainly concerned with the use of an acoustic model which is a Hidden Markov Model (HMM). However, it could also be used for other models.
The actual model used in this embodiment is a standard model, the details of which are outside the scope of this patent application. However, the model will require the provision of probability density functions (pdfs) which relate to the probability of an observation represented by an acoustic vector being related to a word or part thereof.
Generally, this probability distribution will be a Gaussian distribution in n-dimensional space.
A schematic example of a generic Gaussian distribution is shown in figure 5. Here, the horizontal axis corresponds to a parameter of the input vector in one dimension and the probability distribution is for a particular word or part thereof relating to the observation.
For example, in figure 5, an observation corresponding to an acoustic vector x has a probability p1 of corresponding to the word whose probability distribution is shown in figure 5. The shape and position of the Gaussian is defined by its mean and variance.
These parameters are determined during training for the vocabulary which the acoustic model covers; they will be referred to as the "model parameters".
In a HMM, once the model parameters have been determined, the model can be used to determine the likelihood of a sequence of observations corresponding to a sequence of words or parts of words.
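For concreteness, a minimal sketch of evaluating such a state output distribution, here a diagonal-covariance Gaussian, for one observation vector; the dimensionality and the numbers are illustrative assumptions only.

```python
import numpy as np

def log_gaussian(x, mean, var):
    """Log density of a diagonal-covariance Gaussian, as used for one HMM state."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

# A hypothetical 3-dimensional acoustic vector scored against one state's pdf.
x = np.array([1.2, -0.3, 0.8])
mean = np.array([1.0, 0.0, 1.0])
var = np.array([0.5, 0.5, 0.5])
print(log_gaussian(x, mean, var))  # higher values mean a better acoustic match
```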
Figure 6 is a schematic plot of acoustic space where an observation is represented by an observation vector or feature vector x1. The open circles correspond to the means of Gaussians or other probability distribution functions plotted in acoustic space.
During decoding, the acoustic model will calculate a number of different likelihoods that the feature vector x1 corresponds to a word or part thereof represented by the Gaussians. These likelihoods are then used in the acoustic model and combined with probabilities from the language model to determine the text spoken.
Most state-of-the-art speech recognition systems are based on a statistical framework, finding the most likely word sequence, w, for a sequence of speech parameters, o, which are expressed as feature vectors extracted from the input speech.
This can be written as:

\hat{w} = \arg\max_w p(w \mid o)    (1)

where p(w | o) is the posterior probability distribution of w for a given o. Because it is difficult to model p(w | o) directly, the following reformulation based on Bayes' rule is often used:

\hat{w} = \arg\max_w p(w \mid o)    (2)
        = \arg\max_w \frac{p(w, o)}{p(o)}    (3)
        = \arg\max_w \frac{p(o \mid w)\, p(w)}{p(o)}    (4)

where p(o) is the marginal distribution of o (often called the "evidence"). Because p(o) is independent of the maximization, Eq. (4) can be rewritten as

\hat{w} = \arg\max_w p(o \mid w)\, p(w)    (5)

Most speech recognition systems consist of three modules (see figure 4) to perform the maximization in Eq. (5): an acoustic model for p(o | w), a language model for p(w), and a decoder to search for the best word sequence.
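As a simple numerical illustration of Eq. (5), the sketch below picks the word sequence maximising p(o|w)p(w) in the log domain, as a decoder effectively does; the candidate transcriptions and their scores are invented for illustration.

```python
import math

# Hypothetical per-hypothesis scores: acoustic likelihood p(o|w) and language
# model probability p(w). A real decoder searches a huge hypothesis space;
# here we simply compare three candidate transcriptions.
candidates = {
    "recognise speech":   {"p_o_given_w": 1e-40, "p_w": 1e-4},
    "wreck a nice beach": {"p_o_given_w": 5e-40, "p_w": 1e-7},
    "recognised peach":   {"p_o_given_w": 2e-41, "p_w": 1e-6},
}

def log_score(c):
    return math.log(c["p_o_given_w"]) + math.log(c["p_w"])

best = max(candidates, key=lambda w: log_score(candidates[w]))
print(best)  # "recognise speech": the language model outweighs the slightly
             # better acoustic score of the second hypothesis
```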
Statistical speech synthesis can be written as follows:

\hat{o} = \arg\max_o p(o \mid w)    (6)

Unlike speech recognition, transformation by Bayes' rule is not required in statistical speech synthesis for the type of unit described with reference to figure 3.
Basically it consists of an acoustic model only. The acoustic model described herein is relevant to both speech recognition and speech synthesis.
In both statistical speech recognition and synthesis, context-dependent hidden Markov models (HMMs) are widely used as acoustic models because of their efficiency and capability. The maximum likelihood (ML) criterion is one of the most popular criteria to estimate HMM parameters and build decision trees, which define the HMM state-level parameter tying structure used to reduce the number of parameters to be estimated. The ML estimation of HMM parameters can be written as

\lambda_{ML} = \arg\max_{\lambda} p(O \mid \lambda)    (7)

where λ is a set of HMM parameters and O is a set of training data. It is known that HMMs estimated based on the ML criterion sometimes overfit to the training data. One possible solution to the overfitting problem is to use maximum a posteriori (MAP) estimation. The MAP estimation of HMM parameters can be written as

\lambda_{MAP} = \arg\max_{\lambda} p(\lambda \mid O)    (8)

where p(λ | O) is the posterior probability of λ for a given O. Equation (8) can be reformulated by Bayes' rule as:

\lambda_{MAP} = \arg\max_{\lambda} p(\lambda \mid O)    (9)
             = \arg\max_{\lambda} \frac{p(\lambda, O)}{p(O)}    (10)
             = \arg\max_{\lambda} \frac{p(O \mid \lambda)\, p(\lambda)}{p(O)}    (11)

where p(λ) is a prior distribution of λ. Because the denominator of Eq. (11) is independent of the maximization, it can be rewritten as

\lambda_{MAP} = \arg\max_{\lambda} p(O \mid \lambda)\, p(\lambda)    (12)

The main advantage of the MAP estimation over the ML criterion is the capability to use the prior distribution. By incorporating prior knowledge about the data into the prior distribution, it can avoid overfitting to the training data. MAP estimation has been used to adapt ML-estimated speaker-independent HMMs to a target speaker in both speech recognition and synthesis.
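For concreteness, the sketch below shows the standard MAP update of a Gaussian mean commonly used for this kind of speaker adaptation; the interpolation weight tau and the example numbers are assumptions for illustration and are not taken from this document.

```python
import numpy as np

def map_mean_update(prior_mean, occupancy, first_order_stat, tau=10.0):
    """Standard MAP update of a Gaussian mean.

    prior_mean:       mean of the speaker-independent (prior) model
    occupancy:        total state occupancy accumulated on the adaptation data
    first_order_stat: sum of occupancy-weighted observation vectors
    tau:              prior weight (hyper-parameter, assumed here)
    """
    return (tau * prior_mean + first_order_stat) / (tau + occupancy)

prior = np.array([0.0, 1.0])
gamma = 25.0                    # occupancy on the adaptation data
stat1 = np.array([30.0, 20.0])  # sum over t of gamma(t) * o_t
print(map_mean_update(prior, gamma, stat1))
# With little data the estimate stays near the prior mean; with a lot of data
# it approaches the ML mean stat1 / gamma.
```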
Conventionally, MAP estimation has been used only for parameter estimation. For decision tree-based context clustering, which is one of the essential parts of training context-dependent HMMs, the ML criterion has been used. In an embodiment in accordance with a method of the present invention, a joint estimation technique of HMM parameters and decision trees based on the MAP criterion is used. The use of the MAP criterion allows incorporation of the prior knowledge about both HMM parameters and decision trees as their joint prior distribution while estimating model parameters and decision trees.
As an example, the plosive phone "g" is pronounced differently in the two instances in which it is used in the word "gauge". The phonemes can be thought of as being divided into different groups such as the plosives b, d, g, k, p, t, the fricatives dh, th, f, v, s, sh, z, zh, the nasals m, em, n, en, ng, and other groups which have been identified. A decision tree can be built, for example, by asking questions concerning the group to which the preceding and succeeding phoneme belong. Therefore, by building these trees, it is possible to model all instances of language and to cope with the different pronunciations of phonemes in different contexts.
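A minimal sketch of how such group-based binary questions can be represented is given below; the group lists follow the example above, and the question set is illustrative rather than a definitive inventory.

```python
# Phonetic groups as in the example above; the inventory is illustrative only.
GROUPS = {
    "plosive":   {"b", "d", "g", "k", "p", "t"},
    "fricative": {"dh", "th", "f", "v", "s", "sh", "z", "zh"},
    "nasal":     {"m", "em", "n", "en", "ng"},
}

def make_questions():
    """Build binary questions about the left and right neighbours of a triphone."""
    questions = []
    for name, members in GROUPS.items():
        for side in ("left", "right"):
            questions.append((
                f"{side}_is_{name}",
                lambda ctx, m=members, s=side: ctx[s] in m,
            ))
    return questions

questions = make_questions()
ctx = {"left": "g", "centre": "ey", "right": "jh"}   # triphone for the vowel in "gauge"
print([name for name, q in questions if q(ctx)])     # ['left_is_plosive']
```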
In both HMM-based speech recognition and synthesis systems, context-dependent phoneme HMMs (e.g. triphone HMMs) are widely used. The use of context-dependent phoneme HMMs rather than context-independent ones (monophones) is known to provide higher recognition performance. While the large number of context-dependent HMMs can help to capture variations in speech data, it results in too many parameters to be estimated in a system and causes the overfitting to the training data. Therefore, maintaining a good balance between the model complexity and model robustness is important in acoustic modelling. The use of top-down decision tree-based context clustering is a good and known solution to this problem. It has two advantages over bottom-up based approaches. First, by incorporating phonetic knowledge into a set of questions, it can assign unseen context-dependent phonemes to the leaf nodes of decision trees. Second, the splitting procedure of the decision tree provides a way of keeping the balance of model complexity and robustness.
The decision tree-based context clustering technique aims to find a parameter tying structure (decision tree) and model parameters that maximize the likelihood of the model to the training data. It can be written as:

(\hat{m}_{ML}, \hat{\lambda}_{ML}) = \arg\max_{m, \lambda} p(O \mid m, \lambda)    (13)
                                   = \arg\max_{m, \lambda} \log p(O \mid m, \lambda)    (14)

where m denotes a parameter tying structure. The procedure of decision tree-based clustering will now be described with reference to figure 7.
In step S101, all context-independent phoneme HMMs are pooled at the root node of a decision tree. In step S103, the log likelihood of the model to the training data is calculated using:

\mathcal{L}(S) = -\frac{1}{2} \left( \log |\Sigma_S| + n \log(2\pi) + n \right) \sum_{s \in S} \sum_{e=1}^{E} \sum_{t=1}^{T_e} \gamma_s(t)    (14a)

(the above equation has been taken from the PhD Thesis of Julian Odell, Cambridge University, 1995), where the likelihood is calculated over a set of models comprising the set of distributions S generating the training data O consisting of E examples, \Sigma_S is the covariance of the pooled cluster, n is the dimensionality of the observation vectors, \gamma_s(t) is the state occupancy at time t and T_e is the length of the speech of example e.
In step S105 for all combinations of nodes in the decision tree and pre-defined binary questions about contexts, the log likelihood of the model to the training data is computed after splitting the node by the question.
In step S107, the best combination of node and question that gives the largest gain in log likelihood is selected.
In step S109, the selected node is split by the selected question and, if the gain in log likelihood exceeds a pre-defined threshold, the process returns to step S103.
If the gain is below the pre-defined threshold then the clustering stops in step S111.
The estimated parameter tying structure m and HMM parameters λ are used as acoustic models for speech recognition and synthesis.
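Assuming sufficient statistics (occupancies and occupancy-weighted first and second order sums) have already been accumulated for each context-dependent state, the greedy procedure of steps S101 to S111 can be sketched as below. The data layout, the stopping threshold and the use of diagonal covariances are assumptions made to keep the example short; splitting each node until its best gain falls below the threshold gives the same result as repeatedly picking the single best node/question pair.

```python
import numpy as np

# Each context-dependent state is summarised by its sufficient statistics:
# occupancy gamma, occupancy-weighted sum of observations (sum_o) and of
# squared observations (sum_oo, diagonal-covariance case), plus its context.
# In practice these come from the forward-backward algorithm.

def cluster_log_likelihood(states):
    """Log likelihood of a pooled (tied) Gaussian over a set of states, Eq. (14a)."""
    gamma = sum(s["gamma"] for s in states)
    mean = sum(s["sum_o"] for s in states) / gamma
    var = np.maximum(sum(s["sum_oo"] for s in states) / gamma - mean ** 2, 1e-8)
    n = mean.shape[0]
    return -0.5 * (np.sum(np.log(var)) + n * (1.0 + np.log(2 * np.pi))) * gamma

def best_split(states, questions):
    """Find the question giving the largest gain in log likelihood at this node."""
    base = cluster_log_likelihood(states)
    best_gain, best_q, best_parts = -np.inf, None, None
    for name, q in questions:
        yes = [s for s in states if q(s["context"])]
        no = [s for s in states if not q(s["context"])]
        if not yes or not no:
            continue
        gain = cluster_log_likelihood(yes) + cluster_log_likelihood(no) - base
        if gain > best_gain:
            best_gain, best_q, best_parts = gain, name, (yes, no)
    return best_gain, best_q, best_parts

def build_tree(states, questions, threshold=1.0):
    """Greedy splitting: stop when the best gain falls below the threshold (step S111)."""
    gain, q_name, parts = best_split(states, questions)
    if q_name is None or gain < threshold:
        return {"leaf": states}                      # a tied state cluster
    yes, no = parts
    return {"question": q_name,
            "yes": build_tree(yes, questions, threshold),
            "no": build_tree(no, questions, threshold)}
```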
Figure 8 shows a method in accordance with a preferred embodiment of the present invention, in contrast to the ML-based method described above.
Here, instead of using the ML criterion, the MAP criterion is used in decision tree-based context clustering. It can be written as follows:

(\hat{m}_{MAP}, \hat{\lambda}_{MAP}) = \arg\max_{m, \lambda} p(m, \lambda \mid O)    (15)
                                     = \arg\max_{m, \lambda} \frac{p(O, m, \lambda)}{p(O)}    (16)
                                     = \arg\max_{m, \lambda} \frac{p(O \mid m, \lambda)\, p(m, \lambda)}{p(O)}    (17)
                                     = \arg\max_{m, \lambda} p(O \mid m, \lambda)\, p(m, \lambda)    (18)

where p(m, λ) denotes a joint prior distribution of the parameter tying structure and a set of HMM parameters. Next, how to define this joint prior distribution will be explained.
In adaptation by MAP estimation for HMM-based statistical speech recognition and synthesis, hyper-parameters of the prior distributions, which specify the characteristics of the prior distributions, are usually set according to parameters of HMMs estimated from a large amount of training data (e.g., speaker-independent HMMs). This can be written as follows:

p(m, \lambda) = p(m, \lambda \mid O')    (19)

where O' denotes the large amount of training data used to estimate the parameters of the HMMs and p(m, λ | O') is a joint posterior probability distribution of model structure m and model parameters λ. Using Bayes' rule, Eq. (19) can be rewritten as follows:

p(m, \lambda \mid O') = \frac{p(O', m, \lambda)}{p(O')}    (20)
                      = \frac{p(O' \mid m, \lambda)\, p'(m, \lambda)}{p(O')}    (21)

where p'(m, λ) is the joint prior distribution of m and λ.
If p'(m, λ) is a non-informative (uniform) distribution, the maximization problem of Eq. (18) can be rewritten as

(\hat{m}_{MAP}, \hat{\lambda}_{MAP}) = \arg\max_{m, \lambda} \frac{p(O \mid m, \lambda)\, p(O' \mid m, \lambda)\, p'(m, \lambda)}{p(O')}    (22)
                                     = \arg\max_{m, \lambda} p(O \mid m, \lambda)\, p(O' \mid m, \lambda)    (23)
                                     = \arg\max_{m, \lambda} \{ \log p(O \mid m, \lambda) + \log p(O' \mid m, \lambda) \}    (24)

because both p(O') and p'(m, λ) are independent of the maximization. Practically, a parameter α is introduced to control the balance of the contributions of O and O' as

(\hat{m}_{MAP}, \hat{\lambda}_{MAP}) = \arg\max_{m, \lambda} \{ \log p(O \mid m, \lambda) + \alpha \log p(O' \mid m, \lambda) \}    (25)

Interestingly, this α works in the same way as the weight term in the MAP estimation of HMM parameters. Equation (25) is almost the same as the decision tree-based context clustering based on the ML criterion (Eq. (14)). The essential difference is that the log likelihood of the model to O' is added. The tree construction process becomes the same as that of the ML criterion described in the previous section. The tree clustering process is shown in figure 8. Again, the difference is that the log likelihood of the model to O' is also considered. Therefore, this can be easily incorporated into an existing implementation of decision tree-based context clustering.
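In terms of the ML clustering sketch given earlier, the only change needed to implement Eq. (25) is that every sufficient statistic is accumulated separately on O and O' and the O' contribution is weighted by α when the pooled cluster likelihood is evaluated. A minimal sketch under the same assumptions (diagonal covariances, assumed data layout):

```python
import numpy as np

def combine_stats(stats_adapt, stats_general, alpha):
    """Add the adaptation-data statistics (O) to the alpha-scaled statistics
    of the original training data (O'), as required by Eq. (25)."""
    return {
        "gamma":  stats_adapt["gamma"]  + alpha * stats_general["gamma"],
        "sum_o":  stats_adapt["sum_o"]  + alpha * stats_general["sum_o"],
        "sum_oo": stats_adapt["sum_oo"] + alpha * stats_general["sum_oo"],
    }

def map_cluster_log_likelihood(states, alpha):
    """Pooled-Gaussian cluster log likelihood using alpha-weighted statistics.

    Each state carries separate statistics for the adaptation data ("O") and
    the original training data ("O_prime"); this layout is an assumption."""
    combined = [combine_stats(s["O"], s["O_prime"], alpha) for s in states]
    gamma = sum(c["gamma"] for c in combined)
    mean = sum(c["sum_o"] for c in combined) / gamma
    var = np.maximum(sum(c["sum_oo"] for c in combined) / gamma - mean ** 2, 1e-8)
    n = mean.shape[0]
    return -0.5 * (np.sum(np.log(var)) + n * (1.0 + np.log(2 * np.pi))) * gamma
```

The node-splitting loop of the earlier ML sketch can then be reused unchanged, simply substituting this likelihood function.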
In Figure 8, in step S151, HMMs are pooled in the same manner as described with reference to figure 7.
The log likelihood of the model to the training data, using both the O and O' data, is then computed in step S153. This uses equation (25) and the likelihood is computed using equation (14a).
In the same manner as figure 7, for all combinations of node and question the splittings are calculated in step S155 and the combination of node and question which gives the largest splitting is selected in step S157. The node is then split in step S159. If the gain due to splitting exceeds a threshold, then the system loops back to step S153. If the gain does not exceed the threshold, then it means that the tree has been split to a sufficient degree and clustering is stopped in step S161.
The threshold is selected dependent on the accuracy required and computing considerations. If the threshold is set reasonably low, then the trees will be larger and more computing power will be required in order to run a model which uses trees constructed using the method of figure 8. However, if a larger threshold is used, then fewer questions will appear in the trees, resulting in a loss of accuracy.
It has been previously described that, when computing the log likelihood of the model to the training data using the O and O' data, a parameter α is used in order to weight the O' contribution; α is chosen manually. In practice, a number of different values of α will be trialled and the best one will be selected. One possibility is to set α according to the amounts of data in O and O'. For example, if O comprises an hour of speech data and O' comprises ten hours, then α will be set to 1/10, which equals 0.1. Thus, in this situation, O and O' effectively contribute the same amount of data.
A good α will be determined offline. For speech synthesis, speech samples will be synthesised from the estimated HMM sets (for various α) using test sentences and they will be listened to. The α which gives the best subjective listening test score will be selected. For recognition, a speech recogniser will be run with the estimated HMM sets (having various α) on test utterances and its recognition accuracy checked. The α which gives the best recognition accuracy will be selected.
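A minimal sketch of this offline selection for the recognition case is shown below; the candidate values and the two helper functions are placeholders for the training and evaluation procedures described in the text.

```python
# Offline trial-and-error selection of alpha: train a model for each candidate
# value and keep the one with the best accuracy on held-out test utterances.
# `train_map_model` and `word_accuracy` are hypothetical placeholders.

def select_alpha(candidates, adapt_data, general_data, test_set,
                 train_map_model, word_accuracy):
    scored = []
    for alpha in candidates:
        model = train_map_model(adapt_data, general_data, alpha)
        scored.append((word_accuracy(model, test_set), alpha))
    return max(scored)[1]

# e.g. select_alpha([0.05, 0.1, 0.2, 0.5, 1.0], O, O_prime, test_set, train, acc)
```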
Next, a method of training a speech recogniser using a method in accordance with an embodiment of the present invention will be described with reference to figures 9 and 10. The flowchart of figure 9 corresponds to basic training which will be performed on the O' data; the flowchart of figure 10 corresponds to training using the O and O' data.
The O' data is the data which is used to initially train the model. This will be from a large number of speakers.
In step S201, monophone HMMs are initialised. This is to establish initial HMM parameters, e.g. Gaussian means and variances etc., for single phonemes. Initialising HMMs is well known and a number of techniques may be used, such as setting all means and variances to zero, setting all means and variances for each HMM to a global mean and variance, or using prior data as an estimate for the means and variances of the HMMs.
In step S203, embedded re-estimation is performed on the monophone HMMs. This is used to re-estimate the phoneme level HMMs on the basis of each segment. This is required because during speech recognition, better accuracy is obtained if parameters are correctly estimated for each segment. In the preferred embodiment, embedded re-estimation is used where a soft assignment of frames to states is assumed, i.e. there is a probability of a state being assigned to a frame. The Baum-Welch algorithm or forward-backward algorithm may also be used at this stage; both of these algorithms presume a soft assignment of frame to state. The Viterbi algorithm may also be used, which assumes a hard assignment of frame to state.
In step S205, the monophone HMMs are copied to context dependent HMMs. Context dependent HMMs (e.g. triphones) have been described previously. A triphone comprises a middle or "current" phoneme with the preceding and succeeding phonemes. At this stage, all context-dependent variants of a given current phoneme, i.e. the middle phoneme, have the same statistics.
In step S207, embedded re-estimation is then performed on the context dependent HMMs. This allows the HMMs to be estimated on the basis of whole sentences.
Next, decision tree context based clustering is performed in step S209. This is the same as that described with reference to figure 7. As this is the initial training which is performed on data set O', it is performed purely on the O' data.
Decision trees do not support HMM mixtures; therefore, embedded re-estimation needs to be performed after the decision tree context based clustering has been performed, in step S211.
As previously indicated, the steps of figure 9 are well-known for training acoustic models for both speech recognition and speech synthesis.
If the above is being used for training an acoustic model for speech synthesis, then the embedded re-estimation steps may be omitted as, in general, a mixture of Gaussians is not used for an acoustic model for speech synthesis due to its large computational cost.
Figure 10 is a method in accordance with an embodiment of the present invention where an O plus O' model is estimated. To estimate the O plus O' model, the O' model above is used to obtain the state level assignments of O (i.e. the state/frame assignments). This is performed using the well known forward-backward algorithm.
The forward-backward algorithm computes forward and backward probabilities. Using the forward and backward probabilities, it is possible to compute the state-occupancy probabilities of HMM states for given observation vectors. This state-occupancy probability corresponds to the "state-level assignments of O" referred to above. In addition to obtaining the state occupancy, the first and second order statistics are also obtained.
The state occupancy of an HMM state is the total sum of the state-occupancy probabilities of this HMM state over the entire training data:

\gamma_c = \sum_{e=1}^{E} \sum_{t=1}^{T_e} \gamma_c(t)

The first order statistic for an HMM state is the total sum of the state occupancy probability multiplied by the observation vector associated with this HMM state over the entire training data:

\sum_{e=1}^{E} \sum_{t=1}^{T_e} \gamma_c(t) \, o_t^{(e)}

The second order statistic for an HMM state is the total sum of the state-occupancy probability multiplied by the observation vector squared associated with the HMM state over the entire training data:

\sum_{e=1}^{E} \sum_{t=1}^{T_e} \gamma_c(t) \, o_t^{(e)} o_t^{(e)\top}

The first and second order statistics are related to the mean and variance as:

\mu_c = \frac{1}{\gamma_c} \sum_{e=1}^{E} \sum_{t=1}^{T_e} \gamma_c(t) \, o_t^{(e)}

\Sigma_c = \frac{1}{\gamma_c} \sum_{e=1}^{E} \sum_{t=1}^{T_e} \gamma_c(t) \, (o_t^{(e)} - \mu_c)(o_t^{(e)} - \mu_c)^\top

By using the above, it is possible to compute the mean and variance of an HMM state.
Once the statistics have been obtained, they are scaled by the parameter α. The parameter α is the same as has been described with reference to the construction of the decision trees. The statistics are scaled as follows. The occupancy becomes:

\gamma_c = \sum_{e=1}^{E(O)} \sum_{t=1}^{T_e(O)} \gamma_c(t) + \alpha \sum_{e=1}^{E(O')} \sum_{t=1}^{T_e(O')} \gamma_c(t)

the mean derived from the first order statistics becomes:

\mu_c = \frac{1}{\gamma_c} \left( \sum_{e=1}^{E(O)} \sum_{t=1}^{T_e(O)} \gamma_c(t) \, o_t^{(e)} + \alpha \sum_{e=1}^{E(O')} \sum_{t=1}^{T_e(O')} \gamma_c(t) \, o_t^{(e)} \right)

and the variance derived from the second order statistics becomes:

\Sigma_c = \frac{1}{\gamma_c} \left( \sum_{e=1}^{E(O)} \sum_{t=1}^{T_e(O)} \gamma_c(t) \, (o_t^{(e)} - \mu_c)(o_t^{(e)} - \mu_c)^\top + \alpha \sum_{e=1}^{E(O')} \sum_{t=1}^{T_e(O')} \gamma_c(t) \, (o_t^{(e)} - \mu_c)(o_t^{(e)} - \mu_c)^\top \right)

Next, in step S235, decision tree based context clustering is performed using O and O'.
This is performed in the same manner as described with relation to figure 8.
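Putting the above formulas together, a minimal sketch of accumulating the untied statistics for one context-dependent state from its state-occupancy probabilities, and of combining them with the weight α, is given below; the array shapes are assumptions, and in practice the occupancies come from the forward-backward algorithm.

```python
import numpy as np

def accumulate_stats(occupancies, observations):
    """Accumulate occupancy, first and second order statistics for one
    context-dependent HMM state over a data set.

    occupancies:  gamma_c(t) for each frame, shape (T,)
    observations: observation vectors o_t, shape (T, n)
    """
    gamma = occupancies.sum()
    first = occupancies @ observations                                # sum_t gamma(t) o_t
    second = (occupancies[:, None] * observations).T @ observations   # sum_t gamma(t) o_t o_t^T
    return {"gamma": gamma, "first": first, "second": second}

def combine_and_moments(stats_O, stats_O_prime, alpha):
    """Scale the O' statistics by alpha, add them to the O statistics, and
    derive the mean and covariance of the state as in the formulas above."""
    gamma = stats_O["gamma"] + alpha * stats_O_prime["gamma"]
    first = stats_O["first"] + alpha * stats_O_prime["first"]
    second = stats_O["second"] + alpha * stats_O_prime["second"]
    mean = first / gamma
    cov = second / gamma - np.outer(mean, mean)
    return mean, cov
```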
It should be noted that the forward-backward algorithm is run with the O' model and with the state tying structure obtained in step S209. However, the decision tree-based context clustering of step S235 requires "untied" statistics, i.e., each individual context-dependent model has its own occupation counts and first and second-order statistics which are accumulated only on itself. When step S235 has been performed, step S237 is performed where embedded re-estimation is performed on the clustered context dependent HMMs. Again, this step may be omitted if the acoustic model is to be used for speech synthesis since mixture Gaussians are not usually used.

The training of the first and second models can take place at a manufacturer's premises. However, it is also possible for a speech processing product to be produced which has just been trained with the initial training data. The product could then be later trained with the second training data.
As shown in figure 11, the present invention can be used in a speech recognition system, in which the basic steps set out in figure 11 will be performed.
In step S301, input speech is received from a speaker. The system would preferably have been trained for that speaker using the speaker data O.
In step S303, the likelihood of a sequence of words arising from the speech input is determined using the acoustic model which has been trained as explained with reference to figures 9 and 10. Next, in step S305, the likelihood of a sequence of observations occurring in a given language is evaluated using the language model. In step S307, the results of the language model and acoustic model are combined to produce a sequence of words. The sequence of words is output in step S309. The sequence of words could be output on a monitor, directed into a search engine, directed into a SatNav system etc. In one embodiment, the outputted sequence of words is then directed into a translation system where it is translated into a second language.
Figure 12 shows a very simple system for speech synthesis. In step S321, a text input is received. This text input may be from a data file or may be input directly into a computer.
An acoustic model is then run to determine the sequence of speech vectors corresponding to the input text in step S323. Audio corresponding to the text input is then output in step S325.
For a speech-to-speech translation system, the methods of figures 11 and 12 could be run one after the other, with the output from step S309 of figure 11 being translated into a different language and input as the text input of step S321 in figure 12.

Claims (20)

  1. CLAIMS: 1. A speech recognition method, said method comprising: receiving a speech input from a speaker which comprises a sequence of observations; and determining the likelihood of a sequence of words arising from the sequence of observations using an acoustic model, said acoustic model having a plurality of model parameters describing probability distributions which relate a word or part thereof to an observation, said acoustic model having been trained using first training data and adapted using second training data to said speaker, the speech recognition method further comprising determining the likelihood of a sequence of observations occurring in a given language using a language model; and combining the likelihoods determined by the acoustic model and the language model and outputting a sequence of words identified from said speech input signal, wherein said acoustic model is context based for said speaker, said context based information being contained in said model using a plurality of decision trees, wherein the structure of said decision trees is based on second training data.
  2. 2. A speech recognition method according to claim 1, wherein the structure of the decision trees is based on both the first and second training data.
  3. 3. A method according to claim 1, wherein the structure is determined from the splitting of the nodes of the trees and has been calculated using maximum a posteriori criteria.
  4. 4. A method according to claim 3, wherein the splitting is calculated using maximum a posteriori criteria implemented as:

(\hat{m}_{MAP}, \hat{\lambda}_{MAP}) = \arg\max_{m, \lambda} \{ \log p(O \mid m, \lambda) + \alpha \log p(O' \mid m, \lambda) \}

where O' is the first training data, O is the second training data, m denotes a parameter tying structure, λ is a set of HMM parameters, m_MAP denotes the parameter tying structure under maximum a posteriori criteria, λ_MAP are the HMM parameters under maximum a posteriori criteria and α is a parameter to be set.
  5. 5. A method according to any preceding claim, wherein the context dependency is implemented as tri-phones.
  6. 6. A method according to any preceding claim, wherein said acoustic model comprises probability distributions which are represented by means and variances and wherein said decision trees are provided for both means and variances.
  7. 7. A method according to any preceding claim, wherein said context based information is selected from phonetic, linguistic and prosodic contexts.
  8. 8. A method according to any preceding claim, wherein said decision trees are used to model at least one selected from expressive contexts, gender, age or voice characteristics.
  9. 9. A text to speech processing method, said method comprising: receiving a text input which comprises a sequence of words; and determining the likelihood of a sequence of speech vectors arising from the sequence of words using an acoustic model, said acoustic model having a plurality of model parameters describing probability distributions which relate a word or part thereof to an observation, said acoustic model having been trained using first training data and adapted using second training data to said speaker, wherein said acoustic model is context based for said speaker, said context based information being contained in said model using a plurality of decision trees, wherein the structure of said decision trees is based on second training data.
  10. 10. A method of training an acoustic model for a speech processing system, the method comprising: receiving first training data, said first training data comprising speech and text corresponding to said speech; training a first acoustic model using said first training data; receiving second training data from a known speaker; adapting said first acoustic model to form a second acoustic model using said second training data, wherein adapting said first model to form said second model comprises constructing decision trees to model context dependency, and wherein the structure of the decision trees is based on the second training data.
  11. 11. A method according to claim 10, further comprising storing the first acoustic model such that adaptation to the second acoustic model can be performed at a different location.
  12. 12. A method according to either of claims 10 or 11, wherein training said first acoustic model comprises: initialising a plurality of Hidden Markov Models; re-estimating the HMMs on the basis of the first training data; and constructing decision trees to model contexts in said first training data.
  13. 13. A method according to claim 12, wherein training of said first model further comprises re-estimating the HMMs clustered by the decision trees.
  14. 14. A method according to any of claims 10 to 13, wherein training the second model comprises: deriving HMM parameters for said second model by running the forward-backward algorithm on said second training data and said first training data; scaling the statistics obtained from the first training data using a parameter; and constructing decision trees using said first and second training data.
  15. 15. A method according to claim 14, further comprising determining said parameter by trial and error.
  16. 16. A method according to either of claims 14 or 15, wherein training of said second model further comprises re-estimating the HMMs clustered by the decision trees.
  17. 17. A carrier medium carrying computer readable instructions for controlling the computer to carry out the method of any preceding claim.
  18. 18. A speech recognition apparatus comprising: a receiver for receiving a speech input from a speaker which comprises a sequence of observations; and a processor configured to: determine the likelihood of a sequence of words arising from the sequence of observations using an acoustic model, said acoustic model having a plurality of model parameters describing probability distributions which relate a word or part thereof to an observation, said acoustic model having been trained using first training data and adapted using second training data to said speaker; determine the likelihood of a sequence of observations occurring in a given language using a language model; and combine the likelihoods determined by the acoustic model and the language model and outputting a sequence of words identified from said speech input signal, wherein said acoustic model is context based for said speaker, said context based information being contained in said model using a plurality of decision trees, wherein the structure of said decision trees is based on second training data.
  19. 19. A text to speech system comprising: A receiver for receiving a text input which comprises a sequence of words; and a processor, said processor being configured to: determine the likelihood of a sequence of speech vectors arising from the sequence of words using an acoustic model, said acoustic model having a plurality of model parameters describing probability distributions which relate a word or part thereof to an observation, said acoustic model having been trained using first training data and adapted using second training data to said speaker, wherein said acoustic model is context based for said speaker, said context based information being contained in said model using a plurality of decision trees, wherein the structure of said decision trees is based on second training data.
20. A speech to speech translation system, said system comprising a speech recognition apparatus according to claim 18 configured to recognise speech in a first language, a translation module configured to translate text received in said first language into text of a second language, and a text to speech system according to claim 19 configured to output speech in said second language.
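
The claimed speech-to-speech chain is the composition of the three preceding components; the schematic below wires trivial stand-ins together simply to show the data flow (none of the stand-ins implements the claimed recogniser, translation module or synthesiser).

    # Schematic data flow only: recognise -> translate -> synthesise.
    def recognise(audio):                 # stand-in for the claim-18 recognition apparatus
        return "hello world"

    def translate(text):                  # stand-in for the translation module
        toy_lexicon = {"hello": "bonjour", "world": "monde"}
        return " ".join(toy_lexicon.get(w, w) for w in text.split())

    def synthesise(text):                 # stand-in for the claim-19 text-to-speech system
        return f"<waveform for: {text}>"

    print(synthesise(translate(recognise("<audio in first language>"))))
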
GB1003496.5A 2010-03-02 2010-03-02 A speech processor, a speech processing method and a method of training a speech processor Active GB2478314B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB1003496.5A GB2478314B (en) 2010-03-02 2010-03-02 A speech processor, a speech processing method and a method of training a speech processor
US13/014,185 US9043213B2 (en) 2010-03-02 2011-01-26 Speech recognition and synthesis utilizing context dependent acoustic models containing decision trees
JP2011045161A JP5242724B2 (en) 2010-03-02 2011-03-02 Speech processor, speech processing method, and speech processor learning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1003496.5A GB2478314B (en) 2010-03-02 2010-03-02 A speech processor, a speech processing method and a method of training a speech processor

Publications (3)

Publication Number Publication Date
GB201003496D0 GB201003496D0 (en) 2010-04-14
GB2478314A true GB2478314A (en) 2011-09-07
GB2478314B GB2478314B (en) 2012-09-12

Family

ID=42125880

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1003496.5A Active GB2478314B (en) 2010-03-02 2010-03-02 A speech processor, a speech processing method and a method of training a speech processor

Country Status (3)

Country Link
US (1) US9043213B2 (en)
JP (1) JP5242724B2 (en)
GB (1) GB2478314B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2846327A1 (en) * 2013-08-23 2015-03-11 Kabushiki Kaisha Toshiba A speech processing system and method

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9798653B1 (en) * 2010-05-05 2017-10-24 Nuance Communications, Inc. Methods, apparatus and data structure for cross-language speech adaptation
CN102385858B (en) 2010-08-31 2013-06-05 国际商业机器公司 Emotional voice synthesis method and system
US8484023B2 (en) * 2010-09-24 2013-07-09 Nuance Communications, Inc. Sparse representation features for speech recognition
KR20120046627A (en) * 2010-11-02 2012-05-10 삼성전자주식회사 Speaker adaptation method and apparatus
US9558738B2 (en) * 2011-03-08 2017-01-31 At&T Intellectual Property I, L.P. System and method for speech recognition modeling for mobile voice search
GB2489473B (en) * 2011-03-29 2013-09-18 Toshiba Res Europ Ltd A voice conversion method and system
US8682670B2 (en) * 2011-07-07 2014-03-25 International Business Machines Corporation Statistical enhancement of speech output from a statistical text-to-speech synthesis system
CN102270449A (en) * 2011-08-10 2011-12-07 歌尔声学股份有限公司 Method and system for synthesising parameter speech
US9275636B2 (en) * 2012-05-03 2016-03-01 International Business Machines Corporation Automatic accuracy estimation for audio transcriptions
GB2505400B (en) * 2012-07-18 2015-01-07 Toshiba Res Europ Ltd A speech processing system
CA2882664A1 (en) * 2012-07-20 2014-01-23 Interactive Intelligence, Inc. Method and system for real-time keyword spotting for speech analytics
US20150199960A1 (en) * 2012-08-24 2015-07-16 Microsoft Corporation I-Vector Based Clustering Training Data in Speech Recognition
WO2014061230A1 (en) * 2012-10-16 2014-04-24 日本電気株式会社 Prosody model learning device, prosody model learning method, voice synthesis system, and prosody model learning program
US8935170B2 (en) 2012-11-27 2015-01-13 Longsand Limited Speech recognition
CN103871403B (en) * 2012-12-13 2017-04-12 北京百度网讯科技有限公司 Method of setting up speech recognition model, speech recognition method and corresponding device
US9640173B2 (en) * 2013-09-10 2017-05-02 At&T Intellectual Property I, L.P. System and method for intelligent language switching in automated text-to-speech systems
US10140981B1 (en) * 2014-06-10 2018-11-27 Amazon Technologies, Inc. Dynamic arc weights in speech recognition models
WO2016042626A1 (en) * 2014-09-17 2016-03-24 株式会社東芝 Speech processing apparatus, speech processing method, and program
CN104795063A (en) * 2015-03-20 2015-07-22 中国人民解放军信息工程大学 Acoustic model building method based on nonlinear manifold structure of acoustic space
JP6523893B2 (en) * 2015-09-16 2019-06-05 株式会社東芝 Learning apparatus, speech synthesis apparatus, learning method, speech synthesis method, learning program and speech synthesis program
CN111243606B (en) * 2017-05-12 2023-07-21 苹果公司 User-specific acoustic models
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
KR20200027475A (en) * 2017-05-24 2020-03-12 모듈레이트, 인크 System and method for speech-to-speech conversion
CN107515862A (en) * 2017-09-01 2017-12-26 北京百度网讯科技有限公司 Voice translation method, device and server
US11694681B2 (en) * 2018-01-08 2023-07-04 Ebay Inc. Artificial assistant system notifications
WO2019139428A1 (en) * 2018-01-11 2019-07-18 네오사피엔스 주식회사 Multilingual text-to-speech synthesis method
CN111566655B (en) 2018-01-11 2024-02-06 新智株式会社 Multi-language text-to-speech synthesis method
JP7124358B2 (en) 2018-03-13 2022-08-24 富士通株式会社 Output program, information processing device and output control method
US11308939B1 (en) * 2018-09-25 2022-04-19 Amazon Technologies, Inc. Wakeword detection using multi-word model
US11955120B1 (en) 2019-01-31 2024-04-09 Alan AI, Inc. Systems and methods for integrating voice controls into applications
US11935539B1 (en) * 2019-01-31 2024-03-19 Alan AI, Inc. Integrating voice controls into applications
CN109887484B (en) * 2019-02-22 2023-08-04 平安科技(深圳)有限公司 Dual learning-based voice recognition and voice synthesis method and device
US11538485B2 (en) 2019-08-14 2022-12-27 Modulate, Inc. Generation and detection of watermark for real-time voice conversion
CN110737268B (en) * 2019-10-14 2022-07-15 哈尔滨工程大学 Viterbi algorithm-based instruction determining method
KR20210053020A (en) 2019-11-01 2021-05-11 삼성전자주식회사 Electronic apparatus and operating method thereof
CN113627153B (en) * 2021-07-30 2023-10-27 湖南提奥医疗科技有限公司 Method, device, equipment and storage medium for processing data
CN114420087B (en) * 2021-12-27 2022-10-21 北京百度网讯科技有限公司 Acoustic feature determination method, device, equipment, medium and product
CN116386637B (en) * 2023-06-05 2023-08-04 中国电子科技集团公司第十五研究所 Radar flight command voice instruction generation method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5715367A (en) * 1995-01-23 1998-02-03 Dragon Systems, Inc. Apparatuses and methods for developing and using models for speech recognition
EP0856835A2 (en) * 1997-01-30 1998-08-05 Nec Corporation Speaker recognition device
EP1205907A2 (en) * 2000-11-14 2002-05-15 International Business Machines Corporation Phonetic context adaptation for improved speech recognition
US20020123891A1 (en) * 2001-03-01 2002-09-05 International Business Machines Corporation Hierarchical language models
WO2004047077A1 (en) * 2002-11-15 2004-06-03 Voice Signal Technologies, Inc. Multilingual speech recognition
WO2010035892A1 (en) * 2008-09-29 2010-04-01 Kabushiki Kaisha Toshiba Speech recognition method

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6151575A (en) * 1996-10-28 2000-11-21 Dragon Systems, Inc. Rapid adaptation of speech models
US6574597B1 (en) * 1998-05-08 2003-06-03 At&T Corp. Fully expanded context-dependent networks for speech recognition
DE19912405A1 (en) 1999-03-19 2000-09-21 Philips Corp Intellectual Pty Determination of a regression class tree structure for speech recognizers
US6725190B1 (en) * 1999-11-02 2004-04-20 International Business Machines Corporation Method and system for speech reconstruction from speech recognition features, pitch and voicing with resampled basis functions providing reconstruction of the spectral envelope
US6442519B1 (en) * 1999-11-10 2002-08-27 International Business Machines Corp. Speaker model adaptation via network of similar users
US6571208B1 (en) 1999-11-29 2003-05-27 Matsushita Electric Industrial Co., Ltd. Context-dependent acoustic models for medium and large vocabulary speech recognition with eigenvoice training
WO2002029614A1 (en) * 2000-09-30 2002-04-11 Intel Corporation Method and system to scale down a decision tree-based hidden markov model (hmm) for speech recognition
ATE297588T1 (en) * 2000-11-14 2005-06-15 Ibm ADJUSTING PHONETIC CONTEXT TO IMPROVE SPEECH RECOGNITION
WO2002091357A1 (en) * 2001-05-08 2002-11-14 Intel Corporation Method, apparatus, and system for building context dependent models for a large vocabulary continuous speech recognition (lvcsr) system
US7668718B2 (en) * 2001-07-17 2010-02-23 Custom Speech Usa, Inc. Synchronized pattern recognition source data processed by manual or automatic means for creation of shared speaker-dependent speech user profile
US7574359B2 (en) * 2004-10-01 2009-08-11 Microsoft Corporation Speaker selection training via a-posteriori Gaussian mixture model analysis, transformation, and combination of hidden Markov models
US7409346B2 (en) * 2004-11-05 2008-08-05 Microsoft Corporation Two-stage implementation for phonetic recognition using a bi-directional target-filtering model of speech coarticulation and reduction
US20070033027A1 (en) * 2005-08-03 2007-02-08 Texas Instruments, Incorporated Systems and methods employing stochastic bias compensation and bayesian joint additive/convolutive compensation in automatic speech recognition
JP4087400B2 (en) * 2005-09-15 2008-05-21 株式会社東芝 Spoken dialogue translation apparatus, spoken dialogue translation method, and spoken dialogue translation program
KR100815115B1 (en) * 2006-03-31 2008-03-20 광주과학기술원 An Acoustic Model Adaptation Method Based on Pronunciation Variability Analysis for Foreign Speech Recognition and apparatus thereof
US20080059200A1 (en) * 2006-08-22 2008-03-06 Accenture Global Services Gmbh Multi-Lingual Telephonic Service
JP4705535B2 (en) 2006-08-31 2011-06-22 日本放送協会 Acoustic model creation device, speech recognition device, and acoustic model creation program
ATE457511T1 (en) * 2007-10-10 2010-02-15 Harman Becker Automotive Sys SPEAKER RECOGNITION
WO2009144368A1 (en) * 2008-05-30 2009-12-03 Nokia Corporation Method, apparatus and computer program product for providing improved speech synthesis
JP2010152081A (en) * 2008-12-25 2010-07-08 Toshiba Corp Speaker adaptation apparatus and program for the same
US8340965B2 (en) * 2009-09-02 2012-12-25 Microsoft Corporation Rich context modeling for text-to-speech engines

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2846327A1 (en) * 2013-08-23 2015-03-11 Kabushiki Kaisha Toshiba A speech processing system and method
EP2860725A1 (en) * 2013-08-23 2015-04-15 Kabushiki Kaisha Toshiba A speech processing system and method
EP3282444A1 (en) * 2013-08-23 2018-02-14 Kabushiki Kaisha Toshiba Text-to-speech method and system
US10140972B2 (en) 2013-08-23 2018-11-27 Kabushiki Kaisha Toshiba Text to speech processing system and method, and an acoustic model training system and method

Also Published As

Publication number Publication date
GB2478314B (en) 2012-09-12
GB201003496D0 (en) 2010-04-14
US20110218804A1 (en) 2011-09-08
JP2011180596A (en) 2011-09-15
US9043213B2 (en) 2015-05-26
JP5242724B2 (en) 2013-07-24

Similar Documents

Publication Publication Date Title
US9043213B2 (en) Speech recognition and synthesis utilizing context dependent acoustic models containing decision trees
JP6052814B2 (en) Speech recognition model construction method, speech recognition method, computer system, speech recognition apparatus, program, and recording medium
Gibson et al. Hypothesis spaces for minimum Bayes risk training in large vocabulary speech recognition.
Hain et al. New features in the CU-HTK system for transcription of conversational telephone speech
JP5326892B2 (en) Information processing apparatus, program, and method for generating acoustic model
US8595006B2 (en) Speech recognition system and method using vector taylor series joint uncertainty decoding
US9099082B2 (en) Apparatus for correcting error in speech recognition
US7783484B2 (en) Apparatus for reducing spurious insertions in speech recognition
Stolcke et al. Highly accurate phonetic segmentation using boundary correction models and system fusion
US8620655B2 (en) Speech processing system and method
US20070213987A1 (en) Codebook-less speech conversion method and system
US8417522B2 (en) Speech recognition method
Uebel et al. Improvements in linear transform based speaker adaptation
JP4836076B2 (en) Speech recognition system and computer program
Gerosa et al. Towards age-independent acoustic modeling
GB2465383A (en) A speech recognition system using a plurality of acoustic models which share probability distributions
JP2011053312A (en) Adaptive acoustic model generating device and program
Bollepalli et al. Speaking style adaptation in text-to-speech synthesis using sequence-to-sequence models with attention
GB2480084A (en) An adaptive speech processing system
Nguyen et al. The 2016 KIT IWSLT speech-to-text systems for English and German
Zhang et al. Keyword spotting based on phoneme confusion matrix
KR20180041114A (en) Outlier Identification System and Method for Removing Poor Alignment in Speech Synthesis
Gulić et al. A digit and spelling speech recognition system for the Croatian language
RU160585U1 (en) SPEECH RECOGNITION SYSTEM WITH VARIABILITY MODEL
Montoya Multilingual speech recognition for selected Western European languages