EP1116218A1 - Inter-word triphone models - Google Patents

Inter-word triphone models

Info

Publication number
EP1116218A1
Authority
EP
European Patent Office
Prior art keywords
word
phone
model
models
vocabulary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP99952974A
Other languages
German (de)
French (fr)
Other versions
EP1116218B1 (en)
Inventor
Vladimir Sejnoha
Tom Lynch
Ramesh Sarukkai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lernout and Hauspie Speech Products NV
Original Assignee
Lernout and Hauspie Speech Products NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lernout and Hauspie Speech Products NV
Publication of EP1116218A1
Application granted
Publication of EP1116218B1
Anticipated expiration
Expired - Lifetime

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G10L15/04 Segmentation; Word boundary detection
    • G10L15/05 Word boundary detection
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G10L15/183 Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G10L15/187 Phonemic context, e.g. pronunciation rules, phonotactical constraints or phoneme n-grams
    • G10L2015/022 Demisyllables, biphones or triphones being the recognition units



Abstract

A speech recognition system recognizes an input utterance of spoken words. The system includes a set of word models for modeling vocabulary to be recognized, each word model being associated with a word in the vocabulary, each word in the vocabulary considered as a sequence of phones including a first phone and a last phone, wherein each word model begins in the middle of the first phone of its associated word and ends in the middle of the last phone of its associated word; a set of word connecting models for modeling acoustic transitions between the middle of a word's last phone and the middle of an immediately succeeding word's first phone; and a recognition engine for processing the input utterance in relation to the set of word models and the set of word connecting models to cause recognition of the input utterance.

Description

INTER-WORD TRIPHONE MODELS
Technical Field The present invention relates to speech recognition systems, and more particularly to the recognition models they use.
Background Art State-of-the-art speech recognition systems make use of context-dependent sub-word models to represent the system vocabulary. These models represent phones in the context of other phones, so as to capture the effects of coarticulation between adjacent phones in spoken language. Dealing with coarticulation effectively is crucial: systems which do not do so, and rely on context-independent models, grossly underperform systems with context-dependent models. One type of model frequently used to deal with coarticulation is the triphone. A triphone model for a particular phone P will be conditioned on both the preceding and following phonetic context. For example, A-P+B would be the triphone model for phone P with the left context of phone A and the right context of phone B. It is effectively impossible to train and use triphones involving all phone combinations. Nevertheless, the repertory of such units used in typical speech recognition systems is large.
A particular problem arises in the recognition of continuous speech, where words are spoken without pauses between them. Coarticulation effects cross word boundaries, and to maximize system performance, models should be utilized which reflect the effect that the phones in a preceding word have on the phones in the following word and vice versa.
Such "cross-word-boundary" units have a significant effect on the computational load of a continuous speech recognition system. In principle, in a dictation system, each vocabulary word must be able to connect to every other vocabulary word. Thus, at the end of each hypothesized word, the system must consider all the words in the vocabulary as potential successors, and must thus connect the current word to all these potential followers using the appropriate connecting units. Inter-word connections present a particularly serious computational challenge to large vocabulary continuous speech recognition (LVCSR) systems, because at this point in the extension of hypotheses, little acoustic information about the identity of the following word is available, and thus it is difficult to apply aggressive thresholding and pruning schemes which are typically used to limit the overall computation within words.
Consider the following example involving within-word and cross-word triphone models. To connect a word which ends with the phones B and C to a following word which begins with the phones D and E: .... A B C -> D E F ...., means that the last phone model of the first word and the first phone model of the second word have to be cross-word triphones: .... A-B+C B-C+#D -> C#-D+E D-E+F ...., where # denotes a word boundary. Thus, the last triphone of the first word and the first triphone of the second word depend on the second and first words, respectively. The full set of connecting units for a given vocabulary word can be expressed as follows: 1.) A first set of cross-word triphones connecting the given word to all possible following phonetic contexts, of which there are P (B-C+#D in the above example). 2.) For each of these units, a further set connecting the last phone of the first word to all the valid pairs of the first two phones of following words in the vocabulary, of which there are p (C#-D+E in the above example).
Thus, in a full triphone model system, each vocabulary word requires P(1+p) segments to connect it to all following vocabulary words. In a typical system with a vocabulary of several tens of thousands of words, P may be on the order of 50, while p may be on the order of 15, resulting in on average 800 connecting units requiring activation for each vocabulary word.
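The arithmetic above is easy to check; a minimal sketch follows (Python is used purely for illustration, and P and p take the illustrative magnitudes from the text):

```python
# Connecting-unit count for a full cross-word triphone system: each of
# the P following phonetic contexts needs one unit of the form B-C+#D,
# and each of those is followed by p units of the form C#-D+E.

def connecting_units(P: int, p: int) -> int:
    return P * (1 + p)

# With P ~ 50 contexts and p ~ 15 valid first-phone pairs per context:
print(connecting_units(50, 15))  # 800 connecting units per word end
```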
Summary of the Invention A preferred embodiment of the present invention provides a speech recognition system for recognizing an input utterance of spoken words. The system includes a set of word models for modeling vocabulary to be recognized, each word model being associated with a word in the vocabulary, each word in the vocabulary considered as a sequence of phones including a first phone and a last phone, wherein each word model begins in the middle of the first phone of its associated word and ends in the middle of the last phone of its associated word; a set of word connecting models for modeling acoustic transitions between the middle of a word's last phone and the middle of an immediately succeeding word's first phone; and a recognition engine for processing the input utterance in relation to the set of word models and the set of word connecting models to cause recognition of the input utterance.
In a further embodiment, each word model uses context-dependent phone models, e.g., triphones, to represent the sequence of phones. The acoustic transitions modeled may include a pause, a period of silence, or a period of noise. Each word connecting model may further include a previous word identification field which represents the word associated with the word model immediately preceding the word connecting model, an ending score field which represents a best score from the beginning of the input utterance to reach the word connecting model, or a type field which represents specific details of the word connecting model.
A preferred embodiment also includes a method of a speech recognition system for recognizing an input utterance of spoken words. The method includes modeling vocabulary to be recognized with a set of word models, each word model being associated with a word in the vocabulary, each word in the vocabulary being considered as a sequence of phones including a first phone and a last phone, wherein each word model begins in the middle of the first phone of its associated word and ends in the middle of the last phone of its associated word; modeling acoustic transitions between the middle of a word's last phone and the middle of an immediately succeeding word's first phone with a set of word connecting models; and processing with a recognition engine the input utterance in relation to the set of word models and the set of word connecting models to cause recognition of the input utterance.
In a further embodiment, each word model uses context-dependent phone models, e.g., triphones, to represent the sequence of phones. The acoustic transitions may further include a pause, a period of silence, or a period of noise. Each word connecting model may further include a previous word identification field which represents the word associated with the word model immediately preceding the word connecting model, an ending score field which represents a best score from the beginning of the input utterance to reach the word connecting model, or a type field which represents specific details of the word connecting model.
Brief Description of the Drawings The present invention will be more readily understood by reference to the following detailed description taken with the accompanying drawings, in which: Fig. 1 illustrates glues according to a preferred embodiment of the present invention.
Fig. 2 illustrates the use of glues in the first search pass of a speech recognition system according to a preferred embodiment.
Detailed Description of Specific Embodiments To cut down on the amount of computation during the connection of words, a preferred embodiment of the present invention simplifies the inter-word connecting models from triphones to diphones. Such inter-word diphone models, also referred to as "glues," are based on the assumption that coarticulation has relatively little effect across phone cores. Thus, as shown in Fig. 1, in the context of the phone sequence A B C of the left word model 1, the transition between the phones A and B is relatively unaffected by the following phone C. The merit of this assumption has been empirically confirmed in speech recognition experiments.
Thus, a preferred embodiment defines a new set of word-connecting units having the full set of cross-boundary diphones, denoted in Fig. 1, for example, by glue 3 C#D. Unlike triphones, the segment boundaries of diphones occur in phone core centers. Therefore, the use of such connecting units places a special constraint on the last phone models and the first phone models of word models, in that these must represent only the first and last half of the respective phone. Thus, in the example in Fig. 1: .... A B C -> D E F ....
The left word model 1 must end in the middle of the phone C, and the right word model 2 must begin in the middle of the phone D. The inter-word connection, glue 3, would thus be made as follows: .... C1 C#D D2 ... Thus, the left word model 1 ends in the middle of the phone C (denoted as C1, to indicate that only the first half of this phone is in fact modeled), followed by the cross-word-boundary diphone C#D, glue 3, which connects into the right word model 2, which begins in the middle of the phone D (denoted by D2, to indicate that only the second half of this phone is modeled here). Note that provided the above constraint on the first and last phone model in the word model is satisfied, the diphone connecting units become compatible with all types of word models, including those using triphones. There is no particular requirement on what type of phone models are used internally within the word models. Besides triphone models, other forms of wider- or narrower-context models could be used. In fact, a word-specific state sequence model custom to a particular word could be used.
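The junction just described can be sketched as a small function that assembles the model sequence across a word boundary. The function and list representation are illustrative, not from the patent, but the C1 / C#D / D2 labels follow the notation above:

```python
# Assembling the model sequence across a word boundary with a glue.
# "1" marks the first half of a phone, "2" the second half, and "#" a
# word boundary, following the patent's notation.

def join_words(left_phones, right_phones):
    last, first = left_phones[-1], right_phones[0]
    return (left_phones[:-1]        # ... A B
            + [last + "1"]          # C1: first half of the last phone
            + [last + "#" + first]  # the glue C#D
            + [first + "2"]         # D2: second half of the first phone
            + right_phones[1:])     # E F ...

print(join_words(["A", "B", "C"], ["D", "E", "F"]))
# ['A', 'B', 'C1', 'C#D', 'D2', 'E', 'F']
```

For the phone sequences A B C and D E F this yields the junction of Fig. 1: the left word ends in C1, the glue C#D bridges the boundary, and the right word resumes at D2.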
The number of cross-word-boundary units needed to connect a particular word to all other vocabulary words is simply P. The use of diphone cross-word-boundary connecting units thus results in a p-fold reduction (typically a fifteen-fold reduction) in the number of units that need to be evaluated at each word end, with negligible reduction in modeling accuracy.
The concept of glues is further illustrated in Fig. 1. As described above, a glue can be written as A#B, representing that the glue is between words ending in A and beginning in B. Fig. 1 also depicts special glues 4-6 for transitions to noise and silence. The special glues 4-6 shown in Fig. 1 are broken into two parts: one set of glues connects the word end to the special contexts of silence 4, pause 5, or noise 6; another set connects the special contexts to the word starting phones. In a preferred embodiment as shown in Fig. 2, glues may be used as part of a three-pass speech recognition system. The first pass quickly reduces the number of word hypotheses and finds reasonable "word starting times". The second pass does an A*-like search starting from the end of the speech utterance, using the "word starting" information provided by the first pass, and generates the word graph. The third pass trigramizes the word graph produced by the second pass and determines the N-best hypotheses.
The first pass search is managed by breaking up the search using different networks: initial silence network 21, final silence network 22, glue network 23, unigram tree network 24, and bigram linear network 25. The glue network 23 manages and decodes the glue models described above. Initial silence and final silence refer to models trained to represent the silence regions at the beginning and ends of utterances.
The connecting glue network 23 acoustically matches every transition between two words. The glue network 23 connects back to the bigram linear network 25, to the unigram tree network 24, or to the final silence network 22. In order to carry context of the previous word to the linear network 25 (where the bigram scores are determined and applied), the glues 26 in the glue network 23 carry the predecessor word information: i.e., each ending word has its own unique set of glues 26. If the same word ends in both the bigram linear network 25 and the unigram tree network 24, then the minimum of the outgoing scores from the two instances of the same word is taken, and serves as the incoming score of the glues 26 for that word.
The first pass stores the following 4-tuple information at the end of every active glue 26, for every time: < Glue ID, Previous Word ID, Ending Score, GlueTypeFlag >
The glue ID corresponds to the actual diphone segment representing that particular glue 26. The "Previous Word ID" refers to the word that the glue 26 started from. The Ending Score is the best score from the beginning of the utterance to reach this glue 26 at this time through the appropriate word context. The Glue Type Flag is used to refer to the details of the glue 26: whether it was a normal glue, or whether it had a pause core embedded within it and so on.
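The 4-tuple just described could be stored per time frame along the following lines. The container layout, field types, and the example values are assumptions for illustration only; the four fields mirror the text:

```python
from dataclasses import dataclass
from enum import Enum
from collections import defaultdict

class GlueType(Enum):
    NORMAL = 0       # an ordinary cross-word diphone glue
    PAUSE_CORE = 1   # a glue with a pause core embedded within it

@dataclass(frozen=True)
class GlueEnd:
    glue_id: str         # the diphone segment, e.g. "C#D"
    prev_word_id: int    # the word the glue started from
    ending_score: float  # best score from utterance start to this glue end
    glue_type: GlueType

# One list of entries per time frame, filled in during the first pass.
glue_end_table: dict[int, list[GlueEnd]] = defaultdict(list)
glue_end_table[42].append(GlueEnd("C#D", 1017, -853.2, GlueType.NORMAL))
```

The word ID 1017, the frame index, and the score are invented; only the tuple's structure is taken from the passage above.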
The second pass uses this information stored at the ends of glues 26 in the first pass to do an A*-like backward-in-time search to generate a word graph. The second pass starts by extending the final silence models 22, and determines valid starting times. For each of those times, the glue end table stored in the first pass is looked up, and the first pass estimate of the score from the beginning of the utterance to that point is extracted. The sum of the first pass estimate and the second pass backward score (there is also a bigram language model score that has to be added in) constitutes the estimated total hypothesis score, which is stored along with the hypothesis that is pushed onto a search stack. The best hypothesis is popped off the stack and the extensions continue until every hypothesis either reaches the initial silence network 21 or is pruned out (the threshold for pruning is a second-pass offset plus the best first pass score to exit the final silence model for the whole utterance). The term "estimate" is used since the first pass uses various approximations to speed up the process, whereas the second pass uses more exact models.
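A toy sketch of this A*-like backward pass follows. It keeps hypotheses on a best-first stack keyed by the estimated total score (first-pass forward estimate plus exact backward score). The bigram language model term, the pruning threshold, and the word-graph construction of the real system are omitted, and all names and scores are invented; scores are treated as cost-like (smaller is better), which is itself an assumption:

```python
import heapq

def backward_search(forward_estimate, extensions, end_time):
    """forward_estimate[t]: first-pass score from utterance start to time t.
    extensions[t]: (word, backward_cost, start_time) arcs ending at t.
    Returns the best word sequence, reversed into left-to-right order."""
    # Stack entries: (estimated total, time, backward cost, words so far)
    stack = [(forward_estimate[end_time], end_time, 0.0, ())]
    while stack:
        est, t, back, words = heapq.heappop(stack)   # best-first pop
        if t == 0:                                   # reached initial silence
            return list(reversed(words))
        for word, cost, t0 in extensions.get(t, []):
            new_back = back + cost
            heapq.heappush(stack, (forward_estimate[t0] + new_back,
                                   t0, new_back, words + (word,)))
    return []

# Invented toy data: two words spanning frames 0-5 and 5-10.
fwd = {0: 0.0, 5: 2.0, 10: 4.5}
ext = {10: [("world", 2.3, 5)], 5: [("hello", 2.1, 0)]}
print(backward_search(fwd, ext, 10))  # ['hello', 'world']
```

Because the first-pass forward estimate stands in for the remaining (unsearched) portion of the utterance, the pop order plays the role of the A* priority in the passage above.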
One of the key issues with glues 26 in the above architecture is that glues 26 are word conditioned. This is necessary to propagate the previous word context so that bigram language models may be applied when entering the linear network 25. Thus, the glues 26 themselves "carry" the previous word context information. This leads to an explosion in the number of glues 26 that need to be dealt with in the LVCSR search, both in terms of computation and memory.
In addition, the process of extending from a glue 26 to the beginning of the next word is also computationally expensive. This is because, for every glue 26, the bigram followers of the "previous word" have to be examined, thresholded, and perhaps new arcs activated for the following words. In order to speed up such "computationally expensive" extensions, approximations have been introduced into the glue extensions. In addition, the matching arcs in the unigram tree network 24 that start with the ending phone context of the glues 26 also need to be "activated".
To alleviate the computational cost of extending at the ends of glues 26 and connecting to the linear bigram network 25 and the unigram tree network 24, the glue extensions may be restricted to every other time frame. In other words, glues 26 can connect to following words only on alternate time frames. This approximation speeds up the system with little effect on accuracy.
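The every-other-frame restriction amounts to a simple parity gate on the frame index before the expensive glue-to-word extension is attempted. The constant and function names below are illustrative:

```python
GLUE_EXTENSION_PERIOD = 2   # glues connect to following words every other frame

def glue_may_extend(frame):
    # Approximation from the text: perform the expensive glue-to-word
    # extension only on alternate time frames, roughly halving its cost.
    return frame % GLUE_EXTENSION_PERIOD == 0
```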
Some changes are needed in the second pass to accommodate such an approximate glue ending score from the first pass. The second pass uses a threshold offset from the best score provided by the first pass. However, since the first-pass "best" score is only approximate, the second pass obtains exact scores (i.e., without the first pass's glue extension approximations) as hypotheses are extended right-to-left. Thus, the effective best score varies as the second pass processes word arcs from right to left. To cope with these varying thresholds, a threshold is maintained for every time in the second pass. The constraint that the threshold is monotonically decreasing from right to left (in time) is enforced at the end of every second-pass extension. This allows the second pass always to use the best threshold obtained so far at any given time. The threshold for every time is initialized to the first-pass "best" score plus the second-pass threshold offset. Two aspects of glues 26 make them difficult to handle in a large vocabulary search:
1) the high density of glues 26 unfairly lets them dominate many parts of the search, and 2) the lack of language model scores at glues 26 generally makes them better scoring than competing segments from the unigram tree 24 and linear network 25 (many of which have language model increments applied at that time).
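The per-frame threshold scheme described above can be sketched as follows. The `exact_best` map and the exact update rule are assumptions made for illustration; the sketch only shows the initialization, tightening, and monotonicity enforcement:

```python
def second_pass_thresholds(n_frames, first_pass_best, offset, exact_best):
    # thr[t] is initialized to the first-pass "best" score plus the
    # second-pass threshold offset for every frame. Moving right to left,
    # the best exact total score known at frame t (exact_best[t], if any)
    # replaces the approximate one, and the thresholds are kept
    # monotonically decreasing from right to left in time.
    thr = [first_pass_best + offset] * n_frames
    for t in range(n_frames - 2, -1, -1):      # right to left
        e = exact_best.get(t)
        if e is not None:
            thr[t] = e + offset                # exact score refines threshold
        if thr[t] > thr[t + 1]:
            thr[t] = thr[t + 1]                # enforce monotonicity
    return thr
```

Each second-pass extension would refresh `exact_best` and rerun the enforcement over the affected frames, so the best threshold obtained so far is always in effect.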
Various experiments have been attempted, rather unsuccessfully, in which the "best possible" language model scores are propagated to the glues. The high density of the glues 26 is inevitable: every word end spawns glue starting segments roughly equal in number to the number of phones. This number doubles when pause glue extensions survive to spawn another set of glue fragments.
One idea that was effective in managing the sheer number of glues and the lack of language model scores in the glue network 23 was to separate the active arcs allocated to the glue network 23. The allowed active-arc fraction was divided into three parts (although variations were tried). The most effective split isolated the glues 26 from the rest of the networks, with each network allocating its own resources. Thus, a separate score histogram was maintained for the glues 26, and a separate histogram threshold was determined to decide which active glue arcs to prune out. This is effective only in a dictation context, since command networks do not operate with histogram pruning but work from standard fixed thresholds.
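Histogram pruning with a per-network budget can be sketched as below. The bin count, helper name, and scoring convention (higher is better) are assumptions; the point is that each network builds its own histogram and derives its own pruning threshold, so dense, LM-free glue arcs cannot crowd out the tree and linear networks:

```python
def histogram_threshold(scores, max_active, n_bins=64):
    # Build a score histogram over one network's active arcs and pick a
    # threshold so that at most roughly max_active arcs survive pruning.
    if not scores:
        return float("-inf")
    lo, hi = min(scores), max(scores)
    if lo == hi:
        return lo
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for s in scores:
        b = min(int((s - lo) / width), n_bins - 1)
        counts[b] += 1
    kept = 0
    for b in range(n_bins - 1, -1, -1):    # best (highest) bins first
        kept += counts[b]
        if kept >= max_active:
            return lo + b * width           # prune arcs scoring below this
    return lo
```

Each of the three networks would call this on its own active arcs with its own share of the total active-arc budget.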
Given that the glues in a preferred embodiment are context-dependent (i.e., carry previous word identity), the number of possible glues is theoretically equal to the number of words times the number of glues that can connect from each word (roughly, for a 30k-word system, 30k × 80 = 2.4 million glues). It is therefore important to design methods by which glues can be dynamically allocated and removed in a fast and efficient manner.
Two different algorithms have been effective: dynamic reuse of glues and dynamic pause glue extensions. In the former, a list of unused, allocated glue models is maintained. Whenever a new set of glues is needed, the list is searched for an unused set with the same ending context as the current word context requesting the glues. If one is found, it is reused; otherwise a new set of glues is allocated. Because most glues have short life spans, this algorithm is quite effective. The second idea is the dynamic allocation of pause glues. In this scheme, the glue subnetwork corresponding to the pause extensions of a glue is not actually allocated until the end of the first half of the pause glue is reached. In many cases, the first half of the pause glue is itself pruned out, so the dynamic allocation of the pause extensions avoids unneeded memory allocations.
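The dynamic reuse scheme is essentially a free-list pool keyed by ending phone context. The class and field names below are illustrative, and the glue-set representation is reduced to a plain dictionary:

```python
class GluePool:
    # Freed glue sets are kept on a free list keyed by their ending phone
    # context; a request for glues with the same ending context reuses a
    # freed set instead of allocating a new one. Because most glues are
    # short-lived, reuse hits are frequent.
    def __init__(self):
        self._free = {}          # ending phone context -> freed glue sets
        self.allocations = 0     # counts real (non-reused) allocations

    def acquire(self, end_context):
        free = self._free.get(end_context)
        if free:
            return free.pop()    # reuse an unused glue set, same context
        self.allocations += 1
        return {"context": end_context, "arcs": []}   # allocate anew

    def release(self, glue_set):
        glue_set["arcs"].clear()
        self._free.setdefault(glue_set["context"], []).append(glue_set)
```

Pause glue subnetworks would be handled analogously, with the `acquire` for the pause extension deferred until the first half of the pause glue actually survives pruning.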
Other ideas are suggested by the concept of glues, such as the use of detailed glues for the second search pass. Simplified glues (i.e., context-independent) can be used in the first search pass, and more detailed glues (i.e., context-dependent) in the second pass. Such a system is sensitive to the quality of the glues in the first pass, and search thresholds may have to be wide open to cope with the loss of discrimination of the "poor" models used in the first pass.
Another possibility is the use of single-state collapsed models. This refers to collapsing all the states of all the segments and using such models in the first pass, speeding up operation by making the updates much faster. Initial results have been promising in systems with wide-open parameters, although some issues remain to be addressed.
Language model look-ahead and glue look-ahead schemes can also be considered for glues, although more work remains to be done in this area. Similarly, "anti-glues" may be considered, moving the beginnings and ends of words into a separate network and eliminating the glue concept; presently, however, such a change dramatically increases the complexity of the system. Further, glue sharing allows words that have the same language model identification and the same word-ending phone context to share glues. Methods for extending the glues to model liaisons are also under investigation.

Claims

What is claimed is:
1. A speech recognition system for recognizing an input utterance of spoken words, the system comprising: a set of word models for modeling vocabulary to be recognized, each word model being associated with a word in the vocabulary, each word in the vocabulary considered as a sequence of phones including a first phone and a last phone, wherein each word model begins in the middle of the first phone of its associated word and ends in the middle of the last phone of its associated word; a set of word connecting models for modeling acoustic transitions between the middle of a word's last phone and the middle of an immediately succeeding word's first phone; and a recognition engine for processing the input utterance in relation to the set of word models and the set of word connecting models to cause recognition of the input utterance.
2. A system as in claim 1, wherein each word model uses context-dependent phone models to represent the sequence of phones.
3. A system as in claim 2, wherein the context-dependent phone models are triphones.
4. A system as in claim 1, wherein the acoustic transitions include a pause.
5. A system as in claim 1, wherein the acoustic transitions include a period of silence.
6. A system as in claim 1, wherein the acoustic transitions include a period of noise.
7. A system as in claim 1, wherein each word connecting model further includes a previous word identification field which represents the word associated with the word model immediately preceding the word connecting model.
8. A system as in claim 1, wherein each word connecting model further includes an ending score field which represents a best score from the beginning of the input utterance to reach the word connecting model.
9. A system as in claim 1, wherein each word connecting model further includes a type field which represents specific details of the word connecting model.
10. A method of a speech recognition system for recognizing an input utterance of spoken words, the method comprising: modeling vocabulary to be recognized with a set of word models, each word model being associated with a word in the vocabulary, each word in the vocabulary being considered as a sequence of phones including a first phone and a last phone, wherein each word model begins in the middle of the first phone of its associated word and ends in the middle of the last phone of its associated word; modeling acoustic transitions between the middle of a word's last phone and the middle of an immediately succeeding word's first phone with a set of word connecting models; and processing with a recognition engine the input utterance in relation to the set of word models and the set of word connecting models to cause recognition of the input utterance.
11. A method as in claim 10, wherein each word model uses context-dependent phone models to represent the sequence of phones.
12. A method as in claim 11, wherein the context-dependent phone models are triphones.
13. A method as in claim 10, wherein the acoustic transitions include a pause.
14. A method as in claim 10, wherein the acoustic transitions include a period of silence.
15. A method as in claim 10, wherein the acoustic transitions include a period of noise.
16. A method as in claim 10, wherein each word connecting model further includes a previous word identification field which represents the word associated with the word model immediately preceding the word connecting model.
17. A method as in claim 10, wherein each word connecting model further includes an ending score field which represents a best score from the beginning of the input utterance to reach the word connecting model.
18. A method as in claim 10, wherein each word connecting model further includes a type field which represents specific details of the word connecting model.
19. An improved speech recognition system of the type employing word models, wherein the improvement comprises: a set of word models for modeling vocabulary to be recognized, each word model being associated with a word in the vocabulary, each word in the vocabulary considered as a sequence of phones including a first phone and a last phone, wherein each word model begins in the middle of the first phone of its associated word and ends in the middle of the last phone of its associated word; and a set of word connecting models for modeling acoustic transitions between the middle of a word's last phone and the middle of an immediately succeeding word's first phone.
20. A system as in claim 19, wherein each word model uses context-dependent phone models to represent the sequence of phones.
21. A system as in claim 20, wherein the context-dependent phone models are triphones.
22. A system as in claim 19, wherein the acoustic transitions include a pause.
23. A system as in claim 19, wherein the acoustic transitions include a period of silence.
24. A system as in claim 19, wherein the acoustic transitions include a period of noise.
25. A system as in claim 19, wherein each word connecting model further includes a previous word identification field which represents the word associated with the word model immediately preceding the word connecting model.
26. A system as in claim 19, wherein each word connecting model further includes an ending score field which represents a best score from the beginning of the input utterance to reach the word connecting model.
27. A system as in claim 19, wherein each word connecting model further includes a type field which represents specific details of the word connecting model.
28. An improved method of a speech recognition system for recognizing an input utterance of spoken words, the improvement comprising: modeling vocabulary to be recognized with a set of word models, each word model being associated with a word in the vocabulary, each word in the vocabulary being considered as a sequence of phones including a first phone and a last phone, wherein each word model begins in the middle of the first phone of its associated word and ends in the middle of the last phone of its associated word; and modeling acoustic transitions between the middle of a word's last phone and the middle of an immediately succeeding word's first phone with a set of word connecting models.
29. A method as in claim 28, wherein each word model uses context-dependent phone models to represent the sequence of phones.
30. A method as in claim 29, wherein the context-dependent phone models are triphones.
31. A method as in claim 28, wherein the acoustic transitions include a pause.
32. A method as in claim 28, wherein the acoustic transitions include a period of silence.
33. A method as in claim 28, wherein the acoustic transitions include a period of noise.
34. A method as in claim 28, wherein each word connecting model further includes a previous word identification field which represents the word associated with the word model immediately preceding the word connecting model.
35. A method as in claim 28, wherein each word connecting model further includes an ending score field which represents a best score from the beginning of the input utterance to reach the word connecting model.
36. A method as in claim 28, wherein each word connecting model further includes a type field which represents specific details of the word connecting model.
EP99952974A 1998-09-29 1999-09-29 Inter-word connection phonemic models Expired - Lifetime EP1116218B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US10237398P 1998-09-29 1998-09-29
US102373P 1998-09-29
PCT/US1999/022501 WO2000019409A1 (en) 1998-09-29 1999-09-29 Inter-word triphone models

Publications (2)

Publication Number Publication Date
EP1116218A1 true EP1116218A1 (en) 2001-07-18
EP1116218B1 EP1116218B1 (en) 2004-04-07

Family

ID=22289500

Family Applications (1)

Application Number Title Priority Date Filing Date
EP99952974A Expired - Lifetime EP1116218B1 (en) 1998-09-29 1999-09-29 Inter-word connection phonemic models

Country Status (7)

Country Link
US (1) US6606594B1 (en)
EP (1) EP1116218B1 (en)
AT (1) ATE263997T1 (en)
AU (1) AU6501999A (en)
CA (1) CA2395012A1 (en)
DE (1) DE69916297D1 (en)
WO (1) WO2000019409A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19939102C1 (en) * 1999-08-18 2000-10-26 Siemens Ag Speech recognition method for dictating system or automatic telephone exchange
DE10120513C1 (en) * 2001-04-26 2003-01-09 Siemens Ag Method for determining a sequence of sound modules for synthesizing a speech signal of a tonal language
JP2003208195A (en) * 2002-01-16 2003-07-25 Sharp Corp Device, method and program for recognizing consecutive speech, and program recording medium
TWI454955B (en) * 2006-12-29 2014-10-01 Nuance Communications Inc An image-based instant message system and method for providing emotions expression
KR100897554B1 (en) * 2007-02-21 2009-05-15 삼성전자주식회사 Distributed speech recognition sytem and method and terminal for distributed speech recognition
US8536976B2 (en) * 2008-06-11 2013-09-17 Veritrix, Inc. Single-channel multi-factor authentication
US8166297B2 (en) 2008-07-02 2012-04-24 Veritrix, Inc. Systems and methods for controlling access to encrypted data stored on a mobile device
WO2010051342A1 (en) * 2008-11-03 2010-05-06 Veritrix, Inc. User authentication for social networks
US8914279B1 (en) * 2011-09-23 2014-12-16 Google Inc. Efficient parsing with structured prediction cascades
US9602666B2 (en) 2015-04-09 2017-03-21 Avaya Inc. Silence density models
US10134425B1 (en) * 2015-06-29 2018-11-20 Amazon Technologies, Inc. Direction-based speech endpointing
US10121471B2 (en) * 2015-06-29 2018-11-06 Amazon Technologies, Inc. Language model speech endpointing
US11615239B2 (en) * 2020-03-31 2023-03-28 Adobe Inc. Accuracy of natural language input classification utilizing response delay

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS57178295A (en) * 1981-04-27 1982-11-02 Nippon Electric Co Continuous word recognition apparatus
US5268990A (en) * 1991-01-31 1993-12-07 Sri International Method for recognizing speech using linguistically-motivated hidden Markov models
US5502790A (en) * 1991-12-24 1996-03-26 Oki Electric Industry Co., Ltd. Speech recognition method and system using triphones, diphones, and phonemes
JPH0728487A (en) 1993-03-26 1995-01-31 Texas Instr Inc <Ti> Voice recognition
US5819221A (en) * 1994-08-31 1998-10-06 Texas Instruments Incorporated Speech recognition using clustered between word and/or phrase coarticulation
US5937384A (en) * 1996-05-01 1999-08-10 Microsoft Corporation Method and system for speech recognition using continuous density hidden Markov models
US6163769A (en) * 1997-10-02 2000-12-19 Microsoft Corporation Text-to-speech using clustered context-dependent phoneme-based units

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO0019409A1 *

Also Published As

Publication number Publication date
CA2395012A1 (en) 2000-04-06
AU6501999A (en) 2000-04-17
WO2000019409A9 (en) 2000-08-31
WO2000019409A1 (en) 2000-04-06
ATE263997T1 (en) 2004-04-15
DE69916297D1 (en) 2004-05-13
EP1116218B1 (en) 2004-04-07
US6606594B1 (en) 2003-08-12

Similar Documents

Publication Publication Date Title
US10971140B2 (en) Speech recognition circuit using parallel processors
US5907634A (en) Large vocabulary connected speech recognition system and method of language representation using evolutional grammar to represent context free grammars
US5983177A (en) Method and apparatus for obtaining transcriptions from multiple training utterances
US6606594B1 (en) Word boundary acoustic units
JP3459712B2 (en) Speech recognition method and device and computer control device
US20060074662A1 (en) Three-stage word recognition
KR19990014292A (en) Word Counting Methods and Procedures in Continuous Speech Recognition Useful for Early Termination of Reliable Pants- Causal Speech Detection
KR100415217B1 (en) Speech recognizer
US7493258B2 (en) Method and apparatus for dynamic beam control in Viterbi search
JPH0728487A (en) Voice recognition
Boite et al. A new approach towards keyword spotting.
US20070038451A1 (en) Voice recognition for large dynamic vocabularies
Lee et al. Acoustic modeling of subword units for speech recognition
Zhang et al. Improved context-dependent acoustic modeling for continuous Chinese speech recognition
JP2871420B2 (en) Spoken dialogue system
EP1082719B1 (en) Multiple stage speech recognizer
JP3315565B2 (en) Voice recognition device
KR100281582B1 (en) Speech Recognition Method Using the Recognizer Resource Efficiently
Kuo et al. Advances in natural language call routing
Murakami et al. A spontaneous speech recognition algorithm using word trigram models and filled-pause procedure
JPH0217038B2 (en)

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20010418

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

RIC1 Information provided on ipc code assigned before grant

Ipc: 7G 10L 15/18 A

RTI1 Title (correction)

Free format text: INTER-WORD CONNECTION PHONEMIC MODELS

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20040407

Ref country code: LI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20040407

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT;WARNING: LAPSES OF ITALIAN PATENTS WITH EFFECTIVE DATE BEFORE 2007 MAY HAVE OCCURRED AT ANY TIME BEFORE 2007. THE CORRECT EFFECTIVE DATE MAY BE DIFFERENT FROM THE ONE RECORDED.

Effective date: 20040407

Ref country code: FR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20040407

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20040407

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20040407

Ref country code: CH

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20040407

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20040407

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20040407

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REF Corresponds to:

Ref document number: 69916297

Country of ref document: DE

Date of ref document: 20040513

Kind code of ref document: P

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20040707

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20040707

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20040707

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20040708

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20040718

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20040929

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20040929

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20040929

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20040930

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

EN Fr: translation not filed
26N No opposition filed

Effective date: 20050110

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20040929

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20040907