US20020099543A1 - Segmentation technique increasing the active vocabulary of speech recognizers


Info

Publication number
US20020099543A1
US20020099543A1 (application US09/382,743, US38274399A)
Authority
US
United States
Prior art keywords
constituent
prefix
core
language
suffix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/382,743
Inventor
Ossama Eman
Siegfried Kunzmann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to EP98116278.7 (Critical)
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: EMAM, OSSAMA, KUNZMANN, SIEGFRIED
Publication of US20020099543A1 (Critical)
Application status: Abandoned (Critical)

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/04 Segmentation; Word boundary detection
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/226 Taking into account non-speech characteristics
    • G10L2015/228 Taking into account non-speech characteristics of application context

Abstract

A speech recognition system, and a method executed by such a system, are provided, focusing on the vocabulary of the speech recognition system and its usage during the speech recognition process. A segmented vocabulary and its exploitation are provided, comprising a multitude of entries wherein an entry is either identical to a legal word of the language or a constituent of a legal word, the constituent being an arbitrary sub-component of the legal word according to its orthography. A constituent can comprise any number of characters and is limited neither to a syllable of a legal word nor to a recognition unit of the speech recognition system. The vocabulary is used to recognize constituents, which are recombined into legal words if a constituent combination table indicates that the recognized constituents form a legal concatenation in the language.

Description

  • 1 BACKGROUND OF THE INVENTION [0001]
  • 1.1 Field of the Invention [0002]
  • The present invention relates to a speech recognition system and a method executed by a speech recognition system. More particularly, the invention relates to the vocabulary of a speech recognition system and its usage during the speech recognition process. [0003]
  • 1.2 Description and Disadvantages of Prior Art [0004]
  • The invention may preferably be implemented in accordance with the IBM ViaVoice 98 speech recognition system developed by the present assignee. IBM ViaVoice 98 is a real time speech recognition system for large vocabularies which can be speaker-trained with little cost to the user. However, the invention is not limited to use with this particular system and may be used in accordance with other speech recognition systems. [0005]
  • The starting point in these known systems is the breakdown of the speech recognition process into a part based on acoustic data (decoding) and a language-statistics part referring back to bodies of language or text for a specific area of application (the language model). The decision on candidate words is thus derived both from the decoder and from a language model probability. For the user, the fitting of the vocabulary processed by this recognition system to the specific field, or even to individual requirements, is of particular significance. [0006]
  • With this speech recognition system, the acoustic decoding first supplies hypothetical words. The further evaluation of competing hypothetical words is then based on the language model. This represents estimates of word string frequencies obtained from application-specific bodies of text, based on a collection of text samples from a desired field of application. From these text samples, the most frequent word forms and statistics on word sequences are generated. [0007]
  • In the method used here for estimating the frequency of sequences of words, the frequency of occurrence of so-called word form trigrams in a given text is estimated. In known speech recognition systems, the so-called Hidden Markov Model is frequently used for estimating the probabilities. Here, several frequencies observed in the text are set down. For a trigram “uvw” these are a nullgram term f0, a unigram term f(w), a bigram term f(w|v) and a trigram term f(w|uv). These terms correspond to the relative frequencies observed in the text, where the nullgram term has only a corrective significance. [0008]
  • If these terms are interpreted as probabilities of the word w under various conditions, a so-called latent variable can be added, by substitution of which one of the four conditions producing the word w is obtained. If the transfer probabilities for the corresponding terms are designated λ0, λ1, λ2, λ3, the following expression is obtained for the trigram probability sought: [0009]
  • Pr(w|uv) = λ0·f0 + λ1·f(w) + λ2·f(w|v) + λ3·f(w|uv)
  • The known speech recognition systems have the disadvantage that each word appears as a word form in the vocabulary of the system. For this reason there are relatively large demands on the memory capacity of the system. The generally very extensive vocabularies also have a disadvantageous effect on the speed of the recognition process. [0010]
  • Typical speech recognition systems work in real time on today's PCs. They have an active vocabulary of up to, and exceeding, 60,000 words, and can recognize continuously and/or naturally spoken input without the need to adapt the system to specific characteristics of a speaker (S. Kunzmann, “VoiceType: A Multi-Lingual, Large Vocabulary Speech Recognition System for a PC”, Proceedings of the 2nd SQEL Workshop, Pilsen, Apr. 27-29, 1997, ISBN 80-7082-314-3, gives an outline of these aspects). Given the actual vocabulary used in human communication, the order of magnitude of the vocabulary recognized by computer-based speech recognition systems must actually reach 100,000s to several million words. Even if such large vocabulary sizes were available today, beside algorithmic limitations on recognizing these extremely large vocabularies, issues like recognition accuracy, decoding speed and system resources (CPU, memory, disk) play a major role in classifying real-time speech recognition systems. [0011]
  • In the past several approaches have been suggested to increase the size of the active vocabulary for such recognition systems. In particular such state of the art approaches are related to the handling of compound words. [0012]
  • The German patent DE 19510083 C2, for instance, assumes that compound words, e.g., German “Fahrbahnschalter” or “vorgehen”, are decomposed into constituents like “Fahrbahn-schalter” or “vor-gehen”. The assumption is that compounds are split into constituents which are sequences of legal words in the German language as well as in the recognition vocabulary (“Fahrbahn”, “Schalter” and “vor”, “gehen”). For each of these words statistics are computed, describing the most likely frequencies of each word (Fahrbahnschalter, vorgehen) in its context of occurrence, e.g., “Der Fahrbahnschalter ist geschlossen”. In addition, separate frequency statistics are computed which describe the sequence of these constituents within compound words. Both statistical models are used to decide whether the individual constituents are displayed to the user as single words or as a compound word. Cases like “Verfügbarkeit” (constituents: “verfügbar”+“keit”) or “Birnen” (constituents: “Birne”+“n”) are not covered, since “keit” and “n” are neither legal (standalone) words nor syllables in the German language and are thus not contained in the recognition vocabulary. According to this teaching, an additional, separate frequency model is required to allow the resolving of problems of illegal word sequences during recombination of these arbitrary constituents into words (e.g., “vor”-“Verfügbar”). [0013]
  • The recent U.S. Pat. No. 5,754,972 teaches the introduction of a special dictation mode where the user either announces a “compound dictation mode” or the system is switched into a special recognition mode. This is exposed to the user by a specific user interface. In languages like German, compound words occur extremely frequently, so the need to switch to specific dictation modes is extremely cumbersome. In addition, the teaching of U.S. Pat. No. 5,754,972 is based on the same fundamental assumption as German patent DE 19510083 C2: compound words can be built only from constituents which represent legal words of the vocabulary on their own. To support the generation of new compound words, the spelling of the characters of the compound word is introduced within this special dictation mode. [0014]
  • A different approach is disclosed by G. Ruske, “Half words as processing units in automatic speech recognition”, Journal “Sprache und Datenverarbeitung”, Vol. 8, 1984, Parts 1/2, pp. 5-16. A word of the recognition vocabulary is usually described via its orthography (spelling) and its associated (multiple) pronunciations via smallest recognition units. The recognition units are the smallest recognizable units for the decoder. G. Ruske defines these recognition units based on a set of syllables (around 5000 in German). For each spelling of the vocabulary, a sequence of syllables describes the pronunciation(s) of the individual word. Thus, according to the teaching of Ruske, words of the vocabulary are set up by the recognition units of the decoder, which are identical to the syllables according to the pronunciation of the word in that language. The recombination of constituents to build words of the language is thus limited to the recognition units of the decoder. [0015]
  • 1.3 Objective of the Invention
  • The invention is based on the objective to provide a technology to increase the size of an active vocabulary recognized by speech recognition systems. It is a further objective of the current invention to reduce at the same time the algorithmic limitations on recognizing such extremely large vocabulary sizes for instance in terms of recognition accuracy, decoding speed and system resources (CPU, memory, disc), and thus to play a major role in classifying real-time speech recognition systems. [0016]
  • 2 Summary and Advantages of the Invention
  • The objective of the invention is solved by independent claim 1. The invention teaches a speech recognition system for recognition of spoken speech of a language comprising a segmented vocabulary. Said vocabulary comprises a multitude of entries, an entry being either identical to a legal word of said language or a constituent of a legal word of said language. A constituent can be an arbitrary sub-component of said legal word according to the orthography. Said constituent is limited neither to a syllable of said legal word nor to a recognition unit of said speech recognition system. [0017]
  • The technique proposed by the current invention allows for a significant compression of a vocabulary. The invention makes it possible to define and store N words but to generate and recognize up to M×N words (where M is language dependent) as combinations of the vocabulary entries. [0018]
  • Smaller vocabularies in addition allow a better estimation of the word (or piece) probabilities (uni-, bi-, tri-grams) within their context environment, as more occurrences are seen in the respective corpora. [0019]
  • Efficient storing is achieved via mapping the N words into a set of groups having the same pattern of constituents. Such an approach ensures logical completeness and coverage of the chosen vocabulary. Usually the user who dictates a word defined in the vocabulary expects that all derived forms are also available. For example, one doesn't expect that the word ‘use’ is in the vocabulary while ‘user’ is not. [0020]
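The grouping idea can be sketched in a few lines of Python. The word list and suffix inventory below are invented illustrations, not data from the patent; the point is only that several full word forms collapse onto one core plus a shared suffix set:

```python
def group_by_core(words, suffixes):
    """Map full word forms onto (core, suffix) groups using a known suffix set."""
    groups = {}
    for word in words:
        # Try the longest matching suffix first; "" stands for the bare core.
        for suffix in sorted(suffixes, key=len, reverse=True):
            if suffix and word.endswith(suffix) and len(word) > len(suffix):
                core = word[: len(word) - len(suffix)]
                break
        else:
            core, suffix = word, ""
        groups.setdefault(core, set()).add(suffix)
    return groups

# Six word forms collapse onto two cores plus shared suffix entries.
words = ["use", "user", "uses", "used", "work", "works"]
groups = group_by_core(words, {"", "r", "s", "d"})
# groups == {"use": {"", "r", "s", "d"}, "work": {"", "s"}}
```

In this toy setting the vocabulary needs only the two cores and the suffix inventory, yet all derived forms the user would expect (use, user, uses, used, ...) remain available.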
  • Complete flexibility (as with the current teaching) in defining the constituent sets for each language makes it possible to achieve the best compression. The constituents are not necessarily a linguistic or phonetic known unit of the language. [0021]
  • Additional advantages are accomplished by said vocabulary defining legal words of said language recognizable by said speech recognition system either by an entry itself or by recombination of up to S entries in combination representing a legal word of said language. The invention preferably suggests S being the number 2 or 3. [0022]
  • As any number of constituents can be used for recombination of legal words, the compression rate of such a segmented vocabulary can be very large. On the other hand, the compression rate and the algorithmic complexity for recombination are antagonistic properties of the proposed speech recognition system. To limit the number of segments to recombine constituents into legal words to S=2 or S=3 is an effective compromise. [0023]
  • According to a further embodiment of the proposed invention the speech recognition system, if based on a segmented vocabulary, comprises, if S is 2, constituents allowing for recombination of legal words from a prefix-constituent and a core-constituent, or from a core-constituent and a suffix-constituent, or from a prefix-constituent and a suffix-constituent. In addition said vocabulary comprises, if S is 3, constituents allowing for recombination of legal words from a prefix-constituent, a core-constituent and a suffix-constituent. [0024]
  • By distinguishing different types of constituents, properties of the individual languages can be reflected since typically not every constituent type can be recombined with any other constituent type. This approach simplifies the recognition process and eases the determination of recognition errors. [0025]
  • According to a further embodiment of the proposed invention, a constituent combination table is taught. It indicates which concatenations of said constituents are legal concatenations in said language. [0026]
  • Such constituent combination tables are performance and storage efficient means to define which constituent may be recombined with other constituents resulting in a legal constituent or legal word of said language. [0027]
  • According to a further embodiment of the proposed invention, said constituent combination table comprises in the case of S=2 or S=3, a core-prefix-matrix indicating whether a combination of a prefix-constituent and a core-constituent is a legal combination in said language or not; and/or a prefix-suffix-matrix indicating whether a combination of a prefix-constituent and a suffix-constituent is a legal combination in said language or not; and/or a prefix-prefix-matrix indicating whether a combination of a first-prefix-constituent and a second-prefix-constituent is a legal combination in said language building a third-prefix-constituent or not; and/or a core-suffix-matrix indicating whether a combination of a core-constituent and a suffix-constituent is a legal combination in said language or not. [0028]
  • The approach to reduce the question of legal recombinations to a sequence of decisions involving only two constituents reduces computation effort. Moreover, introduction of a collection of constituent combination tables depending on the types of constituents to be recombined increases efficiency of the recombination process. Depending on the type of constituents for certain cases, no legal combination is possible and thus no table access has to be performed. Also, in terms of access and storage requirements, it is more efficient to exploit a larger number of smaller tables than only a few larger tables. [0029]
  • According to a further embodiment of the proposed invention, said core-prefix-matrix and/or said core-suffix-matrix and/or said prefix-suffix-matrix and/or said prefix-prefix-matrix have a structure wherein said core-constituents, said prefix-constituents and said suffix-constituents are represented by unique numbers which form the indexes of said matrixes. [0030]
  • By encoding the various constituents as unique numbers and by setting up the various constituent combination tables based on these numbers, the complete recombination and recognition process is accelerated, as no translations between constituents and their encodings are required anymore. [0031]
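A minimal sketch of such a numerically indexed table, here a core-prefix-matrix; the German constituents and the legality entries are illustrative assumptions, not data from the patent:

```python
# Constituents are encoded once as unique integers, which then index the
# combination matrix directly, so no string lookups are needed at run time.
PREFIX_ID = {"ver": 0, "un": 1, "vor": 2}
CORE_ID = {"fügbar": 0, "gehen": 1}

# core_prefix[p][c] == 1 means prefix p before core c is a legal combination.
core_prefix = [
    [1, 1],  # "ver": "verfügbar", "vergehen"
    [0, 0],  # "un": no legal combination in this toy table
    [0, 1],  # "vor": "vorgehen"
]

def is_legal(prefix, core):
    """Decide legality of a prefix+core concatenation via one matrix access."""
    return core_prefix[PREFIX_ID[prefix]][CORE_ID[core]] == 1
```

In a real system the string-to-number mapping would be fixed when the segmented vocabulary is built, so the decoder and post-processor exchange only the integer codes.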
  • According to a further embodiment of the proposed invention, a separate post-processor is suggested responsive to an input comprising recognized constituents of said vocabulary. Said post-processor recombines said constituents into legal words of said language exploiting said constituent combination table. [0032]
  • Implementing the recombination of constituents into a separate post-processor has the advantage that the teaching of the current invention can be applied to any existing speech recognition system without further modification or enhancements. If recombination is done in a post-processor, the statistic correlation information of the language model has been exploited already when the post-processor becomes active. Thus, the reliability of the recognized constituents is already high when inputted to the post-processor and will be increased further by said post-processing. [0033]
  • A further embodiment of the proposed invention relates to details of the recombination. Several cases can be distinguished. [0034]
  • Said post-processor is responsive to two consecutive constituents representing a first prefix-constituent and a second prefix-constituent and recombines said first prefix-constituent and said second prefix-constituent into a third prefix-constituent if said prefix-prefix-matrix is indicating said first prefix-constituent and said second prefix-constituent as a legal combination in said language. If said prefix-prefix-matrix indicates said first prefix-constituent and said second prefix-constituent as an illegal combination in said language, said first prefix-constituent is dropped. Said post-processor is responsive to two consecutive constituents representing a prefix-constituent and a core-constituent and recombines said prefix-constituent and said core-constituent into a second core-constituent if said core-prefix-matrix is indicating said prefix-constituent and said core-constituent as a legal combination in said language. If said core-prefix-matrix indicates said prefix-constituent and said core-constituent as an illegal combination in said language, it replaces said prefix-constituent with an alternative prefix-constituent and recombines said alternative prefix-constituent and said core-constituent if said core-prefix-matrix is indicating said alternative prefix-constituent and said core-constituent as a legal combination in said language. [0035]
  • Said post-processor is responsive to two consecutive constituents representing a prefix-constituent and a suffix-constituent and recombines said prefix-constituent and said suffix-constituent into a second prefix-constituent if said prefix-suffix-matrix is indicating said prefix-constituent and said suffix-constituent as a legal combination in said language. [0036]
  • Said post-processor is responsive to two consecutive constituents representing a core-constituent and a suffix-constituent and recombines said core-constituent and said suffix-constituent into a second core-constituent if said core-suffix-matrix is indicating said core-constituent and said suffix-constituent as a legal combination in said language. [0037]
  • Besides recombining constituents, these features offer the advantage of detecting, and to a certain extent also correcting, recognition errors. [0038]
  • According to a further embodiment of the proposed invention, said prefix-constituent and said suffix-constituent are not recombined and said prefix-constituent is treated as a separate entry if said prefix-suffix-matrix is indicating said prefix-constituent and said suffix-constituent as an illegal combination in said language. Moreover, said core-constituent and said suffix-constituent are not recombined and said core-constituent is treated as a separate entry if said core-suffix-matrix is indicating said core-constituent and said suffix-constituent as an illegal combination in said language. [0039]
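The recombination rules described above can be sketched as a small post-processor. The constituent inventories, the matrices (modeled here as dicts for readability) and the alternative list are invented examples; a real implementation would operate on the numeric encodings and matrices discussed earlier:

```python
PREFIXES = {"ver", "un", "vor", "unver"}
CORES = {"fügbar", "gehen", "verfügbar", "vorgehen", "verfügbarkeit"}
SUFFIXES = {"keit", "n"}

PREFIX_PREFIX = {("un", "ver"): "unver"}            # legal prefix+prefix pairs
CORE_PREFIX = {("ver", "fügbar"): "verfügbar",      # legal prefix+core pairs
               ("vor", "gehen"): "vorgehen"}
CORE_SUFFIX = {("verfügbar", "keit"): "verfügbarkeit"}
PREFIX_SUFFIX = {}                                  # none legal in this example
ALTERNATIVES = {"vor": ["ver"]}                     # ranked alternative prefixes

def kind(c):
    if c in PREFIXES:
        return "prefix"
    if c in CORES:
        return "core"
    return "suffix"

def recombine(tokens):
    """Recombine recognized constituents into legal words, rule by rule."""
    out, cur = [], None
    for tok in tokens:
        if cur is None:
            cur = tok
            continue
        pair = (kind(cur), kind(tok))
        if pair == ("prefix", "prefix"):
            # Merge into a new prefix, or drop the first prefix if illegal.
            cur = PREFIX_PREFIX.get((cur, tok), tok)
        elif pair == ("prefix", "core"):
            if (cur, tok) in CORE_PREFIX:
                cur = CORE_PREFIX[(cur, tok)]
            else:
                # Illegal: try alternative prefixes in decreasing probability.
                for alt in ALTERNATIVES.get(cur, []):
                    if (alt, tok) in CORE_PREFIX:
                        cur = CORE_PREFIX[(alt, tok)]
                        break
                else:
                    out.append(cur)
                    cur = tok
        elif pair == ("prefix", "suffix") and (cur, tok) in PREFIX_SUFFIX:
            cur = PREFIX_SUFFIX[(cur, tok)]
        elif pair == ("core", "suffix") and (cur, tok) in CORE_SUFFIX:
            cur = CORE_SUFFIX[(cur, tok)]
        else:
            # Illegal combination: treat the left constituent as a word boundary.
            out.append(cur)
            cur = tok
    if cur is not None:
        out.append(cur)
    return out
```

Running `recombine(["vor", "fügbar", "keit"])` corrects the misrecognized prefix via the alternative list and yields `["verfügbarkeit"]`, illustrating the error-correction behaviour claimed above.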
  • This invention feature allows for determination of word boundaries. [0040]
  • According to a further embodiment of the proposed invention, said alternative prefix-constituent is retrieved from an alternative-list comprising alternative prefix-constituents to said prefix-constituents in decreasing matching probability. [0041]
  • Such an approach further increases recognition accuracy. [0042]
  • The objective of the invention is also solved by the independent method claim 14. Further embodiments of the proposed invention are suggested in the claims dependent on claim 14. [0043]
  • For details of the features, reference is made to the claims. The features correspond tightly to the device claims. As far as the advantages are concerned, the above statements relating to the claimed device are also applicable. [0044]
  • 3 BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a diagram reflecting the structure of the split-tables according to the current invention, modeling how legal words of a language are decomposed into constituents which then become part of the segmented vocabulary. [0045]
  • FIG. 2 visualizes the steps according to the state of the art teaching to compute a language model (LM) for the speech recognizer. [0046]
  • FIG. 3 visualizes the steps according to the current invention to compute a language model (LM) for the speech recognizer based on the segmented vocabulary. [0047]
  • FIG. 4 visualizes by example how a small segmented vocabulary supports the recognition of a rich set of legal words of a language by recombination of constituents. [0048]
  • FIG. 5 is a block diagram reflecting the structure of a state of the art decoder of a speech recognizer. [0049]
  • FIG. 6 is a block diagram reflecting the structure of a decoder of a speech recognizer according to the current teaching visualizing the new post-processor. [0050]
  • FIG. 7 visualizes two examples of constituent combination tables, a core-prefix-matrix and a prefix-prefix-matrix. [0051]
  • FIG. 8 depicts in the form of a flow diagram the exploitation of the core-prefix-matrix and the prefix-prefix-matrix during the execution of the post-processor. [0052]
  • 4 DESCRIPTION OF THE PREFERRED EMBODIMENT
  • If the current specification is referring to a certain natural language for outlining certain inventive features, this has to be understood as an example only. The inventive technology itself is applicable to any type of natural language. [0053]
  • The current specification is based on the IBM ViaVoice 98 speech recognition system. The invention is not limited to this specific speech recognition system; it applies to other speech recognition systems as well. [0054]
  • 4.1 Introduction [0055]
  • A Speech Recognizer is a device which automatically transcribes speech into text. It can be thought of as a voice activated “typewriter” in which the transcription is carried out by a computer program and the transcribed text appears on a workstation display. [0056]
  • For the purpose of this invention, the designation “word” denotes a word form defined by its spelling. Two differently spelled inflections or derivations of the same stem are considered different words (example: work, works, workers, . . . ). Homographs having different parts of speech or meaning constitute the same word. [0057]
  • The list of words that are chosen for a dictation task is called “vocabulary.” It is finite, pre-defined and constitutes the words that can be “printed” out by the speech recognizer. [0058]
  • For each word (according to the spelling) in the vocabulary, there is a phonetic description of the way(s) this word could be uttered; this is called the “pronunciation” of that word. One word (according to its spelling) can have more than one pronunciation. [0059]
  • A “Language Model” (LM) is a conceptual device which, given a string of past words, provides an estimate of the probability that any given word from an allowed vocabulary will follow the string, i.e., P(Wk|Wk-1, . . . , W1). In speech recognition, an LM is used to direct the hypothesis search for the sentence that was spoken. For reasons of estimation as well as estimate storage, the past strings on which the prediction is based are partitioned into a manageable number of n words. In a 3-gram LM, the prediction of a word depends on the past two words. [0060]
  • The probability is derived from the uni-gram, bi-gram, and tri-gram counts according to the following formula: [0061]
  • P(w3|w1 w2) = h3·(C123/C12) + h2·(C23/C2) + h1·(C3/N) + h0
  • where Cx is the count of word x, Cxy the count of occurrences of word x followed by word y, Cxyz the count of occurrences of the sequence of words x y z, N the total number of words in the corpus, and hx are weighting factors to combine the context information probabilities. [0062]
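This count-based estimate can be sketched on a toy corpus; the weighting factors h0..h3 and the corpus below are invented for illustration and mirror the formula term by term rather than reproducing the actual training code:

```python
from collections import Counter

corpus = "a b c a b d a b c".split()
N = len(corpus)                                   # total word count
uni = Counter(corpus)                             # C_x
bi = Counter(zip(corpus, corpus[1:]))             # C_xy
tri = Counter(zip(corpus, corpus[1:], corpus[2:]))  # C_xyz

def p_trigram(w1, w2, w3, h=(0.01, 0.09, 0.3, 0.6)):
    """P(w3 | w1 w2) = h3*C123/C12 + h2*C23/C2 + h1*C3/N + h0."""
    h0, h1, h2, h3 = h
    p = h0 + h1 * uni[w3] / N
    if uni[w2]:                                   # guard against zero counts
        p += h2 * bi[(w2, w3)] / uni[w2]
    if bi[(w1, w2)]:
        p += h3 * tri[(w1, w2, w3)] / bi[(w1, w2)]
    return p

# p_trigram("a", "b", "c") == 0.01 + 0.09*2/9 + 0.3*2/3 + 0.6*2/3 == 0.63
```

The `if` guards correspond to the corrective role of the nullgram/uni-gram terms: when a bigram or unigram context was never seen, the estimate falls back on the lower-order terms.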
  • The training “corpus” is the text data coming from various sources to be used in counting the statistics to build the LM. [0063]
  • 4.2 The Solution [0064]
  • The following invention relates to the recognition of spoken speech and solves problems related to the limited vocabulary size defined in and known to speech recognition systems. [0065]
  • The current invention is based on the fundamental idea of increasing the number of recognizable words stored within the vocabulary not by the straightforward approach of simply storing further complete legal words of the language and thus increasing the vocabulary size. The invention instead suggests creating a vocabulary consisting of a mixture of complete legal words of the language and of constituents of legal words of the language. By providing constituent combination tables indicating which recombinations of the constituents form legal words of the language, the recognition process of the speech recognizer is able to identify an increased set of recognizable words of that language forming the active vocabulary. In the subsequent specification, methods are described that allow the use of arbitrary constituents within the active recognition vocabulary. The current teaching allows constituents to be formed from any number of characters; constituents of even a single character are possible. Constituents are also not limited to specific relations to the recognition units of the decoder. Constituents according to the current invention are denoted as prefixes, suffixes (e.g., the suffix “n” denoting pluralization of nouns) or cores. In addition, the current invention does not require computing an additional, separate frequency model to resolve problems of illegal word sequences during recombination of these arbitrary constituents into words (e.g., “vor”-“Verfügbar”), thus saving quite some disk space and decoding overhead (minimizing alternate-path evaluation). The current teaching may be applied to segmentation and recombination of any number S of constituents. For efficiency reasons, using a smaller number of constituents, for example S=2 or S=3, turned out to be advantageous. [0066]
  • The following specification will refer to “prefix-core” segmentation as 2-pieces and to “prefix-core-suffix” segmentation as 3-pieces. FIG. 1 visualizes the structure of the “split-tables” showing how legal words in a language are split or recombined by constituents. The split-tables contain the word splitting information (either into 2 pieces or 3 or even more). How to obtain this table will be explained below. [0067]
  • Assume for a given language that a split table is obtained where the number of the original (non-split) words is much higher than the number of cores plus the set of prefixes and suffixes (pieces). The main idea then is to make use of this by defining to the recognizer a vocabulary of pieces, recognizing the pieces during run time, and displaying on the screen the original word after concatenating the pieces that constitute this word. [0068]
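This piece-vocabulary idea can be illustrated with a deliberately tiny English example; the pieces and the set of legal concatenations are invented for the sketch:

```python
# A piece vocabulary of 5 entries (2 prefixes + 3 cores), together with a
# table of legal prefix+core concatenations, covers 9 displayable words.
prefixes = ["re", "un"]
cores = ["do", "lock", "tie"]
legal = {(p, c) for p in prefixes for c in cores}  # all 6 pairs legal here

def active_vocabulary(cores, legal):
    """Every core stands alone; every legal (prefix, core) pair is a word."""
    words = set(cores)
    for p, c in legal:
        words.add(p + c)
    return words

vocab = active_vocabulary(cores, legal)
# len(vocab) == 9: do, lock, tie, redo, relock, retie, undo, unlock, untie
```

At run time the decoder only ever recognizes the 5 pieces; the concatenation step then reconstructs the original 9-word surface vocabulary for display, which is exactly the compression the split table is meant to deliver.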
  • Moreover, the current invention at no time exposes to the user any special handling of compound words and/or words preceded or followed by a set of prefixes/suffixes, and no specific dictation modes for compound word handling have to be introduced. [0069]
  • The current invention suggests using phones as the smallest recognizable unit for the decoder process. Thus, each entry of the increased vocabulary consists of a spelling (which can be an arbitrary constituent) and associated phones identifying allowable pronunciations. Against this background, if the current disclosure makes use of the term “syllable”, it has to be understood as a potential prefix or suffix of words (German: “vor”, “keit”; Arabic: “uuna”), i.e., a set of characters on the orthographic level having an associated phone-based pronunciation. Therefore, a syllable (in terms of orthography) according to the current invention is not limited to a syllable in terms of pronunciation. The Arabic suffix “uuna” (masculine, plural) clearly consists of 2 syllables (based on Ruske's definition of the smallest recognition unit) but is represented by 4 phones in the system according to this invention. The recombination of constituents to build words of the language is thus based on characters and not on recognition units. [0070]
  • As the proposed technique therefore essentially covers the preprocessing of large corpora (for setting up the new segmented type of vocabulary) and post-processing, i.e., recombination of constituents, it can be applied to any existing decoder without further modifications or enhancements. [0071]
  • It has to be pointed out that the segmentation and recombination approach of the current invention is not used for the recognition of compound words (from other legal words) in clear distinction to other state of the art teaching; instead the invention allows for storage-efficient realization of vocabularies. The reduced size of segmented vocabularies on the other hand may then be used to again increase the vocabulary for increased recognition and coverage of the language. [0072]
  • 4.2.1 Limitations of State of the Art Speech Recognition Systems [0073]
  • Current speech recognition systems can only handle up to 64 k (128 k in the very recent version of the IBM ViaVoice 98) pronunciations. This limitation is due to the number of bits provided by a certain processor architecture for addressing the storage contents. This means that if a given language has an average of two pronunciations per word, the number of words is often limited to a maximum of 32 k (64 k in ViaVoice 98) words. [0074]
  • Many languages are characterized by the use of “inflection”: one basic word form can generate several hundred word forms. For instance, a relatively small vocabulary of 35 k English words can represent more than 99% of everyday spoken English. A different situation exists for inflected languages that use very large vocabularies. For example, the Russian language requires at least 400 k words to represent more than 99% of everyday spoken Russian, and Arabic requires at least 200 k words to represent 99% of everyday spoken Arabic. Thus, the size of vocabularies in these languages is very large. Such a large vocabulary cannot be used in current real-time speech recognizers for the limitations mentioned above. [0075]
  • In general, highly inflected languages, in addition to possessing a very rich morphology, are also highly combinatorial, in that several hundred words can be generated from one core preceded/followed by one or more prefixes/suffixes. Thus, one would expect to achieve a great deal of coverage by handling those language words as a combination of these pieces. [0076]
  • 4.2.2 Building the Language Model [0077]
  • The traditional building processes of a language model (LM) are depicted in FIG. 2 and comprise the following steps: [0078]
  • 1. Collecting corpora that represent the domain where the Speech Recognizer is to be used. [0079]
  • 2. Cleaning (e.g., removing tables and formatting information from the text) and tokenization (e.g., converting a sentence like “see you at 14:00.” to “see you at 2 o'clock PM.”). [0080]
  • 3. Selecting the vocabulary by counting the frequency of occurrence of each word and choosing the top N (where N <= 32 k) most frequently occurring words. [0081]
  • 4. Building the LM by computing the 3-gram counts for those N words. [0082]
  • To handle the constituents according to the split-table (comprising a mixture of legal words and actual constituents), the current invention suggests introducing a further step (301) before the selection of the vocabulary, as shown in FIG. 3. Within this new step (301), the splitting information is applied to the corpora, breaking words into pieces. The new corpora are then used to select the vocabulary of pieces; in this case, the vocabulary will be the top N pieces. Therefore, as the vocabulary consists of legal words and constituents, the LM is actually computed based on the sequence statistics of words as well as mixtures of words and constituents (depending on the nature of the constituents as defined within the split-table). [0083]
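To make the new segmentation step (301) concrete, the following sketch applies a split-table to a toy corpus and then selects the top-N pieces as the vocabulary. The split-table entries, corpus words and N are illustrative assumptions only, not data from the patent:

```python
from collections import Counter

# Hypothetical split-table: maps a full word to its constituent pieces.
split_table = {
    "working": ["work", "ing"],
    "workers": ["work", "ers"],
    "used":    ["use", "ed"],
}

def segment_corpus(corpus_words, split_table):
    """Step (301): rewrite the corpus word stream as a piece stream."""
    pieces = []
    for word in corpus_words:
        # Words without a split-table entry pass through unchanged.
        pieces.extend(split_table.get(word, [word]))
    return pieces

def select_vocabulary(pieces, n):
    """Pick the top-N most frequent pieces as the segmented vocabulary."""
    return [piece for piece, _ in Counter(pieces).most_common(n)]

corpus = ["working", "workers", "used", "the", "working"]
pieces = segment_corpus(corpus, split_table)
vocab = select_vocabulary(pieces, 4)
```

The 3-gram counts for the LM would then be collected over `pieces` rather than over the original words, so the statistics mix words and constituents exactly as the paragraph above describes.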
  • 4.2.3 The Split-table for a Given Language [0084]
  • The process of obtaining the split-table is very much dependent on the nature of each language and how its derivatives are formed. If English is chosen as an example (although it is not a highly inflected language), a logical suffix set might be {s, ed, ing, er, ers, ly, . . . } and a prefix set might be {ab, un, re, pre, . . . } (this has to be understood as an example only). However, the current invention goes beyond this. The prefixes/suffixes are not necessarily a linguistically or phonetically known unit of the language but should be chosen to achieve the maximum compression ratio of the selected vocabulary (pieces) compared to the number of real valid words that can be generated during recognition; i.e., the current teaching allows using those constituents that result in a maximum compression of the vocabulary. The prefix and suffix sets can also be chosen to contain (any) part of compound words or even syllables. In the following example, the split-table is obtained by hand. In general, however, one could use clustering techniques to obtain the table, using, e.g., linguistically motivated pieces as starting points. [0085]
  • For example, if we choose the prefix set {c,m,h} and the suffix set {s,ed,ing,er,ers} for cores like (at, all, work, use), the words of FIG. 4 can be generated. In this example, 3 prefixes + 4 cores + 5 suffixes, occupying only 12 vocabulary entries in a recognition system, are able to generate 23 valid English words. This demonstrates the compression of a vocabulary achievable by the current invention. On the other hand, this advantage gives freedom to introduce additional words and constituents to the vocabulary not covered so far and thus increases the spectrum of words recognizable by the speech recognition system. [0086]
  • The following benefits are achievable by the current teaching: [0087]
  • 1. Compression of the vocabulary: the invention allows defining and storing N words but generating and recognizing up to M×N words (where M is language-dependent). [0088]
  • 2. Smaller vocabularies additionally allow a better estimation of the word (or piece) probabilities (uni-, bi- and tri-grams within their context environment), as more occurrences are seen in the respective corpora. [0089]
  • 3. Efficient storing via mapping the N words into a set of groups having the same prefix/suffix pattern. Such an approach ensures logical completeness and coverage of the chosen vocabulary. Usually the user who dictates a word defined in the vocabulary expects that all derived forms are also available. For example, one does not expect that the word ‘use’ is in the vocabulary while ‘user’ is not. [0090]
  • 4. Flexibility of defining the prefix and suffix sets for each language to achieve the best compression. The prefix/suffix is not necessarily a linguistically or phonetically known unit of the language. In the above English example, the prefix set {c,m,h} does not have a linguistic or phonetic definition but has been found to give a good compression ratio (maximizing M) for this set of words. [0091]
  • It is very important to point out that the suffix (or prefix) set can be NULL and therefore a prefix set and a core set can generate the words. Also, the prefix set can be NULL and in this case a suffix set and a core set can generate the words. Actually any type of constituent being part of the segmented vocabulary can be combined with one another to reconstruct legal words of the language. [0092]
  • To illustrate with an example from the Arabic language: the Arabic word ‘wasayaktubuunahaa’ (“and they will write it”) can be segmented as ‘wasaya+ktub+uunahaa’ (wasaya: “and will”, ktub: “write”, uunahaa: “they it”). The core is ‘ktub’, to which are attached the prefixes ‘wa’ (“and”), ‘sa’ (future tense) and ‘ya’ (3rd person), and the suffixes ‘uuna’ (masc. pl.) and ‘haa’ (“it”). The Arabic word ‘wasayaktubuunahaa’ thus maps to a complete English sentence: “and they” ((masc.) pl.) “will write it”. Arabic is normally written without short vowels and other diacritics that mark gemination, zero vowel, and various inflectional case endings. The word in the above example is normally written ‘wsyktbuunhaa’. [0093]
  • Similar segmentations in prefix-core-suffix parts can be applied in languages like German, Czech or Russian or of course any other (highly inflected) language. [0094]
  • 4.2.4 The Text Post-processor [0095]
  • FIG. 5 shows a block diagram of a traditional speech recognizer. The speech signal is first processed by the acoustic processor (501), where digital signal processing features (like energy, cepstrum, . . . ) are extracted. A fast match (502) technique is implemented to determine a short list of candidate words quickly before performing the expensive detailed match (503). The LM (504) is then consulted to determine (505) the corresponding sequence of words. Thus, the detailed match technique is applied to promising candidate words only. [0096]
  • In case the vocabulary consists of legal words and/or constituents (or “pieces”, i.e., prefixes, cores, suffixes) of legal words or mixtures thereof, a post-processing step (606) is required, as visualized in FIG. 6, to concatenate those pieces and display them to the user as valid words. In this invention, a “core-tag table” is generated automatically from the split-table. First the split-table is sorted by cores, and then the cores are grouped according to the prefix/suffix set that can be attached thereto. A “prefix (suffix) tag” is simply a number given to each prefix (suffix). Also, a “core-prefix matrix” is formed whose element (core_no, prefix_no) denotes whether the combination is a valid concatenation (1) or not (0). Another “prefix-prefix matrix” is formed for validating the concatenation of two prefixes. [0097]
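A minimal sketch of how tag tables and a core-prefix matrix could be derived from a split-table. The split-table contents here are hypothetical English pairs chosen for brevity:

```python
# Hypothetical miniature split-table: (prefix, core) pairs forming legal words;
# an empty prefix marks a core that also stands alone.
split_table = [("c", "all"), ("c", "at"), ("h", "at"), ("m", "at"), ("", "work")]

# Tag tables: each distinct core and prefix gets a unique number.
cores    = sorted({core for _, core in split_table})
prefixes = sorted({p for p, _ in split_table if p})
core_tag   = {c: i for i, c in enumerate(cores)}
prefix_tag = {p: i for i, p in enumerate(prefixes)}

# Core-prefix matrix: 1 where the split-table lists the pair as a legal word.
matrix = [[0] * len(prefixes) for _ in cores]
for p, c in split_table:
    if p:
        matrix[core_tag[c]][prefix_tag[p]] = 1

def is_valid(prefix, core):
    """Check the matrix cell for a candidate prefix/core concatenation."""
    return bool(matrix[core_tag[core]][prefix_tag[prefix]])
```

During recognition, a lookup like `is_valid("c", "at")` decides whether the concatenated form may appear on the screen; a prefix-prefix matrix would be built the same way from prefix pairs.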
  • A specific implementation of the current invention relates to an Arabic speech recognition system based on the IBM ViaVoice Gold engine (where the limitation of 64 k pronunciations still existed). This implementation was built using the 2-pieces segmentation technique as described above. [0098]
  • A 32 k words vocabulary (to be more precise: according to the current invention the entries within the vocabulary comprise a mixture/collection of prefixes and cores and complete legal words of the language) is used to build the LM and the 3-gram counts are collected on the 2-pieces segmented corpora. [0099]
  • The possible pronunciations of each word are collected to build the baseform pool (needed to tell the recognizer how words are pronounced). The baseform pool contains 60 k baseforms with an average of 2 pronunciations of each word. [0100]
  • Automatically, a “core-tag table” is obtained from the split-table, where the 32 k words are classified into 380 groups. A core-tag table is a table relating the cores, defined according to the split-table, to a certain range of natural numbers serving as unique indexes for quickly identifying a certain core. It is, for instance, of the structure: Word=Tag000, . . . Word=Tag380. Along the same lines, a prefix-tag table has been generated, assigning to the prefixes another range of natural numbers serving as unique indexes. It is, for instance, of the structure: Prefix=Tag400, . . . Prefix=Tag500. [0101]
  • As one of the representatives of the constituent combination tables, a “core-prefix matrix” of 380×100 entries has been formed of 1's (valid combinations) and 0's (not valid), to be used during recognition to stop non-valid combinations from appearing on the screen. FIG. 7 gives an example of a core-prefix matrix, visualizing the row indexes (000-380) as representations of the cores and the column indexes (001-100) as representations of the prefixes, while the cells indicate the (in)validity of the specific prefix/core recombination via 0 and 1. Another representative of the constituent combination tables is the “prefix-prefix matrix” of 100×100 entries, also visualized in FIG. 7. It indicates by 1's and 0's whether two prefixes recombine into a valid third prefix and is likewise used during recognition to stop non-valid combinations from appearing on the screen. As also apparent from FIG. 7 with respect to the prefix-prefix matrix, the row indexes (000-100) and the column indexes (001-100) are the representations of the two prefixes, while the cells indicate via 0 and 1 the (in)validity of the specific recombination of the two prefixes into a third prefix. [0102]
  • For this specific implementation of the invention, the exploitation of the core-prefix matrix and the prefix-prefix matrix during the execution of the post-processor (606) is discussed. [0103]
  • After a constituent is recognized by the speech recognizer, it is processed according to the following logic, which is also visualized by the flow-diagram of FIG. 8: [0104]
  • 1. If the piece/constituent has a tag in the range from 400-500 (which identifies it as a prefix), subtract 400 to get PREFIX_NO1 (801, 802). [0105]
  • 2. Get the next piece from the recognizer and check: [0106]
  • 2.1 If it is a core, get the tag, which gives the GROUP_NO (808). Check element (GROUP_NO, PREFIX_NO1) of the core-prefix matrix (809): [0107]
  • 2.1.1 If 1, concatenate the prefix and the core and display (810). [0108]
  • 2.1.2 If 0, replace the prefix with a valid prefix, concatenate with the core and display. As one possibility, said valid prefix might be determined from an “alternative-list” for that prefix. For each recognized word (here a prefix) the decoder could send the best hypothesis as well as the next close matching words. This list could be used as an alternative-list to improve the recombination of pieces to words. [0109]
  • 2.2 If it is a second prefix, get PREFIX_NO2 (804) and check the prefix-prefix matrix element (PREFIX_NO1, PREFIX_NO2) (805): [0110]
  • 2.2.1 If 1, change PREFIX_NO1 to the new PREFIX_NO which corresponds to the concatenation of the two prefixes and clear PREFIX_NO2. Get the next piece and repeat at step 1 (806). [0111]
  • 2.2.2 If 0, copy PREFIX_NO2 to PREFIX_NO1 and clear PREFIX_NO2. Get the next piece and repeat at step 1 (807). [0112]
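The decision logic above can be sketched as a loop over the recognized pieces. All table contents, tag values and helper structures below (`concat_prefix`, `alternatives`) are illustrative assumptions modeled on the description and FIG. 8, not the shipped ViaVoice code:

```python
PREFIX_BASE = 400  # tags 400-500 identify prefixes; other tags are core groups

def postprocess(pieces, tag, spell, core_prefix, prefix_prefix,
                concat_prefix, alternatives):
    """Recombine recognized constituents into display words.
    tag: piece -> tag number; spell: (kind, number) -> spelling;
    core_prefix/prefix_prefix: validity matrices;
    concat_prefix: (p1, p2) -> merged prefix number;
    alternatives: fallback prefix for an invalid prefix/core pair."""
    out = []
    prefix_no = None
    for piece in pieces:
        t = tag[piece]
        if PREFIX_BASE <= t <= 500:                 # step 1: a prefix
            p2 = t - PREFIX_BASE
            if prefix_no is None:
                prefix_no = p2
            elif prefix_prefix[prefix_no][p2]:      # 2.2.1: merge two prefixes
                prefix_no = concat_prefix[(prefix_no, p2)]
            else:                                   # 2.2.2: drop the first prefix
                prefix_no = p2
        else:                                       # 2.1: a core
            group_no = t
            if prefix_no is None:
                out.append(spell[("core", group_no)])
            elif core_prefix[group_no][prefix_no]:  # 2.1.1: valid, concatenate
                out.append(spell[("prefix", prefix_no)] + spell[("core", group_no)])
            else:                                   # 2.1.2: substitute a valid prefix
                alt = alternatives[prefix_no]
                out.append(spell[("prefix", alt)] + spell[("core", group_no)])
            prefix_no = None
    return out

# Toy data modeled on the Arabic example: 'wa' and 'sa' merge into one prefix.
tag = {"wa": 400, "sa": 401, "ktub": 0}
spell = {("prefix", 0): "wa", ("prefix", 1): "sa",
         ("prefix", 2): "wasa", ("core", 0): "ktub"}
prefix_prefix = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]   # only wa+sa is legal
concat_prefix = {(0, 1): 2}                          # wa+sa -> prefix "wasa"
core_prefix = [[1, 1, 1]]                            # ktub accepts all prefixes
words = postprocess(["wa", "sa", "ktub"], tag, spell,
                    core_prefix, prefix_prefix, concat_prefix, alternatives={})
```

Here the two prefixes are first merged via the prefix-prefix matrix and then attached to the core, so the three recognized pieces surface as the single word "wasaktub".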
  • Based on these embodiments, the engine sends the correct prefix sub-word combination in 95% of the cases. [0113]
  • The above teaching, outlined with respect to the recombination of prefix with prefix as well as prefix with core, can be generalized to the recombination of any two constituents. Of course, the recombination process operates recursively on the recombined constituents it creates. For example, two consecutive constituents representing a first prefix-constituent and a second prefix-constituent may be recombined into a third prefix-constituent based upon a prefix-prefix-matrix indicating a legal combination in said language; or two consecutive constituents representing a prefix-constituent and a core-constituent may be recombined into a second core-constituent based upon a core-prefix-matrix indicating a legal combination in said language; or two consecutive constituents representing a prefix-constituent and a suffix-constituent may be recombined into a second prefix-constituent based upon a prefix-suffix-matrix indicating a legal combination in said language; or two consecutive constituents representing a core-constituent and a suffix-constituent may be recombined into a second core-constituent based upon a core-suffix-matrix indicating a legal combination in said language. [0114]
  • As standard behavior for handling of any two constituents for which the corresponding constituent combination table indicates an illegal combination, the two constituents are not recombined and said constituents are treated as separate entries in said language. [0115]
  • Moreover, the above example uses matrix technology for implementing the various constituent combination tables. Further improvements may be achieved using sparse-matrix technology for implementing the matrixes. [0116]
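One simple way to realize the sparse-matrix idea (an assumption for illustration; the description does not fix a representation) is to store only the coordinates of the 1-entries, since the combination tables are mostly zeros:

```python
# Dense toy constituent-combination table (in practice, e.g., 380x100).
dense = [[0, 1, 0],
         [1, 0, 0],
         [0, 0, 0]]

# Sparse form: keep only the (row, column) coordinates of the 1-entries.
sparse = {(r, c) for r, row in enumerate(dense) for c, v in enumerate(row) if v}

def is_valid(core_no, prefix_no):
    """Membership test replaces the dense matrix lookup."""
    return (core_no, prefix_no) in sparse
```

Storage then grows with the number of legal combinations rather than with the full matrix dimensions, while the validity check stays a constant-time lookup.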
  • 4.2.5 The Coverage of Legal Words of a Language by the Approach of Segmented Vocabularies [0117]
  • In the following, the efficiency of the current teaching of segmented vocabularies in increasing the coverage of a certain language while decreasing the size of the vocabulary at the same time is demonstrated on the basis of the Arabic language. Vocabulary size and language coverage according to the state of the art are compared to those of the segmentation approach of the current teaching. [0118]
  • The state of the art situation is compared to a segmentation approach using 2 constituents, prefix and core, with a corresponding split-table and a core-prefix-matrix. [0119]
  • In addition, this situation is compared to a segmentation approach using up to 3 constituents, prefix, core and suffix. The Arabic words are treated as having two or three elements: prefix-core or prefix-core-suffix. [0120]
  • The 3-constituent vocabulary has been set up with 100 prefixes, 200 suffixes and 29000 cores. The 2-constituent vocabulary was formed by concatenating the core and the following suffix of the 3-constituent vocabulary to form a new core; it contains 100 prefixes and 604 k cores. Thus the 3-constituent vocabulary was originally of prefix-core-suffix structure and has been transformed into a 2-constituent vocabulary. [0121]
  • A corpus of 100M words (journalism, business correspondence and encyclopedia) has been used to test the coverage and to build the LM. [0122]
  • To show the efficiency of the underlying idea of segmented vocabularies, the following steps have been performed: [0123]
  • 1. The unique words constituting the corpora are collected. [0124]
  • 2. The segmentation is reflected into the corpora to form new, segmented corpora. [0125]
  • 3. The coverage versus the number of words in the segmented vocabulary has been computed. As the result, by segmentation of the original words into 2 or 3 pieces, coverage has been significantly increased compared to an unsegmented vocabulary of equal size. The following table shows how 30 k constituents (from a 3-pieces segmentation) achieve 99% coverage and 32 k constituents (from a 2-pieces segmentation) achieve 97%, whereas 200 k and 115 k original words (non-segmented), respectively, are required to achieve the same coverage. [0126]

              Before Segmentation   After Segmentation   After Segmentation
    Coverage  Original Words        2 Constituents       3 Constituents
    99%       200K                  46K                  30K
    97%       115K                  32K                  no need
    93%       46K                   no need              no need
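Coverage figures like those in the table can be computed with the simple token-coverage routine below; the toy corpus is an illustrative assumption standing in for the 100M-word corpus:

```python
from collections import Counter

def coverage_curve(corpus_tokens, vocab_size):
    """Fraction of corpus tokens covered by the vocab_size most frequent types."""
    counts = Counter(corpus_tokens)
    covered = sum(n for _, n in counts.most_common(vocab_size))
    return covered / len(corpus_tokens)

# Toy (segmented) corpus: frequent pieces cover most tokens.
corpus = ["wa", "ktub", "wa", "sa", "ktub", "wa", "ya", "rare"]
```

Running `coverage_curve` once on the original corpus and once on its segmented counterpart, for the same vocabulary size, reproduces the comparison above: the segmented stream reaches a given coverage with far fewer vocabulary entries.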
  • 5 Acronyms [0127]
  • LM Language Model [0128]

Claims (23)

What is claimed is:
1. A speech recognition system for recognition of spoken speech of a language, the speech recognition system comprising:
a vocabulary including a multitude of words of said language recognizable by said speech recognition system, the vocabulary comprising a multitude of entries wherein an entry is either identical to a legal word of said language, or an entry is a constituent of a legal word of said language and said constituent is an arbitrary sub-component of said legal word according to the orthography, and further wherein said constituent is not limited to a syllable of said legal word, or said constituent is not limited to a recognition unit of said speech recognition system.
2. A speech recognition system according to claim 1, wherein said vocabulary defines legal words of said language recognizable by said speech recognition system either by an entry itself, or by legal recombination of up to S entries in combination representing a legal word of said language.
3. A speech recognition system according to claim 2, wherein S is 2 or 3.
4. A speech recognition system according to claim 3, wherein said vocabulary comprises, if S is 2, constituents allowing for recombination of legal words from a prefix-constituent and a core-constituent, or from a core-constituent and a suffix-constituent, or from a prefix-constituent and a suffix-constituent and further wherein said vocabulary comprises, if S is 3, constituents allowing for recombination of legal words from a prefix-constituent, a core-constituent and a suffix-constituent.
5. A speech recognition system according to claim 2, further comprising a constituent-combination-table indicating which concatenations of said constituents are legal concatenations in said language.
6. A speech recognition system according to claim 5, further wherein said constituent-combination-table comprises, in the case of S=2 or S=3, at least one of:
a core-prefix-matrix indicating whether a combination of a prefix-constituent and a core-constituent is a legal combination in said language or not;
a prefix-suffix-matrix indicating whether a combination of a prefix-constituent and a suffix-constituent is a legal combination in said language or not;
a prefix-prefix-matrix indicating whether a combination of a first-prefix-constituent and a second-prefix-constituent is a legal combination in said language building a third-prefix-constituent or not; and
a core-suffix-matrix indicating whether a combination of a core-constituent and a suffix-constituent is a legal combination in said language or not.
7. A speech recognition system according to claim 6, further wherein, for at least one of said core-prefix-matrix, said core-suffix-matrix, said prefix-suffix-matrix and said prefix-prefix-matrix, said core-constituents, said prefix-constituents and said suffix-constituents are represented by unique numbers forming the indexes of said matrixes.
8. A speech recognition system according to claim 7, further comprising a post-processor responsive to an input comprising recognized constituents of said vocabulary for recombination of said constituents into legal words of said language exploiting said constituent-combination-table.
9. A speech recognition system according to claim 8, further wherein said post-processor is at least one of:
(a) responsive to two consecutive constituents representing a first prefix-constituent and a second prefix-constituent by one of:
recombining said first prefix-constituent and said second prefix-constituent into a third prefix-constituent if said prefix-prefix-matrix is indicating said first prefix-constituent and said second prefix-constituent as a legal combination in said language, and
dropping said first prefix-constituent if said prefix-prefix-matrix is indicating said first prefix-constituent and said second prefix-constituent as an illegal combination in said language;
(b) responsive to two consecutive constituents representing a prefix-constituent and a core-constituent by one of:
recombining said prefix-constituent and said core-constituent into a second core-constituent if said core-prefix-matrix is indicating said prefix-constituent and said core-constituent as a legal combination in said language, and
if said core-prefix-matrix is indicating said prefix-constituent and said core-constituent as an illegal combination in said language, by replacing said prefix-constituent with an alternative prefix-constituent and recombining said alternative prefix-constituent and said core-constituent if said core-prefix-matrix is indicating said alternative prefix-constituent and said core-constituent as a legal combination in said language;
(c) responsive to two consecutive constituents representing a prefix-constituent and a suffix-constituent by:
recombining said prefix-constituent and said suffix-constituent into a second prefix-constituent if said prefix-suffix-matrix is indicating said prefix-constituent and said suffix-constituent as a legal combination in said language; and
(d) responsive to two consecutive constituents representing a core-constituent and a suffix-constituent by:
recombining said core-constituent and said suffix-constituent into a second core-constituent if said core-suffix-matrix is indicating said core-constituent and said suffix-constituent as a legal combination in said language.
10. A speech recognition system according to claim 9, further comprising at least one of the steps of:
not recombining said prefix-constituent and said suffix-constituent and treating said prefix-constituent as a separate entry if said prefix-suffix-matrix is indicating said prefix-constituent and said suffix-constituent as an illegal combination in said language; and
not recombining said core-constituent and said suffix-constituent and treating said core-constituent as a separate entry if said core-suffix-matrix is indicating said core-constituent and said suffix-constituent as an illegal combination in said language.
11. A speech recognition system according to claim 9, further wherein said alternative prefix-constituent is retrieved from an alternative-list, said alternative-list comprising alternative prefix-constituents to said prefix-constituents in decreasing matching probability.
12. A speech recognition system according to claim 1, further comprising a language-model of said language being computed based on the N-gram frequencies of a sequence of N entries of said vocabulary.
13. A speech recognition system according to claim 1, wherein phones are used as smallest recognition units.
14. A method for use with a speech recognition system for recognition of spoken speech of a language, said method using a vocabulary including a multitude of words of said language recognizable by said method, said method comprising the step of:
identifying, in said spoken speech, entries of said vocabulary wherein an entry is either identical to a legal word of said language, or an entry is a constituent of a legal word of said language and said constituent is an arbitrary sub-component of said legal word according to the orthography, and further wherein said constituent is not limited to a syllable of said legal word or said constituent is not limited to a recognition unit of said speech recognition system.
15. A method according to claim 14, further comprising the step of:
post-processing an input comprising recognized constituents of said vocabulary for recombination of said constituents into legal words, said post-processing-step recombining up to S constituents if a constituent-combination-table indicates that said recognized constituents are a legal concatenation in said language.
16. A method according to claim 15, wherein S is 2 or 3.
17. A method according to claim 16, wherein the post-processing step further comprises:
recombining, if S is 2, legal words from a prefix-constituent and a core-constituent, or from a core-constituent and a suffix-constituent, or from a prefix-constituent and a suffix-constituent, and recombining, if S is 3, legal words from a prefix-constituent, a core-constituent and a suffix-constituent.
18. A method according to claim 17, wherein the post-processing step further comprises at least one of:
(a) recombining two consecutive constituents representing a first-prefix-constituent and a second-prefix-constituent by one of:
recombining said first prefix-constituent and said second prefix-constituent into a third prefix-constituent if a prefix-prefix-matrix is indicating said first prefix-constituent and said second prefix-constituent as a legal combination in said language, and dropping said first prefix-constituent if said prefix-prefix-matrix is indicating said first prefix-constituent and said second prefix-constituent as an illegal combination in said language;
(b) recombining two consecutive constituents representing a prefix-constituent and a core-constituent by one of:
recombining said prefix-constituent and said core-constituent into a second core-constituent if said core-prefix-matrix is indicating said prefix-constituent and said core-constituent as a legal combination in said language, and
if said core-prefix-matrix is indicating said prefix-constituent and said core-constituent as an illegal combination in said language, replacing said prefix-constituent with an alternative prefix-constituent and recombining said alternative prefix-constituent and said core-constituent if said core-prefix-matrix is indicating said alternative prefix-constituent and said core-constituent as a legal combination in said language;
(c) recombining two consecutive constituents representing a prefix-constituent and a suffix-constituent by recombining said prefix-constituent and said suffix-constituent into a second prefix-constituent if said prefix-suffix-matrix is indicating said prefix-constituent and said suffix-constituent as a legal combination in said language; and
(d) recombining two consecutive constituents representing a core-constituent and a suffix-constituent by recombining said core-constituent and said suffix-constituent into a second core-constituent if said core-suffix-matrix is indicating said core-constituent and said suffix-constituent as a legal combination in said language.
19. A method according to claim 18, further comprising at least one of:
not recombining said prefix-constituent and said suffix-constituent and treating said prefix-constituent as a separate entry if said prefix-suffix-matrix is indicating said prefix-constituent and said suffix-constituent as an illegal combination in said language; and
not recombining said core-constituent and said suffix-constituent and treating said core-constituent as a separate entry if said core-suffix-matrix is indicating said core-constituent and said suffix-constituent as an illegal combination in said language.
20. A method according to claim 18, wherein said alternative prefix-constituent is retrieved from an alternative-list, said alternative-list comprising alternative prefix-constituents to said prefix-constituents in decreasing matching probability.
21. A method according to claim 18, wherein the post-processing step further comprises:
representing said core-constituents, said prefix-constituents and said suffix-constituents by unique numbers used as indexes of said core-prefix-matrix and/or said core-suffix-matrix and/or said prefix-suffix-matrix and/or said prefix-prefix-matrix.
22. A method according to claim 14, further comprising the step of using a language-model of said language being based on the N-gram frequencies of a sequence of N entries of said vocabulary.
23. An article of manufacture for use with a speech recognition system for recognition of spoken speech of a language, said speech recognition system using a vocabulary including a multitude of words of said language, the article of manufacture comprising a machine readable medium containing one or more programs which when executed implement the step of:
identifying, in said spoken speech, entries of said vocabulary wherein an entry is either identical to a legal word of said language, or an entry is a constituent of a legal word of said language and said constituent is an arbitrary sub-component of said legal word according to the orthography, and further wherein said constituent is not limited to a syllable of said legal word or said constituent is not limited to a recognition unit of said speech recognition system.
US09/382,743 1998-08-28 1999-08-25 Segmentation technique increasing the active vocabulary of speech recognizers Abandoned US20020099543A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP98116278.7 1998-08-28
EP98116278 1998-08-28

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/298,494 US6738741B2 (en) 1998-08-28 2002-11-18 Segmentation technique increasing the active vocabulary of speech recognizers

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US10/298,494 Continuation US6738741B2 (en) 1998-08-28 2002-11-18 Segmentation technique increasing the active vocabulary of speech recognizers

Publications (1)

Publication Number Publication Date
US20020099543A1 true US20020099543A1 (en) 2002-07-25

Family

ID=8232527

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/382,743 Abandoned US20020099543A1 (en) 1998-08-28 1999-08-25 Segmentation technique increasing the active vocabulary of speech recognizers
US10/298,494 Expired - Fee Related US6738741B2 (en) 1998-08-28 2002-11-18 Segmentation technique increasing the active vocabulary of speech recognizers

Family Applications After (1)

Application Number Title Priority Date Filing Date
US10/298,494 Expired - Fee Related US6738741B2 (en) 1998-08-28 2002-11-18 Segmentation technique increasing the active vocabulary of speech recognizers

Country Status (5)

Country Link
US (2) US20020099543A1 (en)
AT (1) AT374421T (en)
DE (1) DE69937176T2 (en)
PL (1) PL335150A1 (en)
RU (1) RU99118670A (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7080005B1 (en) * 1999-07-19 2006-07-18 Texas Instruments Incorporated Compact text-to-phone pronunciation dictionary
US20020099544A1 (en) * 2001-01-24 2002-07-25 Levitt Benjamin J. System, method and computer program product for damage control during large-scale address speech recognition
US7181398B2 (en) * 2002-03-27 2007-02-20 Hewlett-Packard Development Company, L.P. Vocabulary independent speech recognition system and method using subword units
US7181396B2 (en) * 2003-03-24 2007-02-20 Sony Corporation System and method for speech recognition utilizing a merged dictionary
US8447602B2 (en) * 2003-03-26 2013-05-21 Nuance Communications Austria Gmbh System for speech recognition and correction, correction device and method for creating a lexicon of alternatives
US7747428B1 (en) 2003-09-24 2010-06-29 Yahoo! Inc. Visibly distinguishing portions of compound words
US7464020B1 (en) * 2003-09-24 2008-12-09 Yahoo! Inc. Visibly distinguishing portions of compound words
JP4652737B2 (en) * 2004-07-14 2011-03-16 International Business Machines Corporation Word boundary probability estimation device and method, probabilistic language model construction device and method, kana-kanji conversion device and method, and unknown word model construction method
KR100679042B1 (en) * 2004-10-27 2007-02-06 삼성전자주식회사 Method and apparatus for speech recognition, and navigation system using for the same
JP2008529101A (en) * 2005-02-03 2008-07-31 ボイス シグナル テクノロジーズ インコーポレイテッドVoice Signal Technologies,Inc. Method and apparatus for automatically expanding the speech vocabulary of a mobile communication device
US7698128B2 (en) * 2006-01-13 2010-04-13 Research In Motion Limited Handheld electronic device and method for disambiguation of compound text input and that employs N-gram data to limit generation of low-probability compound language solutions
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US20090292546A1 (en) * 2008-05-20 2009-11-26 Aleixo Jeffrey A Human Resources Employment Method
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9785630B2 (en) * 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. User-specific acoustic models
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
US20190130902A1 (en) * 2017-10-27 2019-05-02 International Business Machines Corporation Method for re-aligning a corpus and improving consistency

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5283833A (en) * 1991-09-19 1994-02-01 At&T Bell Laboratories Method and apparatus for speech processing using morphology and rhyming
US5715468A (en) * 1994-09-30 1998-02-03 Budzinski; Robert Lucius Memory system for storing and retrieving experience and knowledge with natural language
US5832428A (en) * 1995-10-04 1998-11-03 Apple Computer, Inc. Search engine for phrase recognition based on prefix/body/suffix architecture
US5835888A (en) * 1996-06-10 1998-11-10 International Business Machines Corporation Statistical language model for inflected languages
US6073091A (en) * 1997-08-06 2000-06-06 International Business Machines Corporation Apparatus and method for forming a filtered inflected language model for automatic speech recognition
US6507678B2 (en) * 1998-06-19 2003-01-14 Fujitsu Limited Apparatus and method for retrieving character string based on classification of character
US6192337B1 (en) * 1998-08-14 2001-02-20 International Business Machines Corporation Apparatus and methods for rejecting confusible words during training associated with a speech recognition system
US6308149B1 (en) * 1998-12-16 2001-10-23 Xerox Corporation Grouping words with equivalent substrings by automatic clustering based on suffix relationships
US6405161B1 (en) * 1999-07-26 2002-06-11 Arch Development Corporation Method and apparatus for learning the morphology of a natural language

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7020606B1 (en) * 1997-12-11 2006-03-28 Harman Becker Automotive Systems Gmbh Voice recognition using a grammar or N-gram procedures
US20080059188A1 (en) * 1999-10-19 2008-03-06 Sony Corporation Natural Language Interface Control System
US20030115169A1 (en) * 2001-12-17 2003-06-19 Hongzhuan Ye System and method for management of transcribed documents
US20030130843A1 (en) * 2001-12-17 2003-07-10 Ky Dung H. System and method for speech recognition and transcription
US20030220788A1 (en) * 2001-12-17 2003-11-27 Xl8 Systems, Inc. System and method for speech recognition and transcription
US6990445B2 (en) 2001-12-17 2006-01-24 Xl8 Systems, Inc. System and method for speech recognition and transcription
US20110010165A1 (en) * 2009-07-13 2011-01-13 Samsung Electronics Co., Ltd. Apparatus and method for optimizing a concatenate recognition unit
US20130030787A1 (en) * 2011-07-25 2013-01-31 Xerox Corporation System and method for productive generation of compound words in statistical machine translation
US8781810B2 (en) * 2011-07-25 2014-07-15 Xerox Corporation System and method for productive generation of compound words in statistical machine translation

Also Published As

Publication number Publication date
US6738741B2 (en) 2004-05-18
PL335150A1 (en) 2000-03-13
AT374421T (en) 2007-10-15
DE69937176T2 (en) 2008-07-10
RU99118670A (en) 2001-07-27
US20030078778A1 (en) 2003-04-24
DE69937176D1 (en) 2007-11-08

Similar Documents

Publication Publication Date Title
Jelinek Statistical methods for speech recognition
Black et al. Issues in building general letter to sound rules
Itou et al. JNAS: Japanese speech corpus for large vocabulary continuous speech recognition research
US5610812A (en) Contextual tagger utilizing deterministic finite state transducer
US6501833B2 (en) Method and apparatus for dynamic adaptation of a large vocabulary speech recognition system and for use of constraints from a database in a large vocabulary speech recognition system
US6029132A (en) Method for letter-to-sound in text-to-speech synthesis
US6208971B1 (en) Method and apparatus for command recognition using data-driven semantic inference
CN1667700B (en) Method for adding voice or acoustic description, pronunciation in voice recognition dictionary
US6856956B2 (en) Method and apparatus for generating and displaying N-best alternatives in a speech recognition system
JP4652737B2 (en) Word boundary probability estimation device and method, probabilistic language model construction device and method, kana-kanji conversion device and method, and unknown word model construction method,
US6490563B2 (en) Proofreading with text to speech feedback
US7440889B1 (en) Sentence reconstruction using word ambiguity resolution
US8214213B1 (en) Speech recognition based on pronunciation modeling
US4741036A (en) Determination of phone weights for markov models in a speech recognition system
Wang et al. Complete recognition of continuous Mandarin speech for Chinese language with very large vocabulary using limited training data
US7181398B2 (en) Vocabulary independent speech recognition system and method using subword units
US7031908B1 (en) Creating a language model for a language processing system
JP3126985B2 (en) Method and apparatus for adapting the size of the language model of a speech recognition system
US20020173955A1 (en) Method of speech recognition by presenting N-best word candidates
Lee et al. Golden Mandarin (II)-an improved single-chip real-time Mandarin dictation machine for Chinese language with very large vocabulary
US20080059190A1 (en) Speech unit selection using HMM acoustic models
US6374210B1 (en) Automatic segmentation of a text
US20050080611A1 (en) Use of a unified language model
JP4249538B2 (en) Multimodal input for ideographic languages
US6269335B1 (en) Apparatus and methods for identifying homophones among words in a speech recognition system

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:EMAM, OSSAMA;KUNZMANN, SIEGFRIED;REEL/FRAME:010201/0987;SIGNING DATES FROM 19990806 TO 19990816

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION