WO1999046763A1 - Apparatus and method for simultaneous multimode dictation - Google Patents
Apparatus and method for simultaneous multimode dictation
- Publication number
- WO1999046763A1 (PCT/US1999/005090)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- recognition
- command
- sequence
- dictation
- module
- Prior art date
- 1998-03-09
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/183—Speech classification or search using natural language modelling using context dependencies, e.g. language models
- G10L15/19—Grammatical context, e.g. disambiguation of the recognition hypotheses based on word sequence rules
- G10L15/193—Formal grammars, e.g. finite state automata, context free grammars or word networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/226—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
- G10L2015/228—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context
Definitions
- the present invention relates to a speech recognition system, and more particularly to a flexible speech recognition system for large vocabulary continuous speech dictation which also recognizes and acts upon command and control phrases embedded in a user provided dictation stream.
- Speech recognition systems allow a user to operate and control other applications such as word processors, spreadsheets, and databases. Accordingly, a useful speech recognition system allows a user to perform two broad functions: (1) dictate input to an application, and (2) control the input and the application.
- One approach of prior art systems has been to provide separate dictation processing and control processing modes and to require the user to switch between the two. The operating mode was thus always definitively known to the system, since explicit direction by the user was necessary to change processing modes.
- Hsu and Yegnanarayanan represented an advance in that a user of the speech recognition system no longer needed to toggle between dictation mode and command mode; rather, the system automatically determined whether a given portion of an input utterance should be treated as dictated text or as application-related command directives.
- Hsu and Yegnanarayanan, however, explicitly limit the large vocabulary speech recognition module to an isolated-word approach, which requires a user to pause unnaturally between each word of dictated text.
- a preferred embodiment of the present invention represents a method for operating a modeless large vocabulary continuous speech recognition system of the type that represents an input utterance as a sequence of input vectors.
- the method includes: comparing each vector in the sequence of input vectors to a set of model states in a common library, the model states being arranged in sequences that form acoustic models, to produce a match score for each model state in the set reflecting the likelihood that such state is represented by such vector; and using the match scores with the acoustic models, in a plurality of recognition modules operating in parallel, to determine at least one recognition result in each of the recognition modules.
- each acoustic model may be composed of a sequence of segment models and each segment model may be composed of a sequence of model states.
- the match score may be a probability calculation or a distance measure calculation.
- Each recognition module may include a recognition grammar used with the acoustic models to determine the at least one recognition result.
- the recognition grammar may be a context-free grammar, a natural language grammar, or a dynamic command grammar.
- the method may further include comparing the recognition results of the recognition modules to select at least one system recognition result. The step of comparing may use an arbitration algorithm and a score ordered queue of recognition results and associated recognition modules.
- the plurality of recognition modules may include one or more of a dictation module for producing at least one probable dictation recognition result, a select module for recognizing a portion of visually displayed text for processing with a command, and a command module for producing at least one probable command recognition result.
- a related embodiment provides a modeless large vocabulary continuous speech recognition system of the type that represents an input utterance as a sequence of input vectors.
- the system includes a common library of acoustic model states for arrangement in sequences that form acoustic models; an input processor that compares each vector in a sequence of input vectors to a set of model states in the common library to produce a match score for each model state in the set reflecting the likelihood that such state is represented by such vector; and a plurality of recognition modules operating in parallel that use the match scores with the acoustic models to determine at least one recognition result in each of the recognition modules.
- each acoustic model may be composed of a sequence of segment models and each segment model may be composed of a sequence of model states.
- the match score may be a probability calculation or a distance measure calculation.
- Each recognition module may include a recognition grammar used with the acoustic models to determine the at least one recognition result.
- the recognition grammar may be a context-free grammar, a natural language grammar, or a dynamic command grammar.
- the system may further include an arbitrator that compares the recognition results of the recognition modules to select at least one system recognition result.
- the arbitrator may include an arbitration algorithm and a score ordered queue of recognition results and associated recognition modules.
- the plurality of recognition modules may include one or more of a dictation module for producing at least one probable dictation recognition result, a select module for recognizing a portion of visually displayed text for processing with a command, and a command module for producing at least one probable command recognition result.
- Fig. 1 illustrates a block diagram of simultaneous recognition networks established for active applications in accordance with a preferred embodiment of the invention.
- Fig. 2 illustrates a block diagram of operation of an arbitration algorithm in accordance with a preferred embodiment of the invention.
- a preferred embodiment of the present invention correctly identifies and performs command and control functions embedded by the user in a stream of dictated text without requiring an exact sequence or predefined format. For instance, to create a table, the user may say:
- Figs. 1 and 2 illustrate in broad terms a preferred embodiment of the present invention. Figs. 1 and 2 also show various levels of operation of the embodiment. The highest level is the application level 18, which exists above the line 12 representing the Speech Application Programming Interface (SAPI).
- a preferred embodiment implements a flexible speech recognition dictation system which incorporates both a large vocabulary continuous speech dictation path 13 and one or more limited-vocabulary, application-associated command and control paths 14 which operate simultaneously in parallel on a user provided spoken input. These two paths are here divided by dashed line X-X.
- the large vocabulary continuous speech dictation path 13 of the embodiment employs a combination of acoustic models and language models to perform multiple search passes on the spoken input and generates scores indicating the degree of match of the input utterance with an identified sequence of the respective models.
- the limited- vocabulary, application-associated command and control path 14 utilizes acoustic models in combination with context free grammars 141 to generate scores indicating the degree of match of the input utterance with at least one of the recognizable commands.
- An arbitration algorithm selects among the scores and models generated by the recognition networks.
- the scores generated by the respective networks are scaled by a factor or factors empirically trained to minimize incursions by each of the networks on correct results from the other vocabulary.
- the input speech may be provided to the system from an audio object, be processed by the speech recognizer, and then provided as output to another computer application, such as a word processing program or accounting spreadsheet, via the speech application programming interface (SAPI) 12.
- the large vocabulary dictation context 13 operates across the SAPI 12 in a Voice Dictation Object (VDO) 11 which typically causes the display of dictated text.
- a dictation system user may have multiple open VDOs.
- the natural language process (NLP) grammars and the dynamic grammar, which constitute the part 14 of the embodiment providing command and control contexts, may act on an application such as a word processing application.
- Fig. 1 also shows levels beneath the SAPI including the engine level 15, wherein the various grammar rules are applied, and the interface level 16, wherein multiple dictation contexts at the VDO level 18 are mapped to a single dictation context for use at the recognizer level 17.
- each active context constructs working recognition hypotheses and associated probabilities.
- Figure 2 illustrates operation of the system to arbitrate among the hypotheses generated for a given input utterance by each active context.
- Arbitration consists of selecting the best scoring recognition result hypothesis, identifying the active context which produced that hypothesis and directing the hypothesis to the corresponding application-level path.
- Figure 2 shows an input utterance 20 being directed to a recognition network for each active context; a dictation network 21, a select network 22, and command and control networks 23.
- the dictation network uses a sophisticated and elaborate processing scheme which requires substantial system computational resources; however, use of intermediate dictation network calculations by the other recognition networks, along with other efficiency measures, allows existing commercial computer systems to perform the simultaneous multiple recognition network processing utilized by preferred embodiments of the present invention.
- the recognition hypotheses and their associated probabilities are grouped together with their associated context by the recognition result selector 24 which forms a precedence ordered queue of (hypothesis, context) ordered pairs known as the hypothesis list.
- the hypothesis list is retrieved by the interface result object 25 which uses the interface map 26 to map the hypothesis list onto the active context models.
- the arbitrator 27 gets the translated hypothesis list from the interface result object 25, selects the best scoring hypothesis, and directs it to the appropriate application-level path according to the application map 28.
- a preferred embodiment of the system of the present invention operates by transforming an input speech signal into a sequence of digitally encoded speech frames as is well known in the art.
- the input frames are further processed by describing them each with N parameters, producing a sequence of N-dimensional vectors.
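- as a rough illustration, this front end might be sketched as follows; the sampling rate, frame sizes, window, and log-spectral parameterization here are assumptions for illustration, since the text requires only that each frame be described by some N parameters:

```python
import numpy as np

def frames_to_vectors(samples, rate=11025, frame_ms=20, step_ms=10, n_params=24):
    """Describe each digitally encoded speech frame with N parameters,
    producing a sequence of N-dimensional vectors. All concrete values
    (rate, frame/step sizes, log-spectral features) are illustrative
    assumptions, not taken from the patent."""
    frame_len = int(rate * frame_ms / 1000)
    step = int(rate * step_ms / 1000)
    vectors = []
    for start in range(0, len(samples) - frame_len + 1, step):
        frame = samples[start:start + frame_len] * np.hanning(frame_len)
        spectrum = np.abs(np.fft.rfft(frame))                 # magnitude spectrum
        vectors.append(np.log(spectrum[:n_params] + 1e-10))   # N-dimensional vector
    return np.array(vectors)
```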
- a preferred embodiment then utilizes sequence state acoustic models in the form of Hidden Markov Models (HMMs) with continuous observation densities as is known in the art and described, for example, in Rabiner and Juang, Fundamentals of Speech Recognition, pp. 350-52, Prentice Hall, 1993, which reference is hereby incorporated herein by reference.
- the initial processing of the input data also estimates the beginning and end of a word or phrase based on an analysis of energy levels such that some adjustable sensitivity threshold is set to distinguish user speech from background noise.
- the acoustic models represent demi-triphones: models of the acoustic transition from the middle of one phoneme (basic speech sound) to the middle of another phoneme, structured as left-to-right models.
- the sequence of states in the models is actually drawn from a relatively small pool of approximately 2000 states which together model all of the specific speech sounds.
- the states are represented as mixture models such that the acoustic modeling measures the distance from the input speech to various model parameters.
- the model states are a Gaussian family of probability distribution functions in which the conditional probability of the input, given the state, is a weighted sum of Gaussian probabilities.
- a preferred embodiment uses a simplified version in which the family has an assigned weight such that for a given input speech sequence, the distance is measured from the input to the nearest Gaussian.
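- in code, the two scoring variants above might be sketched as follows, assuming diagonal-covariance Gaussians (a common simplification; the exact model parameterization is not specified here):

```python
import numpy as np

def full_mixture_probability(x, means, variances, weights):
    """Full model: the conditional probability of input vector x given the
    state is a weighted sum of Gaussian probabilities. means and variances
    have shape (K, N) for K Gaussians; weights has shape (K,)."""
    d = x.shape[0]
    norms = (2 * np.pi) ** (-d / 2) / np.sqrt(np.prod(variances, axis=1))
    expos = np.exp(-0.5 * np.sum((x - means) ** 2 / variances, axis=1))
    return np.sum(weights * norms * expos)

def nearest_gaussian_distance(x, means, variances, family_weight):
    """Simplified model: measure the distance from the input only to the
    nearest Gaussian, with a single weight assigned to the whole family.
    Returned as a negative-log-style cost, so smaller is better."""
    dists = 0.5 * np.sum((x - means) ** 2 / variances, axis=1)
    return np.min(dists) - np.log(family_weight)
```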
- in some cases the input frames are relatively far from the nearest Gaussian, so that it would be computationally inefficient to perform the relatively complex computation described above.
- a preferred embodiment instead uses the simpler measurement models produced by the process of vector quantization (VQ), in which the N-dimensional vector acoustic models are represented by sequences of standard or prototype states.
- the state indices identify or correspond to probability distribution functions.
- the state spectral index essentially serves as a pointer into a table which identifies, for each state index, the set of probabilities that each prototype frame or VQ index will be observed to correspond to that state index.
- the table is, in effect, a precalculated mapping between all possible frame indices and all state indices.
- a distance measurement or a measure of match can be obtained by directly indexing into the tables using the respective indices and combining the values obtained with appropriate weighting. It is thus possible to build a table or array storing a distance metric representing the closeness of match of each standard or prototype input frame with each standard or prototype model state. This matrix is further compressed by methods described
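- for illustration, a minimal sketch of such a precalculated table and lookup follows, with hypothetical codebook sizes; a real table would be precomputed from the prototype frames and model states rather than filled with placeholder values:

```python
import numpy as np

N_FRAME_PROTOTYPES = 256   # hypothetical VQ codebook size for input frames
N_MODEL_STATES = 2000      # approximate size of the common state pool

# distance_table[f, s]: precalculated distance metric between prototype
# frame index f and model state index s (placeholder values here).
distance_table = np.random.rand(N_FRAME_PROTOTYPES, N_MODEL_STATES)

def match_score(frame_indices, state_indices, weights=None):
    """Obtain a measure of match by directly indexing into the table with
    the respective indices and combining the values with optional weights."""
    values = distance_table[frame_indices, state_indices]
    return np.average(values, weights=weights)
```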
- Natural variations in speaking rate require that some method be employed for aligning a sequence of frames representing an unknown speech segment with each sequence of acoustic model states representing a vocabulary word. This process is commonly referred to as time alignment.
- the sequence of frames which constitute the unknown speech segment, taken together with a sequence of states representing a vocabulary model, in effect defines a matrix, and the time alignment process involves finding a path across the matrix which produces the best score, e.g., least distance or cost.
- the distance or cost is typically arrived at by accumulating the cost or distance values associated with each pairing of frame index with state index as described previously with respect to the vector quantization process.
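- a simplified dynamic-programming sketch of that path search follows (only "stay" and "advance one state" moves are allowed here; the patent's actual search is more elaborate):

```python
import numpy as np

def time_align(local_cost):
    """Find the least-cost path across the (frames x states) matrix, where
    local_cost[t, s] is the distance between input frame t and model state s
    (e.g., looked up from the precalculated table). Returns the accumulated
    cost of the best alignment of the whole utterance to the whole model."""
    T, S = local_cost.shape
    acc = np.full((T, S), np.inf)
    acc[0, 0] = local_cost[0, 0]
    for t in range(1, T):
        for s in range(S):
            stay = acc[t - 1, s]                              # remain in state s
            advance = acc[t - 1, s - 1] if s > 0 else np.inf  # move to next state
            acc[t, s] = local_cost[t, s] + min(stay, advance)
    return acc[T - 1, S - 1]
```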
- the vocabulary may also include word initial state and word final state noise models along with models of common intrusive noises, e.g. paper rustling, door closing, or a cough. When an acoustic input is best matched with one of these models, a null output or no output is provided.
- a preferred embodiment also uses models of word-to-word transition sounds, also called word "glues."
- states corresponding to phones or other sub-units of speech are typically interconnected in a network and decoded in correspondence with the ongoing utterance. A score is progressively built up as the utterance proceeds. This total score is a function both of the degree of match of the utterance with the decoded path and the length of the utterance.
- a continuous speech recognition system will typically identify the best scoring model sequence and may also identify a ranked list of possible alternative sequences.
- the large vocabulary continuous speech dictation network employs various language models to additionally process the input speech, while in parallel one or more limited-vocabulary, application-associated command and control networks uses the acoustic model processing together with context free grammars to process and score the same input speech. In a prior art speech recognizer, this would mean implementing two or more independent and complete speech recognizers. However, preferred embodiments of the present invention share the processing results of the acoustic models to reduce the computational load and the total amount of work done by the system. This computational savings allows a single commercially available computer processor to perform the multiple simultaneous path processing needed to realize preferred embodiments of the present invention.
- the large vocabulary continuous speech dictation recognition network employs a three-pass search utilizing a tree-structured network, a linear network, and networks for initial and final noises and for the word glues.
- the first pass uses, at every time step that is a reasonably good word beginning time, both the linear network and the unigram tree network.
- the linear network is used in conjunction with a bigram word history (i.e., the previous word context is used), and therefore should not start a large number of words.
- the tree lexicon shares computation among words (e.g., those which share common phone transcription prefixes), and therefore is not especially computationally expensive; furthermore, most state sequences will be pruned after the first few phones.
- the first search pass sequences the frames into networks with accumulated scores and calculates times of word endings using Hidden Markov Models (HMMs) which are statistical representations of word sequence probabilities.
- the statistical processing performed by the first pass includes computing for each state the mixture model distances in a four-layered approach employing, in turn, a squeezed matrix, best branch entries, mixture model
- the calculation of the mixture model distances determines the distance of each state from the input frame.
- the mixture model distance calculations performed by the dictation network are also cached for use by the other recognition networks.
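- a sketch of this caching scheme follows (the interface is hypothetical; the text says only that the dictation network's distance calculations are cached for reuse by the other networks):

```python
class MixtureDistanceCache:
    """Memoize mixture model distance calculations so that the dictation
    network computes each (frame, state) distance once and the select and
    command networks reuse the cached value instead of recomputing it."""

    def __init__(self, compute_fn):
        self._compute = compute_fn  # e.g., a nearest-Gaussian distance function
        self._cache = {}

    def distance(self, frame_index, state_index):
        key = (frame_index, state_index)
        if key not in self._cache:
            self._cache[key] = self._compute(frame_index, state_index)
        return self._cache[key]
```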
- the state scores are updated, and active states of the unigram tree, linear network, word initial and final noises network and the glue network are pruned to delete active states below a threshold value.
- the second search pass uses first pass tracking and score array based network construction to produce a word graph of states and probabilities.
- acoustic and language model distances are calculated for every word that ends that way. This process is described in greater detail in the Lynch patent previously cited.
- the second pass uses the mixture model distances (again, also stored to a cache for use by the other recognition networks) to identify sequences of states known as arcs which represent fundamental sound units and compute their acoustic scores, creating a word graph of word hypotheses and associated probabilities.
- the third pass shares the language model used in the first pass in conjunction with trigram word models to produce a ranking of the most probable hypotheses. Besides the above described dictation context and its associated network of language model word searches, a preferred embodiment of the present invention implements three other contexts and associated networks:
- the select context feature applies to the word sequences displayed in the voice dictation object (VDO); it allows a user to select a portion of the displayed text by voice, much as a mouse in a keyboard-based word processing program may be used to select a portion of text.
- a preferred embodiment of the present invention utilizes two independent command context types, one type using a natural language process (NLP) grammar (actually several NLP grammars are used) and one type using a dynamic grammar.
- Each natural language process grammar utilizes context free grammar rules to parse a sequence of input words and return a score representing the likelihood that a given sequence represents a command and control sequence from the user in natural language format.
- the dynamic grammar utilizes its own context-free grammar and is designed to detect and implement the short commands available on a drop-down GUI menu; for example, "Bold on," "Bold off," or "Spell check document."
- the select and command and control networks utilize acoustic models corresponding to the vocabulary recognized by their respective grammars. They perform recognition searches similar to the first and second pass searches described above with respect to the dictation recognition network, analyzing mixture model distances and producing word graphs of recognition hypotheses and their associated probabilities. By using the cache of mixture model distance calculations produced by the dictation network rather than performing these calculations independently for each recognition network, substantial savings in system computational resources are realized. Thus, preferred embodiments of the present invention may be implemented in existing commercially available computer systems.
- search beamwidth refers to the pruning or deleting of recognition hypotheses which have scores beyond a threshold value.
- Preferred embodiments prune the poorly scoring hypotheses in the non-dictation networks according to the best overall hypothesis score by any network in conjunction with common threshold values.
- networks in which scores are poor compared to the other networks have all or most of their hypotheses pruned off.
- sharing the pruning beamwidth allows preferred embodiments to consider the best scoring hypotheses from all the networks considered as a whole. Again, this results in significant computational efficiencies being realized.
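- a sketch of sharing the pruning beamwidth across networks follows, treating scores as costs (lower is better, an illustrative convention):

```python
def prune_shared_beam(network_hypotheses, beamwidth):
    """Prune every network's hypotheses against the single best score over
    all networks, so networks whose scores are poor compared to the others
    have most or all of their hypotheses pruned off.

    network_hypotheses: dict mapping network name to a list of
    (hypothesis, score) pairs; scores are costs, lower is better."""
    global_best = min(score
                      for hyps in network_hypotheses.values()
                      for _, score in hyps)
    threshold = global_best + beamwidth
    return {name: [(h, s) for h, s in hyps if s <= threshold]
            for name, hyps in network_hypotheses.items()}
```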
- even greater computational efficiencies may be realized by utilizing a single recognition network such as that described above with respect to the dictation network.
- the non-dictation contexts, such as the select context and the command and control contexts, would attempt to parse the output of the single recognition network. Recognition results parsed by a given context would be recognized as commands for that context rather than dictated text. This embodiment requires a careful balance in how the overall system decides whether a given word or phrase is dictated text or a command.
- the different recognition contexts and their associated networks employ different types of models and different scoring mechanisms so that the scores are not directly comparable. Relative scaling of the scores is applied to minimize or avoid intrusions by each recognition network on correct translations from the other recognition network.
- the scoring produced by the dictation network with its combination of acoustic models, multipass language models, and dictation grammar must be adjusted to be comparable with the scoring produced by the select network and the command and control networks.
- An arbitration algorithm for selecting among the competing contexts and hypotheses combines and orders the various scores obtained, and then selects the best scoring hypothesis and its originating context.
- the large vocabulary continuous recognition dictation context, the select context, and some number of command and control contexts have an initial precedence ordering with respect to one another. For instance, in a preferred embodiment, the highest precedence is assigned to the command and control contexts, then to the select context, and the lowest to the dictation context.
- the arbitration algorithm finds the best scoring parseable hypothesis and its context (all dictation and selection results are considered parseable). If the same hypothesis is parseable by more than one context, the arbitration algorithm selects the hypothesis with the highest precedence. Precedence ordering ensures that when an utterance is recognized by both a command and control context and by the dictation context, the utterance will be treated as a command rather than dictation text, since that is the most likely user-desired action.
- the arbitration algorithm initially generates data structures of sets of (hypothesis, score, list of networks) in a score-ordered queue produced at the recognizer level. From the word graph generated by each network, including dictation, the arbitration algorithm gets the score of the best hypothesis and inserts the set of (hypothesis, score, list of networks) in the score-ordered queue. Until the algorithm is done or the queue is empty: the best hypothesis in the queue is found, and all the sets of (hypothesis, score, list of networks) in the queue which have the same hypothesis and score are found.
- hypotheses not yet in the queue having the same hypothesis and score are found by, for each network not in the list of networks associated with the best hypothesis, analyzing the next hypothesis and score, until the score is worse than the current score (or until some maximum number of hypotheses per network has been considered).
- the algorithm determines whether or not to get another (hypothesis, score, list of networks) candidate by checking whether some maximum number of hypotheses has been reached (e.g., 100). There may be more parses than this, since there may be multiple networks per hypothesis, but only a certain maximum number of hypotheses is considered.
- the hypotheses, in fact, may not be distinct: the same hypothesis may be considered in different networks and with different scores. That is, the same hypothesis may come from the dictation network and from a command and control network at different times, with different scores.
- a list of networks associated with this hypothesis is made (often containing only one entry), and this list is ordered by precedence.
- the list of networks associated with the best scoring hypothesis is translated into a list of contexts associated with the best scoring hypothesis. Then, at the engine level, the first context in the list is found which parses the hypothesis, and if there is such a context it is returned along with its corresponding hypothesis and the algorithm is done. Otherwise, if the queue was empty, the algorithm returns an empty result.
- the arbitration algorithm selects a best hypothesis and associated application path to direct it to. If the hypothesis is recognized as dictated text, then that is sent to the VDO. If the hypothesis is recognized as a command to select text or for a command and control grammar, the hypothesis is translated into the appropriate command associated with the recognized hypothesis.
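- the arbitration loop itself might be sketched as follows; the network names, the parses callback, and the simplification of enqueueing only each network's best hypothesis are assumptions for illustration (scores are assumed already scaled to be comparable across networks, lower is better):

```python
import heapq

# Lower value = higher precedence: commands beat selection beats dictation.
PRECEDENCE = {"command": 0, "select": 1, "dictation": 2}

def arbitrate(network_results, parses, max_hypotheses=100):
    """network_results maps a network name to its score-ordered list of
    (score, hypothesis) pairs. parses(network, hypothesis) reports whether
    the corresponding context's grammar parses the hypothesis; dictation
    and select results are always considered parseable."""
    queue = []
    for name, results in network_results.items():
        if results:
            score, hyp = results[0]   # simplification: best hypothesis only
            heapq.heappush(queue, (score, hyp, [name]))
    considered = 0
    while queue and considered < max_hypotheses:
        score, hyp, networks = heapq.heappop(queue)
        # Merge sets with the same hypothesis and score from other networks.
        while queue and queue[0][0] == score and queue[0][1] == hyp:
            networks += heapq.heappop(queue)[2]
        considered += 1
        # Try the associated contexts in precedence order; the first one
        # that parses the hypothesis wins.
        for name in sorted(networks, key=PRECEDENCE.get):
            if parses(name, hyp):
                return name, hyp
    return None  # queue empty or limit reached: empty result
```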
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Machine Translation (AREA)
- Document Processing Apparatus (AREA)
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2000536068A JP2002507010A (en) | 1998-03-09 | 1999-03-09 | Apparatus and method for simultaneous multi-mode dictation |
EP99909926A EP1062660B1 (en) | 1998-03-09 | 1999-03-09 | Apparatus and method for simultaneous multimode dictation |
CA002321299A CA2321299A1 (en) | 1998-03-09 | 1999-03-09 | Apparatus and method for simultaneous multimode dictation |
AT99909926T ATE254328T1 (en) | 1998-03-09 | 1999-03-09 | APPARATUS AND METHOD FOR SIMULTANEOUS MULTIMODAL DICTATION |
DE69912754T DE69912754D1 (en) | 1998-03-09 | 1999-03-09 | DEVICE AND METHOD FOR SIMULTANEOUS MULTIMODAL DICTATING |
AU29012/99A AU2901299A (en) | 1998-03-09 | 1999-03-09 | Apparatus and method for simultaneous multimode dictation |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US7733798P | 1998-03-09 | 1998-03-09 | |
US60/077,337 | 1998-03-09 | ||
US7773898P | 1998-03-12 | 1998-03-12 | |
US60/077,738 | 1998-03-12 | ||
US7792298P | 1998-03-13 | 1998-03-13 | |
US60/077,922 | 1998-03-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1999046763A1 true WO1999046763A1 (en) | 1999-09-16 |
Family
ID=27373082
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1999/005090 WO1999046763A1 (en) | 1998-03-09 | 1999-03-09 | Apparatus and method for simultaneous multimode dictation |
Country Status (8)
Country | Link |
---|---|
US (1) | US6292779B1 (en) |
EP (1) | EP1062660B1 (en) |
JP (1) | JP2002507010A (en) |
AT (1) | ATE254328T1 (en) |
AU (1) | AU2901299A (en) |
CA (1) | CA2321299A1 (en) |
DE (1) | DE69912754D1 (en) |
WO (1) | WO1999046763A1 (en) |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6799169B1 (en) * | 1999-08-13 | 2004-09-28 | International Business Machines Corporation | Method and system for modeless operation of a multi-modal user interface through implementation of independent decision networks |
US6912499B1 (en) * | 1999-08-31 | 2005-06-28 | Nortel Networks Limited | Method and apparatus for training a multilingual speech model set |
US6600497B1 (en) * | 1999-11-15 | 2003-07-29 | Elliot A. Gottfurcht | Apparatus and method to navigate interactive television using unique inputs with a remote control |
US7020845B1 (en) * | 1999-11-15 | 2006-03-28 | Gottfurcht Elliot A | Navigating internet content on a television using a simplified interface and a remote control |
US6741963B1 (en) * | 2000-06-21 | 2004-05-25 | International Business Machines Corporation | Method of managing a speech cache |
US7275033B1 (en) * | 2000-09-30 | 2007-09-25 | Intel Corporation | Method and system for using rule-based knowledge to build a class-based domain specific statistical language model |
US6983239B1 (en) * | 2000-10-25 | 2006-01-03 | International Business Machines Corporation | Method and apparatus for embedding grammars in a natural language understanding (NLU) statistical parser |
US20020072914A1 (en) * | 2000-12-08 | 2002-06-13 | Hiyan Alshawi | Method and apparatus for creation and user-customization of speech-enabled services |
US7027987B1 (en) | 2001-02-07 | 2006-04-11 | Google Inc. | Voice interface for a search engine |
DE10120513C1 (en) * | 2001-04-26 | 2003-01-09 | Siemens Ag | Method for determining a sequence of sound modules for synthesizing a speech signal of a tonal language |
US20040150676A1 (en) * | 2002-03-25 | 2004-08-05 | Gottfurcht Elliot A. | Apparatus and method for simple wide-area network navigation |
US7366645B2 (en) * | 2002-05-06 | 2008-04-29 | Jezekiel Ben-Arie | Method of recognition of human motion, vector sequences and speech |
KR100504982B1 (en) * | 2002-07-25 | 2005-08-01 | (주) 메카트론 | Surrounding-condition-adaptive voice recognition device including multiple recognition module and the method thereof |
US7191130B1 (en) * | 2002-09-27 | 2007-03-13 | Nuance Communications | Method and system for automatically optimizing recognition configuration parameters for speech recognition systems |
US7171358B2 (en) * | 2003-01-13 | 2007-01-30 | Mitsubishi Electric Research Laboratories, Inc. | Compression of language model structures and word identifiers for automated speech recognition systems |
US20040138883A1 (en) * | 2003-01-13 | 2004-07-15 | Bhiksha Ramakrishnan | Lossless compression of ordered integer lists |
GB2418764B (en) * | 2004-09-30 | 2008-04-09 | Fluency Voice Technology Ltd | Improving pattern recognition accuracy with distortions |
JP5062171B2 (en) * | 2006-03-23 | 2012-10-31 | 日本電気株式会社 | Speech recognition system, speech recognition method, and speech recognition program |
US20080086311A1 (en) * | 2006-04-11 | 2008-04-10 | Conwell William Y | Speech Recognition, and Related Systems |
US9129599B2 (en) * | 2007-10-18 | 2015-09-08 | Nuance Communications, Inc. | Automated tuning of speech recognition parameters |
US8364481B2 (en) * | 2008-07-02 | 2013-01-29 | Google Inc. | Speech recognition with parallel recognition tasks |
JP5478903B2 (en) * | 2009-01-22 | 2014-04-23 | 三菱重工業株式会社 | Robot, voice recognition apparatus and program |
US9478216B2 (en) * | 2009-12-08 | 2016-10-25 | Nuance Communications, Inc. | Guest speaker robust adapted speech recognition |
JP2012047924A (en) * | 2010-08-26 | 2012-03-08 | Sony Corp | Information processing device and information processing method, and program |
US9620122B2 (en) * | 2011-12-08 | 2017-04-11 | Lenovo (Singapore) Pte. Ltd | Hybrid speech recognition |
EP2733697A1 (en) * | 2012-11-16 | 2014-05-21 | QNX Software Systems Limited | Application services interface to ASR |
US9477753B2 (en) * | 2013-03-12 | 2016-10-25 | International Business Machines Corporation | Classifier-based system combination for spoken term detection |
US10186262B2 (en) * | 2013-07-31 | 2019-01-22 | Microsoft Technology Licensing, Llc | System with multiple simultaneous speech recognizers |
JP5709955B2 (en) * | 2013-09-30 | 2015-04-30 | 三菱重工業株式会社 | Robot, voice recognition apparatus and program |
US10089977B2 (en) * | 2015-07-07 | 2018-10-02 | International Business Machines Corporation | Method for system combination in an audio analytics application |
US10607606B2 (en) | 2017-06-19 | 2020-03-31 | Lenovo (Singapore) Pte. Ltd. | Systems and methods for execution of digital assistant |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1996013829A1 (en) * | 1994-10-26 | 1996-05-09 | Motorola Inc. | Method and system for continuous speech recognition using voting techniques |
US5677991A (en) * | 1995-06-30 | 1997-10-14 | Kurzweil Applied Intelligence, Inc. | Speech recognition system using arbitration between continuous speech and isolated word modules |
DE19635754A1 (en) * | 1996-09-03 | 1998-03-05 | Siemens Ag | Speech processing system and method for speech processing |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5920837A (en) | 1992-11-13 | 1999-07-06 | Dragon Systems, Inc. | Word recognition system which stores two models for some words and allows selective deletion of one such model |
US5832430A (en) * | 1994-12-29 | 1998-11-03 | Lucent Technologies, Inc. | Devices and methods for speech recognition of vocabulary words with simultaneous detection and verification |
US5794196A (en) | 1995-06-30 | 1998-08-11 | Kurzweil Applied Intelligence, Inc. | Speech recognition system distinguishing dictation from commands by arbitration between continuous speech and isolated word modules |
US5737489A (en) * | 1995-09-15 | 1998-04-07 | Lucent Technologies Inc. | Discriminative utterance verification for connected digits recognition |
US5799279A (en) | 1995-11-13 | 1998-08-25 | Dragon Systems, Inc. | Continuous speech recognition of text and commands |
US6029124A (en) * | 1997-02-21 | 2000-02-22 | Dragon Systems, Inc. | Sequential, nonparametric speech recognition and speaker identification |
US6076056A (en) * | 1997-09-19 | 2000-06-13 | Microsoft Corporation | Speech recognition system for recognizing continuous and isolated speech |
US6182038B1 (en) * | 1997-12-01 | 2001-01-30 | Motorola, Inc. | Context dependent phoneme networks for encoding speech information |
-
1999
- 1999-03-09 DE DE69912754T patent/DE69912754D1/en not_active Expired - Lifetime
- 1999-03-09 WO PCT/US1999/005090 patent/WO1999046763A1/en active IP Right Grant
- 1999-03-09 AU AU29012/99A patent/AU2901299A/en not_active Abandoned
- 1999-03-09 AT AT99909926T patent/ATE254328T1/en not_active IP Right Cessation
- 1999-03-09 JP JP2000536068A patent/JP2002507010A/en not_active Withdrawn
- 1999-03-09 CA CA002321299A patent/CA2321299A1/en not_active Abandoned
- 1999-03-09 EP EP99909926A patent/EP1062660B1/en not_active Expired - Lifetime
- 1999-03-09 US US09/267,925 patent/US6292779B1/en not_active Expired - Lifetime
Cited By (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1094445A3 (en) * | 1999-10-19 | 2001-09-12 | Microsoft Corporation | Command versus dictation mode errors correction in speech recognition |
US6581033B1 (en) | 1999-10-19 | 2003-06-17 | Microsoft Corporation | System and method for correction of speech recognition mode errors |
EP1094445A2 (en) * | 1999-10-19 | 2001-04-25 | Microsoft Corporation | Command versus dictation mode errors correction in speech recognition |
EP1126436A2 (en) * | 2000-02-18 | 2001-08-22 | Canon Kabushiki Kaisha | Speech recognition from multimodal inputs |
EP1126436A3 (en) * | 2000-02-18 | 2001-09-26 | Canon Kabushiki Kaisha | Speech recognition from multimodal inputs |
US6823308B2 (en) | 2000-02-18 | 2004-11-23 | Canon Kabushiki Kaisha | Speech recognition accuracy in a multimodal input system |
WO2003001319A2 (en) * | 2001-06-26 | 2003-01-03 | Vladimir Grigorievich Yakhno | Method for recognising information images using automatically controlled adaptation and system for carrying out said method |
WO2003001319A3 (en) * | 2001-06-26 | 2003-09-18 | Vladimir Grigorievich Yakhno | Method for recognising information images using automatically controlled adaptation and system for carrying out said method |
US9031845B2 (en) | 2002-07-15 | 2015-05-12 | Nuance Communications, Inc. | Mobile systems and methods for responding to natural language speech utterance |
ES2237345A1 (en) * | 2005-02-28 | 2005-07-16 | Prous Science S.A. | Method for converting phonemes to written text and corresponding computer system and computer program |
EP1922723A4 (en) * | 2005-08-05 | 2010-09-29 | Voicebox Technologies Inc | Systems and methods for responding to natural language speech utterance |
EP1922723A2 (en) * | 2005-08-05 | 2008-05-21 | Voicebox Technologies, Inc. | Systems and methods for responding to natural language speech utterance |
US9263039B2 (en) | 2005-08-05 | 2016-02-16 | Nuance Communications, Inc. | Systems and methods for responding to natural language speech utterance |
CN104778945A (en) * | 2005-08-05 | 2015-07-15 | 沃伊斯博克斯科技公司 | Systems and methods for responding to natural language speech utterance |
US8849670B2 (en) | 2005-08-05 | 2014-09-30 | Voicebox Technologies Corporation | Systems and methods for responding to natural language speech utterance |
US9626959B2 (en) | 2005-08-10 | 2017-04-18 | Nuance Communications, Inc. | System and method of supporting adaptive misrecognition in conversational speech |
US8849652B2 (en) | 2005-08-29 | 2014-09-30 | Voicebox Technologies Corporation | Mobile systems and methods of supporting natural language human-machine interactions |
US9495957B2 (en) | 2005-08-29 | 2016-11-15 | Nuance Communications, Inc. | Mobile systems and methods of supporting natural language human-machine interactions |
EP1796080A3 (en) * | 2005-12-12 | 2008-07-16 | Gregory John Gadbois | Multi-voice speech recognition |
US7899669B2 (en) | 2005-12-12 | 2011-03-01 | Gregory John Gadbois | Multi-voice speech recognition |
EP1796080A2 (en) | 2005-12-12 | 2007-06-13 | Gregory John Gadbois | Multi-voice speech recognition |
US11222626B2 (en) | 2006-10-16 | 2022-01-11 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US10755699B2 (en) | 2006-10-16 | 2020-08-25 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US9015049B2 (en) | 2006-10-16 | 2015-04-21 | Voicebox Technologies Corporation | System and method for a cooperative conversational voice user interface |
US10515628B2 (en) | 2006-10-16 | 2019-12-24 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US10510341B1 (en) | 2006-10-16 | 2019-12-17 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US10297249B2 (en) | 2006-10-16 | 2019-05-21 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US9406078B2 (en) | 2007-02-06 | 2016-08-02 | Voicebox Technologies Corporation | System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements |
US10134060B2 (en) | 2007-02-06 | 2018-11-20 | Vb Assets, Llc | System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements |
US11080758B2 (en) | 2007-02-06 | 2021-08-03 | Vb Assets, Llc | System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements |
US9269097B2 (en) | 2007-02-06 | 2016-02-23 | Voicebox Technologies Corporation | System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements |
US8886536B2 (en) | 2007-02-06 | 2014-11-11 | Voicebox Technologies Corporation | System and method for delivering targeted advertisements and tracking advertisement interactions in voice recognition contexts |
US10347248B2 (en) | 2007-12-11 | 2019-07-09 | Voicebox Technologies Corporation | System and method for providing in-vehicle services via a natural language voice user interface |
US9620113B2 (en) | 2007-12-11 | 2017-04-11 | Voicebox Technologies Corporation | System and method for providing a natural language voice user interface |
US8983839B2 (en) | 2007-12-11 | 2015-03-17 | Voicebox Technologies Corporation | System and method for dynamically generating a recognition grammar in an integrated voice navigation services environment |
US10089984B2 (en) | 2008-05-27 | 2018-10-02 | Vb Assets, Llc | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US9711143B2 (en) | 2008-05-27 | 2017-07-18 | Voicebox Technologies Corporation | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US9305548B2 (en) | 2008-05-27 | 2016-04-05 | Voicebox Technologies Corporation | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US10553216B2 (en) | 2008-05-27 | 2020-02-04 | Oracle International Corporation | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US10553213B2 (en) | 2009-02-20 | 2020-02-04 | Oracle International Corporation | System and method for processing multi-modal device interactions in a natural language voice services environment |
US9105266B2 (en) | 2009-02-20 | 2015-08-11 | Voicebox Technologies Corporation | System and method for processing multi-modal device interactions in a natural language voice services environment |
US9953649B2 (en) | 2009-02-20 | 2018-04-24 | Voicebox Technologies Corporation | System and method for processing multi-modal device interactions in a natural language voice services environment |
US9570070B2 (en) | 2009-02-20 | 2017-02-14 | Voicebox Technologies Corporation | System and method for processing multi-modal device interactions in a natural language voice services environment |
US9171541B2 (en) | 2009-11-10 | 2015-10-27 | Voicebox Technologies Corporation | System and method for hybrid processing in a natural language voice services environment |
US9502025B2 (en) | 2009-11-10 | 2016-11-22 | Voicebox Technologies Corporation | System and method for providing a natural language content dedication service |
US11087385B2 (en) | 2014-09-16 | 2021-08-10 | Vb Assets, Llc | Voice commerce |
US10430863B2 (en) | 2014-09-16 | 2019-10-01 | Vb Assets, Llc | Voice commerce |
US9898459B2 (en) | 2014-09-16 | 2018-02-20 | Voicebox Technologies Corporation | Integration of domain information into state transitions of a finite state transducer for natural language processing |
US9626703B2 (en) | 2014-09-16 | 2017-04-18 | Voicebox Technologies Corporation | Voice commerce |
US10216725B2 (en) | 2014-09-16 | 2019-02-26 | Voicebox Technologies Corporation | Integration of domain information into state transitions of a finite state transducer for natural language processing |
US9747896B2 (en) | 2014-10-15 | 2017-08-29 | Voicebox Technologies Corporation | System and method for providing follow-up responses to prior natural language inputs of a user |
US10229673B2 (en) | 2014-10-15 | 2019-03-12 | Voicebox Technologies Corporation | System and method for providing follow-up responses to prior natural language inputs of a user |
US10614799B2 (en) | 2014-11-26 | 2020-04-07 | Voicebox Technologies Corporation | System and method of providing intent predictions for an utterance prior to a system detection of an end of the utterance |
US10431214B2 (en) | 2014-11-26 | 2019-10-01 | Voicebox Technologies Corporation | System and method of determining a domain and/or an action related to a natural language input |
US10331784B2 (en) | 2016-07-29 | 2019-06-25 | Voicebox Technologies Corporation | System and method of disambiguating natural language processing requests |
Also Published As
Publication number | Publication date |
---|---|
ATE254328T1 (en) | 2003-11-15 |
EP1062660A1 (en) | 2000-12-27 |
AU2901299A (en) | 1999-09-27 |
JP2002507010A (en) | 2002-03-05 |
US6292779B1 (en) | 2001-09-18 |
EP1062660B1 (en) | 2003-11-12 |
DE69912754D1 (en) | 2003-12-18 |
CA2321299A1 (en) | 1999-09-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6292779B1 (en) | System and method for modeless large vocabulary speech recognition | |
US10176802B1 (en) | Lattice encoding using recurrent neural networks | |
US10121467B1 (en) | Automatic speech recognition incorporating word usage information | |
US6542866B1 (en) | Speech recognition method and apparatus utilizing multiple feature streams | |
EP0977174B1 (en) | Search optimization system and method for continuous speech recognition | |
US7162423B2 (en) | Method and apparatus for generating and displaying N-Best alternatives in a speech recognition system | |
JP2965537B2 (en) | Speaker clustering processing device and speech recognition device | |
US5794196A (en) | Speech recognition system distinguishing dictation from commands by arbitration between continuous speech and isolated word modules | |
US5937384A (en) | Method and system for speech recognition using continuous density hidden Markov models | |
EP1610301B1 (en) | Speech recognition method based on word duration modelling | |
US6125345A (en) | Method and apparatus for discriminative utterance verification using multiple confidence measures | |
EP1055226B1 (en) | System for using silence in speech recognition | |
US5822728A (en) | Multistage word recognizer based on reliably detected phoneme similarity regions | |
Goffin et al. | The AT&T Watson speech recognizer | |
US6490555B1 (en) | Discriminatively trained mixture models in continuous speech recognition | |
US6738745B1 (en) | Methods and apparatus for identifying a non-target language in a speech recognition system | |
US20040186714A1 (en) | Speech recognition improvement through post-processsing | |
US20110077943A1 (en) | System for generating language model, method of generating language model, and program for language model generation | |
US20030200090A1 (en) | Speech recognition apparatus, speech recognition method, and computer-readable recording medium in which speech recognition program is recorded | |
US10199037B1 (en) | Adaptive beam pruning for automatic speech recognition | |
WO1993013519A1 (en) | Composite expert | |
JP6031316B2 (en) | Speech recognition apparatus, error correction model learning method, and program | |
JP2007240589A (en) | Speech recognition reliability estimating device, and method and program therefor | |
JP3104900B2 (en) | Voice recognition method | |
JP3494338B2 (en) | Voice recognition method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1. Designated state(s): AU CA JP |
| AL | Designated countries for regional patents | Kind code of ref document: A1. Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | |
| WWE | Wipo information: entry into national phase | Ref document number: 29012/99. Country of ref document: AU |
| ENP | Entry into the national phase | Ref document number: 2321299. Country of ref document: CA. Kind code of ref document: A. Format of ref document f/p: F |
| WWE | Wipo information: entry into national phase | Ref document number: 1999909926. Country of ref document: EP |
| ENP | Entry into the national phase | Ref country code: JP. Ref document number: 2000 536068. Kind code of ref document: A. Format of ref document f/p: F |
| WWP | Wipo information: published in national office | Ref document number: 1999909926. Country of ref document: EP |
| WWG | Wipo information: grant in national office | Ref document number: 1999909926. Country of ref document: EP |