EP1604350A4 - Methods, systems and programming for performing speech recognition - Google Patents

Methods, systems and programming for performing speech recognition

Info

Publication number
EP1604350A4
EP1604350A4 EP02773307A
Authority
EP
European Patent Office
Prior art keywords
recognition
user
word
words
speech recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP02773307A
Other languages
English (en)
French (fr)
Other versions
EP1604350A2 (de)
Inventor
Daniel L Roth
Jordan R Cohen
David F Johnson
Manfred G Grabherr
Paul A Franzosa
Edward W Porter
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GRABHERR, MANFRED G.
JOHNSON, DAVID F.
ROTH, DANIEL L.
Voice Signal Technologies Inc
Original Assignee
Voice Signal Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Voice Signal Technologies Inc filed Critical Voice Signal Technologies Inc
Priority claimed from PCT/US2002/028590 external-priority patent/WO2004023455A2/en
Publication of EP1604350A2 publication Critical patent/EP1604350A2/de
Publication of EP1604350A4 publication Critical patent/EP1604350A4/de
Withdrawn legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/28Constructional details of speech recognition systems
    • G10L15/32Multiple recognisers used in sequence or in parallel; Score combination systems therefor, e.g. voting systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/28Constructional details of speech recognition systems

Definitions

  • the present invention relates to methods, systems, and programming for performing speech recognition.
  • Large-vocabulary speech recognition typically functions by having a user 100 speak into a microphone 102, which in the example of FIG. 1 is a microphone of a cellular telephone 104.
  • The microphone transduces the variation in air pressure over time caused by the utterance of words into a corresponding waveform, represented by an electronic signal 106.
  • This waveform signal is converted, by digital signal processing performed either by a computer processor or by a special digital signal processor 108, into a time-domain representation 110.
  • The time-domain representation is comprised of a plurality of parameter frames 112, each of which represents properties of the sound represented by the waveform 106 at each of a plurality of successive time periods, such as every one-hundredth of a second.
  • The time-domain, or frame, representation of an utterance to be recognized is then matched against a plurality of possible sequences of phonetic models 200 corresponding to different words in a recognition system's vocabulary. Individual words 202 are each represented by a corresponding phonetic spelling 204, similar to the phonetic spellings found in most dictionaries.
  • Each phoneme in a phonetic spelling has one or more phonetic models 200 associated with it.
  • The models 200 are phoneme-in-context models, which model the sound of their associated phoneme when it occurs in the context of the preceding and following phonemes in a given word's phonetic spelling.
  • The phonetic models are commonly composed of a sequence of one or more probability models, each of which represents the probability of different parameter values for each of the parameters used in the frames of the time-domain representation 110 of an utterance to be recognized.
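The framing step described above can be sketched as follows. This is an illustrative example, not code from the patent: the frame period matches the one-hundredth-of-a-second figure in the text, but the two parameters computed per frame (log energy and a zero-crossing count) are simple stand-ins for the richer spectral parameters a real recognizer would use.

```python
import math

def to_frames(samples, rate=8000, frame_s=0.01):
    """Split raw audio samples into parameter frames, one per 10 ms period.

    Each frame carries two illustrative parameters: log energy and a
    zero-crossing count. A real recognizer would compute richer spectral
    parameters, but the framing idea is the same.
    """
    step = int(rate * frame_s)  # samples per frame
    frames = []
    for start in range(0, len(samples) - step + 1, step):
        chunk = samples[start:start + step]
        energy = math.log(sum(s * s for s in chunk) + 1e-9)
        zero_crossings = sum(
            1 for a, b in zip(chunk, chunk[1:]) if (a < 0) != (b < 0)
        )
        frames.append((energy, zero_crossings))
    return frames

# One second of a 100 Hz sine wave at an assumed 8 kHz sampling rate
# yields 100 frames at 10 ms per frame.
wave = [math.sin(2 * math.pi * 100 * t / 8000) for t in range(8000)]
frames = to_frames(wave)
```

The resulting sequence of frames is what gets matched against the sequences of phonetic models 200 described above.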
  • Recently there has been an increase in the use of new types of computers, such as the tablet computer shown in FIG. 4, the personal digital assistant computer shown in FIG. 5, cell phones with increased computing power shown in FIG. 6, wrist-phone computers represented in FIG. 7, and a wearable computer that provides a user interface with a screen and eye tracking and/or audio output from a head-wearable device, as indicated in FIG. 8.
  • One aspect of the present invention relates to speech recognition using selectable recognition modes.
  • This includes innovations such as: allowing a user to select between recognition modes with and without language context; allowing a user to select between continuous and discrete large-vocabulary speech recognition modes; allowing a user to select between at least two different alphabetic-entry speech recognition modes; and allowing a user to select among recognition modes when creating text: a large-vocabulary mode, a letters-recognizing mode, a numbers-recognizing mode, and a punctuation-recognizing mode.
  • Another aspect of the invention relates to using choice lists. This includes innovations such as providing vertically scrollable choice lists; providing horizontally scrollable choice lists; and providing choice lists on characters in an alphabetic filter used to limit recognition candidates.
  • Another aspect of the invention relates to enabling users to select word transformations.
  • This includes innovations such as enabling a user to choose one from a plurality of transformations to be performed upon a recognized word so as to change it in a desired way, such as to change it from singular to plural, to give the word a gerund form, etc.
  • It also includes innovations such as enabling a user to transform a selected word between an alphabetic and a non-alphabetic form.
  • It also includes innovations such as providing a user with a choice list of transformed words corresponding to a recognized word and allowing the user to select one of the transformed words as output.
  • Another aspect of the invention relates to speech recognition that automatically turns recognition off in one or more ways.
  • This includes innovations such as a speech recognition command that turns on recognition and then automatically turns such recognition off until receiving another command to turn recognition back on. It also includes the innovation of speech recognition in which pressing a button causes recognition for a duration determined by the length of time of such a press, and in which clicking the same button causes recognition for a length of time independent of the length of such a click.
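The press-versus-click duration logic just described might be sketched as follows. The threshold separating a click from a press and the fixed listening time after a click are assumed values chosen for illustration, not figures from the patent.

```python
CLICK_THRESHOLD_S = 0.3    # assumed cutoff between a "click" and a "press"
CLICK_RECOGNITION_S = 5.0  # assumed fixed listening time after a click

def recognition_duration(press_seconds):
    """Return how long recognition stays on for a given button event.

    A quick click turns recognition on for a fixed period, independent of
    the click's exact length; a longer press keeps recognition on only for
    as long as the button is held.
    """
    if press_seconds < CLICK_THRESHOLD_S:
        return CLICK_RECOGNITION_S  # click: fixed-duration recognition
    return press_seconds            # press-and-hold: recognize while held
```

A 0.1-second click thus buys five seconds of listening, while a two-second hold buys exactly two.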
  • Another aspect of the invention relates to phone-key controls. It includes the innovations of using phone keys to select a word from a choice list; of using them to select a help mode that provides explanation about a subsequently pressed key; and of using them to select a list of functions currently associated with phone keys. It also includes the innovation of a text navigation mode in which multiple numbered phone keys concurrently have multiple different key mappings associated with them, and in which pressing such a key-mapping key causes the functions associated with the numbered phone keys to change to the mapping associated with the pressed key.
  • Another aspect of the invention relates to speech recognition using phone key alphabetic filtering and spelling.
  • By alphabetic filtering we mean favoring the speech recognition of words including a sequence of letters, normally an initial sequence of letters, corresponding to a sequence of letters indicated by user input.
  • This aspect of the invention includes the innovation of using as filtering input the pressing of phone keys, where each key press is ambiguous in that it indicates that a corresponding character location in a desired word corresponds to one of a plurality of letters identified with that phone key.
  • This aspect of the invention also includes the innovation of using as filtering input a sequence of phone key presses. It also includes the innovation of using such ambiguous and non-ambiguous phone key input for spelling text that can be used in addition to text produced by speech recognition.
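The ambiguous phone-key filtering described above can be sketched as follows. The keypad letter groups are the conventional ones; the tiny vocabulary and function name are invented for illustration and are not from the patent.

```python
# Conventional phone keypad letter groups.
KEY_LETTERS = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def filter_candidates(words, key_presses):
    """Keep only words whose initial letters match an ambiguous key sequence.

    Each press is ambiguous: the i-th letter of a surviving word must be
    one of the letters printed on the i-th key pressed, so a single key
    sequence narrows, rather than uniquely determines, the candidates.
    """
    survivors = []
    for word in words:
        if len(word) < len(key_presses):
            continue
        if all(word[i] in KEY_LETTERS[k] for i, k in enumerate(key_presses)):
            survivors.append(word)
    return survivors

vocabulary = ["cat", "bat", "act", "dog", "cab"]
# Pressing 2-2-8 keeps every word spelled from {abc}{abc}{tuv}.
matches = filter_candidates(vocabulary, ["2", "2", "8"])
```

Here `matches` still contains "cat", "bat", and "act"; the recognizer's acoustic scores would then choose among the survivors.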
  • Another aspect of the invention relates to speech recognition that enables a user to perform re-utterance recognition, in which speech recognition is performed upon both a second saying of a sequence of one or more words and upon an earlier saying of the same sequence, to help the speech recognition better select one or more best-scoring text sequences for the utterances.
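Re-utterance recognition can be illustrated with a minimal score-combination sketch. This is an assumption-laden toy, not the patent's method: scores are treated as log probabilities combined by addition, and the penalty for a candidate missing from one pass is an invented constant.

```python
def combine_scores(first_pass, second_pass):
    """Combine per-candidate scores from two sayings of the same words.

    Scores are treated as log probabilities, so evidence from the two
    utterances combines by addition; a candidate absent from one pass
    receives a heavy assumed penalty for that pass.
    """
    MISSING = -100.0  # assumed penalty for a candidate absent from one pass
    combined = {}
    for text in set(first_pass) | set(second_pass):
        combined[text] = (first_pass.get(text, MISSING)
                          + second_pass.get(text, MISSING))
    return max(combined, key=combined.get)

# The correct text loses in each pass alone, but wins once evidence
# from the two utterances is combined.
first = {"recognize speech": -4.0, "wreck a nice beach": -3.5}
second = {"recognize speech": -3.0, "wreck a nice peach": -2.8}
best = combine_scores(first, second)
```

The point of the example is that an error unlikely to repeat identically across two sayings is outscored by the consistent candidate.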
  • Another aspect of the invention relates to the combination of speech recognition and text-to-speech (TTS) generation.
  • This aspect of the invention also includes the innovation of a large vocabulary system that automatically repeats recognized text using TTS after each utterance.
  • This aspect also includes the innovation of a large vocabulary system that enables a user to move back or forward in recognized text, with one or more words at the current location after each such move being said by TTS.
  • This aspect also includes the innovation of a large vocabulary system that uses speech recognition to produce a choice list and provides TTS output of one or more of that list's choices.
  • Another aspect of the invention relates to the combination of speech recognition with handwriting and/or character recognition.
  • This includes the innovation of selecting recognition output as a function of recognition of both handwritten and spoken representations of a sequence of one or more words to be recognized. It also includes the innovation of using character or handwriting recognition of one or more letters to alphabetically filter speech recognition of one or more words. It also includes the innovations of using speech recognition of one or more letter-identifying words to alphabetically filter handwriting recognition, and of using speech recognition to correct handwriting recognition of one or more words.
  • Another aspect of the invention relates to the combination of large-vocabulary speech recognition with audio recording and playback. It includes the innovation of a handheld device with both large-vocabulary speech recognition and audio recording in which users can switch between at least two of the following modes of recording sound input: one that records audio without corresponding speech recognition output; one that records audio with corresponding speech recognition output; and one that records the audio's speech recognition output without corresponding audio.
  • This aspect of the invention also includes the innovation of a handheld device that has both large-vocabulary speech recognition and audio recording capability and that enables a user to select a portion of previously recorded sound and to have speech recognition performed upon it.
  • It also includes the innovation of a large-vocabulary speech recognition system that enables a user to use large-vocabulary speech recognition to provide a text label for a portion of sound that is recorded without corresponding speech recognition output, and the innovation of a system that enables a user to search for a text label associated with portions of unrecognized recorded sound by uttering the label's words, recognizing the utterance, and searching for text containing those words.
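The label-and-search idea just described can be sketched with a small data structure. The class, its method names, and the matching rule (every query word must appear in the label) are all invented for illustration.

```python
class AudioNotebook:
    """Sketch: label unrecognized recordings by voice, then find them by voice.

    Recordings are stored as opaque audio handles with a recognized text
    label; a later spoken query is itself recognized to text and matched
    against the stored labels.
    """
    def __init__(self):
        self.recordings = []  # list of (label_text, audio_handle) pairs

    def label(self, label_text, audio_handle):
        """Attach recognized text as a label for an unrecognized recording."""
        self.recordings.append((label_text.lower(), audio_handle))

    def search(self, recognized_query):
        """Return audio whose label contains every word of the query."""
        words = recognized_query.lower().split()
        return [audio for label, audio in self.recordings
                if all(w in label.split() for w in words)]

notebook = AudioNotebook()
notebook.label("budget meeting notes", "clip_001.wav")
notebook.label("grocery list", "clip_002.wav")
hits = notebook.search("budget meeting")
```

Only the label text is ever recognized; the recorded audio itself stays untranscribed, which is what makes voice labels useful for retrieval.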
  • This aspect of the invention also includes the innovation of a large-vocabulary system that allows users to switch between playing back previously recorded audio and performing speech recognition with a single input, with successive audio playbacks automatically starting slightly before the end of the prior playback.
  • This aspect of the invention also includes the innovation of a cell phone that has both large-vocabulary speech recognition and audio recording and playback capabilities.
  • Figure 1 is a schematic illustration of how spoken sound can be converted into acoustic parameter frames for use by speech recognition software.
  • Figure 2 is a schematic illustration of how speech recognition, using phonetic spellings, can be used to recognize words represented by a sequence of parameter frames such as those shown in figure 1, and how the time alignment between phonetic models of the words can be used to time-align those words against the original acoustic signal from which the parameter frames have been derived
  • Figures 3 through 8 show a progression of different types of computing platforms upon which many aspects of the present invention can be used, illustrating the trend toward smaller and/or more portable computing devices.
  • Figure 9 illustrates a personal digital assistant, or PDA, device having a touch screen displaying a software input panel, or SIP, embodying many aspects of the present invention, that allows entry by speech recognition of text into application programs running on such a device
  • Figure 10 is a highly schematic illustration of many of the hardware and software components that can be found in a PDA of the type shown in figure 9.
  • Figure 11 is a blowup of the screen image shown in figure 9, used to point out many of the specific elements of the speech recognition SIP shown in figure 9
  • Figure 12 is similar to figure 11 except that it also illustrates a correction window produced by the speech recognition SIP and many of its graphical user interface elements.
  • Figures 13 through 17 provide a highly simplified pseudocode description of the responses that the speech recognition SIP makes to various inputs, particularly inputs received from its graphical user interface
  • Figure 18 is a highly simplified pseudocode description of the recognition duration logic used to determine the length of time for which speech recognition is turned on in response to the pressing of one or more user interface buttons, either in the speech recognition SIP shown in figure 9 or in the cellphone embodiment shown starting at figure 59.
  • Figure 19 is a highly simplified pseudocode description of a help mode that enables a user to see a description of the function associated with each element of the speech recognition SIP of figure 9 merely by touching it
  • Figures 20 and 21 are screen images produced by the help mode described in figure 19.
  • Figure 22 is a highly simplified pseudocode description of a displayChoiceList routine used in various forms by both the speech recognition SIP of figure 9 and the cellphone embodiment of figure 59 to display correction windows.
  • Figure 23 is a highly simplified pseudocode description of the getChoices routine used in various forms by both the speech recognition SIP and the cellphone embodiment to generate one or more choice lists for use by the displayChoiceList routine of figure 22
  • Figures 24 and 25 illustrate the utterance list data structure used by the getChoices routine of figure 23
  • Figure 26 is a highly simplified pseudocode description of a filterMatch routine used by the getChoices routine to limit correction window choices to match filtering input, if any, entered by a user
  • Figure 27 is a highly simplified pseudocode description of a wordFormList routine used in various forms by both the speech recognition SIP and the cellphone embodiment to generate a word form correction list that displays alternate forms of a given word or selection
  • Figures 28 and 29 provide a highly simplified pseudocode description of a filterEdit routine used in various forms by both the speech recognition SIP and cellphone embodiment to edit a filter string used by the filterMatch routine of figure 26 in response to alphabetic filtering information input from a user
  • Figure 30 provides a highly simplified pseudocode description of a filterCharacterChoice routine used in various forms by both the speech recognition SIP and cellphone embodiment to display choice lists for individual characters of a filter string
  • Figures 31 through 35 illustrate a sequence of interactions between a user and the speech recognition SIP, in which the user enters and corrects the recognition of words using a one-at-a-time discrete speech recognition method
  • Figure 36 shows how a user of the SIP can correct a mis-recognition shown at the end of figure 35 by scrolling through the choice list provided in the correction window until finding a desired word and then using a capitalization button to capitalize it before entering it into text
  • Figure 37 shows how a user of the SIP can correct such a mis-recognition by selecting part of an alternate choice in the correction window and using it as a filter for selecting the desired speech recognition output
  • Figure 38 shows how a user of the SIP can select two successive alphabetically ordered alternate choices in the correction window to cause the speech recognizer's output to be limited to output starting with a sequence of characters located between the two selected choices in the alphabet
  • Figure 39 illustrates how a user of the SIP can use the speech recognition of letter names to input filtering characters and how a filter character choice list can be used to correct errors in the recognition of such filtering characters
  • Figure 40 illustrates how a user of the SIP recognizer can enter one or more characters of a filter string using the international communication alphabet and how the SIP interface can show the user the words of that alphabet
  • Figure 41 shows how a user can select an initial sequence of characters from an alternate choice in the correction window and then use the international communication alphabet to add characters to that sequence so as to complete the spelling of a desired output
  • Figures 42 through 44 illustrate a sequence of user interactions in which the user enters and edits text in the SIP using continuous speech recognition
  • Figure 45 illustrates how the user can correct a mis-recognition by spelling all or part of the desired output using continuous letter-name recognition as an ambiguous (or multivalued) filter, and how the user can use filter character choice lists to rapidly correct errors produced in such continuous letter-name recognition
  • Figure 46 illustrates how the speech recognition SIP also enables a user to input characters by drawn character recognition
  • Figure 47 is a highly simplified pseudocode description of a character recognition mode used by the SIP when performing drawn character recognition of the type shown in figure 46
  • Figure 48 illustrates how the speech recognition SIP lets a user input text using handwriting recognition
  • Figure 49 is a highly simplified pseudocode description of the handwriting recognition mode used by the SIP when performing handwriting recognition of the type shown in figure 48
  • Figure 50 illustrates how the speech recognition system enables a user to input text with a software keyboard
  • Figure 51 illustrates a filter entry mode menu that can be selected to choose from different methods of entering filtering information, including speech recognition, character recognition, handwriting recognition, and software keyboard input
  • Figures 52 through 54 illustrate how either character recognition, handwriting recognition, or software keyboard input can be used to filter speech recognition choices produced in the SIP's correction window
  • Figures 55 and 56 illustrate how the SIP allows speech recognition of words or filtering characters to be used to correct handwriting recognition input
  • Figure 58 is a highly simplified description of an alternate embodiment of the display choice list routine of figure 22 in which the choice list produced orders choices only by recognition score, rather than by alphabetical ordering as in figure 22
  • Figure 59 illustrates a cellphone that embodies many aspects of the present invention
  • Figure 60 provides a highly simplified block diagram of the major components of a typical cellphone such as that shown in figure 59
  • Figure 61 is a highly simplified block diagram of various programming and data structures contained in one or more mass storage devices on the cellphone of figure 59
  • Figure 62 illustrates that the cellphone of figure 59 allows traditional phone dialing by the pressing of numbered phone keys
  • Figure 63 is a highly simplified pseudocode description of the command structure of the cellphone of figure 59 when in its top level phone mode, as illustrated by the screen shown in the top of figure 62
  • Figure 64 illustrates how a user of the cellphone of figure 59 can access and quickly view the commands of a main menu by pressing the menu key on the cellphone
  • Figures 65 and 66 provide a highly simplified pseudocode description of the operation of the main menu illustrated in figure 64
  • Figures 67 through 74 illustrate command mappings of the cellphone's numbered keys in each of various important modes and menus associated with a speech recognition text editor that operates on the cellphone of figure 59
  • Figure 75 illustrates how a user of the cellphone's text editing software can rapidly see the function associated with one or more keys in a non-menu mode by pressing the menu button and scrolling through a command list that can be used substantially in the same manner as a menu of the type shown in figure 64
  • Figures 76 through 78 provide a highly simplified pseudocode description of the responses of the cellphone's speech recognition program when in its text window, or editor, mode
  • Figures 79 and 80 provide a highly simplified pseudocode description of an entry mode menu that can be accessed from various speech recognition modes to select among various ways to enter text
  • Figures 81 through 83 provide a highly simplified pseudocode description of the correctionWindow routine used by the cellphone to display a correction window and to respond to user input when such a correction window is shown
  • Figure 84 is a highly simplified pseudocode description of an edit navigation menu that allows a user to select various ways of navigating with the cellphone's navigation keys when the edit mode's text window is displayed
  • Figure 85 is a highly simplified pseudocode description of a correction window navigation menu that allows the user to select various ways of navigating with the cellphone's navigation keys when in a correction window, and also to select from among different ways the correction window can respond to the selection of an alternate choice in a correction window
  • Figures 86 through 88 provide highly simplified pseudocode descriptions of three slightly different embodiments of the key Alpha mode, which enables a user to enter a letter by saying a word starting with that letter and which responds to the pressing of a phone key by substantially limiting such recognition to words starting with one of the three or four letters associated with the pressed key
  • Figures 89 and 90 provide a highly simplified pseudocode description of some of the options available under the edit options menu that is accessible from many of the modes of the cellphone's speech recognition programming
  • Figures 91 and 92 provide a highly simplified description of a word type menu that can be used to limit recognition choices to a particular type of word, such as a particular grammatical type of word
  • Figure 93 provides a highly simplified pseudocode description of an entry preference menu that can be used to set default recognition settings for various speech recognition functions, or to set recognition duration settings
  • Figure 94 provides a highly simplified pseudocode description of the text-to-speech playback operation available on the cellphone
  • Figure 95 provides a highly simplified pseudocode description of how the cellphone's text-to-speech generation uses programming and data structures also used by the cellphone's speech recognition
  • Figure 96 is a highly simplified pseudocode description of the cellphone's transcription mode that makes it easier for a user to transcribe audio recorded on the cellphone using the device's speech recognition capabilities
  • Figure 97 is a highly simplified pseudocode description of programming that enables the cellphone's speech recognition editor to be used to enter and edit text in dialog boxes presented on the cellphone, as well as to change the state of controls such as list boxes, check boxes, and radio buttons in such dialog boxes
  • Figure 98 is a highly simplified pseudocode description of a help routine available on the cellphone to enable a user to rapidly find descriptions of various locations in the cellphone's command structure
  • Figures 99 and 100 illustrate examples of help menus of the type displayed by the programming of figure 98
  • Figures 101 and 102 illustrate how a user can use the help programming of figure 98 to rapidly search for, and receive descriptions of, the functions associated with various portions of the cellphone's command structure.
  • Figures 103 and 104 illustrate a sequence of interactions between a user and the cellphone's speech recognition editor's user interface in which the user enters and corrects text using continuous speech recognition
  • Figure 105 illustrates how a user can scroll horizontally in a correction window displayed on the cellphone
  • Figure 107 illustrates operation of the key Alpha mode shown in figure 86
  • Figures 108 and 109 illustrate how the cellphone's speech recognition editor allows the user to address and enter and edit text in an e-mail message that can be sent by the cellphone's wireless communication capabilities
  • Figure 110 illustrates how the cellphone's speech recognition can combine scores from the discrete recognition of one or more words with scores from a prior continuous recognition of those words to help produce the desired output
  • Figure 111 illustrates how the cellphone speech recognition software can be used to enter a URL for the purposes of accessing a World Wide Web site using the wireless communication capabilities of the cellphone
  • Figures 112 and 113 illustrate how elements of the cellphone's speech recognition user interface can be used to navigate World Wide Web pages and to select items and enter and edit text in the fields of such web pages
  • Figure 114 illustrates how elements of the cellphone speech recognition user interface can be used to enable a user to more easily read text strings too large to be seen at one time in a text field displayed on the cellphone's screen, such as a text field of a web page or dialog box
  • Figure 115 illustrates the cellphone's find dialog box, how a user can enter a search string into that dialog box by speech recognition, how the find function then performs a search for the entered string, and how the found text can be used to label audio recorded on the cellphone
  • Figure 116 illustrates how the dialog box editor programming shown in figure 97 enables speech recognition to be used to select from among possible values associated with a list box
  • Figure 117 illustrates how speech recognition can be used to dial people by name, and how the audio playback and recording capabilities of the cellphone can be used during such a cellphone call
  • Figure 118 illustrates how speech recognition can be turned on and off when the cellphone is recording audio to insert text labels or text comments into recorded audio
  • Figure 119 illustrates how the cellphone enables a user to have speech recognition performed on portions of previously recorded audio
  • Figure 120 illustrates how the cellphone enables a user to strip text recognized for a given segment of sound from the audio recording of that sound
  • Figure 121 illustrates how the cellphone enables the user to turn on or off an indication of which portions of a selected segment of text have associated audio recordings
  • Figures 122 through 125 illustrate how the cellphone speech recognition software allows the user to enter telephone numbers by speech recognition and to correct the recognition of such numbers when wrong
  • Figure 126 is provided to illustrate how many aspects of the cellphone embodiment shown in figures 59 through 125 can be used in an automotive environment, including the TTS and duration logic aspects of the cellphone embodiment
  • Figures 127 and 128 illustrate that most of the aspects of the cellphone embodiment shown in figures 59 through 125 can be used either on cordless phones or landline phones
  • Figure 129 provides a highly simplified pseudocode description of the name dialing programming of the cellphone embodiment, which is partially illustrated in FIG 117
  • Figure 130 provides a highly simplified pseudocode description of the cellphone's digit dial programming illustrated in figures 122 through 125
  • FIG. 9 illustrates the personal digital assistant, or PDA, 900 on which many aspects of the present invention can be used.
  • The PDA shown is similar to those currently being sold as the Compaq iPAQ H3650 Pocket PC, the Casio Cassiopeia, and the Hewlett-Packard Jornada 525.
  • The PDA 900 includes a relatively high resolution touch screen 902, which enables the user to select software buttons as well as portions of text by means of touching the touch screen, such as with a stylus 904 or a finger.
  • The PDA also includes a set of input buttons 906 and a two-dimensional navigational control 908.
  • A navigational input device that allows a user to select discrete units of motion in one or more dimensions will often be considered to be included in the definition of a button. This is particularly true with regard to telephone interfaces, in which the up, down, left, and right inputs of a navigational device will be considered phone keys or phone buttons.
  • FIG. 10 provides a schematic system diagram of the PDA 900. It shows the touch screen 902 and input buttons 906 (which include the navigational input 908). It also shows that the device has a central processing unit such as a microprocessor 1002.
  • The CPU 1002 is connected over one or more electronic communication buses 1004 with read-only memory 1006 (often flash ROM); random access memory 1008; one or more I/O devices 1010; a video controller 1012 for controlling displays on the touch screen 902; and an audio device 1014 for receiving input from a microphone 1015 and supplying audio output to a speaker 1016.
• The PDA also includes a battery 1018 for providing it with portable power; a headphone-in and headphone-out jack 1020, which is connected to the audio circuitry 1014; a docking connector 1022 for providing a connection between the PDA and another computer, such as a desktop; and an add-on connector 1024 for enabling a user to add circuitry to the PDA, such as additional flash ROM, a modem, a wireless transceiver 1025, or a mass storage device.
• FIG. 10 also shows a mass storage device 1017.
• This mass storage device could be any type of mass storage device, including all or part of the flash ROM 1006 or a miniature hard disk.
  • the PDA would normally store an operating system 1026 for providing much of the basic functionality of the device.
• In addition to the operating system, it would include one or more application programs, such as a word processor, a spreadsheet, a Web browser, or a personal information management system, as well as speech recognition related functionality.
• It includes programming for performing word matching of the general type described above with regard to FIGS. 1 and 2.
  • the speech recognition programming will also normally include one or more vocabularies or vocabulary groupings
• Each vocabulary word will normally have a text spelling 1034 and one or more vocabulary groupings 1036 to which the word belongs (for example, the text output "." might actually be in a large-vocabulary recognition vocabulary, a spelling vocabulary, and a punctuation vocabulary grouping in some systems).
• Each vocabulary word will also normally have a phonetic spelling and one or more parts of speech 1038 with which it is associated.
• The speech recognition programming commonly includes a pronunciation guesser 1042 for guessing the pronunciation of new words that are added to the system and thus do not have a predefined phonetic spelling.
• The speech recognition programming commonly includes one or more phonetic lexical trees 1044.
• A phonetic lexical tree is a tree-shaped data structure that groups together, in a common path from the tree's root, all phonetic spellings that start with the same sequence of phonemes. Using such lexical trees improves recognition performance because it enables all portions of different words that share the same initial phonetic spelling to be scored together.
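The grouping of shared phonetic prefixes described above can be sketched as a simple trie. This is an illustrative reconstruction, not the patent's actual data structure; all names and phoneme symbols are hypothetical.

```python
# Minimal sketch of a phonetic lexical tree: words whose phonetic spellings
# share an initial phoneme sequence share a single path from the root, so
# their common prefixes need only be scored once during recognition.

class LexTreeNode:
    def __init__(self):
        self.children = {}   # phoneme -> LexTreeNode
        self.words = []      # words whose phonetic spelling ends at this node

def add_word(root, word, phonemes):
    node = root
    for p in phonemes:
        node = node.children.setdefault(p, LexTreeNode())
    node.words.append(word)

root = LexTreeNode()
add_word(root, "cat", ["k", "ae", "t"])
add_word(root, "can", ["k", "ae", "n"])
# "cat" and "can" share the path k -> ae, so that prefix is stored (and can
# be scored) once, branching only at the final phoneme.
```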
  • the speech recognition programming will also include a PolyGram language model 1045 that indicates the probability of the occurrence of different words in text, including the probability of words occurring in text given one or more preceding and/or following words .
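A PolyGram language model of the kind described can be approximated by an interpolated n-gram. The sketch below is a minimal bigram version; the class name, interpolation weight, and training text are all illustrative assumptions, not the patent's implementation.

```python
from collections import defaultdict

class PolygramModel:
    """Minimal bigram language model sketch (all names are illustrative)."""

    def __init__(self):
        self.unigram = defaultdict(int)
        self.bigram = defaultdict(int)
        self.total = 0

    def train(self, words):
        prev = None
        for w in words:
            self.unigram[w] += 1
            self.total += 1
            if prev is not None:
                self.bigram[(prev, w)] += 1
            prev = w

    def prob(self, word, prev=None, lam=0.7):
        # Interpolate bigram and unigram estimates so word pairs never seen
        # in training still receive a nonzero probability.
        uni = self.unigram[word] / self.total if self.total else 0.0
        if prev is None or self.unigram[prev] == 0:
            return uni
        bi = self.bigram[(prev, word)] / self.unigram[prev]
        return lam * bi + (1 - lam) * uni

m = PolygramModel()
m.train("the cat sat on the mat".split())
```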
  • the speech recognition programming will store language model update data 1046, which includes information that can be used to update the PolyGram language model 1045 just described.
• This language model update data will include, or contain statistical information derived from, text that the user has created or that the user has indicated is similar to the text that he or she wishes to generate.
• The speech recognition programming can also store contact information 1048, which includes names, addresses, phone numbers, e-mail addresses, and phonetic spellings for some or all of such information. This data is used to help the speech recognition programming recognize the speaking of such contact information.
• Such contact information will often be included in an external program, such as one of the application programs 1028 or accessories to the operating system 1026, but even in such cases the speech recognition programming would normally need access to such names, addresses, phone numbers, and e-mail addresses.
• FIG. 11 illustrates the PDA using a software input panel, or SIP, 1100 embodying many aspects of the present invention.
  • FIG. 12 is similar to FIG. 11 except it shows the touch screen 902 when the speech recognition SIP is displaying a correction window 1200.
  • FIGS. 13 through 17 represent successive pages of a pseudocode description of how the speech recognition SIP responds to various inputs on its graphical user interface.
• This pseudocode is represented as one main event loop 1300 in the SIP program which responds to user input.
• This event loop is described as having two major switch statements: a switch statement 1301 in FIG. 13 that responds to inputs on the user interface that can be generated whether or not the correction window 1200 is displayed, and a switch statement 1542 in FIG. 15 that responds to user inputs that can only be generated when the correction window 1200 is displayed.
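The two-switch event loop described above can be sketched as a pair of handler tables, one consulted for every input and one consulted only while a correction window is displayed. Everything here (handler names, events, return values) is hypothetical illustration, not the patent's code.

```python
# Sketch of a two-level event dispatch: global handlers are always active;
# correction handlers apply only while the correction window is shown.

def make_dispatcher(global_handlers, correction_handlers):
    state = {"correction_window_open": False}

    def dispatch(event):
        # First switch: inputs valid whether or not the correction window is up.
        if event in global_handlers:
            return global_handlers[event](state)
        # Second switch: inputs valid only while the correction window is shown.
        if state["correction_window_open"] and event in correction_handlers:
            return correction_handlers[event](state)
        return "ignored"

    return dispatch, state

def open_correction(state):
    state["correction_window_open"] = True
    return "correction opened"

dispatch, state = make_dispatcher(
    {"talk": lambda s: "recognition started",
     "open_correction": open_correction},
    {"choice_tap": lambda s: "choice picked"})
```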
  • function 1302 of FIG. 13 causes functions 1304 through 1308 to be performed.
  • Function 1304 tests to see if there is any text in the SIP buffer shown by the window 1104 in FIG. 11.
• The SIP buffer is designed to hold a relatively small number of lines of text, for which the SIP's software will keep track of the acoustic input and best choices associated with the recognition of each word, as well as the linguistic context created.
• Such a text buffer is used because the speech recognition SIP often will not have knowledge about the text in the remote application, shown in the window 1106 in FIG. 11, into which the SIP outputs text at the location of the application's current cursor 1108. In other embodiments of the invention a much larger SIP buffer could be used. In other embodiments many of the aspects of the present invention will be used as part of an independent speech recognition text creation application that will not require the use of a SIP for the inputting of text.
• The major advantage of using a speech recognizer that functions as a SIP is that it can be used to provide input for almost any application designed to run on a PDA.
• Function 1304 clears any text from the SIP buffer 1104 because the Talk button 1102 is provided as a way for the user to indicate to the SIP that he is dictating text in a new context.
• Function 1306 in FIG. 13 responds to the pressing of the Talk button by testing to see if the speech recognition system is currently in correction mode. If so, it exits that mode, removing any correction window 1200, of the type shown in FIG. 12, that might be shown.
• The SIP shown in the figures is not in correction mode when a correction window is displayed but has not been selected to receive inputs from most buttons of the main SIP interface, and is in correction mode when the correction window is displayed and has been selected to receive inputs from many of such buttons.
• This distinction is desirable because the particular SIP shown can be selected to operate in a one-at-a-time mode in which words are spoken and recognized discretely, and in which a correction window is displayed for each word as it is recognized, to enable a user to more quickly see the choice list or provide correction input.
• In embodiments that make less use of one-at-a-time recognition, or that do not use it at all, there would be no need for the added complication of switching into and out of correction mode.
  • Function 1308 of FIG. 13 responds to the pressing of the Talk button by causing SIP buffer recognition to start according to a previously selected current recognition duration mode.
  • This recognition takes place without any prior language context for the first word.
  • language model context will be derived from words recognized in response to one pressing of the Talk button and used to provide a language context for the recognition of the second and subsequent words in such recognition.
• FIG. 18 is a schematic representation of the recognition duration programming 1800 that enables a user to select different modes of activating speech recognition in response to the pressing or clicking of any button in the SIP interface that can be used to start speech recognition.
• There is a plurality of buttons, including the Talk button, each of which can be used to start speech recognition. This enables a user both to select a given mode of recognition and to start recognition in that mode with a single pressing of a button.
  • Function 1802 helps determine which functions of FIG. 18 are performed, depending on the current recognition duration mode.
• The mode can have been set in multiple different ways, including by default and by selection under the Entry Preference option in the function menu shown in FIG. 46.
  • function 1804 will cause functions 1806 and 1808 to recognize speech sounds that are uttered during the pressing of a speech button.
• This recognition duration type is both simple and flexible, because it enables a user to control the length of recognition by one simple rule: recognition occurs during, and only during, the pressing of a speech button.
  • utterance and/or end of utterance detection is used during any recognition mode, to decrease the likelihood that background noises will be recognized as utterances .
  • function 1810 will cause functions 1812 and 1814 to respond to the pressing of a speech button by recognizing speech during that press.
• In this embodiment, the "pressing" of a speech button is defined as a press of the button that lasts longer than a given duration, and a "click" as one that is shorter.
• The Press And Click To Utterance End recognition duration type has the benefit of enabling the use of one button to rapidly and easily select between a mode that allows a user to select a variable length extended recognition, and a mode that recognizes only a single utterance. If the current recognition duration type is the Press Continuous, Click Discrete To Utterance End type, function 1820 causes functions 1822 through 1828 to be performed. If the speech button is clicked, as just defined, functions 1822 and 1824 perform discrete recognition until the next end of utterance. If, on the other hand, the speech button is pressed, as previously defined, functions 1826 and 1828 perform continuous recognition as long as the speech button remains pressed.
  • This recognition duration type has the benefit of making it easy for users to quickly switch between continuous and discrete recognition merely by using different types of presses on a given speech button.
  • the other recognition duration types do not switch between continuous and discrete recognition.
• Function 1830 causes functions 1832 to 1840 to be performed.
  • functions 1833 through 1836 normally toggle recognition between off and on.
  • Function 1834 responds to a click by testing to see whether or not speech recognition is currently on. If so, and if the speech button being clicked is other than one that changes vocabulary, it responds to the click by turning off speech recognition.
  • function 1836 turns speech recognition on until a timeout duration has elapsed. The length of this timeout duration can be set by the user under the Entry Preferences option in the function menu 4602 shown in FIG. 46.
  • functions 1838 and 1840 will cause recognition to be on during the press but to be turned off at its end.
• This recognition duration type provides a quick and easy way for users to select with one button between toggling speech recognition on and off, and causing speech recognition to be turned on only during an extended press of a speech button.
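The recognition duration types discussed above can be summarized in a small dispatch function. The mode names, the click threshold, and the returned tuples are assumptions chosen for illustration, not the patent's actual identifiers.

```python
# Illustrative sketch of the recognition duration types described above.
# A "click" is a press shorter than a threshold; a "press" is longer.

CLICK_THRESHOLD = 0.3  # seconds; an assumed value

def recognition_plan(mode, press_duration):
    """Return (recognition_kind, stop_condition) for one button activation."""
    clicked = press_duration < CLICK_THRESHOLD
    if mode == "press_only":
        # Recognition occurs during, and only during, the press.
        return ("current", "button_release")
    if mode == "press_and_click_to_utterance_end":
        # Click: recognize a single utterance; press: recognize until release.
        return ("current", "utterance_end" if clicked else "button_release")
    if mode == "press_continuous_click_discrete":
        # Click: discrete recognition of one utterance; press: continuous.
        if clicked:
            return ("discrete", "utterance_end")
        return ("continuous", "button_release")
    if mode == "click_toggles":
        # Click: toggle recognition (with timeout); press: on during press only.
        return ("current", "toggle_or_timeout" if clicked else "button_release")
    raise ValueError(f"unknown mode: {mode}")
```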
• In response to the pressing of the Clear button 1112 shown in FIG. 11, functions 1309 through 1314 remove any correction window which might be displayed and clear the contents of the SIP buffer without sending any deletions to the operating system text input.
• The SIP text window 1104, shown in FIG. 11, is designed to hold a relatively small body of text.
• The Clear button enables a user to clear text from the SIP buffer, to prevent it from being overloaded, without causing corresponding deletions to be made to text in the application window.
• The Continue button 1114 shown in FIG. 11 is intended to be used when the user wants to dictate a continuation of the last dictated text, or text which is to be inserted at the current location in the SIP buffer window 1104, shown in FIG. 11.
  • function 1316 causes functions 1318 through 1330 to be performed.
  • Function 1318 removes any correction window, because the pressing of the Continue button indicates that the user has no interest in using the correction window.
• Next, function 1132 tests if the current cursor in the SIP buffer window has a prior language context that can be used to help in predicting the probability of the first word or words of any utterance recognized as a result of the pressing of the Continue button. If so, it causes that language context to be used. If not, and if there is currently no text in the SIP buffer, function 1326 uses the last one or more words previously entered in the SIP buffer as the language context at the start of recognition initiated by the Continue button. Next, function 1330 starts SIP buffer recognition, that is, recognition of text to be output to the cursor in the SIP buffer, using the current recognition duration mode.
• Function 1134 tests if the SIP is currently in correction mode. If so, it enters the backspace into the filter editor of the correction window.
  • the correction window 1200 shown in FIG. 12 includes a first choice window 1202.
• The correction window interface allows the user to select and edit one or more characters in the first choice window as being part of a filter string which identifies a sequence of initial characters belonging to the desired recognition word or words. If the SIP is in correction mode, pressing backspace will delete from the filter string any characters currently selected in the first choice window, and, if no characters are so selected, will delete the character to the left of the filter cursor 1204.
• If the SIP is not in correction mode, function 1136 will respond to the pressing of the Backspace button by entering a backspace character into the SIP buffer and outputting that same character to the operating system so that the same change can be made to the corresponding text in the application window 1106 shown in FIG. 11.
• The SIP responds to user selection of the Space button 1120 in substantially the same manner that it responds to a backspace, that is, by entering it into the filter editor if the SIP is in correction mode, and otherwise outputting it to the SIP buffer and the operating system.
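The shared routing of backspace and space described above (filter editor in correction mode, SIP buffer plus operating system otherwise) might be sketched as follows; the function and parameter names are illustrative assumptions.

```python
# Sketch of the character routing described above: in correction mode a
# character edits the filter string; otherwise it goes both to the SIP
# buffer and to the operating system, keeping the application text in sync.

def route_character(ch, in_correction_mode, filter_editor, sip_buffer, os_output):
    if in_correction_mode:
        filter_editor.append(ch)
    else:
        sip_buffer.append(ch)
        os_output.append(ch)
```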
• Functions 1350 and 1356 set the current recognition vocabulary to the name recognition vocabulary and start recognition according to the current recognition duration settings and other appropriate speech settings.
• These functions will treat the current recognition mode as either filter or SIP buffer recognition, depending on whether the SIP is in correction mode. This is because these other vocabulary buttons are associated with vocabularies used for inputting sequences of characters that are appropriate for defining a filter string or for direct entry into the SIP buffer.
• The large vocabulary and the name vocabulary are considered inappropriate for filter string editing and, thus, in the disclosed embodiment the current recognition mode is considered to be either re-utterance or SIP buffer recognition, depending on whether the SIP is in correction mode.
• In other embodiments, name and large vocabulary recognition could be used for editing a multiword filter.
• Functions 1404 through 1406 cause a list of all the words used by the International Communication Alphabet (or ICA) to be displayed, as is illustrated at numeral 4002 in FIG. 40.
• When the Continuous/Discrete button is pressed, functions 1418 through 1422 of FIG. 14 are performed. These toggle between a continuous recognition mode, which uses continuous speech acoustic models and allows multiword recognition candidates to match a given single utterance, and a discrete recognition mode, which uses discrete recognition acoustic models and allows only single word recognition candidates to be recognized for a single utterance.
• The function also starts speech recognition using either discrete or continuous recognition, as has just been selected by the pressing of the Continuous/Discrete button.
• Functions 1424 and 1426 call the function menu 4602 shown in FIG. 46.
  • This function menu allows the user to select from other options besides those available directly from the buttons shown in FIGS. 11 and 12.
• In response to the pressing of the Help button 1136 shown in FIG. 11, functions 1432 and 1434 of FIG. 14 call help mode.
• When the help mode is entered, a function 1902 displays a help window 2000 providing information about using the help mode, as illustrated in FIG. 20. During subsequent operation of the help mode, if the user touches a portion of the SIP interface, functions 1904 and 1906 display a help window with information about the touched portion of the interface that continues to be displayed as long as the user continues that touch. This is illustrated in FIG. 21, in which the user has used the stylus 904 to press the Filter button 1218 of the correction window. In response, a help window 2100 is shown that explains the function of the Filter button. If during the help mode a user double-clicks on a portion of the display, functions 1908 and 1910 display such a help window that stays up until the user presses another portion of the interface. This enables the user to use the scroll bar 2102 shown in the help window of FIG. 21 to scroll through and read help information too large to fit in the help window at one time.
• Help windows can also have a Keep Up button 2100, to which a user can drag from an initial down press on a portion of the SIP user interface of interest, so as to select to keep the help window up until the touching of another portion of the SIP user interface.
• The displayChoiceList routine is called with the following parameters: a selection parameter, a filter string, a filter range, a word type parameter, and a NotChoiceList parameter.
  • the selection parameter indicates the text in the SIP buffer for which the routine has been called.
  • the filter string indicates a sequence of one or more characters or character indicating elements that define the set of one or more possible spellings with which the desired recognition output begins.
• The filter range parameter defines two character sequences, which bound a section of the alphabet in which the desired recognition output falls.
  • the word type parameter indicates that the desired recognition output is of a certain type, such as a desired grammatical type.
• The NotChoiceList parameter contains a list of one or more words that the user's actions indicate are not a desired word.
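The five parameters just listed can be gathered into a single request structure. This is merely an illustrative container; the field names and types are assumptions, not the patent's definitions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChoiceListRequest:
    """Illustrative container for the displayChoiceList parameters above."""
    selection: str                        # text in the SIP buffer being corrected
    filter_string: str = ""               # known initial characters of the target
    filter_range: Optional[tuple] = None  # (low_word, high_word) alphabetic bounds
    word_type: Optional[str] = None       # e.g. a grammatical type constraint
    not_choice_list: list = field(default_factory=list)  # rejected candidates
```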
• Function 2202 of the displayChoiceList routine calls a getchoices routine, shown in FIG. 23, with the filter string and filter range parameters with which the displayChoiceList routine has been called and with an utterance list associated with the selection parameter.
  • the utterance list 2404 stores sound representations of one or more utterances that have been spoken as part of the desired sequence of one or more words associated with the current selection.
• When function 2202 of FIG. 22 calls the getchoices routine, it places in the utterance list a representation 2400, shown in FIG. 24, of that portion of the sound 2402 from which the words of the current selection have been recognized. As is indicated in FIG. 24, the first entry 2400 in the utterance list is part of a continuous utterance 2402.
• The present invention enables a user to add additional utterances of a desired sequence of one or more words to a selection's utterance list, and recognition can be performed on all these utterances together to increase the chance of correctly recognizing a desired output.
  • such additional utterances can include both discrete utterances, such as entry 2400A, as well as continuous utterances, such as entry 2400B.
• Each additional utterance contains information, as indicated by the numerals 2406 and 2408, that indicates whether it is a continuous or discrete utterance and the vocabulary mode in which it was dictated.
• In FIGS. 24 and 25, the acoustic representations of utterances in the utterance list are shown as waveforms. It should be appreciated that in many embodiments, other forms of acoustic representation will be used, including parameter frame representations such as the representation 110 shown in FIGS. 1 and 2.
• FIG. 25 is similar to FIG. 24, except that in it the original utterance list entry is a sequence of discrete utterances. It shows that additional utterance entries used to help correct the recognition of an initial sequence of one or more discrete utterances can also include either discrete or continuous utterances, 2500A and 2500B, respectively.
• The getchoices routine 2300 includes a function 2302 that tests to see if there has been a prior recognition for the selection for which this routine has been called that was performed with the current utterance list and filter values (that is, filter string and filter range values). If so, it causes function 2304 to return with the choices from that prior recognition.
• If not, function 2306 tests to see if the filter range parameter is null. If it is not null, function 2308 tests to see if the filter range is more specific than the current filter string and, if so, it changes the filter string to the common letters of the filter range. If not, function 2312 nulls the filter range.
• The filter range is nulled in this case because the filter string contains more detailed information than it does.
• A filter range is selected when a user selects two choices on a choice list as an indication that the desired recognition output falls between them in the alphabet.
• Function 2310 causes the filter string to correspond to those shared letters. This is done so that, when the choice list is displayed, the shared letters will be indicated to the user as ones that have been confirmed as corresponding to the initial characters of the desired output.
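Deriving a confirmed filter string from the shared initial letters of a filter range, as functions 2308 and 2310 do, amounts to taking the common prefix of the two range bounds: any word alphabetically between the bounds necessarily begins with that prefix. A minimal sketch, with an illustrative function name:

```python
import os

def range_to_prefix(low_word, high_word):
    # The shared initial letters of the two range bounds are, by definition,
    # initial letters of every word that falls alphabetically between them.
    return os.path.commonprefix([low_word, high_word])
```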
  • function 2316 causes function r
• Function 2318 calls a filterMatch routine, shown in FIG. 26, for each such prior recognition candidate with the candidate's prior recognition score and the current filter definitions, and function 2320 deletes those candidates returned as a result of such calls that have scores below a certain threshold.
  • the filterMatch routine 2600 performs filtering upon word candidates.
• This filtering process is extremely flexible, since it allows filters to be defined by filter string, filter range, or word type. It is also flexible because it allows a combination of word type and either filter string or filter range specifications, and because it allows ambiguous filtering, including ambiguous filters whose elements in a filter string are not only ambiguous as to the value of their associated characters but also ambiguous as to the number of characters in their associated character sequences.
• When we say that a filter string, or a portion of a filter string, is ambiguous, we mean that a plurality of possible character sequences can be considered to match it.
• Ambiguous filtering is valuable when used with a filter string input which, although reliably recognized, does not uniquely define a single character, such as is the case with ambiguous phone key filtering of the type described below with regard to a cellphone embodiment of many aspects of the present invention.
  • Ambiguous filtering is also valuable with filter string input that cannot be recognized with a high degree of certainty, such as recognition of letter names, particularly if the recognition is performed continuously.
• With such recognition, not only is there a high degree of likelihood that the best choice for the recognition of the sequence of characters will include one or more errors, but there is also a reasonable probability that the number of characters recognized in a best-scoring recognition candidate might differ from the number spoken. But spelling all or the initial characters of a desired output is a very rapid and intuitive way of inputting filtering information, even though the best choice from such recognition will often be incorrect, particularly when dictating under adverse conditions.
• The filterMatch routine is called for each individual word candidate. It is called with that word candidate's prior recognition score, if any, or else with a score of 1. It returns a recognition score equal to the score with which it has been called, multiplied by an indication of the probability that the candidate matches the current filter values.
• Functions 2602 through 2606 of the filterMatch routine test to see if the word type parameter has been defined and, if so and if the word candidate is not of the defined word type, return from the filterMatch function with a score of 0, indicating that the word candidate is clearly not compatible with the current filter values.
• Functions 2608 through 2614 test to see if a current value is defined for the filter range. If so, and if the current word candidate is alphabetically between the starting and ending words of that filter range, they return with an unchanged score value. Otherwise they return with a score value of 0.
• Function 2616 determines if there is a defined filter string. If so, it causes functions 2618 through 2653 to be performed. Function 2618 sets the current candidate character, a variable that will be used in the following loop, to the first character in the word candidate for which filterMatch has been called. Next, a loop 2620 is performed until the end of the filter string is reached by its iterations. This loop includes functions 2622 through 2651.
• The first function in each iteration of this loop is the test by step 2622 to determine the nature of the next element in the filter string.
• In the embodiment shown, three types of filter string elements are allowed: an unambiguous character, an ambiguous character, and an ambiguous length element representing a set of ambiguous character sequences, which can be of different lengths.
• An unambiguous character unambiguously identifies a letter of the alphabet or other character, such as a space. It can be produced by unambiguous recognition of any form of alphabetic input, but it is most commonly associated with letter or ICA word recognition, keyboard input, or non-ambiguous phone key input in phone implementations. Any recognition of alphabetic input can be treated as unambiguous merely by accepting the single best-scoring spelling output by the recognition as an unambiguous character sequence.
• An ambiguous character is one that can have multiple letter values but has a definite length of one character. As stated above, this can be produced by the ambiguous pressing of keys in a telephone embodiment, or by speech or character recognition of letters. It can also be produced by continuous recognition of letter names in which all the best-scoring character sequences have the same character length.
• An ambiguous length element is commonly associated with the output of continuous letter name recognition or handwriting recognition. It represents multiple best-scoring letter sequences against handwriting or spoken input, some of which sequences can have different lengths.
• If the next element in the filter string is an unambiguous character, function 2624 causes functions 2626 through 2630 to be performed.
• Function 2626 tests to see if the current candidate character matches the current unambiguous character. If not, the call to filterMatch returns with a score of 0 for the current word candidate. If so, function 2630 increments the position of the current candidate character.
• If the next element in the filter string is an ambiguous character, function 2632 causes functions 2634 through 2642 to be performed.
• Function 2634 tests to see if the current candidate character fails to match one of the recognized values of the ambiguous character. If so, function 2636 returns from the call to filterMatch with a score of 0. Otherwise, functions 2638 through 2642 alter the current word candidate's score as a function of the probability of the ambiguous character matching the current candidate character's value, and then increment the current candidate character's position.
• If the next element in the filter string is an ambiguous length element, function 2644 causes a loop 2646 to be performed for each character sequence represented by the ambiguous length element.
• This loop comprises functions 2648 through 2652.
• Function 2648 tests to see if there is a sequence of characters starting at the current candidate's character position that matches the current character sequence of the loop 2646. If so, function 2649 alters the word candidate's score as a function of the probability of the recognized matching sequence represented by the ambiguous length element, and then function 2650 increments the current position of the current candidate character by the number of characters in the matching ambiguous length element sequence. If there is no sequence of characters starting at the current word candidate's character position that matches any of the sequences of characters associated with the ambiguous length element, functions 2651 and 2652 return from the call to filterMatch with a score of 0.
• Once the loop 2620 is complete, function 2653 returns from filterMatch with the current word candidate's score produced by the loop 2620.
• If step 2616 finds that there is no filter string defined, step 2654 merely returns from filterMatch with the current word candidate's score unchanged.
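The filterMatch scoring just described (functions 2616 through 2654) can be sketched as follows. The element encodings, tuple formats, and probabilities are illustrative assumptions; the patent does not specify these data formats.

```python
# Hedged sketch of filterMatch-style scoring. Filter elements are modeled as:
#   ("exact", "a")                     - unambiguous character
#   ("ambig", {"a": 0.6, "o": 0.4})    - ambiguous character with probabilities
#   ("seq", {"ab": 0.7, "abc": 0.3})   - ambiguous length element
# (sequences of possibly different lengths, each with a probability).

def filter_match(candidate, elements, score=1.0, word_type=None, typed=None,
                 filter_range=None):
    # Word type check: a candidate of the wrong type scores 0.
    if word_type is not None and typed != word_type:
        return 0.0
    # Filter range check: candidates inside the bounds keep their score.
    if filter_range is not None:
        low, high = filter_range
        return score if low < candidate < high else 0.0
    pos = 0  # position of the current candidate character
    for kind, value in elements:
        if kind == "exact":
            if pos >= len(candidate) or candidate[pos] != value:
                return 0.0
            pos += 1
        elif kind == "ambig":
            if pos >= len(candidate) or candidate[pos] not in value:
                return 0.0
            score *= value[candidate[pos]]  # weight by match probability
            pos += 1
        elif kind == "seq":
            for seq, p in value.items():
                if candidate.startswith(seq, pos):
                    score *= p
                    pos += len(seq)  # advance by the matched sequence length
                    break
            else:
                return 0.0           # no represented sequence matches
    return score
```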
• Function 2322 then tests to see if the number of prior recognition candidates left after the deletions, if any, of function 2320 is below a desired number of candidates. Normally this desired number would represent the number of choices desired for use in a choice list that is to be created. If the number of prior recognition candidates is below such a desired number, functions 2324 through 2336 are performed. Function 2324 performs speech recognition upon every one of the one or more entries in the utterance list 2400 shown in FIGS. 24 and 25.
• This recognition process includes a test to determine if there are both continuous and discrete entries in the utterance list and, if so, limits the number of possible word candidates in recognition of the continuous entries to a number corresponding to the number of individual utterances detected in one or more of the discrete entries.
• The recognition of function 2324 also includes recognizing each entry in the utterance list with either continuous or discrete recognition, depending upon the respective mode that was in effect when each was received, as indicated by the continuous or discrete recognition indication 2406 shown in FIGS. 24 and 25.
• The recognition of each utterance list entry also includes using the filterMatch routine previously described and using a language model in selecting a list of best-scoring acceptable candidates for the recognition of each such utterance.
• The vocabulary indicator 2408 shown in FIGS. 24 and 25 for the most recent utterance in the utterance list is used as a word type filter to reflect any indication by the user that the desired word sequence is limited to one or more words
  • functions 2334 and 2336 pick a list of best-scoring recognition candidates for the utterance list based on a combination of scores from different recognitions. It should be appreciated that in some embodiments of this aspect of the invention, a combination of scoring could be used from the recognition of the different utterances so as to improve the effectiveness of the recognition using more than one utterance.
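One simple way to combine scores across multiple recognitions of the same selection is to sum each candidate's score over all recognition runs. This is only an illustrative sketch — the patent does not specify the combination rule, and the function name and list-of-pairs result format are assumptions:

```python
from collections import defaultdict

def combine_recognition_scores(result_lists, n_best=6):
    """Sum each candidate word's score over several recognitions of the
    same selection (the original utterance plus any re-utterances) and
    return the n_best highest-scoring candidates, best first."""
    totals = defaultdict(float)
    for results in result_lists:
        for word, score in results:
            totals[word] += score
    return sorted(totals, key=lambda w: -totals[w])[:n_best]

runs = [
    [("knew", 0.5), ("new", 0.4), ("gnu", 0.1)],  # first utterance
    [("new", 0.7), ("knew", 0.2), ("gnu", 0.1)],  # re-utterance
]
print(combine_recognition_scores(runs, n_best=2))  # -> ['new', 'knew']
```

A word that scores moderately well in every recognition can thereby beat a word that scores highest in only one, which is the benefit of using more than one utterance.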
  • sequences of one or more characters, the choices produced by function 2344 will be scored correspondingly by a scoring mechanism corresponding to that shown in functions 2616 through 2653 of FIG. 26.
  • function 2204 tests to see if any filter has been defined for the current selection, if there has been any utterance added to the current selection's utterance list, and if the selection for which displayChoiceList has been called is not in the notChoiceList, which includes a list of one or more words that the user's inputs indicate are not desired as recognition candidates. If these conditions are met, function 2206 makes that selection the first choice for display in the correction window which the routine is to create. Next, function 2210 removes any other candidates from the list of candidates produced by the call to the getchoices routine that are contained in the notChoiceList. Next, if the first choice has not already been defined, function 2212 makes the best-scoring candidate returned by the call to getchoices the first choice for the subsequent correction window display. If there is no single best-scoring recognition candidate, alphabetical order can be used to select the one of the candidates which is to be the first choice.
  • Next, function 2218 selects those characters of the first choice which correspond to the filter string, if any, for special display. As will be described below, in the preferred embodiments, characters in the first choice which correspond to an unambiguous filter are indicated in one way, and characters in the first choice which correspond to an ambiguous filter are indicated in a different way, so that the user can appreciate which portions of the filter string correspond to which type of filter elements.
  • Next, function 2220 places a filter cursor before the first character of the first choice that does not correspond to the filter string. When there is no filter string defined, this cursor will be placed before the first character of the first choice.
  • Next, function 2222 causes steps 2224 through 2228 to be performed if the getchoices routine returned any candidates other than the current first choice.
  • function 2224 creates a first character-ordered
  • functions 2226 and 2228 create a second character-ordered choice list of up to a preset number of screens for all such choices from the remaining best-scoring candidates.
  • Next, function 2230 displays a correction window showing the current first choice, an indication of which of its characters the
  • the displayChoiceList routine can be called with a null value for the current selection as well as for a text selection which has no associated utterances. In this case, it will respond to alphabetic input by performing word completion based on the operation of functions 2338 and 2340. It allows the user to select choices for the recognition of an utterance without the use of filtering or re-utterances, to use filtering and/or re-
  • functions 1436 and 1438 respond to a tap on a word in the SIP buffer by calling the displayChoiceList routine, which, in turn, causes a correction window such as the correction window 1200 shown in FIG. 12 to be displayed.
  • the ability to display a correction window with its associated choice list merely by tapping on a word provides a fast and convenient way for enabling a user to correct single-word errors.
  • functions 1440 through 1444 escape from any current correction window that might be displayed, and start SIP buffer recognition according to the current recognition duration modes and settings, using the current language context of the current selection.
  • the recognition duration logic responds to the duration associated with such a double-click in determining whether to respond as if there has been either a press or a click
  • function 1446 causes functions 1448 to 1452 to be performed.
  • Function 1448 plants a cursor at the location of the tap. If the tap is located at any point in the SIP buffer window which is after the end of the text in the SIP buffer, the cursor will be placed after the last word in that buffer. If the tap is a double tap, functions 1450 and 1452 start SIP buffer recognition at the new cursor location according to the current recognition duration modes and other settings, using the duration of the second touch of the double tap for determining whether it is to be responded to as a press or a click.
  • FIG. 15 is a continuation of the pseudocode described above with regard to FIGS. 13 and 14.
  • FIG. 22 with all of the words that are all or partially dragged across as the current selection and with the acoustic data associated with the recognition of those words, if any, as the first entry in the utterance list.
  • the displayChoiceList function with that word as the selection, with that word added to the notChoiceList, with the dragged initial portion of the word as the filter string, and with the acoustic data associated with that word as the first entry in the utterance list.
  • This programming interprets the fact that a user has dragged across only the initial part of a word as an indication that the entire word is not the desired choice, as indicated by the fact that the word is added to the notChoiceList.
  • the displayChoiceList routine with the word as the selection, with the selection added to the notChoiceList, with the undragged initial portion of the word as the filter string, and with the acoustic data associated with the selected word as the first entry in the utterance list.
  • functions 1514 and 1516 display a warning to the user that the buffer is close to full.
  • this warning informs the user that the buffer will be automatically cleared if more than an additional number of characters are added to the buffer, and requests that the user verify that the text currently in the buffer is correct and then press Talk or continue, which will clear the buffer.
  • step 1520 tests to see if the cursor is currently at the end of the SIP buffer. If not, function 1522 outputs to the operating system a number of backspaces equal to the distance from the last letter of the SIP buffer to the current cursor position within that buffer.
  • Next, function 1526 causes the text input, which can be composed of one or more characters, to be output into the SIP buffer at its current cursor location. Steps 1527 and 1528 output the same text sequence, and any following text in the SIP buffer, to the text input of the operating
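This backspace-and-retype synchronization can be sketched as follows. The sketch is a hypothetical simplification: it models the OS text as a flat keystroke stream (`"\b"` standing for one backspace) and ignores the real interface's selection handling:

```python
def sync_output(buffer_text, cursor_pos, new_text):
    """Replay an insertion made at cursor_pos in the SIP buffer into a
    keystroke stream for the OS: backspace over every character after
    the cursor, then retype the new text plus the text that followed."""
    keystrokes = "\b" * (len(buffer_text) - cursor_pos)
    keystrokes += new_text + buffer_text[cursor_pos:]
    updated = buffer_text[:cursor_pos] + new_text + buffer_text[cursor_pos:]
    return keystrokes, updated

ks, updated = sync_output("hello world", 5, ",")
print(updated)  # -> hello, world
```

Because the application only ever receives keystrokes, a mid-buffer edit must be expressed as "erase back to the edit point, then retype everything from there on", which is exactly what steps 1522 and 1526 through 1528 do.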
  • function 1537 calls the displayChoiceList routine for the recognized text, and function 1538 turns off correction mode.
  • function 1538 prevents this from being the case when one-at-a-time mode is being used. As has been described above, this is because in one-at-a-time mode a correction window is displayed automatically each time speech recognition is performed upon an utterance of a word, and thus there is a relatively high likelihood that a user intends input supplied to the non-correction window aspects of the SIP interface to be used for purposes other than input into the correction window.
  • the correction window is being displayed as a result of specific user input indicating a desire to correct one or more words, correction mode is entered so that certain non -correction window inputs will be directed to the correction window.
  • Function 1539 tests to see if the following set of conditions is met: the SIP is in one-at-a-time mode, a correction window is displayed, but the system is not in correction mode. This is the state of affairs which normally exists after each utterance of a word in one-at-a-time mode. If these conditions exist, function 1540 responds to any of the inputs above in FIGS.
  • functions 1544 and 1546 cause the SIP program to exit the correction window without changing the current selection.
  • functions 1548 and 1550 delete the current selection in the SIP buffer and send an output to the operating system, which causes a corresponding change to be made to any text in the application window which corresponds to that in the SIP buffer.
  • function 1552 causes functions 1553 to 1556 to be performed.
  • Function 1553 deletes the current selection in the SIP buffer corresponding to the correction window and sends an output to the operating system so as to cause a corresponding change to text in the application window.
  • Function 1554 sets the recognition mode to the new utterance default, which will normally be the large vocabulary recognition mode, and can be set by the user to be either a continuous or a discrete recognition mode as he or she desires.
  • Function 1556 starts SIP buffer recognition using the current recognition duration mode and other recognition settings.
  • SIP buffer recognition is recognition that provides input to the SIP buffer, according to the operation of functions 1518 to 1538 described above.
  • FIG. 16 continues the illustration of the response of the main loop of the SIP program to input received during the display of a correction window.
  • functions 1600 and 1602 cause functions 1603 through 1610 to be performed.
  • Function 1603 sets the SIP program to the correction mode if it is not currently in it. This will happen if the correction window has been displayed as a result of a discrete word recognition in one-at-a-time mode and the user responds by pressing a button in the correction window, in this case the Re-utterance button, indicating an intention to use the correction window for correction purposes.
  • Function 1604 sets the recognition mode to the current recognition mode associated with re-utterance recognition.
  • function 1606 receives one or more utterances according to the current re-utterance recognition duration mode and other recognition settings, including vocabulary.
  • function 1608 adds the one or more utterances received by function 1606 to the utterance list for the correction window selection, along with an indication of the vocabulary mode at the time of those utterances and whether continuous or discrete recognition is in effect. This causes the utterance list 2404 shown in FIGS. 24 and 25 to have an additional utterance.
  • function 1600 and ⁇ anl610 calls te—the displayChoiceList routine of FIG. 22, described above. This in turn will call began the getchoices choice ia function described above the gardcnregarding jjj0 FIG. 23 and will cause functions 2306 ⁇ i;! ⁇ 6e ⁇ gg ⁇ _ ⁇ ' through 2336 to perform rccntcrs re-utterance recognition using the new Utterance list entry.
  • If the current filter entry mode is an entry window mode, functions 1618 and 1620 call the appropriate entry window. As described below, in the embodiment of the invention shown, these entry window modes correspond to a character recognition entry mode, a handwriting recognition entry mode, and a keyboard entry mode.
  • FIG. 27 to be called for the current first choice word.
  • the current first choice will normally be the selection for which the correction window has been called. This means that by selecting one or more words in the SIP buffer and by pressing the Word Form button in the correction window, a user can rapidly select a list of alternate forms for any such selection.
  • FIG. 27 illustrates the function of the word form list routine. If a correction window is already displayed when it is called, functions 2702 and 2704 treat the current best choice as the selection for which the word form list will be displayed. If the current selection is one word, function 2706 causes functions 2708 through 2714 to be performed. If the current selection has any homonyms, function 2708 places them at the start of the word form choice list. Next, step 2710 finds the root form of the selected word, and function 2712 creates a list of alternate grammatical forms for the word. Then function 2714 alphabetically orders all these grammatical forms in the choice list after any homonyms which may have been added to the list by function 2708.
  • If the current selection is more than one word, function 2716 causes functions 2718 through 2728 to be performed.
  • Function 2718 tests to see if the selection has any spaces between its words. If so, function 2720 adds a copy of the selection to the choice list which has no such spaces between its words, and function 2722 adds a copy of the selection with the spaces replaced by hyphens. Although not shown in FIG. 27, additional functions can be performed to replace hyphens with spaces or with the absence of spaces. If the selection has multiple elements subject to the same spelled/non-spelled transformation, such transformations are added to the choice list.
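The space-removal and hyphenation transforms described for multi-word selections are simple string operations; a minimal Python sketch (the function name is hypothetical, and only these two of the routine's transforms are shown):

```python
def multiword_forms(selection):
    """Two of the alternate forms offered for a multi-word selection:
    the words joined with no spaces, and the words joined with hyphens
    (corresponding to functions 2720 and 2722)."""
    words = selection.split()
    if len(words) < 2:
        return []  # single-word selections take the homonym/grammar path
    return ["".join(words), "-".join(words)]

print(multiword_forms("voice mail"))  # -> ['voicemail', 'voice-mail']
```

Offering these forms as ready-made choices saves the user from respeaking or retyping a phrase whose words were recognized correctly but joined incorrectly.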
  • This, for example, will transform a series of number names into a numerical equivalent, or occurrences of the word
  • a correction window showing the selection as the first choice, the filter cursor at the start of the first choice, and a scrollable choice list.
  • the filter cursor could be placed after that common sequence with the common sequence indicated as an unambiguous filter string.
  • the word form list provides one single alphabetically ordered list of optional word forms.
  • options can be ordered in terms of frequency of use, or there could be a first and a second alphabetically ordered choice list, with the first choice list containing a set of the most commonly selected optional forms which will fit in the correction window at one time, and the second list containing less commonly used word forms.
  • the word form list provides a very rapid way of correcting a very common type of speech recognition error, that is, an error in which the first choice is a homonym of the desired word or is an alternate grammatical form of it.
  • functions 1630 and 1632 cause an audio playback of the first entry in the utterance list associated with the correction window's selection, if any such entry exists.
  • This enables a user to hear exactly what was spoken with regard to a mis-recognized sequence of one or more words.
  • the preferred embodiments enable a user to select a setting which causes such audio to be played automatically when a correction window is first displayed. If the Add Word button 1226 shown in
  • functions 1634 and 1636 call a dialog box that allows a user to enter the current first choice word into either the active or backup vocabulary.
  • the system uses a subset of its total vocabulary as the active vocabulary that is available for recognition during the normal recognition using the large vocabulary mode.
  • Function 1636 allows a user to make a word that is normally in the backup vocabulary part of the active vocabulary. It also allows a word that is in neither vocabulary, but which has been spelled in the first choice window by use of alphabetic input, to be added to either the active or backup vocabulary. It should be appreciated that
  • the Add Word button 1226 will only be in a non-grayed state when the first choice word is not currently in the active vocabulary. This provides an indication to the user that he or she may want to add the first choice to either the active or backup vocabulary.
  • If the user selects the Check button 1228 shown in FIG. 12, functions 1638 through 1648 remove the current correction window, output its first choice to the SIP buffer, and feed to the operating system a sequence of keystrokes necessary to make a corresponding change to text in the application window.
  • functions 1650 through 1653 remove the current correction window, and output the selected choice to the SIP buffer and feed the operating system a sequence of keystrokes necessary to make the corresponding change in the application window.
  • function 1654 causes functions 1656 through 1658 to be performed.
  • Function 1656 changes to correction mode if the system is not already in it.
  • Function 1656 makes the choice associated with the tapped Choice Edit button the first choice and the current filter string, and then function 1658 calls displayChoiceList with the new filter string. As will be described below, this enables a user to select a choice word or sequence of words as the current filter string and then to edit that filter string, normally by deleting any characters from its end which disagree with the desired word.
  • functions 1664 through 1666 change the system to correction mode if it is not in it, and call displayChoiceList with the dragged choice added to the choice list and with the dragged initial portion of the choice as the filter string.
  • FIG. 17 provides the final continuation of the list of functions which the SIP recognizer performs in response to correction window input.
  • functions 1702 and 1704 enter the correction mode if the system is not already in it, and call displayChoiceList with the partially dragged choice added to the notChoiceList and with the undragged initial portion of the choice as the filter string.
  • functions 1710 through 1712 enter the correction mode if the SIP is not already in it, and move the filter cursor to the tapped location. No call is made to displayChoiceList at this time because the user has not yet made any change to the filter.
  • function 1714 causes functions 1718 through 1720 to be performed.
  • Function 1718 calls the filter edit routine of FIGS. 28 and 29 when a backspace is input.
  • the filter edit routine 2800 is designed to aid the user in the editing of a filter with a combination of unambiguous, ambiguous, and/or ambiguous length filter elements.
  • This routine includes a function 2802, which tests to see if there are any characters in the choice with which it has been called before the current location of the filter cursor. If so, it causes function 2804 to define the filter string with which the routine has been called as the old filter string, and function 2806 makes the characters in the choice before the location of the filter cursor the new filter string, with all the characters in that string unambiguously defined. This enables any part of a first choice before the location of an edit to be automatically confirmed as correct filter characters.
  • Next, function 2807 tests to see if the input with which filter edit has been called is a backspace. If so, it causes functions 2808 through 2812 to be performed.
  • Functions 2808 and 2810 delete the last character of the new filter string if the filter cursor is a non-selection cursor. If the filter cursor corresponds to a selection of one or more characters in the current first choice, these characters will already not be included in the new filter by the operation of function 2806 just described. Then function 2812 clears the old filter
  • functions 2814 and 2816 add the one or more unambiguous characters to the end of the new filter string.
  • functions 2818 and 2820 place an element representing each ambiguous character in the sequence at the end of the new filter.
  • If function 2822 determines that the input to the filter edit routine is an ambiguous length element, it causes functions 2824 through 2832 to be performed.
  • Function 2824 selects the best-scoring sequences of letters associated with the ambiguous input which, if added to the prior unambiguous part of the filter, would correspond to all or an initial part of a vocabulary word. It should be remembered that, when this function is performed, all of the prior portions of the new filter string will have been confirmed by the operation of function 2806, described above.
  • function 2826 tests to see if there are any sequences selected by function 2824 above a certain minimum score. If there are not, it will cause function 2828 to select the best-scoring letter sequences independent of vocabulary.
  • functions 2830 and 2832 associate the character sequences selected by the operation of functions 2824 through 2828 with a new ambiguous filter element, and add that new ambiguous filter element to the end of the new filter string.
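The selection performed by functions 2824 through 2828 can be sketched as follows. This is an assumed simplification — the patent does not give code, and the scored-pair input format, threshold value, and function name are all hypothetical:

```python
def ambiguous_element_sequences(scored_sequences, confirmed_prefix,
                                vocabulary, min_score=0.3):
    """Keep the candidate letter sequences that extend the confirmed
    filter prefix into all or the start of a vocabulary word; if none
    of those reaches min_score, fall back to the best-scoring sequences
    regardless of vocabulary. Returns (sequence, score) pairs, best first."""
    in_vocab = [
        (seq, score) for seq, score in scored_sequences
        if any(word.startswith(confirmed_prefix + seq) for word in vocabulary)
    ]
    if not any(score >= min_score for _, score in in_vocab):
        in_vocab = scored_sequences  # vocabulary-independent fallback
    return sorted(in_vocab, key=lambda p: -p[1])

vocab = {"embedded", "embark", "ember"}
candidates = [("ed", 0.6), ("ar", 0.5), ("xz", 0.9)]
print(ambiguous_element_sequences(candidates, "emb", vocab))
# -> [('ed', 0.6), ('ar', 0.5)]
```

The vocabulary constraint prunes acoustically plausible but unspellable sequences, while the fallback keeps the filter usable when the user is spelling an out-of-vocabulary word.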
  • This loop comprises functions 2836 through 2850 shown in the remainder of FIG. 28 and functions 2900 through 2922 shown in FIG. 29.
  • functions 2836 and 2838 add the old element to the end of the new filter string if it extends beyond those new elements. This is done because editing of a filter string other than by use of the Backspace button does not delete previously entered filter information that corresponds to part of the prior filter to the right of the new edit.
  • function 2840 causes functions 2842 through 2850 to be performed.
  • Function 2842 performs a loop for each character sequence represented by the new ambiguous length element that has been added to the filter string.
  • the loop performed for each such character sequence of the new ambiguous length element includes a loop 2844 performed for each character sequence which agrees with
  • This inner loop 2844 includes a function 2846, which tests to see if the old element matches and extends beyond the current sequence in the new element. If so, function 2848 adds to the list of character sequences represented by the new ambiguous length element a new sequence of characters corresponding to the current sequence from the new element plus the portion of the sequence from the old element that extends beyond that current sequence from the new element.
  • function 2902 is a loop which is performed for each sequence represented by the old ambiguous length element. It is composed of a test 2904 that checks to see if the current sequence from the old element matches and extends beyond the new fixed length element. If so, function 2906 creates a new character sequence corresponding to that extension from the old element that extends beyond the new.
  • Next, function 2908 tests to see if any new sequences have been created by function 2906 and, if so, causes function 2910 to add a new ambiguous length element to the end of the new filter, after the new element.
  • This new ambiguous length element represents the possibility of each of the sequences created by function 2906.
  • a probability score is associated with each such new sequence based on the relative probability scores of each of the character sequences which were found by the loop 2902 to match the current new fixed length element.
  • function 2912 causes functions 2914 through 2920 to be performed.
  • Function 2914 is a loop that is performed for each character sequence in the new ambiguous length element. It is composed of an inner loop 2916 which is performed for each character sequence in the old ambiguous length element.
  • This inner loop is composed of functions 2918 and 2920, which test to see if the character sequence from the old element matches and extends beyond the current character sequence from the new element. If so, they associate with the new ambiguous length element a new character sequence corresponding to the current sequence from the new element plus the extension from the current old element character sequence.
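The "match and extend" rule used by functions 2846/2848 and 2918/2920 can be illustrated with a small sketch; the set-based representation of an ambiguous element's sequences is an assumption made for clarity:

```python
def match_and_extend(new_seqs, old_seqs):
    """For each sequence represented by the new element, if a sequence
    from the old element matches it and extends beyond it, also represent
    the combined spelling (the new sequence plus the old extension)."""
    merged = set(new_seqs)
    for new in new_seqs:
        for old in old_seqs:
            if old.startswith(new) and len(old) > len(new):
                # new sequence plus the extension old[len(new):]
                merged.add(new + old[len(new):])
    return sorted(merged)

print(match_and_extend(["em", "en"], ["ember", "ends"]))
# -> ['em', 'ember', 'en', 'ends']
```

This is how previously entered filter information to the right of an edit is preserved: old spellings that agree with the newly edited element survive as longer combined sequences rather than being discarded.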
  • function 1720 calls displayChoiceList for the selection with the new filter string that has been returned by the call to filter edit.
  • Function 1724 tests to see if the system is in one-at-a-time recognition mode and if the filter input has been produced by speech recognition. If so, it causes functions 1726 to 1730 to be performed. Function 1726 tests to see if a filter character choice window, such as window
  • function 1728 closes that filter choice window and function 1730 calls filter edit with the first choice filter character as input. This causes all previous characters in the filter string to be treated as an unambiguously defined filter sequence.
  • function 1732 calls filter edit for the new filter input which is causing operation of function 1722 and the functions listed below it.
  • function 1734 calls displayChoiceList for the current selection and the new filter string.
  • functions 1736 and 1738 call the filter character choice routine with the filter string returned by filter edit and with the newly recognized filter input character as the selected filter character.
  • FIG. 30 illustrates the operation of the filter character choice subroutine 3000. It includes a function 3002 which tests to see if the selected filter character with which the routine has been called corresponds to either an ambiguous character or an unambiguous character in the current filter string having multiple best choice characters associated with it. If this is the case, function 3004 sets a filter character choice list equal to all characters associated with that character. If the number of characters is more than will fit on the filter character choice list at one time, the choice list can have scrolling buttons to enable the user to see such additional characters. Preferably the choices are displayed in alphabetical order to make it easier for the user to rapidly scan for a desired character.
  • The routine 3000 also includes a function 3006 which tests to see if the selected filter character corresponds to a character of an ambiguous length filter string element in the current filter string. If so, it causes functions 3008 through 3014 to be performed. Function 3008 tests to see if the selected filter character is the first character of the ambiguous length element. If so, function 3010 sets the filter character choice list equal to all the first characters in any of the ambiguous element's associated character sequences.
  • Otherwise, functions 3012 and 3014 set the filter character choice list equal to all characters in any character sequences represented by the ambiguous element that are preceded by the same characters as is the selected filter character in the current first choice.
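Functions 3008 through 3014 can be collapsed into one prefix-matching rule, since for the first character the preceding prefix is empty. A minimal sketch under that assumption (the function name and list-of-strings representation of the element's sequences are hypothetical):

```python
def filter_char_choices(element_sequences, first_choice, char_index):
    """Choices offered for one character position of an ambiguous-length
    filter element: the characters appearing at char_index in any
    represented sequence whose preceding characters match the first
    choice's prefix. Returned alphabetically, as in the display."""
    prefix = first_choice[:char_index]  # empty for the first character
    return sorted({
        seq[char_index] for seq in element_sequences
        if len(seq) > char_index and seq.startswith(prefix)
    })

seqs = ["cat", "car", "cob"]
print(filter_char_choices(seqs, "cat", 0))  # -> ['c']
print(filter_char_choices(seqs, "cat", 1))  # -> ['a', 'o']
```

Conditioning on the first choice's preceding characters keeps the choice list short: only continuations consistent with what the user has already implicitly accepted are offered.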
  • function 3016 displays that choice list in a window, such as the window 3906 shown in FIG. 39
  • function 1740 causes functions 1742 through 1746 to be performed.
  • Function 1742 closes the filter choice window in which such a selection has been made.
  • Function 1744 calls the filter edit function for the current filter string with the character that has been selected in the filter choice window as the new input.
  • function 1746 calls the displayChoiceList routine with the new filter string returned by filter edit.
  • function 1747 causes functions 1748 through 1750 to be performed.
  • Function 1748 calls the filter character choice routine for the character which has been dragged upon, which causes a filter character choice window to be generated for it if there are any other character choices associated with that character. If the drag is released over a filter choice character in this window, function 1749 generates a selection of the filter character choice over which the release takes place. Thus it causes the operation of functions 1740 through 1746, which have just been described. If the drag is released other than on a choice in the filter character choice window, function 1750 closes the filter choice window.
  • the interface is illustrated as being in the one-at-a-time mode, which is a discrete recognition mode that causes a correction window with a choice list to be displayed every time a discrete utterance is recognized.
  • numeral 3100 points to the screenshot of the PDA screen showing the user tapping the Talk button 1102 to commence dictation starting in a new linguistic context.
  • the SIP recognizer is in the large vocabulary mode.
  • the sequence of separated dots on the Continuous/Discrete button 1134 indicates that the recognizer is in a discrete recognition mode. It is assumed the SIP is in the Press And Click To End Of Utterance Recognition duration mode described with regard to numerals 1810 to 1816 of FIG. 18. As a result, the click of the Talk button causes recognition to take place until the end of the next utterance.
  • Numeral 3102 represents an utterance by the user of the word "this".
  • Numeral 3104 points to an image of the screen of the PDA after it has responded to this utterance by placing the recognized text 3106 in the SIP text window 1104, outputting this text to the application window 1106, and displaying a correction window 1200 which includes
  • the user taps the Capitalization button 1222, as pointed to by the numeral 3108.
  • this utterance is mis-recognized as the word "its", causing the PDA screen to have the appearance pointed to by numeral 3116, in which a new correction window 1200 is displayed having the mis-recognized word as its first choice 3118 and a new choice list for that recognition
  • FIG. 32 represents a continuation of this example, in which the user clicks the choice word "is" 3200 in the image pointed to by numeral 3202. This causes the PDA screen to have the appearance indicated by the numeral 3204, in which the correction window has been removed and corrected text appears in both the SIP buffer window and the application window.
  • the user is shown tapping the letter name vocabulary button 1130, which changes the current recognition mode to the letter name vocabulary, as is indicated by the highlighting of the button 1130.
  • the tapping of this button commences speech recognition according to the current recognition duration mode. This causes the system to recognize the subsequent utterance of the letter name "e" as pointed to by numeral 3208
  • FIG. 33 illustrates a continuation of this example, in which the user taps on the Punctuation Vocabulary button, as indicated in the screenshot, which changes the recognition vocabulary to the punctuation vocabulary and starts utterance recognition. The user's utterance of the word "period", pointed to by the numeral 3300, is recognized as the punctuation mark ".", which is shown in the first choice window followed by that punctuation mark's name to make it easier for the user to recognize.
  • the user taps the Word Form button 1220, which calls the word form list routine described above with regard to FIG. 27. Since the selected text string includes spaces, it is treated as a multiple-word selection, causing the portion of the routine illustrated by functions 2716 through 2728 of FIG. 27 to be performed. This produces a choice list such as that pointed to by 3404, including a choice 3406 in which the spaces have been removed from the correction window's selection. In the example, the user taps the Edit button 1232 next to the choice 3406.
  • FIG. 35 is a continuation of this example.
• the comment 3512 indicates that a plurality of different correction options will be illustrated.
• FIG. 36 illustrates the correction option of scrolling through the first and second choice list associated with the misrecognition.
• the user is shown tapping the page down scroll button 3600 in the scroll bar 3602 of the correction window, which causes the first choice list 3603 to be replaced by the first screenful of the second choice list 3605, as indicated in the view of the correction window
  • the SIP user interface provides a rapid way to allow a user to select from among a relatively large number of recognition choices.
• the first choice list is composed of up to six choices
• the second choice list can include up to three additional screens of up to 18 additional choices. Since the choices are arranged alphabetically and since all four screens can be viewed in less than a second, this enables the user to select from among up to 24 choices very quickly.
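The paging behavior just described can be sketched as a simple partition of an alphabetized candidate list into screens. The function name and the six-choices-per-screen layout are assumptions inferred from the counts given above (six first choices plus up to 18 more across three screens); this is a minimal illustration, not the patent's implementation:

```python
def build_choice_screens(candidates, screen_size=6, max_screens=4):
    """Partition alphabetically sorted recognition candidates into
    screens: the first screen is the first choice list, and up to
    three further screens form the second choice list."""
    ordered = sorted(candidates)[:screen_size * max_screens]
    return [ordered[i:i + screen_size]
            for i in range(0, len(ordered), screen_size)]
```

Paging down in the correction window then amounts to stepping through the returned screens in order.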
• FIG. 37 illustrates the method of filtering choices by dragging across an initial part of a choice, as has been described above with regard to functions 1664 through 1666 of FIG. 16.
• the first choice list includes a choice 3702, shown in the view of the correction window pointed to by 3700, which includes the first six characters of the desired word "embedded".
• in the correction window 3704 the user drags across these initial six letters, and the system responds by displaying a new correction window limited to recognition candidates that start with an unambiguous filter corresponding to the six characters, as is displayed in the screenshot 3706.
• the desired word is the first choice and the first six unambiguously confirmed letters of the first choice are shown highlighted, as indicated by the box 3708, and the filter cursor 3710 is also illustrated.
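Restricting candidates to an unambiguously confirmed filter string amounts to a prefix test. A minimal sketch, assuming a hypothetical vocabulary list (the function name and word list are illustrative, not from the patent):

```python
def filter_by_prefix(candidates, confirmed):
    """Keep only recognition candidates that begin with the
    unambiguously confirmed filter string."""
    return [w for w in candidates if w.startswith(confirmed)]

# Hypothetical vocabulary for illustration.
vocab = ["embargo", "embedded", "ember", "emblem", "emotion"]
```

Dragging across the first six letters of "embedded" corresponds to calling `filter_by_prefix(vocab, "embedd")`, which narrows the choice list to words sharing that prefix.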
• FIG. 38 illustrates the method of filtering choices by dragging across two choices in the choice list that has been described above with regard to functions 1706 through 1708 of FIG. 17.
• the correction window 3800 displays the desired choice "embedded" as it occurs alphabetically between the two displayed choices 3802 and 3804.
  • the user indicates that the desired word falls in this range of the alphabet by dragging across these two choices.
  • This causes a new correction window to be displayed in which the possible choices are limited to words which occur in the selected range of the alphabet, as indicated by the screenshot 3808.
• the desired word is selected as the first choice as a result of the filtering caused by the selection shown in 3806.
• in this screenshot, the portion of the first choice which forms an initial portion of the two choices selected in the view 3806 is indicated as an unambiguously confirmed portion of the filter, and the filter cursor 3812 is placed after that confirmed filter portion.
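The range filtering described for FIG. 38 can be sketched as keeping candidates that fall alphabetically between the two choices the user dragged across. The function name and sample words are assumptions for illustration:

```python
def filter_by_alpha_range(candidates, choice_a, choice_b):
    """Keep candidates that fall alphabetically between the two
    choices the user dragged across, endpoints included."""
    lo, hi = sorted([choice_a, choice_b])
    return sorted(w for w in candidates if lo <= w <= hi)
```

For example, dragging across "emaciated" and "emblem" keeps "embedded" but excludes words outside that alphabetic span.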
• FIG. 39 illustrates a method in which alphabetic filtering is used in one-at-a-time mode to help select the desired word choice.
  • the user presses the Filter button as indicated in the correction window view 3900.
• the default filter vocabulary is the letter name vocabulary.
  • Pressing the Filter button starts speech recognition for the next utterance and the user says the letter "e" as indicated by 3902.
• This causes the correction window 3904 to be shown, in which it is assumed that the filter character has been misrecognized as the letter "p".
• alphabetic input also has a choice list displayed for its recognition; in this case, it is a filter character choice list.
• the user selects the desired filtering character, the letter "e", as shown in the view 3908, which causes a new correction window to be displayed.
• the user decides to enter an additional filtering letter by again pressing the Filter button, as shown in the view 3912, and then says the utterance "m" 3914.
  • This causes the correction window 3916 to be displayed, which displays the filter character choice window 3918.
  • the filtering character has been correctly recognized and the user could either confirm it by speaking an additional filtering character or by selecting the correct letter as is shown in the window 3916.
• This confirmation of the desired filtering character causes a new correction window to be displayed with the filter string "em" treated as an unambiguously confirmed filter string. In the example shown in screenshot 3920, this causes the desired word to be recognized.
• FIG. 40 illustrates a method of alphabetic filtering with AlphaBravo, or ICA word, alphabetic spelling.
• the user taps on the AlphaBravo button 1128. This changes the alphabet to the ICA word alphabet, as described above by functions 1402 through 1408 of FIG. 14.
• the Display_Alpha_On_Double_Click variable has not been set.
• the function 1406 of FIG. 14 will display the list of ICA words 4002 shown in the screenshot 4004 during the press of the AlphaBravo button 1128.
• the user enters the ICA word "echo", which represents the letter "e", followed by a second pressing of the AlphaBravo key, as shown at 4008, and the utterance of a second ICA word "Mike", which represents the letter "m".
• the inputting of these two alphabetic filtering characters successfully creates an unambiguous filter string composed of the desired letters "em" and produces recognition of the desired word, "embedded".
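The ICA-word spelling above maps each spoken code word to one letter of the filter string. A minimal sketch with a partial mapping (only the letters needed for the example; a real system would cover all 26 letters, and the table below is an assumption for illustration):

```python
# Partial ICA word table for illustration only.
ICA_LETTERS = {"alpha": "a", "bravo": "b", "echo": "e",
               "mike": "m", "sierra": "s", "tango": "t"}

def ica_to_filter(words):
    """Turn a sequence of recognized ICA words into the letters of
    an unambiguous filter string."""
    return "".join(ICA_LETTERS[w.lower()] for w in words)
```

Saying "echo" then "Mike" thus yields the unambiguous filter string "em".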
• FIG. 41 illustrates a method in which the user selects part of a choice as a filter and then uses AlphaBravo spelling to complete the selection of a word which is not in the system's vocabulary, in this case the made-up word "embedded".
• the user is presented with the correction window 4100, which includes one choice 4100, which includes the first six letters of the desired word.
• in the correction window 4104 the user drags across these first six letters, causing those letters to become unambiguously confirmed characters of the current filter string.
• the screenshot 4108 shows the display of this correction window, in which the user drags from the Filter button 1218 and releases on the Discrete/Continuous button 1134, changing it from the discrete filter dictation mode to the continuous filter dictation mode, as is indicated by the continuous line on that button shown in the screenshot 4108.
• in the screenshot 4110 the user presses the AlphaBravo button again and says an utterance containing the following ICA words: "Echo, Delta, Echo, Sierra, Tango". This causes the current filter string to correspond to the spelling of the desired word. Since there are no words in the vocabulary matching this filter string, the filter string itself becomes the first choice, as is shown in the correction window 4114. In the view of this window shown at 4116, the user taps on the check button to indicate selection of the first choice.
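The out-of-vocabulary fallback described here, where the filter string itself becomes the first choice when no vocabulary word matches, can be sketched as follows (function name assumed; a minimal illustration of the behavior, not the patent's implementation):

```python
def choices_for_filter(vocab, filter_string):
    """Return the recognition choices for an unambiguous filter
    string; when no vocabulary word matches, the filter string
    itself is offered as the first choice."""
    matches = [w for w in vocab if w.startswith(filter_string)]
    return matches if matches else [filter_string]
```

This lets spelled input enter words the recognizer has never seen: once the filter spells the whole word, it is selectable even with an empty match list.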
• the system responds by recognizing this utterance and placing the recognized text in the SIP buffer 1104 and, through the operating system, in the application window 1106, as shown in the screenshot 4208. Because the recognized text is slightly more than fits within the SIP window at one time, the user scrolls in the SIP window as shown.
  • this correction is shown by the screenshot 4300.
  • the user selects the four mistaken words "enter faces men rum” by dragging across them as indicated in view 4302.
• FIG. 44 illustrates how the correction window shown at the bottom of FIG. 43 can be corrected by a combination of horizontal and vertical scrolling of the correction window and the choices that are displayed in it.
• Numeral 4400 points to a view of the same correction window shown at 4304 in FIG. 43. In this view not only is a vertical scroll bar 4602 displayed but also a horizontal scroll bar 4402. The user is shown tapping the page down button 3006 in the vertical scroll bar, which causes the portion of the choice list displayed to move from one screenful to the next.
• FIG. 45 illustrates how an ambiguous filter, created by the recognition of continuously spoken letter names and edited by means of filter character choice windows, can be used to rapidly correct an erroneous dictation.
  • the user presses the talk button 1102 as shown at 4500 and then utters the word "trouble" as indicated at 4502.
• this utterance is misrecognized as the word "treble", as indicated at 4504.
• the user taps on the word "treble", as indicated at 4506, which causes the correction window shown at 4508 to be displayed.
• since the desired word is not shown as any of the choices, the user taps the Filter button 1218, as shown at 4510, and makes a continuous utterance 4512 containing the names of each of the letters in the desired word "trouble".
  • the filter recognition mode is set to include continuous letter name recognition.
  • the system responds to recognition of the utterance 4512 by displaying the choice list 4518.
• the result of the recognition of this utterance is to cause a filter string to be created which is composed of one ambiguous length element.
• an ambiguous length filter element matches any recognition candidate that contains, in the corresponding portion of its initial character sequence, one of the character sequences that are represented by that ambiguous element.
• the portion of the first choice word 4519 that corresponds to the ambiguous filter element is indicated by the ambiguous filter indicator 4520. Since the filter uses an ambiguous element, the choice list displayed contains best-scoring recognition candidates that start with different initial character sequences, including ones whose length is less than that of the portion of the first choice which corresponds to a matching character sequence represented by the ambiguous element.
• the first choice 4530 is shown with an unambiguous filter indicator 4532.
• this is indicated in the new correction window 4540, which is shown as a result of the selection, in which the first choice 4542 is the desired word, the unambiguous portion of the filter is indicated by the unambiguous filter indicator 4544, and the remaining portion of the ambiguous filter element stays in the filter string by operation of functions 2900 through 2910, as shown in FIG. 29.
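The matching rule for an ambiguous filter element, as described above, can be sketched as a candidate test: after any unambiguously confirmed portion, the candidate must start with one of the character sequences (possibly of differing lengths) that the ambiguous element represents. Names and the sample sequences are assumptions for illustration:

```python
def matches_ambiguous_filter(candidate, confirmed, sequences):
    """A candidate passes when it starts with the unambiguously
    confirmed portion followed by any one of the character
    sequences represented by the ambiguous filter element."""
    if not candidate.startswith(confirmed):
        return False
    rest = candidate[len(confirmed):]
    return any(rest.startswith(seq) for seq in sequences)
```

Because several alternative sequences can match, candidates with different spellings and lengths survive the filter, which is why the resulting choice list still shows varied best-scoring words.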
• when in this loop, if the user touches the character recognition window, function 4706 records "ink" during the continuation of such a touch, recording the motion, if any, of the touch across the surface of the portion of the display's touch screen corresponding to the character recognition window. If the user releases a touch in this window, functions 4708 through 4714 are performed. Function 4710 performs character recognition on the "ink" currently in the window. Function 4712 clears the character recognition window, as indicated by the numeral 4610.
• FIG. 48 illustrates that if the user selects the handwriting recognition option in the function menu shown in the screenshot 4600, a handwriting recognition entry window 4800 will be displayed in association with the SIP, as is shown in screenshot 4802.
• The operation of the handwriting mode is provided in FIG. 49.
  • function 4902 displays the handwriting recognition window, and then a loop 4903 is entered until the user selects to use another input option.
• the motion, if any, during the touch is recorded as "ink" by function 4904.
  • function 4905 causes functions 4906 through 4910 to be performed.
  • Function 4906 performs handwriting recognition on any "ink” previously entered in the handwriting recognition window.
• Function 4908 supplies the recognized output to the SIP buffer and the operating system, and function 4910 clears the recognition window. If the user presses the Delete button 4804 shown in FIG. 48, corresponding functions are performed.
• the use of the recognition button 4806 allows the user to instruct the system to recognize the "ink" that was previously in the handwriting recognition window at the same time that he or she starts the writing of a new word to be recognized
• FIG. 50 shows the keypad 5000, which can also be selected from the function menu.
• the filter entry mode menu provides the user with optional filter entry mode options. These include options of using letter-name speech recognition, AlphaBravo speech recognition, character recognition, handwriting recognition, and the keyboard window as alternative methods of entering filtering spellings. It also enables a user to select whether any of the speech recognition modes are discrete or continuous and whether the letter name recognition, character recognition, and handwriting recognition entries are to be treated as ambiguous in the filter string.
• This user interface enables the user to quickly select a filter entry mode which is appropriate for the current time and place. For example, in a quiet location where one does not have to worry about offending people by speaking, continuous letter name recognition is often very useful. However, in a location where there is a lot of noise, but a user feels that speech would not be offensive to his or her neighbors, AlphaBravo recognition might be more appropriate. In a location such as a library where speaking might be offensive to others, silent filter entry methods such as character recognition, handwriting recognition, or keyboard input might be more appropriate.
• FIG. 52 provides an example of how character recognition can be quickly selected to filter a recognition.
• 5200 shows a portion of a correction window in which the user has pressed the Filter button and dragged up, causing the filter entry mode menu 5100 shown in FIG. 51 to be displayed, and then selected the character recognition option. As is shown in screenshot 5202, this causes the character recognition entry window 4608 to be displayed in a location that allows the user to see the entire correction window. In the screenshot 5202 the user has drawn a filtering character.
• FIG. 53 starts with a partial screenshot 5300, where the user has tapped and dragged up from the Filter key 1218 to cause the display of the filter entry mode menu, and has selected the handwriting option. This displays a screen such as 5302, with a handwriting entry window 4800 displayed at a location that does not block a view of the correction window. In the screenshot 5302 the user has handwritten a filtering entry.
• FIG. 55 illustrates how speech recognition can be used to correct handwriting recognition.
• Screenshot 5500 shows a handwriting entry window 4800 displayed in a position for entering text into the SIP buffer window 1104. In this screenshot the user has just finished writing a word.
• Numerals 5502 through 5510 indicate the handwriting of a sequence of words.
• the user could have pressed the New button in the correction window 5518 instead of the Re-utterance button, in which case the utterance 5520 would have used the output of speech recognition to replace the handwriting outputs which had been selected, as shown at 5516.
• FIG. 57 illustrates an alternate embodiment 5700 of the SIP speech recognition interface, in which there are two separate top-level buttons 5702 and 5704 to select between discrete and continuous speech recognition, respectively.
  • the entry mode menu is used to select among various text and alphabetic entry modes available on the system.
• FIG. 69 displays the functions that are available on the numerical phone key pad when the user has a correction window displayed, which can be used from the editor mode by pressing the "2" key.
• FIG. 70 displays the numerical phone key mapping of the edit navigation menu.
• FIG. 72 illustrates the numerical phone key mapping during the key Alpha mode, in which the pressing of a phone key having letters associated with it will cause a prompt to be shown on the cell phone display asking the user to say the ICA word associated with the desired one of the set of letters associated with the pressed key. This mode is selected by double-clicking the "3" phone key when in the entry mode menu shown in FIG. 68.
• FIG. 73 shows a basic keys menu, which allows the user to rapidly select from among a set of the most common punctuation and function keys used in text editing, or, by pressing the "1" key, to see a menu that allows a selection of less commonly used punctuation marks.
• the basic keys menu is selected by pressing the "9" key in the editor mode illustrated in FIG. 67.
• FIG. 74 illustrates the edit options menu, which is selected by pressing "0" in the editor mode.
• This contains a menu which allows a user to perform basic tasks associated with use of the editor which are not available in the other modes or menus.
• a correction window is shown on the cell phone's display.
• the user can access the command list to see the current phone key mapping, as is illustrated.
• a display screen 7502 shows a window of the editor mode before the pressing of the menu button.
• if there are additional options associated with the current mode at the time the command list is entered, they can also be selected from the command list by means of scrolling the highlight.
  • a phone call indicator 7514 having the general shape of a telephone handset is indicated at the left of each title bar to indicate to the user that the cell phone is currently in a telephone call.
• extra functions are available in the editor which allow the user to quickly select to mute the microphone of the cell phone, to record only audio from the user side of the phone conversation, and to play back audio only to the user side of the phone conversation.
• a function 7604 tests to see if the editor is currently in word/line navigational mode. This is the most common mode of navigation in the editor, and it can be quickly selected by pressing the "3" key twice from the editor. The first press selects the navigational mode menu shown in FIG. 70, and the second press selects the word/line navigational mode from that menu. If the editor is in word/line mode, functions 7606 through 7624 are performed.
• function 7606 causes functions 7608 through 7617 to be performed.
  • Functions 7608 and 7610 test to see if extended selection is on, and if so, they move the cursor one word to the left or right, respectively, and extend the previous selection to that word. If extended selection is not on, function 7612 causes functions 7614 to 7617 to be performed.
  • Functions 7614 and 7615 test to see if either the prior input was a word left/right command of a different direction than the current command or if the current command would put the cursor before or after the end of text.
• function 7617 will move the cursor one word to the left or the right of its current position and make the word that has been moved to be the current selection.
• functions 7612 through 7617 enable word left and word right navigation to allow a user to not only move the cursor by a word but also to select the current word at each move if so desired. It also enables the user to rapidly switch between a cursor which corresponds to a selected word and a cursor which represents an insertion point before or after a previously selected word.
  • function 7620 moves the cursor to the nearest word on the line up or down from the current cursor position, and if extended selection is on, function 7624 extends the current selection through that new current word.
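The word-by-word navigation with optional extended selection described in functions 7606 through 7624 can be sketched in simplified form. This is a minimal model under stated assumptions (words addressed by index, selection as an inclusive index pair); it omits the direction-reversal and end-of-text special cases of functions 7614 and 7615:

```python
def word_nav(words, pos, direction, extend=False, selection=None):
    """Move the cursor one word left (-1) or right (+1); when
    extended selection is on, grow the selection to include the
    word moved to, otherwise select just that word."""
    new_pos = max(0, min(len(words) - 1, pos + direction))
    if extend and selection is not None:
        start, end = selection
        selection = (min(start, new_pos), max(end, new_pos))
    else:
        selection = (new_pos, new_pos)  # the word moved to
    return new_pos, selection
```

Repeated calls with `extend=True` reproduce the behavior of growing the selection one word per keypress.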
• the editor also includes programming for responding to navigational inputs when the editor is in other navigation modes that can be selected from the edit navigation menu shown in FIG. 70.
• function 7630 tests to see if the editor has been called to enter text into another program, such as to enter text into a field of a Web document or a dialog box. If so, function 7632 enters the current contents of the editor into that other program at the current text entry location in that program and returns. If the test of 7630 is not met, function 7634 exits the editor, saving its current content and state for possible later use.
• function 7638 calls the display menu routine for the editor.
• function 7650 enters help mode for the editor. This will provide a quick explanation of the function of the editor mode and allow the user to explore the editor's hierarchical command structure by pressing its keys and having a brief explanation produced for the portion of that hierarchical command structure reached as a result of each such key press.
  • function 7654 turns on recognition according to current recognition settings, including vocabulary and recognition duration mode.
• the talk button will often be used as the major button for initiating speech recognition in the cellphone embodiment.
• function 7658 goes to the phone mode, such as to quickly make or answer a phone call. It saves the current state of the editor so that the user can return to it when such a phone call is over.
  • the entry mode menu has been associated with the "1" key because of the "1" key's proximity to the talk key. This allows the user to quickly switch dictation modes and then continue dictation using the talk button.
• the correction window navigational mode is the page/item navigational mode, which is best for scrolling through and selecting recognition candidate choices. The system then calls the correction window routine for the current selection, which causes a correction window somewhat similar to the correction window 1200 shown in FIG. 12 to be displayed on the screen of the cellphone. If there currently is no cursor, the correction window will be called with an empty selection. If this is the case, the correction window routine can be used to select one or more words using alphabetic input, word completion, and/or the addition of one or more utterances.
  • the correction window routine will be described in greater detail below.
• functions 7712 through 7716 set the correction window navigational mode to the word/character mode used for navigating in a first choice or filter string. They then call the correction window routine for the current selection and treat the second press of the double-click, if one has been entered, as the speech key for recognition duration purposes.
• the "2" key is usually located directly below the navigational key. This enables the user to navigate in the editor to a desired word or words that need correction and then single-press the nearby "2" key to see a correction window with alternate choices for the selection, or to double-click on the "2" key and immediately start entering filtering information to help the recognizer select a correct choice.
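The single-press versus double-click behavior of the "2" key can be sketched as a small dispatch function. The function name and return structure are assumptions; it simply encodes the two outcomes described above:

```python
def handle_key_2(press_count):
    """Hypothetical dispatch for the "2" key in the editor: a single
    press opens the correction window in page/item navigation mode
    for browsing choices; a double-click opens it in word/character
    mode, treating the second press as the speech key so the user
    can immediately enter filtering information."""
    if press_count >= 2:
        return {"nav_mode": "word/character", "treat_as_speech_key": True}
    return {"nav_mode": "page/item", "treat_as_speech_key": False}
```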
• function 7750 turns recording off.
• if the user selects the record command, function 7754 turns audio recording on.
• function 7756 tests to see if the system is currently on a phone call and if the record-only-me setting 7511 shown in FIG. 75 is in the off state. If so, function 7758 records audio from the other side of the phone line as well as from the phone's microphone or microphone input jack.
• function 7762 displays a capitalization menu which offers the user the choice to select between modes that cause all subsequently entered text to be either in all lowercase, all initial caps, or all capitalized form. It also allows the user to select to have the one or more words currently selected, if any, changed to all lowercase, all initial caps, or all capitalized form.
• word tense types, word part-of-speech types, and other word types such as possessive or non-possessive forms, singular or plural nominative forms, singular or plural verb forms, spelled or not-spelled forms, and homonyms, if any exist.
• function 7802 displays the basic keys menu shown in FIG. 73, which allows the user to select the entry of one of the punctuation marks or input characters that can be selected from that menu as text input.
  • function 7806 enters a New Paragraph Character into the editor's text.
• functions 7810 to 7824 are performed. Function 7810 tests to see if the editor has been called to input or edit text in another program, in which case function 7812 returns from the call to the editor with the edited text for insertion into that program. If the editor has not been called for such a purpose, function 7820 prompts the user with the choice of exiting the editor, saving its contents, and/or canceling the escape. If the user selects to escape, functions 7822 and 7824 escape to the top level of the phone mode described above with regard to FIG. 63. If the user double-clicks on the "*" key or selects the task list function, function 7828 goes to the task list, as such a double-click does in most of the cellphone's operating modes and menus.
• function 7832 displays the edit options menu described above briefly with regard to FIG. 74. If the user double-clicks on the "0" key or selects the undo command, function 7836 undoes the last command in the editor, if any.
• function 7840 tests to see if there is a current selection. If so, function 7842 deletes it. If there is no current selection and if the current smallest navigational unit is a character, word, or outline item, functions 7846 and 7848 delete backward by that smallest current navigational unit.
• FIGS. 79 and 80 illustrate the options provided by the entry mode menu discussed above with regard to FIG. 68.
• functions 7906 through 7914 are performed. These set the recognition vocabulary to the large vocabulary. They treat the press of the "1" key as a speech key for recognition duration purposes. They also test to see if a correction window is displayed. If so, they set the recognition mode to discrete recognition, based on the assumption that in a correction window the user desires the more accurate discrete recognition. They add any new utterance or utterances received in this mode to the utterance list of the type described above, and they call the display choice list routine of FIG. 22 to display a new correction window for any re-utterance received.
• the "1" key has been selected for the large vocabulary in the entry mode menu because it is the most common recognition vocabulary, and thus the user can easily select it by clicking the "1" key twice from the editor, the first click selecting the entry mode menu and the second click selecting large vocabulary recognition.
• function 7926 sets the recognition vocabulary to the letter-name vocabulary and indicates that the output of that recognition is to be treated as an ambiguous filter.
• the user has the capability to indicate, under the entry preference option associated with the "9" key of the menu, whether or not such filters are to be treated as ambiguous length filters.
• the default setting is to let such recognition be treated as an ambiguous length filter in continuous letter-name recognition, and as a fixed length ambiguous filter in response to discrete letter-name recognition.
• if the user presses the "3" key, recognition is set to the AlphaBravo mode. If the user double-clicks on the "3" key, recognition is set to the key "Alpha" mode described briefly with regard to FIG. 72. This mode is similar to AlphaBravo mode except that pressing one of the number keys "2" through "9" will cause the user to be prompted with the ICA words associated with the letters on the pressed key, and the recognition will favor recognition of one word from that limited set of ICA words, so as to provide very reliable alphabetic entry even under noisy conditions.
  • the recognition vocabulary is limited to a punctuation vocabulary.
  • the recognition vocabulary is limited to the contact name vocabulary described above.
• FIG. 86 illustrates the key Alpha mode, which has been described above to some extent with regard to FIG. 72.
  • the navigation mode is set to the word/character navigation mode normally associated with alphabetic entry.
• function 8604 overlays the keys listed below it with the functions indicated for each such key.
  • pressing the talk key turns on recognition with the AlphaBravo vocabulary according to current recognition settings and responding to key press according to the current recognition duration setting.
• the "1" key continues to operate as the entry mode menu key, so that the user can press it to exit the key Alpha mode.
• function 8628 enters a key punctuation mode that responds to the pressing of any phone key having letters associated with it, with recognition performed by alternating between pressing the talk button and the "4" key.
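The key Alpha mode described for FIG. 72 and FIG. 86 maps each pressed digit key to a small set of ICA words, which narrows recognition to a handful of alternatives. A sketch under stated assumptions: the standard telephone keypad letter groups, and NATO-style code words standing in for the document's ICA words (the exact word list used by the system is not given here):

```python
# Standard telephone keypad letter groups for keys "2" through "9".
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

# NATO-style code words assumed for the ICA word alphabet.
ICA = {"a": "Alpha", "b": "Bravo", "c": "Charlie", "d": "Delta",
       "e": "Echo", "f": "Foxtrot", "g": "Golf", "h": "Hotel",
       "i": "India", "j": "Juliett", "k": "Kilo", "l": "Lima",
       "m": "Mike", "n": "November", "o": "Oscar", "p": "Papa",
       "q": "Quebec", "r": "Romeo", "s": "Sierra", "t": "Tango",
       "u": "Uniform", "v": "Victor", "w": "Whiskey", "x": "X-ray",
       "y": "Yankee", "z": "Zulu"}

def key_alpha_prompt(key):
    """Return the ICA words the user is prompted with after pressing
    a letter key in key Alpha mode; recognition is then restricted
    to this small set, making alphabetic entry reliable in noise."""
    return [ICA[c] for c in KEYPAD[key]]
```

Restricting the recognizer to three or four code words per keypress is what makes this mode robust even in noisy conditions.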
• the text-to-speech submenu also includes a choice that allows the user to play the current selection whenever he or she desires to do so, as indicated by functions 8924 and 8926, and functions 8928 and 8930 allow the user to toggle continuous play on or off whether or not the machine is in a TTS-on or TTS-off mode. As indicated by the top-level choices in the edit options menu at 8932, a double-click of the "4" key toggles text-to-speech on or off, as if the user had pressed the "4" key and then waited for the text-to-speech submenu.
• the "5" key in the edit options menu selects the outline menu, which includes a plurality of functions that let a user navigate in, and expand and contract headings of, an outline mode. If the user double-clicks on the "5" key, the system toggles between totally expanding and totally contracting the current outline element in which the editor's cursor is located.
• playback settings such as volume and speed, and whether audio associated with recognized words is to be played and/or audio recorded without associated recognized words.
• FIG. 90 starts with the items selected by the 3, 4, 5, 6, and 7 keys under the audio menu described above, starting with numeral 8938 in FIG. 89. If the user presses the 3 key, a recognized audio options dialog box 9000 will be displayed, as is described by the numerals 9002 and following.
• this dialog box gives the user the option to select to perform speech recognition on any audio contained in the current selection in the editor, to recognize all audio in the current document, to decide whether or not previously recognized audio is to be re-recognized, and to set parameters that determine the quality of, and time required by, such recognition.
  • this dialog box provides an estimate of the time required to recognize the current selection with the current quality settings and, if a task of recognizing a selection is currently underway, status on the current job.
  • This dialog box allows the user to perform recognition on relatively large amounts of audio as a background task or at times when the phone is not being used for other purposes, including times when it is plugged into an auxiliary power supply.
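The relationship between the quality setting and the recognition-time estimate described above can be sketched as follows. This is an illustrative sketch only: the speed factors, setting names, and the linear-cost assumption are invented for the example, not taken from the patent.

```python
# Hypothetical compute-cost factors per quality setting, expressed as a
# multiple of real time (these values are assumptions for illustration).
SPEED_FACTOR = {"draft": 0.5, "normal": 1.0, "high": 2.5}

def estimate_recognition_time(audio_seconds, quality):
    """Rough estimate of the time needed to recognize `audio_seconds`
    of recorded audio at the given quality setting."""
    return audio_seconds * SPEED_FACTOR[quality]
```

A dialog box like the one described could call such a function whenever the user changes the quality setting, updating the displayed estimate for the current selection.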
  • the user is provided with a submenu that allows him to select to delete certain information from the current selection. This includes allowing the user to select to delete all audio
  • Deleting recognition audio from recognized text greatly reduces the memory associated with the storage of such text and is often a useful thing to do once the user has decided that he or she does not need the text-associated audio to help determine its intended meaning.
  • Deleting text but not audio from a portion of media is often useful where the text has been produced by speech recognition from the audio but is sufficiently inaccurate to be of little use.
  • the 5 key allows the user to select whether or not text that has associated recognition audio is marked, such as by underlining, to let the user know that such text has playback that can be used to help understand it or, in some embodiments, will have an acoustic representation from which alternate recognition choices can
  • If the user presses the 8 key, a find function is selected, which can use the current selection as the search string.
  • the speech recognition text editor can be used to enter a different search string, if so desired. If the user double-clicks on the 8 key, this will be interpreted as a find-again command, which will search again for the previously entered search string.
  • a vocabulary menu is displayed which allows the user to determine which words are in the current vocabulary, to select between different vocabularies, and to add words to a given vocabulary. If the user either presses or double-clicks the 0 button when in the edit options menu, an undo function will be performed. A double-click accesses the undo function from within the edit options menu so as to provide similarity with the fact that a double-click on 0 accesses the undo function from the editor or the correction window.
  • the pound key operates as a redo button.
  • Figure 94 illustrates the rules that govern the operation of text-to-speech generation when text-to-speech operation has been selected through the text-to-speech options described above with regard to functions 8908 to 8932 of figure 89.
  • function 9404 causes functions 9406 to 9414 to be performed. These functions enable a user to safely select phone keys without being able to see them, such as when the user is driving a car or is otherwise occupied. Preferably this mode is not limited to operation in the speech recognition editor but can be used in any mode of the cell phone's operation.
  • function 9408 tests to see if the same key has been pressed within a TTS KeyTime, which is a short period of time such as a quarter or a third of a second.
  • if function 9408 finds that the time since the release of the last press of the same key is less than the TTS KeyTime, function 9414 causes the cell phone's software to respond to the key press, including any double-clicks, the same as it would if the TTS keys mode were not on.
  • the TTS keys mode allows the user to find a cell phone key by touch, to press it to determine if it is the desired key and, if so, to quickly
  • this mode allows the user to search for the desired key without causing any undesired consequences.
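The press-again-within-TTS-KeyTime behavior described above can be sketched as a small state machine. The class name, callback signatures, and the use of the press time as the release time are simplifying assumptions for illustration; the patent describes the behavior, not this code.

```python
# Assumed value within the range the text suggests (a quarter to a third
# of a second).
TTS_KEY_TIME = 0.3  # seconds

class TTSKeyMode:
    """Sketch of the TTS keys mode: a first press only announces a key;
    a quick second press of the same key actually executes it."""

    def __init__(self):
        self.last_key = None
        self.last_release_time = None

    def on_key_press(self, key, now, announce, execute):
        """announce(key) speaks the key's name/function (like function 9412);
        execute(key) responds as if TTS keys mode were off (function 9414)."""
        repeated = (
            key == self.last_key
            and self.last_release_time is not None
            and now - self.last_release_time < TTS_KEY_TIME
        )
        # Simplification: treat the press time as the release time.
        self.last_key = key
        self.last_release_time = now
        if repeated:
            execute(key)   # quick second press: act on the key normally
        else:
            announce(key)  # first press: just say which key this is
```

This lets the user hunt for a key by feel, hear its name, and confirm it with a quick second press, without any stray press causing an unintended action.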
  • In some embodiments, the cell phone keys are designed so that when they are merely touched rather than pushed, audio feedback about which key they are and their current function, similar to that provided by function 9412, will be provided. This can be provided, for example, by having the
  • When TTS is on, if the system recognizes or otherwise receives a command input, functions 9416 and 9418 cause text-to-speech or recorded audio playback to say the name of the recognized command.
  • Preferably such audio confirmation of commands has an associated sound quality, such as a different tone of voice or different associated sounds, that distinguishes the saying of command words from
  • functions 9432 to 9438 use text-to-speech to say the newly selected word or character. If such a movement of the cursor to a new word or character position extends an already started selection, after the saying of the new cursor position, functions 9436 and 9438 will say the word "selection" in a manner that indicates that it is not part of recognized text, and then proceed to say the words of the current selection.
  • If the user moves the cursor to be a non-selection cursor, such as is described above with regard to functions 7614 and 7615, the functions of figure 94 use text-to-speech to say the two words that the cursor is located between.
  • functions 9444 and 9446 use text-to-speech to say the first choice in the correction window, spell the current filter (if any), indicating which parts of it are unambiguous and which parts of it are ambiguous, and then use text-to-speech to say each candidate in the currently displayed portion of the choice list. For purposes of speed, it is best that differences in tone or sound be used to indicate which portions of the filter are absolute or ambiguous.
  • functions 9448 and 9450 use text-to-speech to say the currently highlighted choice and its selection number in response to each such scroll. If the user scrolls a page in a correction window, functions 9452 and 9454 use text-to-speech to say the newly displayed choices as well as indicating the currently highlighted choice.
  • functions 9456 and 9458 use text-to-speech or prerecorded audio to say the name of the current menu and all of the choices in the menu and their associated numbers, indicating the current selection position. Preferably this is done with audio cues that indicate to a user that the words being said are menu options.
  • FIG. 95 illustrates some aspects of the programming used in text-to-speech generation. If a word to be generated by text-to-speech is in the speech recognition programming's vocabulary of phonetically spelled words, function 9502 causes functions 9504 through 9512 to be performed. Function 9504 tests to see if the word has multiple phonetic spellings associated with different parts of speech, and if the word to be said using TTS has a current linguistic context indicating its current part of speech.
  • function 9506 uses the speech recognition programming's part-of-speech indicating code to select the phonetic spelling associated with the part of speech found most probable by that code as the phonetic spelling used in the text-to-speech generation for the current word. If, on the other hand, there is only one phonetic spelling associated with the word or there is no context sufficient to identify the most probable part of speech for the word, function 9510 selects the single phonetic spelling for the word or its most common phonetic spelling. Once a phonetic spelling has been selected for the word to be generated, either by function 9506 or function 9510, function 9512 uses the phonetic spelling selected for the word as the phonetic spelling to be used in the text-to-speech generation.
  • functions 9514 and 9516 use the pronunciation-guessing software that is used by the speech recognizer to assign a phonetic spelling to names and newly entered words for the text-to-speech generation of the word.
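The spelling-selection logic of functions 9502 through 9516 can be sketched as below. The lexicon, the stand-in pronunciation guesser, and all spellings are invented for the example; only the branching structure follows the description above.

```python
# Hypothetical lexicon: word -> {part_of_speech: phonetic spelling}.
LEXICON = {
    "record": {"noun": "REH-kerd", "verb": "rih-KORD"},
    "cat": {"noun": "KAT"},
}

def guess_pronunciation(word):
    """Stand-in for the recognizer's pronunciation-guessing software
    (functions 9514 and 9516)."""
    return "-".join(word.upper())

def spelling_for_tts(word, pos_guess=None):
    """Pick the phonetic spelling to hand to the TTS generator."""
    spellings = LEXICON.get(word)
    if spellings is None:
        # Out-of-vocabulary name or newly entered word.
        return guess_pronunciation(word)
    if pos_guess is not None and pos_guess in spellings and len(spellings) > 1:
        # Multiple spellings and a usable part-of-speech context (function 9506).
        return spellings[pos_guess]
    # Single spelling, or no context: use the single/most common one (9510).
    return next(iter(spellings.values()))
```

For example, "record" would be spoken differently depending on whether the surrounding context suggests a noun or a verb.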
  • Figure 96 describes the operation of the transcription mode that can be selected by operation of the transcription menu of the edit options menu shown in figures 89 and 90.
  • When the transcription mode is entered, function 9602 normally changes the navigation mode to an audio navigation mode that navigates forward or backward five seconds in an audio recording in response to left and right navigational key input, and forward or backward one second in response to up and down navigational input. These are default values, which can be changed in the transcription mode dialog box.
  • functions 9606 through 9614 are performed.
  • Functions 9607 and 9608 toggle play between on and off.
  • Function 9610 causes function 9612 to be performed if the toggle is turning play on. If so, if there has been no sound navigation since the last time sound was played,
  • If the user makes a sustained press of the play key, function 9616 causes functions 9618 through 9622 to be performed. These functions test to see if play is on, and if so they turn it off. They also turn on large vocabulary recognition during the press, in either continuous or discrete mode, according to present settings. They then insert the recognized text into the editor at the location in the audio being transcribed at which the last end of play took place. If the user double-clicks the play button, functions 9624 and 9626 prompt the user that audio recording is not available in transcription mode and that transcription mode can be turned off in the audio menu under the edit options menu.
  • Thus, the transcription mode enables the user to alternate between playing a portion of previously recorded audio and then transcribing it by use of speech recognition, merely by alternating between clicking and making sustained presses of the play key, which is the number 6 phone key.
  • the user is free to use the other functionality of the editor to correct any mistakes which may have been made in the recognition during the transcription process, and then merely return to it by again pressing the 6 key to play the next segment of audio to be transcribed.
  • the user will often not desire to perform a literal transcription of the audio. For example, the user may play back a portion of a phone call and merely transcribe a summary of the more noteworthy portions.
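The alternation between clicking and holding the play key can be sketched as follows. The class name, the click-duration threshold, and the way playback position is tracked are all assumptions for illustration; only the click-toggles-play / hold-recognizes-and-inserts behavior follows the description.

```python
# Presses shorter than this count as clicks (assumed value).
CLICK_MAX = 0.25  # seconds

class TranscriptionMode:
    """Sketch of the transcription-mode play key (the 6 key) behavior."""

    def __init__(self):
        self.playing = False
        self.play_position = 0.0   # seconds into the recorded audio
        self.transcript = []       # (position, inserted text) pairs

    def advance(self, seconds):
        """Called while audio plays; tracks where playback has reached."""
        if self.playing:
            self.play_position += seconds

    def on_play_key(self, press_duration, recognized_text=None):
        if press_duration < CLICK_MAX:
            # Click: toggle playback on or off.
            self.playing = not self.playing
        else:
            # Sustained press: stop play, run recognition during the press,
            # and insert the result at the point where play last ended.
            self.playing = False
            if recognized_text is not None:
                self.transcript.append((self.play_position, recognized_text))
```

A user would thus click to hear five seconds of audio, hold the key while dictating the transcription, and repeat.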
  • Figure 97 illustrates the operation of dialog box editing programming that uses many features of the editor mode described above to enable users to enter text and other information into a dialog box displayed on the cell phone's screen.
  • When a dialog box is first entered, function 9702 displays an editor window showing the first portion of the dialog box. If the dialog box is too large to fit on one screen at one time, it will be displayed in a scrollable window. As indicated by function 9704, the dialog box responds to all input that the editor mode described above with regard to figures 76 through 78 does, except as is indicated by the functions 9704 through 9726.
  • the cursor responds in a manner similar to that in which it would in the editor, except that it can normally only move to a control into which the user can supply input.
  • the cursor would move left or right to the next dialog box control, moving up or down lines if necessary to find such a control.
  • the cursor would move to the nearest control on the lines above or below the current cursor position.
  • normally the cursor will not move more than a page even if there are no controls within that distance.
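The up/down movement to the nearest control on another line can be sketched as a small search over control positions. The grid model of (line, column) positions is an assumption for illustration, not the patent's data structure.

```python
def nearest_control_on_other_line(controls, cursor, direction):
    """Find the control the cursor jumps to on up/down navigation.

    controls:  list of (line, column) positions of dialog box controls
    cursor:    (line, column) of the current cursor position
    direction: +1 for down, -1 for up
    Returns the chosen control, or None if no control lies that way.
    """
    line, col = cursor
    # Controls strictly above (direction -1) or below (direction +1).
    candidates = [c for c in controls if (c[0] - line) * direction > 0]
    if not candidates:
        return None
    # Nearest line in that direction, then nearest column on that line.
    nearest_line = min(candidates, key=lambda c: abs(c[0] - line))[0]
    on_line = [c for c in candidates if c[0] == nearest_line]
    return min(on_line, key=lambda c: abs(c[1] - col))
```

A page-limit check (not shown) would discard the result if the nearest control is more than a page away, matching the behavior described above.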
  • function 9712 displays a separate editor window for the field, which displays the text currently in that field, if
  • functions 9714 and 9716 limit the recognition in the editor to that vocabulary. For example, if the field were limited to state names, recognition in that field would be so limited.
  • function 9718 will direct all editor commands to perform editing within it. The user can exit this field-editing window by selecting OK, which will cause the text currently in the window at that time to be entered into the corresponding field in the dialog box window.
  • function 9722 displays a correction window showing the current value in the list box as the first choice and other options provided in the list box as other available choices shown in a scrollable choice list.
  • the scrollable options are not only accessible by selecting an associated number but also are available by speech recognition using a vocabulary limited to those options.
  • functions 9724 and 9726 change the state of the check box or radio button by toggling whether the check box or radio button is selected.
  • Figure 98 illustrates a help routine 9800, which is the cell phone embodiment's analog of the help mode described above with regard to figure 19 in the PDA embodiment.
  • function 9802 displays a scrollable help menu for the state that includes a description of the state along with a selectable list of help options and of all of the state's commands.
  • Figure 99 displays such a help menu for the editor mode described above with regard to figures 67 and 76 through 78.
  • Figure 100 illustrates such a help menu for the entry mode menu described above with regard to figures 68, 79, and 80.
  • each of these help menus includes a help options selection, which can be selected by means of a scrollable highlight and operation of the help key, which will allow the user to quickly jump to the various portions of the help menu as well as to other help-related functions.
  • the help mode will be entered for the editor mode, causing the cell phone to display the screen 10102.
  • Since the navigational mode is a page/line navigational mode, as indicated by the characters "^P^L" shown in screen 10102, the display will scroll down a page as indicated by screen 10104.
  • the screen will again scroll down a page, causing the screen to have the appearance shown at 10106.
  • the user has been able to read the summary of the function of the editor mode 9904 shown in figure 99 with just two clicks of the page right key.
  • the user can use the navigational keys to scroll the entire length of the help menu if so desired.
  • when the user finds the key number associated with the entry mode menu, he presses that key as shown at 10110 to cause the help mode to display the help menu associated with the entry mode menu, as shown at screen 10112.
  • the 1 key briefly calls the help menu for the dictation defaults option, and the escape key returns to the entry preference menu at the location in the menu associated with the dictation defaults option, as shown by screen 10118.
  • Such a selection of a key option followed by an escape allows the user to rapidly navigate to a desired portion of the help menu's command list merely by pressing the number of the key in that portion of the command list, followed by an escape.
  • the user presses the page right key as shown at 10120 to scroll down a page in the command list as indicated by screen 10122.
  • the user selects the option associated with the 5 key by pressing that key, as indicated at 10124, to obtain a description of the press-continuous, click-discrete to
  • phone key number indicators 10230 are used to label portions of the editor screen. If the user presses one of these associated phone numbers, a description of the corresponding portion of the screen will be displayed. In the example of figure 102, the user presses the 4 key, which causes an editor screen help screen 10234 to be displayed, which describes the function of the navigation mode indicator "^W^L" shown at the top of the editor screen help screen 10228.
  • the user presses the escape key three times, as is shown at numeral 10236.
  • the first of these escapes from the screen 10234 back to the screen 10228, giving the user the option to select explanations of other of the numbered portions of the screen being described.
  • the user has no interest in making such other selections, and thus has followed the first press of the escape key with two other rapid presses, the first of which escapes back to the help menu for the editor mode and the second of which escapes back to the editor mode itself.
  • the hierarchical operation of help menus enables the user to rapidly explore the command structure of the cell phone. This can be used either to search for a command that performs a desired function, or merely to learn the command structure in a linear order.
  • Figures 103 and 104 describe an example of a user continuously dictating some speech in the editor mode and then using the editor's interface to correct the resulting text output.
  • the user presses the 3 key to access the added navigation menu illustrated in figures 70 and 84 and then presses the 1 button to select the utterance. This makes the cursor correspond to the first word of the text recognized for the most recent utterance, as indicated at 10308 in screen 10310.
  • the user double-clicks the 7 key to select the capitalized cycle function described in figure
  • each choice is shown with an associated phone key number that can be used to select it.
  • the user presses the 6 key to select the desired choice, which causes it to be inserted into the editor's text window at the location of the cursor selection, causing the editor text window to appear as shown at 10330.
  • the user presses the word right key three times to place the cursor at the location 10332.
  • the recognized word is "results" and the desired word is the singular form of that word, "result." For this reason, the
  • the user next presses the line down button to move the cursor down to the location 10400.
  • the user then presses the 5 key to start an extended selection and presses the word right key to move the cursor right one word to the location 10402, causing the current selection 10404 to be extended rightward by one word.
  • the second click of the 2 key is an extended click, as indicated by the down arrow 10406.
  • the user continuously utters the letter string "p, a, i, n, s," which are the initial letters of the desired word, "painstaking."
  • the correction window is in the continuous letter name recognition mode, as indicated by the characters "_abc" in the title bar of the correction window 10412.
  • the recognition of the utterance 10408 as filter input causes the correction window 10412 to show a set of choices that have been filtered against an ambiguous length filter corresponding to the recognition results from the recognition of that continuously spoken string of letter names.
  • the correction window has a first choice, 10414, that starts with one of the character sequences associated with the ambiguous filter element.
  • the portion of the first choice that corresponds to a sequence of characters associated with the ambiguous filter is indicated by the ambiguous filter indicator 10416.
  • The filter cursor 10418 is located after the end of this portion of the first choice.
  • Functions 8151 and 8162 of figure 81 cause a filter character choice window 10422 to be displayed. Since the desired character is a "p," the user presses the 7 key to choose it, which causes that character to be made an unambiguous character of the filter string, and causes a new correction window 10424 to be displayed as a result of that change in the filter.
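The ambiguous filtering described above can be sketched as follows: the recognizer returns several candidate letter sequences for the spoken letters, and the choice list keeps only words consistent with the already-confirmed (unambiguous) prefix plus any one of those candidates. The vocabulary and candidate sequences below are invented for the example.

```python
def filter_choices(vocabulary, ambiguous_prefixes, unambiguous_prefix=""):
    """Keep vocabulary words consistent with the confirmed prefix followed
    by any one of the ambiguous letter-sequence candidates."""
    keep = []
    for word in vocabulary:
        if not word.startswith(unambiguous_prefix):
            continue
        rest = word[len(unambiguous_prefix):]
        # A word survives if the part after the confirmed prefix starts
        # with at least one candidate letter sequence.
        if any(rest.startswith(p) for p in ambiguous_prefixes):
            keep.append(word)
    return keep
```

Confirming one character of the filter (as the user does with the filter character choice window) moves that character from the ambiguous candidates into the unambiguous prefix, narrowing the choice list.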
  • the correct choice is shown associated with the phone key 6 and the user presses that phone key to cause the desired word to be inserted into the editor's text window as shown at 10434.
  • the user presses the line down and word right keys to move the cursor selection down a line and to the right so as to select the text "period" shown at 10436.
  • the user then presses the 8, or word form list, key, which causes a word form list correction window 10438 to be displayed.
  • the desired output, a period mark, is associated with the 4 phone key.
  • the user presses that key and causes the desired output to be inserted into the text of the editor window as shown at 10440.
  • Figure 105 illustrates how a user can scroll a choice list horizontally by operation of functions 8132 and 8135 described above with regard to figure 81.
  • Figure 106 illustrates how the Key Alpha recognition mode can be used to enter alphabetic input into the editor's text window.
  • Screen 10600 shows an editor text window in
  • the user makes an extended press of the phone key as indicated at 10608, which causes a prompt window 10610 to display the ICA words associated with each of the letters on the phone key that has been pressed.
  • the user makes the utterance "charlie," 10612. This causes the corresponding letter "c" to be entered into the text window at the former position of the cursor and causes the text window to have the appearance shown in screen 10614.
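The Key Alpha interaction just illustrated amounts to restricting the recognition vocabulary to the ICA words for the letters on the pressed key, then mapping the recognized word back to its letter. The keypad mapping below follows the standard telephone keypad; the function names are illustrative assumptions.

```python
# Letters printed on a standard phone keypad.
KEYPAD_LETTERS = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

# ICA (International Communication Alphabet) word for each letter.
ICA_WORDS = {
    "a": "alpha", "b": "bravo", "c": "charlie", "d": "delta",
    "e": "echo", "f": "foxtrot", "g": "golf", "h": "hotel",
    "i": "india", "j": "juliet", "k": "kilo", "l": "lima",
    "m": "mike", "n": "november", "o": "oscar", "p": "papa",
    "q": "quebec", "r": "romeo", "s": "sierra", "t": "tango",
    "u": "uniform", "v": "victor", "w": "whiskey", "x": "xray",
    "y": "yankee", "z": "zulu",
}

def active_vocabulary(pressed_key):
    """ICA word -> letter vocabulary favored after a key press."""
    return {ICA_WORDS[ch]: ch for ch in KEYPAD_LETTERS[pressed_key]}

def recognize_letter(pressed_key, spoken_word):
    """Map a recognized ICA word back to its letter, if it is on the key."""
    return active_vocabulary(pressed_key).get(spoken_word)
```

Because only three or four ICA words are active at a time, even noisy recognition can reliably pick the intended letter.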
  • Figure 107 provides an illustration of the same Key Alpha recognition mode being used to enter alphabetic filtering input. It shows that the Key Alpha mode can be entered when in the correction window by pressing the 1 key followed by a double-click on the 3 key, in the same way that it can be from the text editor.
  • Figures 108 and 109 show how a user can use the interface of the voice recognition text editor described above to address, enter, and correct text in e-mails in the cell phone embodiment.
  • screen 10800 shows the screen displayed when the user selects the e-mail option in the main menu, as illustrated in figure 66.
  • this causes the slightly incorrect name "Stan Roth" to be inserted into the message's addressee line as shown at 10806.
  • the user responds by pressing the 2 key to select a choice list, 10806, for the selection.
  • the desired name is shown on the choice list and the user presses the 5 key to select it, causing the desired name to be inserted into the addressee line as shown at 10808.
  • the user presses the down line button twice to move the cursor down to the start of the subject line, as shown in screen 10810.
  • the user then presses the talk button while saying the utterance "cell phone speech interface," 10812. In the example, this is slightly mis-recognized as "sell phone speech interface," which is inserted at the cursor, causing the e-mail edit window to have the appearance shown at 10814.
  • the user presses the line up button and the word left button to position the cursor selection at the position 10816.
  • since the desired output is associated with the 4 key, the user selects that key, causing the desired output to be placed in the cursor's position as indicated in screen 10820.
  • the user presses the line down button twice to place the cursor at the beginning of the body portion of the e-mail message, as shown in screen 10822. Once this is done, the user presses the talk button while continuously saying the utterance "the new Elvis interface is working really
  • the user presses the line up key once and the word left key twice to place the cursor in the position shown by screen 10900 of figure 109.
  • the user then presses the 5 key to start an extended selection and presses the word left key twice to place the cursor at the position
  • the word right key which moves the filter cursor to the first character of the next word to the right, as indicated by numeral 10912.
  • the user then presses the 1 key to enter the entry mode menu and presses the 3 key to select the AlphaBravo, or ICA word, input vocabulary.
  • the user says the continuous utterance "echo, lima, victor, india, sierra," 10914. This is recognized as the letter sequence "ELVIS," which is inserted, starting with
  • the user presses the OK key to select the current first choice because it is the desired output.
  • Figure 110 illustrates how re-utterance can be used to help obtain the desired recognition output. It starts with the correction window in the same state as was indicated by screen 10906 in figure 109. But in the example of figure 110, the user responds to the screen by pressing the 1 key twice, once to enter the entry mode menu, and a second time to select large vocabulary recognition. As indicated by functions 7908 through 7914 in figure 79, if large vocabulary recognition is selected in the entry mode menu when a correction window is displayed, the system interprets this as an indication that the user wants to perform a re-utterance, that is, to add a new utterance for the desired output into the utterance list for use in helping to select the desired output.
  • the user continues the second press of the 1 key while using discrete speech to say the three words "the," "new," "Elvis" corresponding to the desired output.
  • the additional discrete utterance information provided by this new utterance list entry causes the system to correctly recognize the first two of the three words.
  • the third of the three words is not in the current vocabulary, which will require the user to spell that third word with filtering input, such as was done by the utterance 10914 in figure 109.
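The benefit of re-utterance can be sketched as score combination across the utterance list: words supported by both the original and the new utterance rise to the top. The per-word log-probability scores and the summation rule below are assumptions for illustration; the patent describes the effect, not this particular scoring.

```python
def combine_utterance_scores(score_lists):
    """Combine recognition scores from several utterances of the same word.

    score_lists: one {word: log_probability} dict per utterance in the
    utterance list. Returns candidate words ranked best-first by the
    summed log-probability across utterances.
    """
    combined = {}
    for scores in score_lists:
        for word, logp in scores.items():
            combined[word] = combined.get(word, 0.0) + logp
    return sorted(combined, key=lambda w: combined[w], reverse=True)
```

In the example above, a word like "Elvis" that scored poorly on the original continuous utterance can win once the clearer discrete utterance is added to the list.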
  • Figure 111 illustrates how the editor functionality can be used to enter a URL text string for purposes of accessing a desired web page on a Web browser which is part of the cell phone's software.
  • the browser option screen 11,100 shows the screen that is displayed if the user selects the Web browser option associated with the 7 key in the main menu, as indicated in figure 66.
  • the user desires to enter the URL of a desired web site and selects the URL window option associated with the 1 key by pressing that key.
  • This causes the screen 11,102 to display a brief prompt instructing the user.
  • the user responds by using continuous letter-name spelling to spell the name of a desired web site during a continuous press of the talk button.
  • the URL editor is always in correction mode so that the recognition of the utterance, 11,103, causes a correction window, 11,104, to be displayed.
  • Figures 112 through 114 illustrate how the editor interface can be used to navigate and enter text into the fields of Web pages .
  • Screen 11,200 illustrates the appearance of the cell phone's Web browser when it first accesses a new web site.
  • a URL field 11,201 is shown above the top of the web page, 11,204, to help the user identify the current web page. This position can be scrolled back to at any time if
  • FIG. 116 illustrates how the cell phone embodiment shown allows a special form of correction window to be used as a list box when editing a dialog box of the type described above with regard to figure 115.
  • the example of figure 116 starts from the find dialog box being in the state shown at screen 11504 in figure 115.
  • a list box correction window 11512 is displayed, which shows the current selection in the list box as the current first choice and provides a scrollable list of the other list box choices, with each such other choice being shown with an associated phone key number. The user can scroll through this list and choose the desired choice by phone key number or by using a highlighted selection.
  • the user continues the press of the talk key and says the desired list box value with the utterance,
  • FIG. 117 illustrates a series of interactions between a user and the cell phone interface, which display some of the functions which the interface allows the user to perform when making phone calls.
  • the screen 6400 at figure 117 is the same top-level phone mode screen described above with regard to figure 64.
  • this mode allows a user to select names from a contact list by saying them and, if there is a mis-recognition, to correct it by alphabetic filtering or by selecting choices from a potentially scrollable choice list in a correction window which is similar to those described above.
  • an initial prompt screen 11700 is shown, as indicated in figure 117.
  • the user utters a name, 11702, during the pressing of the talk key.
  • in name dial, such utterances are recognized with the vocabulary automatically limited to the name vocabulary, and the resulting recognition causes a correction window 11704 to be displayed.
  • the first choice is correct, so the user selects the OK key, causing the phone to initiate a call to the phone number associated with the named party in the user's contact list.
  • a screen 11706 is displayed having the same ongoing call indicator 7414 described above with regard to figure 75.
  • the user selects the down button, which is associated with the same Notes function described above with regard to figure 64.
  • an editor window 11710 is displayed for the Notes outline with an automatically created heading item 11712 being created in the Notes outline for the current call, labeling the party to whom it is made and its start and, ultimately, its end time.
  • a cursor 11714 is then placed at a new item indented under the call's heading.
  • the user says a continuous utterance 11714 during the pressing of the talk button, causing recognized text corresponding to that utterance to be inserted into the notes outline at the cursor, as indicated in screen 11716. Then the user double-clicks the 6 key to start recording, which causes an audio graphic representation of the sound to be placed in the notes editor window at the current location of the cursor.
  • the user next double-clicks on the star key to select the task list.
  • This shows a screen 11720 that lists the currently opened tasks on the cell phone.
  • the user selects the task associated with the 4 phone key, which is another notes editor window displaying a different location in the notes outline.
  • the phone's display then shows a screen 11722 of that portion of the notes outline.
  • the user presses the up key three times to move the cursor to location 11724 and then presses the 6 key to start playing the sound associated with the audio graphics representation at the cursor, as indicated by the motion between the cursors of screens 11726 and 11728.
  • the playback of the audio in screen 11728 will be played to both sides of the current phone call, enabling the user of the cell phone to share an audio recording with the other party during the call.
  • FIG. 118 illustrates that when an edit window is recording audio, such as is shown in screen 11717 near the bottom middle of figure 117, the user can turn on speech recognition during the recording of such an audio to cause the audio recorded during that portion to also have speech recognition performed upon it.
  • the user presses the talk button and speaks the utterance 11800. This causes the text associated with that utterance, 11802, to be inserted in the editor window 11806. Audio recorded after the duration of the recognition is recorded merely with audio graphics. Normally this would be used in the methods
  • FIG. 119 illustrates how the system enables the user to select a portion of audio, such as the portion 11900 shown in that figure, by a combination of the extended selection key and play or navigation keys, and then to select the recognized audio dialog box discussed above with regard to functions 9000 through 9014 of figure 90 to have the selected audio recognized as indicated at 11902.
  • the user has selected the show recognized audio option 9026, shown in figure 90, which causes the recognized text 11902 to be underlined, indicating that it has playable audio associated with it.
  • FIG. 120 illustrates how a user can select a portion, 12000, of recognized text that has associated recorded audio, and then select to have that text stripped from its associated recognized audio by selecting the option 9024, shown in FIG. 90, in a submenu under the editor options menu. This leaves just the audio, 12002, and its corresponding audio graphic representation, remaining in the portion of media where the recognized text previously stood.
  • FIG. 121 illustrates how the function 9020, of FIG. 90, from under the audio menu of the edit options menu allows the user to strip the recognition audio that has been associated with a portion, 12100, of recognized text from that text, as indicated at 12102 in FIG. 121.
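FIGS. 119 through 121 all manipulate a two-way association between recognized text and its recorded audio: either side can be stripped, leaving the other in place. A minimal sketch of that association follows; the class and field names are hypothetical, not the patent's data structures:

```python
class MediaSegment:
    """A portion of a document that may carry recognized text, a
    recorded-audio reference, or both."""

    def __init__(self, text=None, audio=None):
        self.text = text    # recognized text, if any
        self.audio = audio  # reference to the recorded audio, if any

    def strip_text(self):
        """As in FIG. 120: discard the recognized text, leaving only the
        audio and its audio-graphics representation."""
        self.text = None

    def strip_audio(self):
        """As in FIG. 121: discard the recording, leaving only the
        recognized text (no longer underlined as playable)."""
        self.audio = None
```

The underlining shown in FIG. 119 then reduces to a display rule: a segment is underlined exactly when both `text` and `audio` are present.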
  • FIGS. 122 through 125 provide illustrations of the operation of the digit dial mode described in pseudocode in FIG. 126. If the user selects the digit dial mode, such as by pressing the 2 phone key when in the main menu, as illustrated at function 6552 of FIG. 65, or by selecting the left navigational button when the system is in the top-level phone mode shown in screen 6400 of FIG. 64, the system will enter the digit dial mode shown in FIG. 126 and will display a prompt screen, 12202, which prompts the user to say a phone number. When the user says an utterance of a phone number, as indicated at 12204, that utterance will be recognized. If the system is quite confident that the recognition of the phone number is correct, it will automatically dial the recognized phone number, as indicated at 12206.
  • If the system is not that confident of the phone number's recognition, it will display a correction window, 12208. If the correction window has the desired number as the first choice, as indicated at 12210, the user can merely select it by pressing the OK key, which causes the system to dial the number, as indicated at 12212. If the correct choice is on the first-choice list, as indicated at 12214, the user can merely press the phone key number associated with that choice, which causes the system to dial the number, as indicated at 12216.
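The confidence-gated behavior just described, dialing immediately when the recognizer is confident and otherwise opening a correction window with a ranked choice list, can be sketched as follows. The threshold value, class, and function names are all assumptions for illustration, not taken from the patent:

```python
from dataclasses import dataclass


@dataclass
class Choice:
    digits: str        # a recognized phone number
    confidence: float  # recognizer score, assumed in [0, 1]


def handle_digit_dial(choices, threshold=0.90):
    """Dial the best choice outright when its confidence clears the
    (assumed) threshold; otherwise return the ranked list for display
    in a correction window, where OK accepts the first choice and a
    phone key N accepts the Nth."""
    ranked = sorted(choices, key=lambda c: c.confidence, reverse=True)
    if ranked[0].confidence >= threshold:
        return ("dial", ranked[0].digits)
    return ("correction_window", [c.digits for c in ranked])
```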
  • the user can check to see if the desired number is on one of the screens of the second choice list by either repeatedly pressing the page down key as indicated by the number 12302, or repeatedly pressing the item down key as is indicated at 12304. If by scrolling
  • digit change indicators, 12310, are provided to indicate the digit column of the most significant digit by which each choice differs from the choice ahead of it on the list. This makes it easier for the eye to scan for the desired phone number.
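The digit change indicators can be computed by finding, for each choice after the first, the leftmost digit column in which it differs from the choice ahead of it on the list. A minimal sketch (function name hypothetical):

```python
def digit_change_columns(choices):
    """For each choice after the first, return the index of the most
    significant (leftmost) digit column in which it differs from the
    previous choice; the first choice gets None."""
    columns = [None]
    for prev, cur in zip(choices, choices[1:]):
        col = next(
            (i for i, (a, b) in enumerate(zip(prev, cur)) if a != b),
            min(len(prev), len(cur)),  # choices differ only in length
        )
        columns.append(col)
    return columns
```

Rendering a marker above the returned column for each row reproduces the scanning aid described above.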
  • FIG. 124 illustrates how the digit dial mode allows the user to navigate to a digit position in the first choice and correct any error which exists within it. In FIG. 124, this is done by speaking the desired number, but the user is also allowed to correct the desired number by pressing the appropriate phone key.
  • the user is also able to edit a misrecognized phone number by inserting a missing digit as well as by replacing a misrecognized one.
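The two correction operations, replacing a misrecognized digit at the cursor position and inserting a missing one, amount to simple string edits on the first-choice number. A sketch, assuming 0-based positions counted from the left:

```python
def replace_digit(number, pos, digit):
    """Replace the digit at cursor position `pos` with `digit`."""
    return number[:pos] + digit + number[pos + 1:]


def insert_digit(number, pos, digit):
    """Insert a missing `digit` before position `pos`, lengthening the
    number by one."""
    return number[:pos] + digit + number[pos:]
```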
  • the invention described above has many aspects which can be used for the entering and correcting of speech recognition, as well as other forms of recognition, on many different types of computing platforms, including all those shown in FIGS. three through eight.
  • many of the features of the invention described with regard to FIG. 94 can be used in situations where a user desires to enter and/or edit text without having to pay close visual attention to those tasks. For example, this could allow a user to listen to e-mail and dictate responses while walking in a park, without the need to look closely at his cell phone or other dictation device.
  • One particular environment in which such audio feedback is useful for speech recognition and other control functions, such as phone dialing and phone control, is in an automotive arena, such as is illustrated in FIG. 126.
  • the car's electronic system will have a short-range wireless transceiver, such as a Bluetooth or other short-range transceiver, 12606. This can be used to communicate with a wireless headphone, 12608, or the user's cell phone, 12610, so that the user can have the advantage of accessing information stored on his normal cell phone while using his car.
  • the cell phone/wireless transceiver, 12602, can be used not only to send and receive cell phone calls but also digital files, such as text files which can be listened to and edited with the functionality described above, and audio Web pages.
  • the input device for controlling many of the functions described above with regard to the shown cell phone embodiment can be accessed by a phone keypad, 12212, that is preferably located in a position such as on the steering wheel of the automobile, which will enable a user to access its keys without unduly distracting him from driving.
  • with a phone keypad having a location similar to that shown in FIG. 126, a user can keep the fingers of one hand around the rim of the steering wheel while selecting keypad buttons with the thumb of the same hand.
  • the system would have the TTS keys function described above with regard to
  • a touch-sensitive keypad that responds to a mere touching of its phone keys with such information could also be provided, which would be even easier and more rapid to use.
  • FIGS. 127 and 128 illustrate that most of the capabilities described above with regard to the cell phone embodiment can be used on other types of phones, such as the cordless phone shown in FIG. 127 or the landline phone indicated in FIG. 128.
EP02773307A 2002-09-06 2002-09-06 Verfahren, systeme und programmierung zur durchführung der spracherkennung Withdrawn EP1604350A4 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2002/028590 WO2004023455A2 (en) 2002-09-06 2002-09-06 Methods, systems, and programming for performing speech recognition

Publications (2)

Publication Number Publication Date
EP1604350A2 EP1604350A2 (de) 2005-12-14
EP1604350A4 true EP1604350A4 (de) 2007-11-21

Family

ID=34271640

Family Applications (1)

Application Number Title Priority Date Filing Date
EP02773307A Withdrawn EP1604350A4 (de) 2002-09-06 2002-09-06 Verfahren, systeme und programmierung zur durchführung der spracherkennung

Country Status (5)

Country Link
EP (1) EP1604350A4 (de)
JP (1) JP2006515073A (de)
KR (1) KR100996212B1 (de)
CN (1) CN1864204A (de)
AU (1) AU2002336458A1 (de)

Families Citing this family (147)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7720682B2 (en) * 1998-12-04 2010-05-18 Tegic Communications, Inc. Method and apparatus utilizing voice input to resolve ambiguous manually entered text input
US8645137B2 (en) 2000-03-16 2014-02-04 Apple Inc. Fast, language-independent method for user authentication by voice
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US7697827B2 (en) * 2005-10-17 2010-04-13 Konicek Jeffrey C User-friendlier interfaces for a camera
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
JP4672686B2 (ja) * 2007-02-16 2011-04-20 株式会社デンソー 音声認識装置及びナビゲーション装置
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US8457946B2 (en) * 2007-04-26 2013-06-04 Microsoft Corporation Recognition architecture for generating Asian characters
JP4862740B2 (ja) * 2007-05-14 2012-01-25 ソニー株式会社 撮像装置、情報表示装置、および表示データ制御方法、並びにコンピュータ・プログラム
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
KR100998566B1 (ko) 2008-08-11 2010-12-07 엘지전자 주식회사 음성인식을 이용한 언어 번역 방법 및 장치
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US8494852B2 (en) * 2010-01-05 2013-07-23 Google Inc. Word-level correction of speech input
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
DE202011111062U1 (de) 2010-01-25 2019-02-19 Newvaluexchange Ltd. Vorrichtung und System für eine Digitalkonversationsmanagementplattform
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
KR101687614B1 (ko) * 2010-08-04 2016-12-19 엘지전자 주식회사 음성 인식 방법 및 그에 따른 영상 표시 장치
US20120110456A1 (en) * 2010-11-01 2012-05-03 Microsoft Corporation Integrated voice command modal user interface
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
KR101218332B1 (ko) * 2011-05-23 2013-01-21 휴텍 주식회사 하이브리드 방식의 음성인식을 통한 문자 입력 방법 및 장치, 그리고 이를 위한 하이브리드 방식 음성인식을 통한 문자입력 프로그램을 기록한 컴퓨터로 판독가능한 기록매체
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US8994660B2 (en) 2011-08-29 2015-03-31 Apple Inc. Text correction processing
US8762156B2 (en) * 2011-09-28 2014-06-24 Apple Inc. Speech recognition repair using contextual information
US9256396B2 (en) 2011-10-10 2016-02-09 Microsoft Technology Licensing, Llc Speech recognition for context switching
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9280610B2 (en) 2012-05-14 2016-03-08 Apple Inc. Crowd sourcing information to fulfill user requests
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
KR101330671B1 (ko) * 2012-09-28 2013-11-15 삼성전자주식회사 전자장치, 서버 및 그 제어방법
KR102009423B1 (ko) * 2012-10-08 2019-08-09 삼성전자주식회사 음성 인식을 이용한 미리 설정된 동작 모드의 수행 방법 및 장치
US8994681B2 (en) * 2012-10-19 2015-03-31 Google Inc. Decoding imprecise gestures for gesture-keyboards
CN103823547B (zh) * 2012-11-16 2017-05-17 中国电信股份有限公司 移动终端及其光标控制方法
WO2014109104A1 (ja) * 2013-01-08 2014-07-17 クラリオン株式会社 音声認識装置、音声認識プログラム及び音声認識方法
JP2016508007A (ja) 2013-02-07 2016-03-10 アップル インコーポレイテッド デジタルアシスタントのためのボイストリガ
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
WO2014144579A1 (en) 2013-03-15 2014-09-18 Apple Inc. System and method for updating an adaptive speech recognition model
KR101759009B1 (ko) 2013-03-15 2017-07-17 애플 인크. 적어도 부분적인 보이스 커맨드 시스템을 트레이닝시키는 것
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
WO2014197336A1 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
CN110442699A (zh) 2013-06-09 2019-11-12 苹果公司 操作数字助理的方法、计算机可读介质、电子设备和系统
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
CN105265005B (zh) 2013-06-13 2019-09-17 苹果公司 用于由语音命令发起的紧急呼叫的系统和方法
JP6163266B2 (ja) 2013-08-06 2017-07-12 アップル インコーポレイテッド リモート機器からの作動に基づくスマート応答の自動作動
EP2933067B1 (de) * 2014-04-17 2019-09-18 Softbank Robotics Europe Verfahren zur Durchführung eines multimodalen Dialogs zwischen einem humanoiden Roboter und einem Benutzer, Computerprogrammprodukt und humanoider Roboter zur Implementierung des Verfahrens
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
CN104267922B (zh) * 2014-09-16 2019-05-31 联想(北京)有限公司 一种信息处理方法及电子设备
US9606986B2 (en) 2014-09-29 2017-03-28 Apple Inc. Integrated word N-gram and class M-gram language models
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US9787819B2 (en) * 2015-09-18 2017-10-10 Microsoft Technology Licensing, Llc Transcription of spoken communications
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179309B1 (en) 2016-06-09 2018-04-23 Apple Inc Intelligent automated assistant in a home environment
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
CN106126156B (zh) * 2016-06-13 2019-04-05 北京云知声信息技术有限公司 基于医院信息系统的语音输入方法及装置
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
TWI610294B (zh) * 2016-12-13 2018-01-01 財團法人工業技術研究院 語音辨識系統及其方法、詞彙建立方法與電腦程式產品
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
DK179549B1 (en) 2017-05-16 2019-02-12 Apple Inc. FAR-FIELD EXTENSION FOR DIGITAL ASSISTANT SERVICES
GB2564668B (en) * 2017-07-18 2022-04-13 Vision Semantics Ltd Target re-identification
CN108899016B (zh) * 2018-08-02 2020-09-11 科大讯飞股份有限公司 一种语音文本规整方法、装置、设备及可读存储介质
JP2020042074A (ja) * 2018-09-06 2020-03-19 トヨタ自動車株式会社 音声対話装置、音声対話方法および音声対話プログラム
JP7159756B2 (ja) * 2018-09-27 2022-10-25 富士通株式会社 音声再生区間の制御方法、音声再生区間の制御プログラムおよび情報処理装置
CN110211576B (zh) * 2019-04-28 2021-07-30 北京蓦然认知科技有限公司 一种语音识别的方法、装置和系统
CN110808035B (zh) * 2019-11-06 2021-11-26 百度在线网络技术(北京)有限公司 用于训练混合语言识别模型的方法和装置
US11455148B2 (en) 2020-07-13 2022-09-27 International Business Machines Corporation Software programming assistant
KR102494627B1 (ko) * 2020-08-03 2023-02-01 한양대학교 산학협력단 데이터 라벨을 자동 교정하는 음성 인식 시스템 및 방법
CN112259100B (zh) * 2020-09-15 2024-04-09 科大讯飞华南人工智能研究院(广州)有限公司 语音识别方法及相关模型的训练方法和相关设备、装置
CN114454164B (zh) * 2022-01-14 2024-01-09 纳恩博(北京)科技有限公司 机器人的控制方法和装置
US11880645B2 (en) 2022-06-15 2024-01-23 T-Mobile Usa, Inc. Generating encoded text based on spoken utterances using machine learning systems and methods

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19635754A1 (de) * 1996-09-03 1998-03-05 Siemens Ag Sprachverarbeitungssystem und Verfahren zur Sprachverarbeitung
US6122613A (en) * 1997-01-30 2000-09-19 Dragon Systems, Inc. Speech recognition using multiple recognizers (selectively) applied to the same input sample
WO2000058945A1 (en) * 1999-03-26 2000-10-05 Koninklijke Philips Electronics N.V. Recognition engines with complementary language models


Also Published As

Publication number Publication date
KR100996212B1 (ko) 2010-11-24
JP2006515073A (ja) 2006-05-18
KR20060037228A (ko) 2006-05-03
AU2002336458A8 (en) 2004-03-29
AU2002336458A1 (en) 2004-03-29
EP1604350A2 (de) 2005-12-14
CN1864204A (zh) 2006-11-15

Similar Documents

Publication Publication Date Title
US7225130B2 (en) Methods, systems, and programming for performing speech recognition
EP1604350A2 (de) Verfahren, systeme und programmierung zur durchführung der spracherkennung
US7505911B2 (en) Combined speech recognition and sound recording
US7809574B2 (en) Word recognition using choice lists
US7444286B2 (en) Speech recognition using re-utterance recognition
US7467089B2 (en) Combined speech and handwriting recognition
US7313526B2 (en) Speech recognition using selectable recognition modes
US7577569B2 (en) Combined speech recognition and text-to-speech generation
US7526431B2 (en) Speech recognition using ambiguous or phone key spelling and/or filtering
US7634403B2 (en) Word recognition using word transformation commands
US7716058B2 (en) Speech recognition using automatic recognition turn off
JP4829901B2 (ja) マニュアルでエントリされた不確定なテキスト入力を音声入力を使用して確定する方法および装置
JP5166255B2 (ja) データ入力システム
TWI266280B (en) Multimodal disambiguation of speech recognition
US8954329B2 (en) Methods and apparatus for acoustic disambiguation by insertion of disambiguating textual information
US6314397B1 (en) Method and apparatus for propagating corrections in speech recognition software
US6415258B1 (en) Background audio recovery system
WO1999063425A1 (fr) Procede et appareil de traitement d'informations et support de fourniture d'informations
CN103369122A (zh) 语音输入方法及系统
JP3790038B2 (ja) サブワード型不特定話者音声認識装置
JP2007535692A (ja) 任意に話されたキャラクタのコンピュータによる認識及び解釈のためのシステム及び方法
CN116543764A (zh) 一种动作机构的控制电路及车辆

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20050405

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

DAX Request for extension of the european patent (deleted)
PUAK Availability of information related to the publication of the international search report

Free format text: ORIGINAL CODE: 0009015

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 21/00 20060101AFI20060426BHEP

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: PORTER, EDWARD W.

Owner name: FRANZOSA, PAUL A.

Owner name: GRABHERR, MANFRED G.

Owner name: JOHNSON, DAVID F.

Owner name: COHEN, JORDAN R.

Owner name: ROTH, DANIEL L.

Owner name: VOICE SIGNAL TECHNOLOGIES INC.

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 15/28 20060101ALI20070627BHEP

Ipc: G10L 21/00 20060101AFI20060426BHEP

A4 Supplementary search report drawn up and despatched

Effective date: 20071018

17Q First examination report despatched

Effective date: 20080528

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20090523