WO1995006938A1 - System for host computer control of remote voice interface


Info

Publication number
WO1995006938A1
Authority
WO
WIPO (PCT)
Prior art keywords
apparatus
including
set forth
terminal
means
Prior art date
1993-09-03
Application number
PCT/US1994/002794
Other languages
French (fr)
Inventor
Dayle E. Phillips
Original Assignee
Compuspeak Laboratories, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
1993-09-03
Filing date
1994-03-15
Publication date
1995-03-09
1993-09-03 Priority to US11576593A
1993-09-03 Priority to US08/115,765
1994-03-15 Application filed by Compuspeak Laboratories, Inc.
1995-03-09 Publication of WO1995006938A1


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers; Analogous equipment at exchanges
    • H04M1/26 Devices for signalling identity of wanted subscriber
    • H04M1/27 Devices whereby a plurality of signals may be stored simultaneously
    • H04M1/271 Devices whereby a plurality of signals may be stored simultaneously controlled by voice recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers; Analogous equipment at exchanges
    • H04M1/72 Substation extension arrangements; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selecting
    • H04M1/725 Cordless telephones
    • H04M1/72519 Portable communication terminals with improved user interface to control a main telephone operation mode or to indicate the communication status

Abstract

An application is created in a host computer (12) whereupon the application is downloaded to a remote terminal (14) having voice recognition (26) and text-to-speech (28) capability. The application defines parameters for operating the remote terminal (14) including spoken utterances allowable as inputs to the terminal for completing data entry fields. In operation, entry into a field activates associated terminal instructions, which may include a voice prompt for data entry, the allowable words for input, the data to be entered into the field corresponding to the input word, and the data to be sent to the host computer (12) in response.

Description

SYSTEM FOR HOST COMPUTER CONTROL OF REMOTE VOICE INTERFACE

Background of the Invention

1. Field of the Invention

The present invention relates to the field of host computer control of a voice interactive, remote terminal. More particularly, a voice application program is created in a host computer whereupon the application is downloaded to a remote terminal having voice interactive capability. The application defines parameters for operating the terminal including spoken utterances allowable as inputs to the terminal for completing data entry fields.

2. Description of the Prior Art

In the prior art, host computers such as mainframe computers have been operably coupled with remote terminals for data entry concerning various applications. In a typical arrangement, the remote terminal is a personal computer having a monitor and keyboard for data entry. Such an arrangement is not practical, however, for many applications such as order filling in a warehouse, in which the personnel selecting stock for shipment must be mobile with eyes and hands available for other tasks. This makes data entry on the keyboard impractical. Accordingly, the prior art points out the need for a practical system of data entry that does not require the use of a keyboard.

Summary of the Invention

The present invention solves the prior art problems discussed above and provides a distinct advance in the state of the art. More particularly, the system hereof allows for voice entry of data into a remote terminal operably coupled with a host computer from which the remote terminal receives operating parameters for the voice interactive application.

The preferred system includes a host computer operably coupled with a remote terminal having voice recognition capability for input of data. A voice interactive application program can be developed on the host computer and downloaded to the terminal, or a set of inputs and outputs in the form of tables can be defined and processed by a resident program on the terminal. The application program includes parameters regarding the operation of the terminal including parameters for receipt of spoken utterances as inputs, and for data transfer to the host computer in response to receipt of inputs.

In particularly preferred forms, the parameters define screens and data entry fields, and include instructions for implementation by the terminal in response to entry into a field and in response to receipt of particular inputs. For example, the instructions may include the text for a text-to-voice conversion as a prompt for data entry into a particular field, a list of allowable words as spoken inputs for that field, and a list of actions to be taken in response to the input of an allowable word. Such actions may include the data to be entered into the field and the data to be transferred to the host computer.

Brief Description of the Drawings

Figure 1 is a schematic representation of the preferred apparatus of the present invention;

Fig. 2 is a computer program flow chart illustrating the operation of the apparatus of Fig. 1;

Fig. 3 is a computer program flow chart illustrating the operation of the apparatus of Fig. 1; and

Fig. 4 is a computer program flow chart illustrating the operation of the apparatus of Fig. 1.

Detailed Description of the Preferred Embodiment

As illustrated in Fig. 1, preferred apparatus 10 in accordance with the present invention includes host computer 12 operably coupled with a plurality of remote terminals represented by remote terminal 14. Host computer 12 includes an IBM mainframe computer, for example, or a minicomputer such as a DEC MICROVAX. The intercoupling between host computer 12 and remote terminal 14 includes telephone lines, a local area network (LAN), or a wide area network (WAN) as appropriate for the particular circumstances of use.

Remote terminal 14 includes personal computer 16 with monitor 18, conventional keyboard 20, and radio unit 22. Personal computer 16 is preferably an IBM compatible unit based upon a 486 type microprocessor having a 200 megabyte hard drive memory for data storage. Computer 16 is further equipped with terminal emulation card 24, voice recognition card 26, text-to-speech conversion card 28, and radio base station 30.

Terminal emulation card 24 is preferably operable in an ETHERNET environment using a REVELATION program, and is used as an interface with host computer 12. Voice recognition card 26 is available from CompuSpeak Laboratories, Inc., of Olathe, Kansas, under the mark PROPHET, and is used to process voice inputs received from radio unit 22. Text-to-speech card 28 is preferably model PROSE 4000 available from Centigram and is used for converting text stored in memory into a synthetic voice output by way of radio unit 22 to a user of the system. Radio base station 30 serves as the interface between computer 16 and radio unit 22 and is preferably model CSL-4000 available from CompuSpeak Laboratories of Olathe, Kansas.

Radio unit 22 includes VOICELINK radio 32 having ten-button keypad 34, headset 36 coupled with radio 32 and having microphone 38 and earpiece 40, and barcode reader 42 also coupled with radio 32. Radio unit 22 is operable for full duplex, wireless communication with computer 16 in order to transmit voice and data therebetween. A wireless link with computer 16 is preferred because of the enhanced utility derived therefrom. Those skilled in the art will appreciate that the voice recognition and voice synthesis capability could reside in a unit worn on the belt of the user. It will also be appreciated that voice input capability could also reside directly on computer 16 through a headset directly wired thereto.

In the use of apparatus 10, a programmer builds a voice interactive application on host computer 12 and then downloads that application to remote terminal 14. With the structure and operation of apparatus 10 in accordance with the present invention, the programmer need not have expertise in voice recognition technology, and need not have expertise in the programming of personal computers. This is because the voice interaction capability resides in remote terminal 14, which is already configured to receive parameter data for an application from host computer 12. Instead, existing expertise in the programming and languages of host computer 12 can be used to create new voice interactive applications. This represents a substantial advance over the prior art.

A typical application includes a series of screens to be displayed on monitor 18 with each screen including a plurality of data entry fields. A set of instructions is stored in association with each field. When the application enters a field, the associated instructions are implemented by remote terminal 14. These instructions include such things as whether to provide a voice prompt to the user of radio unit 22 and the text of that voice prompt for conversion by text-to-speech converter 28; the allowable types of inputs, such as spoken utterances by way of radio unit 22, keyboard 20, keypad 34, and barcode reader 42; a pointer to the list of allowable words or spoken utterances that are to be recognized when in that field; and whether to provide feedback for each voice input, such as an echo of the spoken utterance. A set of actions to be implemented by terminal 14 is also stored in association with each input word defined as part of the application. Responsive actions may include, for example, printing of data to the field on monitor 18, transmitting of data to host computer 12, and providing audible help instructions.

In order to create a new application on host computer 12, the programmer enters data into an application layout tool having a set of predefined databases that store the needed parameters as parameter data for implementing the application on remote terminal 14. These databases include the application's name and identifier, the names and identifiers of data entry screens that will be displayed on monitor 18, the data entry fields with screen locations, the words that will be used for voice input in connection with the application, a set of actions and feedback information for each word, and feedback information for each field.
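The four predefined databases lend themselves to a simple linked-table picture. Below is a minimal sketch, in modern Python, of what the downloaded parameter data might look like; the class and attribute names are illustrative assumptions, not the patent's actual record layout.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    keystrokes: str         # data entered into the field, e.g. "5" for the word "five"
    echo_text: str          # audible confirmation spoken through the text-to-speech card
    host_data: str = ""     # data transmitted to the host computer, if any

@dataclass
class WordEntry:
    spoken: str             # utterance the recognizer listens for
    action: Action          # action implemented when this word is recognized

@dataclass
class Field:
    name: str
    screen_id: int
    row: int                # row/column of the first data entry point
    col: int
    width: int
    height: int
    prompt: str = ""        # optional voice prompt issued upon entry into the field
    active_words: list[str] = field(default_factory=list)  # pointers into the vocabulary

@dataclass
class Screen:
    screen_id: int
    title: str
    fields: list[Field] = field(default_factory=list)
```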

During programming, the number of screens is defined with each screen assigned a numerical identifier and particular text, such as a screen title, to be displayed on that screen. The screen data also includes the row and column number of the screen text. The programmer defines a table of the total vocabulary of words including numbers and letters to be used in the application.

Each data entry field is given an object name, assigned to a particular screen, and the first data entry point is assigned row and column positions on the screen along with the width and height of the field. Additionally, each field is assigned a set of instructions. These include a flag to indicate whether feedback should be provided as a voice prompt immediately upon entry into the field and, if so, the text of the prompt.

The field instructions also include pointers to allowable voice input words in the vocabulary table as so-called "active" words. In this way, only active words need be considered in the voice recognition process instead of all of the words in the vocabulary, which speeds operation of the program and minimizes recognition errors.

The various field instructions include certain words in common called global commands that are always active. These global commands include, for example, "Where am I?" with the response being the name of the field, "What can I say?" with the response being the list of allowable words, and "Quit listening" whereby computer 16 does not respond to voice inputs except the phrase "Begin listening again."
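At recognition time, the candidate set for a field is therefore its active words plus the global commands. A minimal sketch of that filtering, assuming a set-based representation (the actual interface of voice recognition card 26 is not described in the patent):

```python
# Hypothetical: global commands are always active regardless of the field.
GLOBAL_COMMANDS = {"where am i", "what can i say", "quit listening"}

def candidate_utterances(active_words):
    """Utterances the recognizer should consider while in the current field."""
    # Restricting the match to this small set both speeds recognition and
    # reduces the chance of a spurious match against an unrelated word.
    return {w.lower() for w in active_words} | GLOBAL_COMMANDS
```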

A set of actions is also defined and stored in association with each word in the vocabulary. These actions include the appropriate voice responses such as those for the global commands as mentioned above, and also include the text such as equivalent keystrokes to be entered as data into the active field. For example, an action associated with the input of the word "five" might include entering of the numeral "5" into the field and an audible response of the word "five" as feedback to the user, and similarly for other numerals, letters and specific words. It will be appreciated, however, that the action can include a set of predefined data not having a direct conversion between the input and the data itself. For example, the spoken word "red" might convert to "Classic Red #7859" because that is the true color name expected by the host computer.
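Concretely, the word-to-action association might be tabulated as follows; this is a hypothetical sketch in which write_field and speak stand in for the field-entry and text-to-speech paths:

```python
# Hypothetical word-to-action table. The stored data need not mirror the
# utterance: "red" expands to the full color name the host computer expects.
ACTIONS = {
    "five": ("5", "five"),
    "red": ("Classic Red #7859", "red"),
}

def perform_action(word, write_field, speak):
    """Carry out the stored actions for a recognized word."""
    keystrokes, echo = ACTIONS[word]
    write_field(keystrokes)  # enter the corresponding data into the active field
    speak(echo)              # audible feedback confirming recognition
```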

Other actions in association with each word include whether to provide feedback in the form of an audible "echo" when the spoken utterance is recognized. Specifically, the text of the recognized input is processed through text-to-speech card 28 to provide an audible confirmation to the user that the spoken utterance was recognized.

After the application has been defined by the programmer, the parameter data of the application are downloaded and stored in the memory of computer 16. The parameter data are configured and stored as linked lists in the memory of computer 16 as tables for screens, fields, words, and actions. With regard to the fields, the coordinate data are converted to coordinate pairs defining the upper left and lower right positions of the field. Additionally, the positions of the previous and next fields are also derived and stored in the associated field instructions.

Computer 16 preferably uses REFLECTION API to provide the capability for determining the current location of the cursor, both physically and logically, on the emulated screen. With this capability, computer 16 can determine the current field of the application.

In operation, host computer 12 is on line with remote terminal 14 and designates the application to be run. The screens are retrieved in order and the cursor is positioned initially in the first field of each screen. The program then searches the fields table for a field encompassing the coordinates of the current cursor position. This field is designated as the current field. With this information, terminal 14 implements the instructions associated with the current field.

Fig. 2 is a computer program flow chart illustrating application flow routine 200 for receiving parameter data and implementing an application on remote terminal 14. Routine 200 enters at step 202, which calls up the program in terminal emulation card 24 so that computer 16 can emulate a terminal. In step 204, host computer 12 then downloads the parameter data tables into the memory of computer 16.

Next, in step 206, computer 16 reads the screen identifier of the currently displayed screen, and step 208 then asks whether this screen identifier is in the table stored in memory. If yes, the program reads the row and column data of the cursor in step 210. Step 211 then asks whether the row and column are in the table for this screen identifier. If no, the program returns to step 206. If the answer in step 211 is yes, the corresponding field is determined in step 212 to establish the current field.
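The lookup in steps 206 through 212 amounts to a point-in-rectangle search over the stored field table. A sketch under the corner-pair representation described earlier (the function and attribute names are assumptions):

```python
def find_current_field(fields, screen_id, row, col):
    """Return the field on the given screen whose rectangle contains the
    cursor, or None if the cursor is not inside any data entry field."""
    for f in fields:
        if f.screen_id != screen_id:
            continue
        (top, left), (bottom, right) = f.corners  # upper-left, lower-right pairs
        if top <= row <= bottom and left <= col <= right:
            return f  # this becomes the current field
    return None
```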

Upon identifying the field, the associated instructions are implemented including a voice prompt in step 214. The text for the voice prompt is stored in the instructions and text-to-speech converter 28 provides the audible output through earpiece 40 by way of radio interface 30 and radio 32.

Step 216 implements the other instructions associated with the current field. These include enabling the allowed inputs. In this example, these include voice by way of microphone 38, and data by way of keypad 34 and barcode reader 42. Step 216 also activates the allowable words designated in the instructions. In step 218, the program waits to receive a user input from one of the three sources. If the input is a spoken utterance, voice recognition card 26 searches for a match with one of the active words. If a match is determined, the corresponding word is deemed to be the input.

A valid input is also used as a pointer to retrieve the associated actions from the action table. In step 220, the indicated action is an audible feedback generated by converter 28 from the text included in the retrieved action. For example, if the user input word is "one" the feedback response is also "one" so that the user knows that the input was properly recognized and that the program is ready for the next input.

Step 222 then asks whether the current field should be exited. If no, the program returns to step 218 to receive the next input and continues to loop until all of the inputs have been received for the current field. This is designated by the spoken utterance "finished" and the associated action with this word is to exit the current field. Other actions include, for example, writing data to the field and transmitting data to host computer 12. The program then returns to step 206 to continue the process for the next field. The steps illustrated in routine 200 continue until all of the data is entered for the current application. Host computer 12 then activates the next application.
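Taken together, steps 214 through 222 form a per-field loop along the following lines. This is a hedged sketch reusing the hypothetical ACTIONS table above; recognizer, speak, and send_to_host stand in for the card-level interfaces, which the patent does not specify:

```python
def run_field(fld, recognizer, speak, send_to_host):
    """One pass through steps 214-222 for a single data entry field."""
    if fld.prompt:                          # step 214: voice prompt on entry
        speak(fld.prompt)
    recognizer.activate(fld.active_words)   # step 216: enable the allowed words
    entered = ""
    while True:
        word = recognizer.wait_for_input()  # step 218: voice, keypad, or barcode
        if word == "finished":              # exit utterance for the current field
            send_to_host(fld.name, entered) # transmit the field data to the host
            return entered
        keystrokes, echo = ACTIONS[word]    # look up the stored actions
        entered += keystrokes               # write data into the field
        speak(echo)                         # step 220: audible echo to the user
```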

Fig. 3 illustrates routine 300 for an order filling application. In step 302, host computer 12 displays a blank order filling screen on monitor 18 and positions the cursor. In step 304, the program reads the screen identifier and cursor position, and from this identifies the field, associated instructions, active words and associated actions. In step 306, the voice prompt "enter customer order number" is provided to the user who is moving throughout a warehouse to fill customer orders. In step 308, the allowable words are activated, which are the words for digits plus the global commands including "finished." The user responds to the prompt by uttering the digits of the customer order number and, when complete, utters "finished." The associated action with this last entry is a voice output with the name of the company.

Host computer 12 then displays the order to be filled in step 314. Step 316 displays the order on monitor 18 providing the location, item number and quantity of the product to be retrieved for satisfying the customer order. The text of this message is converted to voice in step 318 to inform the user. In step 320, the user is prompted to say the quantity of the product selected for shipment. After a response from the user, host computer 12 displays a new cursor position in step 322. Step 324 then reads the new cursor position and step 326 asks whether the order is complete. If no, the program returns to step 316 and continues for the next product to be selected for that customer. When the order is complete, the answer in step 326 is yes and routine 300 ends.
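Under the table-driven scheme, a dialogue like routine 300 is expressed as parameter data rather than code. A hypothetical fragment of what the downloaded tables for this application could contain (field names and prompt wording are illustrative):

```python
ORDER_FILLING_FIELDS = [
    {
        "name": "customer order number",
        "prompt": "enter customer order number",
        # the words for digits plus the exit word; global commands stay active
        "active_words": ["zero", "one", "two", "three", "four", "five",
                         "six", "seven", "eight", "nine", "finished"],
    },
    {
        "name": "quantity selected",
        "prompt": "say the quantity of the product selected for shipment",
        "active_words": ["zero", "one", "two", "three", "four", "five",
                         "six", "seven", "eight", "nine", "finished"],
    },
]
```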

Routine 400 illustrated in Fig. 4 shows another example of the utility of the present invention for a voice capable, report generation application. The program enters at step 402 in which the parameter data for the application are loaded from host computer 12 into the memory of personal computer 16.

In step 404, host computer 12 displays an indications screen on monitor 18. The program then reads the screen location in step 406 for the decision name "indications," after which step 408 issues the voice prompt "enter indications." Step 410 then activates the associated words in accordance with the current field.

In step 412 the program waits for user inputs as utterances and then writes the recognized utterances to the result field in step 414. Host computer 12 then reads the result field in step 416.
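The data flow of steps 412 through 416 can be sketched in the same style; all three objects here are hypothetical stand-ins:

```python
def indications_step(recognizer, result_field, host):
    """Steps 412-416: recognized utterances reach the host via a result field."""
    utterance = recognizer.wait_for_input()  # step 412: wait for a spoken input
    result_field.write(utterance)            # step 414: write recognized text
    host.read(result_field)                  # step 416: host reads the result field
```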

Step 418 next asks whether the current application is finished. If no, the program returns to step 404 and continues until the application is complete. When the answer in step 418 is yes, this application ends.

Those skilled in the art will now appreciate the utility, versatility, and advance in the state of the art that the present invention provides. As discussed above, the system hereof allows those skilled in mainframe and minicomputer programming to write voice interactive applications in a simple, yet elegant manner without the need for skill in the programming of personal computers. The programmer need only enter data into tables and by the nature of the data, define the application to be run by remote terminal 14. The voice interactive capability and microprocessor programming reside on personal computer 16, which performs those functions necessary to implement data input using voice interaction.

The table structure of the data also provides a simple way to modify existing applications. Any changes can be easily implemented merely by changing the appro¬ priate data in the tables.

The utility of the present invention is further enhanced by the provision of radio unit 22. With this capability, a user can be mobile and remote from terminal 14, and still interact effectively using only spoken utterances. Because of this, applications previously impractical are now enabled by the present invention.

Having thus described the preferred embodiment of the present invention, the following is claimed as new and desired to be secured by Letters Patent:

Claims

CLAIMS:
1. A data transfer apparatus comprising: a host computer; and a remote terminal coupled with said host computer for data transfer therebetween, said remote terminal including input means for receiving inputs including voice means for recognizing and processing selected spoken utterances as inputs, means for receiving from said host computer and for storing parameter data representative of parameters regarding the operation of said terminal including parameters regarding the receipt of inputs, and means responsive to receipt of spoken utterances in accordance with said parameters for providing data corresponding to said utterances.
2. The apparatus as set forth in claim 1, said host computer including a mainframe computer.
3. The apparatus as set forth in claim 1, said host computer including a mini-computer.
4. The apparatus as set forth in claim 1, said host computer including a personal computer.
5. The apparatus as set forth in claim 1, said remote terminal including a personal computer.
6. The apparatus as set forth in claim 1, said remote terminal being coupled with said host computer by telephone lines.
7. The apparatus as set forth in claim 1, said remote terminal being coupled with said host computer by an area network.
8. The apparatus as set forth in claim 1, said input means including a keyboard.
9. The apparatus as set forth in claim 1, said voice means including a remote input device responsive to spoken utterances for producing voice signals representative thereof.
10. The apparatus as set forth in claim 9 further including wires coupling said device with said terminal for transferring said signals thereto.
11. The apparatus as set forth in claim 9, said device including means for producing said voice signals as wireless signals, said terminal including receiver means for receiving said wireless signals.
12. The apparatus as set forth in claim 9, said device including a keypad manually activatable for producing alphanumeric signals representative of alphanumeric data.
13. The apparatus as set forth in claim 9, said device further including a barcode scanner operable for producing barcode signals representative of barcode data.
14. The apparatus as set forth in claim 1, said terminal including means for storing utterance data representative of stored utterances of selected spoken words, and means for determining whether a spoken utterance received by said voice means matches one of said stored utterances.
15. The apparatus as set forth in claim 1, said parameters including a set of data entry fields with each of said fields having a set of associated instructions for implementation by said terminal.
16. The apparatus as set forth in claim 15, said terminal including means for storing said instructions as a table addressable by the associated field.
17. The apparatus as set forth in claim 15, said terminal including means for entering one of said fields for receipt of input data.
18. The apparatus as set forth in claim 15, said instructions including instructions identifying the types of allowable inputs.
19. The apparatus as set forth in claim 18, said types of allowable inputs being selected from the group including spoken utterances, a keyboard, and a barcode reader.
20. The apparatus as set forth in claim 17, said terminal including a text-to-voice converter, said instructions including text for conversion to voice by said converter upon entry into the associated field.
21. The apparatus as set forth in claim 17, said instructions including a set of allowable spoken utterances, said voice means being responsive only to said allowable utterances.
22. The apparatus as set forth in claim 21, said allowable utterances including textual words.
23. The apparatus as set forth in claim 21, said allowable utterances including alphanumeric words.
24. The apparatus as set forth in claim 21, said allowable utterances including textual words and alphanumeric words.
25. The apparatus as set forth in claim 21, said selected utterances including said allowable utterances.
26. The apparatus as set forth in claim 21, said allowable utterances including selected global commands in common with said instructions for all of said fields.
27. The apparatus as set forth in claim 15, said terminal including means for entering data into said fields in correspondence with receipt of spoken utterances in accordance with said parameters.
28. The apparatus as set forth in claim 1, said terminal including a monitor for displaying screens, said parameter data including data defining a screen with a set of associated data entry fields with each of said fields having a set of associated instructions for implementation by said terminal.
29. The apparatus as set forth in claim 17, said terminal including a monitor for displaying screens, means for displaying a cursor on a displayed screen in one of said fields, and means for determining the position of said cursor and thereby determining the associated field.
30. A method of data transfer comprising the steps of: providing a host computer coupled with a remote terminal for data transfer therebetween, said terminal having means for receiving inputs including voice means for recognizing and processing selected spoken utterances as inputs; transferring from said host computer to said remote terminal and storing therein parameter data representative of parameters regarding the operation of said terminal including parameters regarding the receipt of inputs; receiving spoken utterances by said terminal; and responding to receipt of spoken utterances in accordance with said parameters by providing data corresponding to said utterances.
PCT/US1994/002794 1993-09-03 1994-03-15 System for host computer control of remote voice interface WO1995006938A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11576593A 1993-09-03 1993-09-03
US08/115,765 1993-09-03

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU64463/94A AU6446394A (en) 1993-09-03 1994-03-15 System for host computer control of remote voice interface

Publications (1)

Publication Number Publication Date
WO1995006938A1 true WO1995006938A1 (en) 1995-03-09

Family

ID=22363274

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1994/002794 WO1995006938A1 (en) 1993-09-03 1994-03-15 System for host computer control of remote voice interface

Country Status (2)

Country Link
AU (1) AU6446394A (en)
WO (1) WO1995006938A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1996016685A1 (en) * 1994-11-25 1996-06-06 Block Medical, Inc. Remotely programmable infusion system
WO1999045531A1 (en) * 1998-03-03 1999-09-10 Microsoft Corporation Apparatus and method for providing speech input to a speech recognition system
WO2000017857A1 (en) * 1998-09-21 2000-03-30 Thomson Multimedia System comprising a remote controlled apparatus and voice-operated remote control device for the apparatus
EP1061717A1 (en) * 1999-06-19 2000-12-20 Sigurd Traute Mobile telephone
US6749586B2 (en) 1994-11-25 2004-06-15 I-Flow Corporation Remotely programmable infusion system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4776016A (en) * 1985-11-21 1988-10-04 Position Orientation Systems, Inc. Voice control system
US5153504A (en) * 1991-04-23 1992-10-06 International Business Machines Corporation Pneumatically actuated hold down gate
US5209745A (en) * 1990-02-06 1993-05-11 Irr Joseph D Blood cryopreservation container
US5309504A (en) * 1991-11-18 1994-05-03 Syntellect Acquisition Corp. Automated identification of attendant positions in a telecommunication system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4776016A (en) * 1985-11-21 1988-10-04 Position Orientation Systems, Inc. Voice control system
US5209745A (en) * 1990-02-06 1993-05-11 Irr Joseph D Blood cryopreservation container
US5153504A (en) * 1991-04-23 1992-10-06 International Business Machines Corporation Pneumatically actuated hold down gate
US5309504A (en) * 1991-11-18 1994-05-03 Syntellect Acquisition Corp. Automated identification of attendant positions in a telecommunication system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
IEEE, Vol. 2, 06-10 November 1989, ARAMAKI et al., "Voice Command Robot System by Using the Linguistic Knowledge of a Voice", pages 504-508. *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1996016685A1 (en) * 1994-11-25 1996-06-06 Block Medical, Inc. Remotely programmable infusion system
US5871465A (en) * 1994-11-25 1999-02-16 I-Flow Corporation Remotely programmable infusion system
US7153289B2 (en) 1994-11-25 2006-12-26 I-Flow Corporation Remotely programmable infusion system
US6749586B2 (en) 1994-11-25 2004-06-15 I-Flow Corporation Remotely programmable infusion system
US6505159B1 (en) 1998-03-03 2003-01-07 Microsoft Corporation Apparatus and method for providing speech input to a speech recognition system
WO1999045531A1 (en) * 1998-03-03 1999-09-10 Microsoft Corporation Apparatus and method for providing speech input to a speech recognition system
WO2000017857A1 (en) * 1998-09-21 2000-03-30 Thomson Multimedia System comprising a remote controlled apparatus and voice-operated remote control device for the apparatus
US6762692B1 (en) 1998-09-21 2004-07-13 Thomson Licensing S.A. System comprising a remote controlled apparatus and voice-operated remote control device for the apparatus
EP1061717A1 (en) * 1999-06-19 2000-12-20 Sigurd Traute Mobile telephone

Also Published As

Publication number Publication date
AU6446394A (en) 1995-03-22

Similar Documents

Publication Publication Date Title
US8447609B2 (en) Adjustment of temporal acoustical characteristics
US8572209B2 (en) Methods and systems for authoring of mixed-initiative multi-modal interactions and related browsing mechanisms
US5664896A (en) Speed typing apparatus and method
US7432912B2 (en) Pocket size computer adapted for use by a visually impaired user
US7145554B2 (en) Method for a high-speed writing system and high -speed writing device
EP0842463B1 (en) Reduced keyboard disambiguating system
US4942540A (en) Method an apparatus for specification of communication parameters
US8775189B2 (en) Control center for a voice controlled wireless communication device system
JP3363283B2 (en) Input device, input method, management method of an information processing system and input information
US6057845A (en) System, method, and apparatus for generation and recognizing universal commands
US6421793B1 (en) System and method for automated testing of electronic devices
US7218249B2 (en) Hand-held communication device having navigation key-based predictive text entry
US6307541B1 (en) Method and system for inputting chinese-characters through virtual keyboards to data processor
EP0686280B1 (en) Keyboard-type input apparatus
EP1514357B1 (en) Explicit character filtering of ambiguous text entry
US6864809B2 (en) Korean language predictive mechanism for text entry by a user
US7043700B1 (en) Mobile client computer programmed to predict input
AU2005259925B2 (en) Nonstandard text entry
US20020128843A1 (en) Voice controlled computer interface
US4914704A (en) Text editor for speech input
US20040225504A1 (en) Portable device for enhanced security and accessibility
US20050041011A1 (en) Method and user interface for entering text
US20060123358A1 (en) Method and system for generating input grammars for multi-modal dialog systems
JP4091398B2 (en) Automatic software input panel selection based on the state of the application program
US4901223A (en) Method and apparatus for application software control of echo response

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: CA