GB2344917A - Speech command input recognition system - Google Patents


Info

Publication number
GB2344917A
Authority
GB
United Kingdom
Prior art keywords
speech
command
commands
terms
relevance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9929392A
Other versions
GB9929392D0 (en)
GB2344917B (en)
Inventor
Scott A Morgan
David J Roberts
Craig A Swearingen
Alan R Tannenbaum
Anthony C Temple
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/213,858 external-priority patent/US7206747B1/en
Priority claimed from US09/213,846 external-priority patent/US6937984B1/en
Priority claimed from US09/213,856 external-priority patent/US8275617B1/en
Priority claimed from US09/213,845 external-priority patent/US6192343B1/en
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Publication of GB9929392D0 publication Critical patent/GB9929392D0/en
Publication of GB2344917A publication Critical patent/GB2344917A/en
Application granted granted Critical
Publication of GB2344917B publication Critical patent/GB2344917B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Classifications

    • G — PHYSICS
        • G06 — COMPUTING; CALCULATING OR COUNTING
            • G06F — ELECTRIC DIGITAL DATA PROCESSING
                • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
                    • G06F 3/16 — Sound input; Sound output
        • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
            • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
                • G10L 15/00 — Speech recognition
                    • G10L 15/06 — Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
                    • G10L 15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
                        • G10L 2015/223 — Execution procedure of a spoken command
                        • G10L 2015/226 — Procedures used during a speech recognition process using non-speech characteristics
                            • G10L 2015/228 — Procedures used during a speech recognition process using non-speech characteristics of application context
                    • G10L 15/26 — Speech to text systems

Abstract

An interactive computer controlled display system with speech command input recognition and visual feedback includes means for predetermining a plurality of speech commands for respectively initiating corresponding system actions and means for providing for each command an associated set of speech terms, each term having relevance to its associated command. The system provides means responsive to a detected speech command for displaying the command, and means responsive to a detected speech term having relevance to one of the commands for displaying the relevant command. The system preferably displays the basic speech commands simultaneously along with the relevant commands. The means for providing the associated set of speech terms comprise a stored relevance table of universal speech input commands and universal computer operation terms conventionally associated with actions initiated by the input commands.

Description

SPEECH COMMAND INPUT RECOGNITION SYSTEM

Technical Field

The present invention relates to interactive computer controlled display systems with speech command input and more particularly to such systems which present display feedback to the interactive users.
Background of Related Art

The decade of the 1990s has been marked by a technological revolution driven by the convergence of the data processing industry with the consumer electronics industry. This advance has been further accelerated by the extensive consumer and business involvement in the Internet over the past few years. As a result of these changes, it seems as if virtually all aspects of human endeavor in the industrialized world require human/computer interfaces. There is a need to make computer directed activities accessible to people who up to a few years ago were computer illiterate or, at best, computer indifferent.
Thus, there is continuing demand for interfaces to computers and networks which improve the ease of use for the interactive user accessing functions and data from the computer. With desktop-like interfaces including windows and icons, as well as three-dimensional virtual-reality simulating interfaces, the computer industry has been working hard to satisfy such user expectations by making human/computer interfaces closer and closer to real-world interfaces, e.g. human/human interfaces. In such an environment, it would be expected that speaking to the computer in natural language would be a very natural way of interfacing, even for novice users. Despite these potential advantages of speech recognition computer interfaces, the technology has been relatively slow in gaining extensive user acceptance.
Speech recognition technology has been available for over twenty years, but it has only recently begun to find commercial acceptance, particularly with speech dictation or "speech to text" systems, such as those marketed by International Business Machines Corporation (IBM) and Dragon Systems. That aspect of the technology is now expected to undergo accelerated development until it has a substantial niche in the word processing market. On the other hand, a more universal application of speech recognition input to computers, which is still behind expectations in user acceptance, is in command and control technology, wherein, for example, a user may navigate through a computer system's graphical user interface (GUI) by speaking the commands which are customarily found in the system's menu text, icons, labels, buttons, etc.
Many of the deficiencies in speech recognition, both in word processing and in command technologies, are due to inherent voice recognition errors, attributable in part to the status of the technology, in part to the variability of user speech patterns, and in part to the user's ability to remember the specific commands necessary to initiate actions. As a result, most current voice recognition systems provide some form of visual feedback which permits the user to confirm that the computer understands his speech utterances. In word processing, such visual feedback is inherent in the process, since the purpose of the process is to translate from the spoken to the visual. That may be one of the reasons that the word processing applications of speech recognition have progressed at a faster pace.
However, in speech recognition driven command and control systems, the constant need for switching back and forth from a natural speech input mode of operation, when the user is requesting help or making other queries, to the command mode of operation, when the user is issuing actual commands, tends to be very tiresome and impacts user productivity, particularly when there is an intermediate display feedback.
Summary of the Present Invention

The present invention is directed to providing solutions to one or more of the above-listed needs of speech recognition systems. Preferred embodiments of the invention provide command and control systems which are heuristic both on the part of the computer, in that it learns and narrows from the natural speech to command user feedback cycles, and on the part of the user, in that he tends to learn and narrow down to the computer system specific commands as a result of the feedback cycles.
The present invention is directed to an interactive computer controlled display system with speech command input recognition which includes means for predetermining a plurality of speech commands for respectively initiating each of a corresponding plurality of system actions, in combination with means for providing, for each of said plurality of commands, an associated set of speech terms, each term having relevance to its associated command. Also included are means for detecting speech commands and speech terms. Responsive to such detecting means, the system provides means responsive to a detected speech command for displaying said command, and means responsive to a detected speech term having relevance to one of said commands for displaying the relevant command.
A system implementing the invention preferably provides interactive means for selecting a displayed command to thereby initiate a system action. These selecting means are preferably speech command input means.
The system can display the actual speech commands, i.e., commands actually spoken by the user, simultaneously with the relevant commands, i.e., commands not actually spoken but found in response to spoken terms having relevance to the commands.
The system of the present invention is particularly effective when used in the implementation of distinguishing actual spoken commands from spoken queries for help and other purposes. In accordance with an aspect of the invention, the means for providing said associated set of speech terms comprise a stored relevance table of universal speech input commands and universal computer operation terms conventionally associated with actions initiated by said input commands, and means for relating the particular interactive interface commands of said system with terms in said relevance table.
In another aspect, the present invention provides a system for confirming the recognition of a command by first predetermining a plurality of speech commands for respectively designating each of a corresponding plurality of system actions and providing means for detecting such speech commands. There also are means responsive to a detected speech command for displaying said command for a predetermined time period, during which time the user may give a spoken command to stop the system action designated by said displayed command. In the event that said system action is not stopped during said predetermined time period, the system action designated by said displayed command will be executed.
The user need not wait for the expiration of the time period if he notes that the displayed command is the right one; he has speech command means for executing the system action designated by said displayed command prior to the expiration of said time period. This may be as simple as just repeating the displayed command.
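By way of illustration, the timed confirmation behaviour described above may be sketched as follows. The function names, the "stop" utterance, and the three-second window are illustrative assumptions, not details of the disclosed embodiment:

```python
import time

def confirm_command(command, read_speech, timeout=3.0, poll=0.05):
    """Display a recognized command and wait out a confirmation window.

    The designated system action is carried out unless the user says
    "stop" before the window expires; repeating the displayed command
    carries it out immediately, without waiting for expiration.
    """
    print(f"Recognized: {command}")              # visual feedback of the command
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        utterance = read_speech()                # returns None while the user is silent
        if utterance == "stop":
            return "cancelled"                   # user stopped the pending action
        if utterance == command:
            return "executed"                    # early confirmation by repetition
        time.sleep(poll)
    return "executed"                            # window lapsed: execute the action
```

Here `read_speech` stands in for whatever recognizer the system uses; a monotonic clock is used so the deadline is unaffected by wall-clock adjustments.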
In a further aspect, the invention provides a speech recognition system which interprets speech queries, such as help queries, and presents a list of relevant proposed commands sorted in order based upon relevance of the commands. The system organizes the displayed commands prompted to the user through probability determining means which, for each of a predetermined plurality of speech commands, store an associated set of speech terms, each term having relevance to its associated command, combined with means responsive to a speech query for determining the probability of speech terms from said set in said query, and means responsive to said probability determining means for prompting the user with a displayed sequence of commands sorted based upon said probability of speech terms associated with said commands. When such a sorted command is selected, the system has means responsive to a speech command for carrying out the system action corresponding to the command.
Preferably, the means for determining probability of speech terms weight said probability based upon the firmness of recognition of the speech terms.
Excellent results are achieved when in determining said probability, a firm recognition of a speech term is accorded twice the weight of an infirm recognition of the term. Also, if the speech query has a term which is an exact match with any command, then in the sorting of displayed commands, any exact match of a speech term with any command doubles the weight accorded to that command in the sorting. The system also provides means for adding terms to previous speech terms wherein the probability determining means will redetermine probability to include such added terms. In such a situation, the probability determining means will redetermine the weights to include the additional weights of such added terms.
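The weighting rules just described can be illustrated with the following sketch; the function name and data shapes are assumptions for illustration only:

```python
def score_commands(recognized_terms, relevance_table, commands):
    """Rank commands for display by weighted relevance of speech terms.

    recognized_terms is a list of (term, firm) pairs, where firm is True
    for a firm recognition; relevance_table maps each speech term to the
    commands it is relevant to.  A firm recognition carries twice the
    weight of an infirm one, and an exact match between a spoken term
    and a command doubles that command's weight in the sorting.
    """
    scores = dict.fromkeys(commands, 0.0)
    exact_matches = set()
    for term, firm in recognized_terms:
        weight = 2.0 if firm else 1.0            # firm recognition counts double
        for cmd in relevance_table.get(term, ()):
            if cmd in scores:
                scores[cmd] += weight
        if term in scores:                       # term exactly names a command
            scores[term] += weight
            exact_matches.add(term)
    for cmd in exact_matches:
        scores[cmd] *= 2.0                       # exact match doubles the weight
    # best-first order for the displayed prompt list
    return sorted((c for c in commands if scores[c] > 0), key=lambda c: -scores[c])
```

Re-determining the probability after the user adds terms is then just a matter of calling the function again with the extended list of (term, firm) pairs.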
In a further aspect, the invention provides a speech recognition system which does not switch modes of operation when interpreting speech queries, such as help queries, or receiving actual spoken commands. The system handles both concurrently and seamlessly in the same operation mode. The present invention is directed to an interactive computer controlled display system with speech recognition comprising means for predetermining a plurality of speech commands each associated with a corresponding plurality of system actions, in combination with means for concurrently detecting speech commands and speech queries for locating commands. There is also provided means responsive to a detected speech command for carrying out the system action corresponding to the command, and means responsive to a detected speech query for attempting to locate commands applicable to said query. The system also includes means for displaying the detected speech query together with means for displaying located commands applicable to said query. The system may further include means responsive to a detected speech query for modifying a displayed prior speech query, whereby a user may speak a displayed located command to activate said means for carrying out a system action, or speak a query to modify said prior query to locate commands other than said displayed commands, without switching between command and query modes of speech detection.
Brief Description of the Drawings

The present invention will be better understood and its numerous advantages will become more apparent to those skilled in the art by reference to the following drawings, in conjunction with the accompanying description of example embodiments, in which:

Fig. 1 is a block diagram of a generalized data processing system including a central processing unit which provides the computer controlled interactive display system with voice input used in practicing the present invention;

Fig. 2 is a block diagram of a portion of the system of Fig. 1 showing a generalized expanded view of the system components involved in the implementation;

Fig. 3 is a diagrammatic view of a display screen on which an interactive dialog panel interface is used for visual feedback when a speech command and/or speech term input has been made;

Fig. 4 is the display screen view of Fig. 3 after a speech term input has been made;

Fig. 5 is the display screen view of Fig. 4 after the user has finished inputting the speech term in Fig. 4 (the user may then say one of the listed commands);

Fig. 6 is a flowchart of the basic elements of the system and program in a computer controlled display system for creating and using the speech command recognition with visual feedback system of the present invention; and

Fig. 7 is a flowchart of the steps involved in running the program set up in Fig. 6.
Detailed Description of the Preferred Embodiment

Referring to Fig. 1, a typical data processing system is shown which may function as the computer controlled display terminal used in implementing the system of the present invention by receiving and interpreting speech input and providing displayed feedback, including some recognized actual commands, as well as a set of proposed relevant commands derived by comparing speech terms (other than commands) to a relevance table. A central processing unit (CPU) 10, such as any PC microprocessor in a PC available from IBM or Dell Corp., is provided and interconnected to various other components by system bus 12. An operating system 41 runs on CPU 10, provides control and is used to coordinate the functions of the various components of Fig. 1. Operating system 41 may be one of the commercially available operating systems such as the OS/2 (TM) operating system available from IBM (OS/2 is a trademark of IBM); Microsoft's Windows 95 (TM) or Windows NT (TM); or the UNIX or AIX operating systems. A speech recognition program with visual feedback of proposed relevant commands, application 40, to be subsequently described in detail, runs in conjunction with operating system 41 and provides output calls to the operating system 41, which implements the various functions to be performed by the application 40.
A read only memory (ROM) 16 is connected to CPU 10 via bus 12 and includes the basic input/output system (BIOS) that controls the basic computer functions. Random access memory (RAM) 14, I/O adapter 18 and communications adapter 34 are also interconnected to system bus 12. It should be noted that software components, including operating system 41 and application 40, are loaded into RAM 14, which is the computer system's main memory. I/O adapter 18 may be a small computer system interface (SCSI) adapter that communicates with the disk storage device 20, i. e. a hard drive. Communications adapter 34 interconnects bus 12 with an outside network enabling the data processing system to communicate with other such systems over a local area network (LAN) or wide area network (WAN), which includes, of course, the Internet. I/O devices are also connected to system bus 12 via user interface adapter 22 and display adapter 36. Keyboard 24 and mouse 26 are all interconnected to bus 12 through user interface adapter 22. Audio output is provided by speaker 28 and the speech input which is made through input device 27, which is diagrammatically depicted as a microphone which accesses the system through an appropriate interface adapter 22. The speech input and recognition will be subsequently described in greater detail, particularly with respect to Fig. 2. Display adapter 36 includes a frame buffer 39, which is a storage device that holds a representation of each pixel on the display screen 38. Images, such as speech input commands, relevant proposed commands, as well as speech input display feedback panels, may be stored in frame buffer 39 for display on monitor 38 through various components such as a digital to analog converter (not shown) and the like. 
By using the aforementioned I/O devices, a user is capable of inputting visual information to the system through the keyboard 24 or mouse 26 in addition to speech input through microphone 27 and receiving output information from the system via display 38 or speaker 28.
Now with respect to Fig. 2, we will describe the general system components involved in implementing the invention. Voice or speech input 50 is applied through microphone 51, which represents a speech input device. Since the art of speech terminology and speech command recognition is an old and well-developed one, we will not go into the hardware and system details of a typical system which may be used to implement the present invention. It should be clear to those skilled in the art that the systems and hardware in any of the following patents may be used: US5,671,328; US5,133,11'; US5,222,146; US5,664,061; US5,553,121; and US5,157,334. The speech input to the system could be actual spoken commands, which the system will recognize, and/or speech terminology which the user addresses to the computer so that the computer may propose appropriate relevant commands through feedback. The input speech goes through a recognition process which seeks a comparison to a stored set of commands 52. If an actual spoken command is clearly identified, spoken command 55, that command may be carried out and then displayed via display adapter 36 to display 38, or the spoken command may be displayed first and subsequently carried out. In this regard, the system is capable of several options, as will be subsequently described in greater detail. Suffice it to state that the present invention provides the capability of thus displaying actual commands. Where the speech input contains terminology other than actual commands, the system provides for a relevance table 53, which is usually a comprehensive set of terms which may be used in any connection to each of the actual stored commands 52. If any of the input speech terms compare 54 with one of the actual commands, that actual command is characterized as a relevant command 56, which is then also presented to the user on display 38 via display adapter 36.
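The data flow of Fig. 2 — compare the input against the stored commands 52, and fall back to the relevance table 53 for everything else — can be sketched as follows; the names and the return shape are illustrative assumptions:

```python
def interpret_utterance(words, commands, relevance_table):
    """Classify one recognized utterance.

    Returns ("command", phrase) when the utterance exactly matches a
    stored command, and otherwise ("relevant", [...]) listing, in
    first-seen order, every stored command that some word of the
    utterance maps to through the relevance table.
    """
    phrase = " ".join(words).lower()
    if phrase in commands:                       # clearly identified spoken command
        return ("command", phrase)
    relevant = []
    for word in words:
        for cmd in relevance_table.get(word.lower(), ()):
            if cmd not in relevant:              # no duplicate proposals
                relevant.append(cmd)
    return ("relevant", relevant)
```

For example, with "document properties" as a stored command and a relevance table mapping "settings" to it, the spoken entry "Display the settings" would yield no actual command but would propose "document properties" — mirroring the Fig. 4 scenario described later.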
Although the relevance will be subsequently described in detail, it would be appropriate to indicate here how such a table is created. Initially, an active vocabulary is determined. This includes collecting from a computer operation, including the operating system and all significant application programs, all words and terms from menus, buttons and other user interface controls including the invisible but active words from currently active application windows, all names of macros supplied by the speech system, the application and the user, names of other applications that the user may switch to, generic commands that are generic to any application and any other words and terms which may be currently active.
This basic active vocabulary is constructed into a relevance table wherein each word or term is related to one or more of the actual commands and, conversely, each of the actual commands has associated with it a set of words and terms which are relevant to the command. It should be noted that this relevance table is dynamic in that it may be added to as appropriate to each particular computer operation. Let us assume that for a particular computer system there is a basic or generic relevance table of generic terminology; the active vocabulary for the particular system set is added to the basic relevance table and an expanded relevant vocabulary is dynamically created using at least some of the following expedients:

- each word or phrase in the active vocabulary is added to the expanded vocabulary with an indication that it is an original active vocabulary word or phrase;
- each word or phrase in the active vocabulary is looked up as an index into the relevance table; if found, the corresponding contents of the cell in the table are used to further expand the vocabulary with any additional words or phrases that the cell may contain, and these additional terms carry an associated reference to the active entry which caused their inclusion;
- each phrase is then broken into its constituent words, word pairs and n-word subphrases where applicable, and the above process repeated;
- users may be encouraged to come up with their own lists of words and phrases which may be indexed with respect to the relevance table; and
- a synonym dictionary may be an additional source for words and phrases.
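Under assumed data shapes, the expedients listed above might be sketched like this; the tags, structure, and synonym handling are illustrative, not the disclosed implementation:

```python
def expand_vocabulary(active_vocab, relevance_table, synonym_dict=None):
    """Dynamically build an expanded relevant vocabulary.

    Every active word or phrase is kept and marked as original; each is
    also looked up as an index into the relevance table, whose cell
    contents join the vocabulary with a back-reference to the active
    entry that caused their inclusion; each phrase is then broken into
    its constituent words and the lookup repeated; a synonym dictionary
    supplies further words for entries already present.
    """
    expanded = {}    # term -> ("active", None) or ("related", source entry)
    def add(term, tag, source=None):
        expanded.setdefault(term, (tag, source))
    for entry in active_vocab:
        add(entry, "active")                       # original active entry
        for extra in relevance_table.get(entry, ()):
            add(extra, "related", entry)           # cell contents, with back-reference
        for word in entry.split():                 # constituent words of the phrase
            add(word, "active")
            for extra in relevance_table.get(word, ()):
                add(extra, "related", entry)
    for term, alternatives in (synonym_dict or {}).items():
        if term in expanded:                       # synonyms of known terms only
            for alt in alternatives:
                add(alt, "related", term)
    return expanded
```

Word pairs and longer n-word subphrases would be handled the same way as the single-word lookup shown, and are omitted here for brevity.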
In the above description of the display of commands, both spoken and relevant, with respect to Fig. 2, we did not go into the display of the spoken input, which could include commands and speech terms to be compared to the relevance table for relevant commands. It will be understood that the spoken input will also be displayed separately. This will be seen with respect to Figs. 3 through 5, which provide an illustrative example of how the present invention may be used to give the visual feedback of displayed spoken commands, as well as relevant commands in accordance with the present invention. When the screen image panels are described, it will be understood that these may be rendered by storing image and text creation programs, such as those in any conventional window operating system, in the RAM 14 of the system of Fig. 1. The display screens of Figs. 3 through 5 are presented to the viewer on display monitor 38 of Fig. 1. In accordance with conventional techniques, the user may control the screen interactively through a conventional I/O device, such as mouse 26, Fig. 1, and speech input is applied through microphone 27. These operate through user interface 22 to call upon programs in RAM 14, cooperating with the operating system 41, to create the images in frame buffer 39 of display adapter 36 to control the display panels on monitor 38. The initial display screen of Fig. 3 shows a display screen with visual feedback display panel 70. In the panel, window 71 will show the words that the user speaks, while window 72 will display all of the relevant commands, i.e. commands which were not actually spoken but with which some of the spoken words or phrases in window 71 were associated through the relevance table, as shown in Fig. 2. Also, any spoken commands which were part of the spoken input in window 71 will be listed along with the relevant commands in window 72. The panel also has command buttons: by pressing button 73 or saying the command, "Clear List", the user will clear both window 71 and window 72 in Fig. 3 of all proposed relevant commands and input text. Pressing button 74 or saying the command, "Never mind", causes the whole application to go away. Fig. 4 shows the screen panel 70 of Fig. 3 after the spoken entry, "Display the settings". The system could find no actual command in this terminology but was able to find the four relevant commands shown in window 72. Cursor icon 76 is adjacent the spoken term in window 71 as an indication that this field is the speech focus. In Fig. 5 we have the display of Fig. 4 after the speech focus, as indicated by cursor icon 76, has been moved to window 72 and the user has chosen one of the relevant commands, "Document Properties" 75, by speaking the command; as a result, the command is highlighted.
Upon the relevant command being spoken, the system will carry it out.
Now with reference to Figs. 6 and 7 we will describe a process implemented by the present invention in conjunction with the flowcharts of these figures. Fig. 6 is a flowchart showing the development of a process according to the present invention for providing visual feedback to spoken commands and other terminology, including a listing of system proposed relevant spoken commands which the user may choose from. First, step 80, a set of recognizable spoken system and application commands which will drive the system being used is set up and stored. Then, there are set up appropriate processes to carry out the action called for by each recognized speech command, step 81. A process for displaying recognized speech commands is also set up. In doing so, the program developer has the option among others of displaying all recognized commands or only recognized commands which are not clearly recognized so that the user will have the opportunity of confirming the command. Then, step 83, there is set up a relevance table or table of relevant commands as previously described. This table hopefully includes substantially all descriptive phrases and terminology associated with the computer system and the actual commands to which each term is relevant. A process for looking up all spoken inputs, other than recognized commands, on this relevance table to then determine relevant commands is set up, step 84.
This involves combining the system and application commands with the relevance table to generate the vocabulary of speech terms which will be used by the speech recognition system to provide the list of relevant commands. This has been previously described with respect to Fig. 2.
Finally, there is set up a process for displaying relevant commands so that the user may choose a relevant command by speaking to set off the command action, step 85. This has been previously described with respect to Fig. 5. This completes the set up.
The running of the process will now be described with respect to Fig. 7. First, step 90, a determination is made as to whether there has been a speech input. If No, then the flow is returned to step 90, where a spoken input is awaited. If the decision from step 90 is Yes, then a further determination is made in decision step 91 as to whether a command has been definitely recognized. At this point, we should again distinguish, as we have above, between spoken commands which the user apparently does not intend to be carried out as commands, i.e., they are just part of the input terminology or spoken query seeking relevant commands, and commands which in view of their presentation context are intended as definite commands. If a term in the context of a spoken query happens to match one of the commands, it is just listed with the relevant commands displayed as subsequently described with respect to step 97. On the other hand, if a definite command is recognized, then the decision at step 91 would be Yes, the command is carried out in the conventional manner, step 92, and then a determination is made as to whether the session is at an end, step 93. If Yes, the session is exited. If No, the flow is returned to step 90, where a further spoken input is awaited. If the decision from step 91 was No, i.e. a definite command was not recognized, then a comparison is made on the relevance table as previously described, step 95, and all relevant commands are displayed, step 97, to give the user the opportunity to select one of the relevant commands. At decision step 98, a determination is made as to whether the user has spoken one of the relevant commands. If Yes, then the process is returned to step 92 via branch "A" and the command is carried out. If the decision from step 98 is No, then a further decision is made, step 99, as to whether the user has spoken any further terms.
If Yes, the process is returned to step 95, where a comparison is made to the relevance table and the above process is repeated. If the decision from step 99 is No, then the process is returned to step 93 via branch "B", where a decision is made as to whether the session is over, as previously described.
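The Fig. 7 loop can be sketched, under assumed names and data shapes, as:

```python
def run_session(next_utterance, commands, relevance_table, execute, display):
    """Drive the Fig. 7 flow until next_utterance returns None.

    A definitely recognized command (step 91) is carried out at once
    (step 92); anything else is compared against the relevance table
    (step 95) and the accumulated relevant commands are displayed
    (step 97).  Speaking a displayed command returns via branch "A"
    to step 92, since every displayed command is itself a command.
    """
    shown = []                                     # currently displayed relevant commands
    for utterance in iter(next_utterance, None):   # step 90: await spoken input
        if utterance in commands:                  # step 91 (or step 98 via branch "A")
            execute(utterance)                     # step 92: carry out the action
            shown = []
        else:                                      # step 99 -> step 95: further terms
            for word in utterance.split():
                for cmd in relevance_table.get(word, ()):
                    if cmd not in shown:
                        shown.append(cmd)
            display(list(shown))                   # step 97: show relevant commands
```

Note that this sketch collapses the query/command distinction of step 91 into a simple exact-match test; the patent's "definite command" recognition depends on presentation context, which a few lines of illustrative code cannot capture.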
In this specification, the terms "relevant commands" and "actual commands" have been used in various descriptions. Both refer to real commands, i.e. commands which the particular system may execute. The distinction is based on whether the command is actually spoken. Thus, an actual command is one which the user actually speaks, whether as part of the spoken entry or query uttered for the purpose of locating relevant commands, or as a command which the user intends to be executed in the conventional manner. On the other hand, a relevant command is a command which was not spoken by the user but was associated with a word or term in the user's spoken entry through the relevance table.
One of the preferred implementations of the present invention is as an application program 40 made up of programming steps or instructions resident in RAM 14, Fig. 1, during computer operations. Until required by the computer system, the program instructions may be stored in another readable medium, e.g. in disk drive 20, or in a removable memory such as an optical disk for use in a CD-ROM computer input, or in a floppy disk for use in a floppy disk drive computer input. Further, the program instructions may be stored in the memory of another computer prior to use in the system of the present invention and transmitted over a LAN or a WAN, such as the Internet, when required by the user of the present invention. One skilled in the art should appreciate that the processes controlling the present invention are capable of being distributed in the form of computer readable media of a variety of forms.

Claims (16)

  1. An interactive computer controlled display system with speech command input recognition comprising: means for predetermining a plurality of speech commands for respectively initiating each of a corresponding plurality of system actions, means for providing for each of said plurality of commands, an associated set of speech terms, each term having relevance to its associated command, means for detecting speech commands and speech terms, means responsive to a detected speech command for displaying said command, and means responsive to a detected speech term having relevance to one of said commands for displaying the relevant command.
  2. The system of claim 1 further including interactive means for selecting a displayed command to thereby initiate a system action.
  3. The system of claim 2 wherein said means for selecting said displayed command includes speech command input means.
  4. The system of any one of claims 1 to 3 wherein said speech commands and relevant commands are displayed simultaneously.
  5. The system of any one of the preceding claims wherein said means for providing said associated set of speech terms include: a stored relevance table of universal speech input commands and universal computer operation terms conventionally associated with actions initiated by said input commands, and means for relating the particular interactive interface terms of said system with terms in said relevance table.
  6. A method for providing speech command input to an interactive computer controlled display system with speech command input recognition comprising: predetermining a plurality of speech commands for respectively initiating each of a corresponding plurality of system actions, providing for each of said plurality of commands, an associated set of speech terms, each term having relevance to its associated command, detecting speech commands and speech terms, displaying a speech command responsive to its detection as a speech command, and, responsive to a detected speech term having relevance to one of said commands, displaying the relevant command.
  7. The method of claim 6 further including the step of selecting a displayed command to thereby initiate a system action in response to a user interaction.
  8. The method of claim 7 wherein said step of selecting of said displayed command is performed in response to signals from a speech command input means.
  9. The method of any one of claims 6 to 8 wherein said speech commands and relevant commands are displayed simultaneously.
  10. The method of any one of claims 6 to 9 wherein said step of providing said associated set of speech terms includes: storing a relevance table of universal speech input commands and universal computer operation terms conventionally associated with actions initiated by said input commands, and relating the particular interactive interface terms of said system with terms in said relevance table.
  11. A computer program for speech command input recognition in an interactive computer controlled display system, comprising: program code for predetermining a plurality of speech commands for respectively initiating each of a corresponding plurality of system actions, program code for providing for each of said plurality of commands, an associated set of speech terms, each term having relevance to its associated command, program code for detecting speech commands and speech terms, program code responsive to a detected speech command for displaying said command, and program code responsive to a detected speech term having relevance to one of said commands for displaying the relevant command.
  12. The computer program of claim 11 further including program code, responsive to user interaction, for selecting a displayed command to thereby initiate a system action.
  13. The computer program of claim 11 or claim 12 wherein said speech commands and relevant commands are displayed simultaneously.
  14. The computer program of any one of claims 11 to 13 wherein said program code for providing said associated set of speech terms includes: a stored relevance table of universal speech input commands and universal computer operation terms conventionally associated with actions initiated by said input commands, and means for relating the particular interactive interface terms of said system with terms in said relevance table.
  15. A computer program according to any one of claims 11 to 14, substantially as described herein with reference to any one of figures 1 to 7.
  16. A system according to claim 1, wherein said means for displaying commands displays a command for a predetermined time period, the system further including: means for deselecting a system action designated by a displayed command; and means for executing a system action designated by a displayed command in the event that the system action is not deselected during said predetermined time period.
GB9929392A 1998-12-16 1999-12-14 Speech command input recognition system Expired - Fee Related GB2344917B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US09/213,858 US7206747B1 (en) 1998-12-16 1998-12-16 Speech command input recognition system for interactive computer display with means for concurrent and modeless distinguishing between speech commands and speech queries for locating commands
US09/213,846 US6937984B1 (en) 1998-12-17 1998-12-17 Speech command input recognition system for interactive computer display with speech controlled display of recognized commands
US09/213,856 US8275617B1 (en) 1998-12-17 1998-12-17 Speech command input recognition system for interactive computer display with interpretation of ancillary relevant speech query terms into commands
US09/213,845 US6192343B1 (en) 1998-12-17 1998-12-17 Speech command input recognition system for interactive computer display with term weighting means used in interpreting potential commands from relevant speech terms

Publications (3)

Publication Number Publication Date
GB9929392D0 GB9929392D0 (en) 2000-02-09
GB2344917A true GB2344917A (en) 2000-06-21
GB2344917B GB2344917B (en) 2003-04-02

Family

ID=27498952

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9929392A Expired - Fee Related GB2344917B (en) 1998-12-16 1999-12-14 Speech command input recognition system

Country Status (1)

Country Link
GB (1) GB2344917B (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0841655A2 (en) * 1996-10-31 1998-05-13 Microsoft Corporation Method and system for buffering recognized words during speech recognition

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2358499A (en) * 1999-09-03 2001-07-25 Ibm Natural language understanding system for command execution
GB2358499B (en) * 1999-09-03 2004-03-17 Ibm Natural language understanding system and method for command execution
US7016827B1 (en) 1999-09-03 2006-03-21 International Business Machines Corporation Method and system for ensuring robustness in natural language understanding
GB2366009A (en) * 2000-03-22 2002-02-27 Canon Kk Natural language machine interface
GB2366009B (en) * 2000-03-22 2004-07-21 Canon Kk Natural language machine interface
US7062428B2 (en) 2000-03-22 2006-06-13 Canon Kabushiki Kaisha Natural language machine interface
GB2362746A (en) * 2000-05-23 2001-11-28 Vocalis Ltd Data recognition and retrieval
GB2379786A (en) * 2001-09-18 2003-03-19 20 20 Speech Ltd Speech processing apparatus
EP1847987A1 (en) * 2006-04-17 2007-10-24 Funai Electric Co., Ltd. Electronic instrument
US7853448B2 (en) 2006-04-17 2010-12-14 Funai Electric Co., Ltd. Electronic instrument for speech recognition with standby time shortening and acoustic model deletion


Similar Documents

Publication Publication Date Title
US8275617B1 (en) Speech command input recognition system for interactive computer display with interpretation of ancillary relevant speech query terms into commands
US6192343B1 (en) Speech command input recognition system for interactive computer display with term weighting means used in interpreting potential commands from relevant speech terms
US7206747B1 (en) Speech command input recognition system for interactive computer display with means for concurrent and modeless distinguishing between speech commands and speech queries for locating commands
US6937984B1 (en) Speech command input recognition system for interactive computer display with speech controlled display of recognized commands
US6820056B1 (en) Recognizing non-verbal sound commands in an interactive computer controlled speech word recognition display system
US5893063A (en) Data processing system and method for dynamically accessing an application using a voice command
US5890122A (en) Voice-controlled computer simulateously displaying application menu and list of available commands
US7451088B1 (en) System and method of handling problematic input during context-sensitive help for multi-modal dialog systems
US7548859B2 (en) Method and system for assisting users in interacting with multi-modal dialog systems
US9466293B1 (en) Speech interface system and method for control and interaction with applications on a computing system
Cohen et al. The role of voice input for human-machine communication.
TWI394065B (en) Multiple predictions in a reduced keyboard disambiguating system
US6085159A (en) Displaying voice commands with multiple variables
US6499015B2 (en) Voice interaction method for a computer graphical user interface
JP2001022494A (en) Display system by data processor control, having sound identifier for window overlapped in interactive graphical user interface
WO2015147702A1 (en) Voice interface method and system
JP2000200094A (en) Method and device for displaying feedback on display
US5897618A (en) Data processing system and method for switching between programs having a same title using a voice command
JP3476007B2 (en) Recognition word registration method, speech recognition method, speech recognition device, storage medium storing software product for registration of recognition word, storage medium storing software product for speech recognition
JPH08166866A (en) Editing support system equipped with interactive interface
GB2344917A (en) Speech command input recognition system
JPH08115194A (en) Help display method for information processing system
US20230161553A1 (en) Facilitating discovery of verbal commands using multimodal interfaces
Rosenfeld et al. Universal Human-Machine Speech Interface
WO2003079188A1 (en) Method for operating software object using natural language and program for the same

Legal Events

Date Code Title Description
746 Register noted 'licences of right' (sect. 46/1977)

Effective date: 20071113

732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20100617 AND 20100623

S47 Cancellation on entry on the register of a licence of right (sect 47/patents act 1977)

Free format text: CANCELLATION OF ENTRY; APPLICATION BY FILING PATENTS FORM 15 WITHIN 4 WEEKS FROM THE DATE OF PUBLICATION OF THIS JOURNAL. NUANCE COMMUNICATIONS, INC. SPEECH COMMAND INPUT RECOGNITION SYSTEM

S47 Cancellation on entry on the register of a licence of right (sect 47/patents act 1977)

Free format text: ENTRY CANCELLED; NOTICE IS HEREBY GIVEN THAT THE ENTRY ON THE REGISTER 'LICENCES OF RIGHT' UPON THE UNDER MENTIONED PATENT WAS CANCELLED ON 9 FEBRUARY 2011. SPEECH COMMAND INPUT RECOGNITION SYSTEM

PCNP Patent ceased through non-payment of renewal fee

Effective date: 20181214