US20040216049A1 - Method for enhancing dictation and command discrimination - Google Patents
Method for enhancing dictation and command discrimination
- Publication number
- US20040216049A1 (application Ser. No. 10/849,663)
- Authority
- US
- United States
- Prior art keywords
- text
- region
- speech
- focus point
- searching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/24—Speech recognition using non-acoustical features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/226—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
- G10L2015/227—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of the speaker; Human-factor methodology
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/226—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
- G10L2015/228—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context
Abstract
A method for discriminating between an instance of a voice command and an instance of speech dictation can include identifying a focus point in a user interface; defining a surrounding region about the focus point; identifying user interface objects in the surrounding region; further identifying among the identified user interface objects those user interface objects which are configured to accept speech dictated text and those user interface objects which are not configured to accept speech dictated text; computing a probability based upon those user interface objects which have been further identified as being configured to accept speech dictated text and those user interface objects which have been further identified as not being configured to accept speech dictated text; receiving speech input; and, biasing a determination of whether the speech input is a voice command or speech dictation based upon the computed probability.
Description
- This application is a continuation of, and accordingly claims the benefit of, U.S. patent application Ser. No. 09/665,939 filed in the U.S. Patent and Trademark Office on Sep. 20, 2000.
- 1. Technical Field
- This invention relates to the field of speech recognition, and more particularly, to a method for enhancing discrimination between and among user dictation, user voice commands, and text.
- 2. Description of the Related Art
- Speech recognition is the process by which an acoustic signal received by a microphone is converted to text by a computer. The recognized text may then be used in a variety of computer software applications for purposes such as document preparation, data entry, and command and control. Speech dictation systems further offer users a hands-free method of operating computer systems.
- In regard to electronic document preparation, presently available speech dictation systems provide user voice commands enabling a user to select a portion of text in an electronic document. Such user voice commands typically employ a syntax such as “SELECT <text>”, where the user voice command “SELECT” signals that the text following the command should be selected or highlighted. After a portion of text has been selected, the user can perform any of a series of subsequent operations upon the selected text.
- Thus, if a user says, “SELECT how are you”, the speech dictation system will search for the text phrase “how are you” within a body of text in the electronic document. Once located in the body of text, the phrase can be selected or highlighted. Subsequently, the user can perform an operation on the selected text such as a delete operation, a bold/italic/underline operation, or a correction operation. In further illustration, once the text “how are you” is highlighted, that user selected portion of text can be replaced with different text derived from a subsequent user utterance. In this manner, users can perform hands-free correction of an electronic document.
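- By way of illustration, the “SELECT <text>” syntax can be modeled in a few lines. The following Python sketch is illustrative only; the helper and its name are hypothetical and not part of any cited system:

```python
from typing import Optional, Tuple

def parse_select(utterance: str) -> Optional[Tuple[str, str]]:
    """Split a recognized utterance per the "SELECT <text>" syntax.

    Returns (command, target phrase), or None when the utterance does
    not begin with the SELECT keyword and is treated as plain dictation.
    """
    keyword, _, rest = utterance.partition(" ")
    if keyword.upper() == "SELECT" and rest:
        return ("SELECT", rest)
    return None

# "SELECT how are you" asks the system to find and highlight the
# phrase "how are you" in the body of text.
assert parse_select("SELECT how are you") == ("SELECT", "how are you")
assert parse_select("how are you") is None
```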
- Presently, known implementations of the “SELECT” command, or other similar user voice commands for selecting text, suffer from several disadvantages. One such disadvantage is that there may be multiple occurrences of the phrase or word that the user would like to select within a body of text. For example, within a body of text, there are likely to be many occurrences of the word “the”. Thus, if the user says “SELECT the”, the speech dictation system may not be able to determine which occurrence of the word “the” the user would like to select.
- In addressing this problem, conventional speech dictation systems rely upon a system of rules for determining which occurrence of the user desired word or phrase the user would like to select. For example, a speech dictation system can begin at the top of the active window and select the first occurrence of the word or phrase. However, if the user did not want to select the first occurrence of the word or phrase, a conventional speech dictation system can provide the user with the ability to select another occurrence of the word. In particular, some conventional speech dictation systems provide navigational voice commands such as “NEXT” or “PREVIOUS”.
- By uttering the voice command “NEXT” the user instructs the speech dictation system to locate and select the next occurrence of the desired word or phrase. Similarly, the command “PREVIOUS” instructs the speech dictation system to locate and select the previous occurrence of the desired word or phrase. Although such conventional systems allow the user to navigate to the desired occurrence of a particular word or phrase, users must develop strategies for navigating to the desired occurrence. This can result in wasted time and user frustration, especially in cases where the user perceives the speech dictation system to be inaccurate or inefficient.
- Another disadvantage of conventional text selection methods within conventional speech dictation systems is that when searching for the user specified word or phrase, such speech dictation systems typically search the entire portion of a body of text appearing on the user's screen. Each word appearing on the user's screen is activated within the speech dictation system grammar and appears to the speech dictation system as an equally likely candidate. Because the user desires only a single word or phrase, enabling and searching the entire portion of the body of text appearing on the user's screen can be inefficient. Moreover, the technique can increase the likelihood that a misrecognition will occur.
- Yet another disadvantage of conventional text selection methods within conventional speech dictation systems is that often it is not readily apparent to the speech dictation system whether a user has uttered a word during speech dictation or a voice command, for example a voice command that activates a drop-down menu. For instance, if a user utters the word “File”, depending upon the circumstance, the user could either intend to activate the File menu in the menu bar or insert the word “file” in the electronic document. Accordingly, it is not always apparent to the conventional speech dictation system whether a user utterance is a voice command or speech dictation.
- Consequently, although presently available speech dictation systems offer methods of interacting with a computer to audibly command an application, to provide speech dictation in an electronic document and to select text within the electronic document, there remains a need for an improved method of discriminating between user voice commands, user dictations, text, and combinations thereof.
- The invention disclosed herein provides a method and apparatus for discriminating between different occurrences of text in an electronic document, and between an instance of a voice command and an instance of speech dictation, through the utilization of an eye-tracking system in conjunction with a speech dictation system. The method and apparatus of the invention advantageously can include an eye-tracking system (ETS) for cooperative use with a speech dictation system in order to determine the focus point of a user's gaze during a speech dictation session. In particular, the cooperative use of the ETS with the speech dictation system can improve the accuracy of the “SELECT” user voice command functionality, or of any other user voice command for selecting a portion of text within a body of text in a speech dictation system. The use of the ETS in the invention also can improve system performance by facilitating discrimination between user dictation and a voice command.
- In accordance with the inventive arrangements, a method for searching for matching text in an electronic document can include identifying a focus point in a user interface and defining a surrounding region about the focus point. Notably, the surrounding region can include a body of text within a user interface object configured to receive speech dictated text. Additionally, the method can include receiving a voice command for selecting specified text within the electronic document and searching the body of text included in the surrounding region for a match to the specified text. Significantly, the search can be limited to the body of text in the surrounding region.
- A method for searching for matching text in an electronic document can further include expanding the surrounding region to include an additional area of the user interface if a match to the specified text is not found in the body of text in the searching step. Notably, the additional area included by the expansion can include additional text. Accordingly, the additional text can be searched for a match to the specified text. Finally, as before, the search can be limited to the body of text and the additional text.
- In a representative embodiment of the present invention, the expanding step can include expanding the surrounding region outwardly from the focus point by a fixed increment. Alternatively, the expanding step can include expanding the surrounding region by a fixed quantity of text adjacent to the body of text. Finally, the expanding step can include expanding the surrounding region outwardly from the focus point by a variable increment.
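- As shown in the sketch below, these searching and expanding steps can be read as a simple retry loop. The fragment is a sketch under simplifying assumptions (hypothetical names; words reduced to screen coordinates, the region to a radius about the focus point, and the growth rule to a fixed pixel increment), not a definitive implementation:

```python
from typing import List, Optional, Tuple

Word = Tuple[str, int, int]  # (text, x, y): each on-screen word's center

def select_nearest_match(words: List[Word], target: str,
                         focus: Tuple[int, int], radius: int,
                         screen_radius: int, increment: int = 40
                         ) -> Optional[Word]:
    """Search the surrounding region for `target`; on a miss, expand the
    region outward from the focus point by a fixed increment and retry,
    until the whole screen has been covered. Among multiple matches,
    the occurrence nearest the gaze is chosen."""
    fx, fy = focus

    def dist2(w: Word) -> int:
        return (w[1] - fx) ** 2 + (w[2] - fy) ** 2

    while True:
        matches = [w for w in words
                   if w[0].lower() == target.lower()
                   and dist2(w) <= radius ** 2]
        if matches:
            return min(matches, key=dist2)  # nearest occurrence wins
        if radius >= screen_radius:
            return None                     # no on-screen match at all
        radius += increment                 # fixed-increment expansion

# Two occurrences of "mouse"; the one nearer the gaze point is selected.
words = [("mouse", 120, 100), ("the", 200, 100), ("mouse", 900, 700)]
assert select_nearest_match(words, "mouse", (110, 105), 50, 1200) == ("mouse", 120, 100)
```

A fixed quantity of adjacent text or a variable increment could replace the radius update without changing the shape of the loop.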
- A method for discriminating between an instance of a voice command and an instance of speech dictation can include identifying a focus point in a user interface, defining a surrounding region about the focus point; identifying user interface objects in the surrounding region; further identifying among the identified user interface objects those user interface objects which are configured to accept speech dictated text and those user interface objects which are not configured to accept speech dictated text; computing a probability based upon those user interface objects which have been further identified as being configured to accept speech dictated text and those user interface objects which have been further identified as not being configured to accept speech dictated text; receiving speech input; and, biasing a determination of whether the speech input is a voice command or speech dictation based upon the computed probability. Additionally, the method can include identifying a focus point outside of the user interface; and, biasing a determination of whether the speech input is a voice command or speech dictation based upon a default probability.
- There are shown in the drawings embodiments which are presently preferred, it being understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown, wherein:
- FIG. 1 is an exemplary depiction of a user interacting with the present invention disclosed herein.
- FIG. 2 is a block diagram which illustrates a computer system suitable for use in the present invention.
- FIG. 3 is a block diagram showing a typical high level architecture for the computer system of FIG. 1.
- FIG. 4 is a block diagram showing typical components which comprise a speech recognition engine.
- FIGS. 5A and 5B, taken together, constitute a flow chart illustrating a method for discriminating between different occurrences of text in an electronic document and between an instance of a voice command and an instance of speech dictation through the utilization of an eye-tracking system in conjunction with a speech dictation system.
- Utilization of an eye-tracking system (ETS) in conjunction with a speech dictation system can improve the performance of a speech dictation system. Specifically, in accordance with the inventive arrangements, an ETS can assist a speech dictation system in discriminating among multiple occurrences of text within a body of text. Additionally, an ETS can aid the speech dictation system in analyzing speech input to discriminate between voice commands and speech dictation. Such enhancements can be realized by detecting in an ETS the screen location of the focus point of a user's gaze. Advantageously, the screen location, whether on or off screen, can be communicated to the speech dictation system. Based upon the location of the focus point of the user's gaze, a region can be defined about the focus point (referred to as the “surrounding region”) which can assist in determining whether speech input is a voice command or speech dictation. Additionally, the surrounding region can be used to identify a specific occurrence of text specified for selection by the user.
- FIG. 1 is an exemplary depiction of a user interacting with the invention disclosed herein. In FIG. 1, the user gazes at a location on a video display terminal (VDT) 32. The focus point of the user's gaze is denoted with an asterisk located on the screen of the VDT 32. Also depicted is an ETS having a head-mounted hardware interface 29. ETSs are well known in the art of eye-tracking and measurement. ETSs such as THE EYEGAZE DEVELOPMENT SYSTEM manufactured by LC Technologies, Inc. of Fairfax, Va., as well as EYEMOUSE and EYELINK, both manufactured by SensoMotoric Instruments, Inc. of Boston, Mass., are presently commercially available.
- Configurations for an ETS can include an eye-tracking hardware interface 29 and an image processing system 34. Eye-tracking hardware interface 29 can be a table-top mounted unit as is available from LC Technologies Inc. An exemplary table-top mounted eye-tracking unit is shown in FIG. 2. Alternatively, eye-tracking hardware interface 29 can be a head-mounted unit as is available from SensoMotoric Instruments, Inc. and depicted in FIG. 1. In either case, eye-tracking hardware interface 29 can communicate information regarding a user's eye to the image processing system 34.
- The image processing system can be a stand-alone image processing system, or alternatively can exist within a conventional computer. In the case where the image processing system exists within a conventional computer, the conventional computer can utilize a combination of image processing circuitry and image processing software in order to perform the function of an image processing system. It should be appreciated by those skilled in the art that the invention is not limited by the selected ETS. Rather, any suitable ETS capable of communicating the location of the focal point of a user's gaze to a computer can be employed.
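- The patent does not prescribe a data interface between the ETS and the applications it serves. As a minimal sketch, assuming the image processing software pushes focal-point reports to a registered consumer (all names hypothetical), the link could be modeled as:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class GazeSample:
    """One focal-point report from the ETS, in screen pixels."""
    x: int
    y: int

class GazeLink:
    """Minimal model of the link between the image processing software
    and the speech dictation system. The ETS driver would call update()
    as reports arrive; the dictation system calls current_focus() when
    speech input is received. None represents an off-screen gaze."""

    def __init__(self) -> None:
        self._latest: Optional[GazeSample] = None

    def update(self, sample: Optional[GazeSample]) -> None:
        self._latest = sample

    def current_focus(self) -> Optional[GazeSample]:
        return self._latest
```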
- FIG. 2 illustrates the circumstance where the image processing system 34 is a conventional computer-based image processing system. In particular, an image processing system 34 can include a conventional computer 20 including a central processing unit (CPU), one or more memory devices and associated circuitry. The conventional computer 20 can include computer memory devices 27, which are preferably comprised of an electronic random access memory 27A and a bulk data storage medium 27B, such as a magnetic disk drive. Finally, the computer 20 can include a pointing device 21, for instance a mouse, and at least one user interface display unit 32, such as a video data terminal (VDT), operatively connected thereto.
- Notably, the computer 20 can be configured to perform speech recognition as well as text-to-speech (TTS) conversion. As such, the computer 20 can further include an audio input device 30, for example a microphone. Additionally, the computer 20 can include an audio output device 23, for example speakers. Both the audio input device 30 and the audio output device 23 can be operatively connected to the computer 20 through suitable interface circuitry or “sound board” (not shown). In this way, user speech can be received into the computer 20 through the audio input device 30, and synthesized speech as well as other audio can be provided to the user through the audio output device 23. The various hardware requirements for the conventional computer 20 as described above can generally be satisfied by any one of many commercially available high speed multimedia personal computers, such as those offered and manufactured by International Business Machines Corporation.
- In accordance with the inventive arrangements, the computer 20 further can include an eye-tracking hardware interface 29 (the table-top variety shown here), operatively connected to computer 20 through a communications port of the computer 20 (not shown) and communicatively linked to the computer 20 through suitable image processing circuitry and software. Specifically, the image processing circuitry and software can determine the location of the focal point of a user's gaze and can communicate the information to computer applications communicatively linked to the image processing software. In the present invention, a speech dictation system can be communicatively linked to the image processing software, from which the speech dictation system can receive data indicating the location of the focal point of a user's gaze.
- FIG. 3 illustrates a typical architecture for a speech-enabled computer system incorporating an ETS, wherein the computer system is configured to discriminate between different occurrences of text in an electronic document and between an instance of a voice command and an instance of speech dictation. As shown in FIG. 3, the computer system 20 can include in memory storage 27 an operating system 24, a speech dictation system 26 and an eye-tracking system 22. In the example shown, a speech text processor application 28 also is provided. However, the invention is not limited in this regard, and the speech dictation system 26 can be used with any other application program which is to be voice enabled.
- In FIG. 3, the speech dictation system 26, the speech text processor 28, and the eye-tracking system 22 are shown as separate application programs. It should be noted, however, that the invention is not limited in this regard, and these various application programs could be implemented as a single, more complex application program. For example, the speech dictation application 26 could be combined with the speech text processor application 28 or with any other application to be used in conjunction with the speech dictation system. Additionally, the eye-tracking system 22 can exist as an application program contained in computer 20, or alternatively within a standalone ETS capable of communicating with computer 20 via a data link. The system can also include a voice navigator application (not shown) to coordinate the operation of the speech dictation system for voice operation of other application programs, but such a navigator is not necessary for operation of the invention as described herein.
- FIG. 4 is a block diagram showing typical components which illustrate the speech-to-text conversion of a speech signal in the speech dictation system 26. Typically, analog speech signals can be received through an audio input device as shown in FIG. 2 and processed in audio circuitry into a digitized speech signal. Specifically, the speech signal can be transformed into a digitized set of data by sampling the speech signal at some fixed rate, typically every 10-20 msec. Subsequently, the audio circuitry can communicate the digitized speech signal to the speech dictation system 26.
- The representation block 35 can receive the digitized speech signal and can produce a representation of the digitized speech signal which can be used in subsequent stages of the speech recognition process to determine the probability that a portion of the speech signal corresponds to a particular phonetic event. This process is intended to emphasize perceptually important, speaker-independent features of the speech signals received from the operating system.
- In the modeling/classification block 36, algorithms can process the speech signals further to adapt speaker-independent acoustic models to those of the current speaker. Finally, in search block 38, search algorithms are used to guide the search engine to the most likely words corresponding to the speech signal. The search process in search block 38 occurs with the help of acoustic models 40, lexical models 42, language models 44 and training data 46.
- A method and apparatus for discriminating between different occurrences of text in an electronic document and between an instance of a voice command and an instance of speech dictation in accordance with the inventive arrangements is disclosed herein. The method and apparatus of the invention can include the cooperative use of an ETS in combination with a speech dictation system. Notably, this combination can improve the accuracy of the “SELECT” user voice command functionality, or of any other user voice command for selecting a portion of text within a body of text in a speech dictation system. The combination also can improve speech dictation system performance by assisting the speech dictation system in interpreting speech input as either speech dictation or a voice command.
- The aforementioned enhancements to a speech dictation system can be achieved by computing, based upon the detected focus point of a user's gaze, a probability that speech input temporally proximate to the gaze is either speech dictation or a voice command. The computed probability can be used to bias the speech dictation system toward interpreting the speech input as one or the other. Specifically, the speech dictation system can define an adjustable screen region surrounding the detected focus point (the “surrounding region”), whereupon the speech dictation system can continuously capture and update information pertaining to text and objects located within the surrounding region.
- Upon receiving speech input, the speech dictation system can determine whether the surrounding region primarily contains user interface objects or a text input field. If the surrounding region primarily contains a text input field, the speech dictation system can conclude that the speech input should be interpreted as speech dictation for insertion into the text input field. In contrast, if the surrounding region primarily includes user interface objects, the speech dictation system can interpret the speech input as a voice command. Finally, where the speech input is interpreted as a voice command for selecting text in a body of text in a text input field, the speech dictation system can identify the text to be selected based upon the text in the surrounding region rather than the entirety of the text in the text input field. In this manner, speech dictation system resources can be more effectively devoted to a smaller region of text, rather than to an entire body of text in an electronic document.
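- Taken together, the region, the probability, and the biased interpretation suggest the following minimal Python sketch. It is illustrative only: the rectangular geometry, the 0.05/0.95 clamp, and the likelihood scores are assumptions, not details prescribed by the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    """Axis-aligned surrounding region, in screen pixels."""
    left: int
    top: int
    right: int
    bottom: int

def surrounding_region(focus_x: int, focus_y: int, radius: int,
                       screen_w: int, screen_h: int) -> Region:
    """Overlay a box of user-adjustable `radius` on the focus point,
    clipped to the screen; the patent leaves the exact geometry open."""
    return Region(max(0, focus_x - radius), max(0, focus_y - radius),
                  min(screen_w, focus_x + radius),
                  min(screen_h, focus_y + radius))

def dictation_probability(dictatable_pixels: int, total_pixels: int,
                          floor: float = 0.05, ceiling: float = 0.95) -> float:
    """Ratio of pixels that accept dictated text to all pixels in the
    region (total assumed positive), clamped away from 0 and 1 so that
    gaze alone never decides with complete certainty."""
    return min(ceiling, max(floor, dictatable_pixels / total_pixels))

def interpret(command_score: float, dictation_score: float,
              p_dictation: float) -> str:
    """Bias the recognizer's command-vs-dictation decision by the
    gaze-derived probability; the scores stand in for hypothetical
    likelihoods the engine assigns under each grammar."""
    if dictation_score * p_dictation >= command_score * (1.0 - p_dictation):
        return "dictation"
    return "command"

# Example: 70% of the region accepts dictation (the 0.70 case from the
# text), so an otherwise ambiguous utterance is treated as dictation.
p = dictation_probability(70_000, 100_000)
assert interpret(command_score=0.5, dictation_score=0.5, p_dictation=p) == "dictation"
```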
- FIGS. 5A and 5B, taken together, constitute a flow chart for illustrating a method for discriminating between different occurrences of text in an electronic document and between an instance of a voice command and an instance of speech dictation. The method can be performed in conjunction with a computer system configured both for the use of a speech dictation system and an ETS. FIG. 5A begins with
step 50 wherein the user, while providing speech input to the speech dictation system naturally gazes at various locations either on the VDT 32 (on screen) or away from the VDT 32 (off screen). - In
step 55, the ETS identifies the location of the focus point of the user's gaze. The ETS, with the aid of image processing circuitry and software, determines whether the focus point of the user's gaze is a location on screen or off screen. In any event, the ETS communicates this information to the speech dictation system. Instep 60, the speech dictation system has received the location of the user's focus point from the ETS. If the location of the focus point of the user's gaze is on screen then the system proceeds to step 70. If not, then the system continues to step 65. - If in
- If in step 60 it is determined that the location of the focus point is on screen, the ETS will have identified the on screen location of the focus point of the user's gaze. Consequently, in step 70, a surrounding region can be defined about the focus point. In one representative embodiment, the surrounding region can be defined by a perimeter according to a specified radius extending outwardly from the focus point. Alternatively, the surrounding region can be defined by overlaying a predetermined geometric area over the focus point. - Still, the invention is not limited to any particular method for computing the surrounding region. Rather, any suitable method for computing the surrounding region can suffice for the purposes of the present invention. Moreover, it will be appreciated by one skilled in the art that regardless of how the surrounding region is determined or the resulting shape of the surrounding region, the default area or size of the region within an outer perimeter can be a user adjustable value. For example, the user can specify a default area or, alternatively, a radius by which the surrounding region should extend outward from the focus point.
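A minimal sketch of the two region definitions just described, assuming screen coordinates in pixels; the default radius and rectangle dimensions are illustrative stand-ins for the user adjustable values. Both functions return a region as a (left, top, right, bottom) bounding box.

```python
def region_from_radius(focus, radius=100):
    """Bounding box of a circle of user-adjustable radius (in pixels)
    extending outwardly from the focus point."""
    x, y = focus
    return (x - radius, y - radius, x + radius, y + radius)

def region_from_overlay(focus, width=400, height=200):
    """A predetermined geometric area (here a rectangle) overlaid on the
    focus point."""
    x, y = focus
    return (x - width // 2, y - height // 2, x + width // 2, y + height // 2)
```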
- In step 75, after defining the surrounding region, information concerning text and objects within the region can be captured for use both in determining whether speech input should be interpreted as speech dictation or a voice command, and also in identifying a particular occurrence of specified text in an electronic document. In particular, the captured information can include, for example, the number of pixels dedicated to displaying user interface objects not suitable for receiving speech dictated text and the number of pixels dedicated to displaying user interface objects suitable for receiving speech dictated text. It should be appreciated that, by defining a limited region in which the speech dictation system can devote its resources, the speech dictation system achieves greater efficiency. For example, the speech dictation system need only activate parts of the speech dictation grammar containing text found within the surrounding region rather than an entire speech dictation grammar.
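The capture step might be approximated as below, assuming the host windowing system reports each element's bounding box in the same (left, top, right, bottom) convention used above; the element model extends the hypothetical ScreenElement with an assumed bounds field.

```python
def overlap_area(a, b):
    """Pixel overlap of two (left, top, right, bottom) rectangles."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def capture_region_info(region, elements):
    """Tally pixels within the region that can and cannot receive dictated
    text; the tallies feed the probability computed in step 80."""
    dictatable = non_dictatable = 0
    for e in elements:
        area = overlap_area(region, e.bounds)
        if e.kind == "text_input":
            dictatable += area
        else:
            non_dictatable += area
    return dictatable, non_dictatable
```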
- In step 80, a probability can be computed upon which speech input can be interpreted as either a voice command or speech dictation. Specifically, the probability can be computed by calculating a ratio of the dictatable area of the surrounding region to the total area of the surrounding region. For example, if 70% of the surrounding region can receive user dictation, then the probability is 70%, or 0.70. Still, the invention is not limited to the particular manner in which the probability is computed. In fact, other probability calculations can be based upon, for example, the number of textual or dictated words within the surrounding region as compared to the number of objects within the surrounding region available for user voice commands. Notwithstanding, regardless of how the probability is computed, it should be appreciated that the probability preferably is neither zero nor one, either of which would indicate complete certainty that subsequent user utterances will be user dictation or user voice commands. Disallowing such extreme probability values accommodates the situation in which the user desires to dictate speech to the speech dictation system while gazing off screen.
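Using the tallies from the previous sketch, the area-ratio probability could look like the following; the clamping floor is an assumed tuning value, chosen only to keep the probability strictly between zero and one as the text prefers.

```python
def dictation_probability(dictatable_px, non_dictatable_px, floor=0.05):
    """Ratio of dictatable area to total region area. For example, a region
    that is 70% dictatable yields 0.70. The result is clamped away from
    0.0 and 1.0 so neither interpretation is ever a complete certainty."""
    total = dictatable_px + non_dictatable_px
    if total == 0:
        return 0.5  # nothing captured in the region; remain neutral
    p = dictatable_px / total
    return min(max(p, floor), 1.0 - floor)
```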
- If, in decision step 60, it is determined that the focus point of the user's gaze is at a location off screen, in step 65 the system can assign a default value to the probability. This default value is known as the default probability and can be pre-configured by the user. The default probability indicates the statistical likelihood that subsequent speech input is one of speech dictation or a voice command when the user's gaze is off screen. Accordingly, a statistical analysis based upon the default probability can indicate the likelihood of a user intending speech input to be interpreted as speech dictation when the user is looking away from the screen. - The default probability can have an adjustable value ranging from zero (0.00) to one (1.00). Notably, it should be appreciated by those skilled in the art that assigning a high value to the default probability is indicative of the presumption that during speech dictation the user need not look on screen. However, it is preferable that the default probability does not indicate complete certainty that speech input provided when the user is looking away from the screen should be interpreted as either speech dictation or a voice command. Such complete certainty can result in error within the speech dictation system.
- In step 85, after either computing a probability or relying on a default probability, speech input can be received. Based on the probability derived with the aid of the ETS, the speech input can be analyzed to determine whether it should be interpreted as speech dictation or a voice command. Subsequently, the method can continue to process the speech input, leading through jump circle A to decision step 95 of FIG. 5B.
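One plausible way to apply the bias in step 85, assuming the recognition engine reports its own scores for the competing dictation and command hypotheses; the multiplicative weighting is an illustrative choice, not a formula fixed by the invention.

```python
def classify_speech(p_dictation, dictation_score, command_score):
    """Weight the engine's hypothesis scores by the gaze-derived probability
    and return the better-supported interpretation."""
    if p_dictation * dictation_score >= (1.0 - p_dictation) * command_score:
        return "dictation"
    return "voice command"
```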
- In decision step 95, it can be determined whether the speech input received in step 85 was a “SELECT” voice command or other similar voice command for selecting text within an electronic document. If the speech input is not interpreted to be the SELECT command, the method proceeds to step 97, wherein one of two actions can occur. First, if the speech input, albeit not the SELECT voice command, is determined to be another voice command, the voice command can be executed as would be the case in a conventional speech enabled application. Second, if the speech input is determined to be speech dictation, the speech input can be converted to text by a speech recognition engine. Subsequently, the converted text can be inserted in a user interface object configured to receive the converted text. In either case, the method can return to step 50 of FIG. 5A through jump circle C and the process can be repeated.
- Returning to decision step 95, if it is determined that the speech input received in step 85 was a SELECT voice command or other similar voice command for selecting text within an electronic document, in step 100 it can be determined whether text specified by the SELECT command is located in the body of text contained in the surrounding region. For example, if the speech input has been interpreted as the SELECT command “SELECT mouse”, it can be determined whether the body of text contained in the surrounding region includes the word “mouse”. If in step 100 a match is found for the specified text, the method can proceed to step 105. Otherwise, the method can continue in step 110.
- If a match is found for the specified text in step 100, then in step 105 the most appropriate match for the specified text can be selected. More particularly, if there is only a single match within the body of text in the surrounding region, then the single matched instance of the text can be selected, typically by highlighting the matched occurrence of the text. In contrast, if multiple occurrences of the matched text exist within the body of text in the surrounding region, then it can be determined which instance of the specified text in the body of text in the surrounding region is closest to the focus point. Thus, the focus point of the user's gaze can be used to determine which instance of matched text should be selected. Still, the invention is not limited in this regard and other suitable methods for selecting an instance of matched text among multiple occurrences of matched text can suffice. Such alternative methods can include selecting the first occurrence of matched text in the body of text in the surrounding region. - Once the appropriate occurrence of the specified text has been identified, the identified text can be selected, typically by visually highlighting the text. It should be appreciated that in the case where an incorrect or undesired occurrence of the specified text has been selected, conventional voice commands such as “PREVIOUS” or “NEXT” may be used to navigate to other occurrences of the specified text in the surrounding region. In any event, the method can return to step 50 of FIG. 5A through jump circle C to begin the process anew. Thus, by repeating the process, the method can again compute the surrounding region and determine the probability that subsequently received speech input is speech dictation or a voice command.
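The nearest-to-focus selection of step 105 reduces to a distance minimization; in this sketch each occurrence is assumed to be reported as an (x, y) screen position.

```python
import math

def choose_occurrence(occurrences, focus):
    """Given the (x, y) positions of every match in the surrounding region,
    pick the one closest to the focus point of the user's gaze. A single
    occurrence is trivially the closest."""
    fx, fy = focus
    return min(occurrences, key=lambda p: math.hypot(p[0] - fx, p[1] - fy))
```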
- Returning now to decision step 110, if no match is found within the body of text in the surrounding region, it can be determined whether the surrounding region contains all of the viewable user interface which is configured for receiving speech dictation. If so, it can be assumed that no match exists in the body of text on screen, and the user can be notified as such in step 115. In another embodiment not depicted in FIG. 5B, where no match exists on screen, the system can provide the user with additional options for continuing and further expanding the search for the user specified text. For example, the user can be queried as to whether the user desires to search the remaining portions of the currently open electronic document. Alternatively, more targeted options can be presented to the user, such as expanding the surrounding region by a predetermined or user adjustable number of words or paragraphs before or after the surrounding region. In any case, the method can subsequently return to step 50 of FIG. 5A through jump circle C to begin the process over again.
- In contrast, if in step 110 it is determined that the surrounding region does not contain all of the viewable user interface which is configured for receiving speech dictation, then it cannot be assumed that no match exists in the body of text on screen. Thus, continuing with step 120, the area covered by the surrounding region can be expanded to include further text. Any suitable method for performing an expansion of the surrounding region can suffice. For example, the outer perimeter of the surrounding region can be extended outward from the user focus point equally in all directions by a predetermined or dynamically computed value. Alternatively, the surrounding region can be expanded outward from the focus point by a predetermined value representing an area measurement. - In one representative embodiment of the present invention, a default predetermined value can be used for determining the extent of the expansion. The default value can be adjustable in order to provide a fine tuning capability. In this manner, a user can specify how much larger the surrounding region should grow during an iteration of the search. Taking the previous example, if the user specified text “mouse” was not found within the body of text in the surrounding region, then the perimeter of the surrounding region can be expanded outwardly from the focus point by one centimeter in all directions. Alternatively, the surrounding region can be expanded by a predetermined area of 5 square centimeters or a particular number of pixels.
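A sketch of the fixed-increment expansion of step 120; the 40-pixel increment stands in for the predetermined or user adjustable value (a per-area variant would instead grow the rectangle until the target area gain is reached).

```python
def expand_region(region, increment_px=40):
    """Extend the region's outer perimeter outward from the focus point
    equally in all directions by a fixed increment."""
    left, top, right, bottom = region
    return (left - increment_px, top - increment_px,
            right + increment_px, bottom + increment_px)
```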
- Subsequent to the expansion of the surrounding region, in step 125, information pertaining to objects and text within the newly expanded surrounding region can be computed, collected and stored for future use in the method of the invention. Additionally, the new body of text now within the newly expanded surrounding region can be activated within the speech dictation system grammar. Also, attributes of objects existing within the newly expanded surrounding region can be identified. After identifying text and objects within the newly expanded surrounding region, the search for matched text in the body of text can be repeated, beginning through jump circle B in step 100. In this manner, the method can systematically and incrementally expand the search for the user specified text within a body of text, up to and beyond the on screen portion of the body of text.
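Tying steps 100 through 125 together, the expand-and-retry loop might be sketched as follows; find_matches and covers_all_dictatable_ui are hypothetical stand-ins for the region-limited search and the coverage test of step 110, while choose_occurrence and expand_region are the sketches above.

```python
def select_specified_text(specified, region, focus, elements):
    """Search the region for the specified text, expanding the region after
    each failed pass until a match is found or the region already covers
    every viewable element configured for dictation."""
    while True:
        matches = find_matches(specified, region, elements)   # step 100 (assumed helper)
        if matches:
            return choose_occurrence(matches, focus)          # step 105
        if covers_all_dictatable_ui(region, elements):        # step 110 (assumed helper)
            return None                                       # step 115: notify user, no match
        region = expand_region(region)                        # steps 120 and 125
```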
- Notably, the present invention can be realized in hardware, software, or a combination of hardware and software. The method of the present invention can be realized in a centralized fashion in one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software could be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
- The present invention can also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program means or computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
- While the foregoing specification illustrates and describes the preferred embodiments of this invention, it is to be understood that the invention is not limited to the precise construction herein disclosed. The invention can be embodied in other specific forms without departing from the spirit or essential attributes. Accordingly, reference should be made to the following claims, rather than to the foregoing specification, as indicating the scope of the invention.
Claims (18)
1. A method for searching for matching text in an electronic document comprising:
identifying a focus point in a user interface;
defining a surrounding region about said focus point, said surrounding region including a body of text within a user interface object configured to receive speech dictated text, wherein said body of text is a subset of displayed text contained within the electronic document;
receiving a voice command for selecting specified text within the electronic document; and
searching said body of text included in said surrounding region for a match to said specified text, said searching limited to said body of text in said surrounding region.
2. The method of claim 1 , further comprising:
if a match to said specified text is not found in said body of text in said searching step, expanding said surrounding region to include an additional area of said user interface, said additional area including additional text; and
searching said additional text for a match to said specified text, said searching limited to said body of text and said additional text.
3. The method of claim 2 , wherein said expanding step comprises:
expanding said surrounding region outwardly from said focus point by a fixed increment.
4. The method of claim 2 , wherein said expanding step comprises:
expanding said surrounding region by a fixed quantity of text adjacent to said body of text.
5. The method of claim 2 , wherein said expanding step comprises:
expanding said surrounding region outwardly from said focus point by a variable increment.
6. A machine readable storage having stored thereon a computer program for searching for matching text in an electronic document, said computer program having a plurality of code sections executable by a machine for causing the machine to perform the steps of:
identifying a focus point in a user interface;
defining a surrounding region about said focus point, said surrounding region including a body of text within a user interface object configured to receive speech dictated text, wherein said body of text is a subset of displayed text contained within the electronic document;
receiving a voice command for selecting specified text within the electronic document; and
searching said body of text included in said surrounding region for a match to said specified text, said searching limited to said body of text in said surrounding region.
7. The machine readable storage of claim 6 , further comprising:
if a match to said specified text is not found in said body of text in said searching step, expanding said surrounding region to include an additional area of said user interface, said additional area including additional text; and
searching said additional text for a match to said specified text, said searching limited to said body of text and said additional text.
8. The machine readable storage of claim 7 , wherein said expanding step comprises:
expanding said surrounding region outwardly from said focus point by a fixed increment.
9. The machine readable storage of claim 7 , wherein said expanding step comprises:
expanding said surrounding region by a fixed quantity of text adjacent to said body of text.
10. The machine readable storage of claim 7 , wherein said expanding step comprises:
expanding said surrounding region outwardly from said focus point by a variable increment.
11. A speech recognition method comprising the step of:
receiving a speech input;
determining from said speech input a voice command for selecting specified text within an electronic document;
visually presenting said electronic document within an application displayed in a graphic user interface;
identifying a focus point within said application using eye-tracking technology;
defining a text region within said electronic document surrounding said focus point, wherein said text region contains a subset of the text displayed within the electronic document; and
searching said text region for a match to said specified text, said search limited to said text region.
12. The method of claim 11 , said defining step further comprising defining said text region by a fixed quantity of text about said focus point.
13. The method of claim 11 , further comprising the step of:
if a match to said specified text is not found in said searching step, expanding said text region to include additional text.
14. The method of claim 11 , said determining step further comprising the steps of:
defining an interface object region within said application surrounding said focus point;
identifying application objects within said interface object region that include presented electronic documents; and
calculating a probability that said speech input includes a voice command for selecting text based at least in part upon said identifying of application objects.
15. A machine readable storage having stored thereon a computer program for searching for matching text in an electronic document, said computer program having a plurality of code sections executable by a machine for causing the machine to perform the steps of:
receiving a speech input;
determining from said speech input a voice command for selecting specified text within an electronic document;
visually presenting said electronic document within an application displayed in a graphic user interface;
identifying a focus point within said application using eye-tracking technology;
defining a text region within said electronic document surrounding said focus point, wherein said text region contains a subset of the text displayed within the electronic document; and
searching said text region for a match to said specified text, said search limited to said text region.
16. The machine readable storage of claim 15 , said defining step further comprising defining said text region by a fixed quantity of text about said focus point.
17. The machine readable storage of claim 15 , further comprising the step of:
if a match to said specified text is not found in said searching step, expanding said text region to include additional text.
18. The machine readable storage of claim 15 , said determining step further comprising the steps of:
defining an interface object region within said application surrounding said focus point;
identifying application objects within said interface object region that include presented electronic documents; and
calculating a probability that said speech input includes a voice command for selecting text based at least in part upon said identifying of application objects.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/849,663 US20040216049A1 (en) | 2000-09-20 | 2004-05-20 | Method for enhancing dictation and command discrimination |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/665,939 US6795806B1 (en) | 2000-09-20 | 2000-09-20 | Method for enhancing dictation and command discrimination |
US10/849,663 US20040216049A1 (en) | 2000-09-20 | 2004-05-20 | Method for enhancing dictation and command discrimination |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/665,939 Continuation US6795806B1 (en) | 2000-09-20 | 2000-09-20 | Method for enhancing dictation and command discrimination |
Publications (1)
Publication Number | Publication Date |
---|---|
US20040216049A1 (en) | 2004-10-28 |
Family
ID=24672168
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/665,939 Expired - Lifetime US6795806B1 (en) | 2000-09-20 | 2000-09-20 | Method for enhancing dictation and command discrimination |
US10/849,663 Abandoned US20040216049A1 (en) | 2000-09-20 | 2004-05-20 | Method for enhancing dictation and command discrimination |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/665,939 Expired - Lifetime US6795806B1 (en) | 2000-09-20 | 2000-09-20 | Method for enhancing dictation and command discrimination |
Country Status (14)
Country | Link |
---|---|
US (2) | US6795806B1 (en) |
EP (1) | EP1320848B1 (en) |
JP (1) | JP3943492B2 (en) |
KR (1) | KR100586286B1 (en) |
CN (1) | CN1205602C (en) |
AT (1) | ATE336779T1 (en) |
AU (1) | AU2001286090A1 (en) |
CA (1) | CA2420093A1 (en) |
DE (1) | DE60122352T2 (en) |
ES (1) | ES2269449T3 (en) |
HK (1) | HK1057940A1 (en) |
IL (1) | IL154852A0 (en) |
TW (1) | TW521262B (en) |
WO (1) | WO2002025637A1 (en) |
Cited By (103)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100315482A1 (en) * | 2009-06-15 | 2010-12-16 | Microsoft Corporation | Interest Determination For Auditory Enhancement |
US20120116748A1 (en) * | 2010-11-08 | 2012-05-10 | Sling Media Pvt Ltd | Voice Recognition and Feedback System |
WO2012167276A1 (en) * | 2011-06-03 | 2012-12-06 | Apple Inc. | Automatically creating a mapping between text data and audio data |
US20150161992A1 (en) * | 2012-07-09 | 2015-06-11 | Lg Electronics Inc. | Speech recognition apparatus and method |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US9265458B2 (en) | 2012-12-04 | 2016-02-23 | Sync-Think, Inc. | Application of smooth pursuit cognitive testing paradigms to clinical drug development |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9380976B2 (en) | 2013-03-11 | 2016-07-05 | Sync-Think, Inc. | Optical neuroinformatics |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
WO2019013517A1 (en) | 2017-07-11 | 2019-01-17 | Samsung Electronics Co., Ltd. | Apparatus and method for voice command context |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US11830289B2 (en) | 2017-12-11 | 2023-11-28 | Analog Devices, Inc. | Multi-modal far field user interfaces and vision-assisted audio processing |
Families Citing this family (65)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6968333B2 (en) | 2000-04-02 | 2005-11-22 | Tangis Corporation | Soliciting information based on a computer user's context |
US6920616B1 (en) * | 1998-12-18 | 2005-07-19 | Tangis Corporation | Interface for exchanging context data |
US9183306B2 (en) | 1998-12-18 | 2015-11-10 | Microsoft Technology Licensing, Llc | Automated selection of appropriate information based on a computer user's context |
US7225229B1 (en) | 1998-12-18 | 2007-05-29 | Tangis Corporation | Automated pushing of computer user's context data to clients |
US7779015B2 (en) * | 1998-12-18 | 2010-08-17 | Microsoft Corporation | Logging and analyzing context attributes |
US6513046B1 (en) | 1999-12-15 | 2003-01-28 | Tangis Corporation | Storing and recalling information to augment human memories |
US7046263B1 (en) | 1998-12-18 | 2006-05-16 | Tangis Corporation | Requesting computer user's context data |
US6801223B1 (en) | 1998-12-18 | 2004-10-05 | Tangis Corporation | Managing interactions between computer users' context models |
US6791580B1 (en) | 1998-12-18 | 2004-09-14 | Tangis Corporation | Supplying notifications related to supply and consumption of user context data |
US6842877B2 (en) | 1998-12-18 | 2005-01-11 | Tangis Corporation | Contextual responses based on automated learning techniques |
US8181113B2 (en) | 1998-12-18 | 2012-05-15 | Microsoft Corporation | Mediating conflicts in computer users context data |
US7231439B1 (en) | 2000-04-02 | 2007-06-12 | Tangis Corporation | Dynamically swapping modules for determining a computer user's context |
US7464153B1 (en) | 2000-04-02 | 2008-12-09 | Microsoft Corporation | Generating and supplying user context data |
US20020054130A1 (en) * | 2000-10-16 | 2002-05-09 | Abbott Kenneth H. | Dynamically displaying current status of tasks |
EP1215658A3 (en) * | 2000-12-05 | 2002-08-14 | Hewlett-Packard Company | Visual activation of voice controlled apparatus |
GB2388209C (en) | 2001-12-20 | 2005-08-23 | Canon Kk | Control apparatus |
US7881493B1 (en) * | 2003-04-11 | 2011-02-01 | Eyetools, Inc. | Methods and apparatuses for use of eye interpretation information |
US20040268216A1 (en) * | 2003-06-24 | 2004-12-30 | Jacobs Paul E | Method and apparatus for transferring a document into a folder |
US7629989B2 (en) * | 2004-04-02 | 2009-12-08 | K-Nfb Reading Technology, Inc. | Reducing processing latency in optical character recognition for portable reading machine |
KR100716438B1 (en) * | 2004-07-27 | 2007-05-10 | 주식회사 현대오토넷 | Apparatus and method for supplying a voice user interface in a car telematics system |
US7580837B2 (en) | 2004-08-12 | 2009-08-25 | At&T Intellectual Property I, L.P. | System and method for targeted tuning module of a speech recognition system |
US7844464B2 (en) * | 2005-07-22 | 2010-11-30 | Multimodal Technologies, Inc. | Content-based audio playback emphasis |
US7242751B2 (en) | 2004-12-06 | 2007-07-10 | Sbc Knowledge Ventures, L.P. | System and method for speech recognition-enabled automatic call routing |
US7751551B2 (en) | 2005-01-10 | 2010-07-06 | At&T Intellectual Property I, L.P. | System and method for speech-enabled call routing |
US7657020B2 (en) | 2005-06-03 | 2010-02-02 | At&T Intellectual Property I, Lp | Call routing system and method of using the same |
US7697827B2 (en) | 2005-10-17 | 2010-04-13 | Konicek Jeffrey C | User-friendlier interfaces for a camera |
US20070150916A1 (en) * | 2005-12-28 | 2007-06-28 | James Begole | Using sensors to provide feedback on the access of digital content |
US8036917B2 (en) * | 2006-11-22 | 2011-10-11 | General Electric Company | Methods and systems for creation of hanging protocols using eye tracking and voice command and control |
US8689203B2 (en) * | 2008-02-19 | 2014-04-01 | Microsoft Corporation | Software update techniques based on ascertained identities |
US20090248397A1 (en) * | 2008-03-25 | 2009-10-01 | Microsoft Corporation | Service Initiation Techniques |
US20120124467A1 (en) * | 2010-11-15 | 2012-05-17 | Xerox Corporation | Method for automatically generating descriptive headings for a text element |
US9361718B2 (en) * | 2011-09-08 | 2016-06-07 | Intel Corporation | Interactive screen viewing |
US9691381B2 (en) * | 2012-02-21 | 2017-06-27 | Mediatek Inc. | Voice command recognition method and related electronic device and computer-readable medium |
US9423870B2 (en) * | 2012-05-08 | 2016-08-23 | Google Inc. | Input determination method |
CN103885743A (en) * | 2012-12-24 | 2014-06-25 | 大陆汽车投资(上海)有限公司 | Voice text input method and system combining with gaze tracking technology |
US9436287B2 (en) * | 2013-03-15 | 2016-09-06 | Qualcomm Incorporated | Systems and methods for switching processing modes using gestures |
KR20140132246A (en) * | 2013-05-07 | 2014-11-17 | 삼성전자주식회사 | Object selection method and object selection apparatus |
US20140350942A1 (en) * | 2013-05-23 | 2014-11-27 | Delphi Technologies, Inc. | Vehicle human machine interface with gaze direction and voice recognition |
CN103729059A (en) * | 2013-12-27 | 2014-04-16 | 北京智谷睿拓技术服务有限公司 | Interactive method and device |
US9412363B2 (en) | 2014-03-03 | 2016-08-09 | Microsoft Technology Licensing, Llc | Model based approach for on-screen item selection and disambiguation |
US9966079B2 (en) * | 2014-03-24 | 2018-05-08 | Lenovo (Singapore) Pte. Ltd. | Directing voice input based on eye tracking |
US20150364140A1 (en) * | 2014-06-13 | 2015-12-17 | Sony Corporation | Portable Electronic Equipment and Method of Operating a User Interface |
US10317992B2 (en) | 2014-09-25 | 2019-06-11 | Microsoft Technology Licensing, Llc | Eye gaze for spoken language understanding in multi-modal conversational interactions |
US20170262051A1 (en) * | 2015-03-20 | 2017-09-14 | The Eye Tribe | Method for refining control by combining eye tracking and voice recognition |
WO2016151396A1 (en) * | 2015-03-20 | 2016-09-29 | The Eye Tribe | Method for refining control by combining eye tracking and voice recognition |
FR3034215B1 (en) * | 2015-03-27 | 2018-06-15 | Valeo Comfort And Driving Assistance | CONTROL METHOD, CONTROL DEVICE, SYSTEM AND MOTOR VEHICLE COMPRISING SUCH A CONTROL DEVICE |
DE102015210430A1 (en) * | 2015-06-08 | 2016-12-08 | Robert Bosch Gmbh | A method for recognizing a speech context for a voice control, a method for determining a voice control signal for a voice control and apparatus for carrying out the methods |
JP6553418B2 (en) * | 2015-06-12 | 2019-07-31 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | Display control method, display control device and control program |
US9934782B2 (en) * | 2015-09-22 | 2018-04-03 | Meshrose Ltd. | Automatic performance of user interaction operations on a computing device |
US9886958B2 (en) | 2015-12-11 | 2018-02-06 | Microsoft Technology Licensing, Llc | Language and domain independent model based approach for on-screen item selection |
US20170345410A1 (en) * | 2016-05-26 | 2017-11-30 | Tyler Murray Smith | Text to speech system with real-time amendment capability |
US10223067B2 (en) | 2016-07-15 | 2019-03-05 | Microsoft Technology Licensing, Llc | Leveraging environmental context for enhanced communication throughput |
CN106527729A (en) * | 2016-11-17 | 2017-03-22 | 科大讯飞股份有限公司 | Non-contact type input method and device |
US10142686B2 (en) | 2017-03-30 | 2018-11-27 | Rovi Guides, Inc. | System and methods for disambiguating an ambiguous entity in a search query based on the gaze of a user |
US10795671B2 (en) * | 2017-11-21 | 2020-10-06 | International Business Machines Corporation | Audiovisual source code documentation |
CN107957779A (en) * | 2017-11-27 | 2018-04-24 | 海尔优家智能科技(北京)有限公司 | A kind of method and device searched for using eye motion control information |
US10467335B2 (en) | 2018-02-20 | 2019-11-05 | Dropbox, Inc. | Automated outline generation of captured meeting audio in a collaborative document context |
US11488602B2 (en) | 2018-02-20 | 2022-11-01 | Dropbox, Inc. | Meeting transcription using custom lexicons based on document history |
US10657954B2 (en) * | 2018-02-20 | 2020-05-19 | Dropbox, Inc. | Meeting audio capture and transcription in a collaborative document context |
US11157075B2 (en) * | 2018-05-01 | 2021-10-26 | Dell Products, L.P. | Gaze-activated voice services for interactive workspaces |
CN111833846B (en) * | 2019-04-12 | 2023-06-02 | 广东小天才科技有限公司 | Method and device for starting dictation state according to intention, and storage medium |
US11689379B2 (en) | 2019-06-24 | 2023-06-27 | Dropbox, Inc. | Generating customized meeting insights based on user interactions and meeting media |
CN111090473A (en) * | 2019-07-29 | 2020-05-01 | 广东小天才科技有限公司 | Dictation starting method based on electronic equipment and electronic equipment |
JP7402322B2 (en) * | 2020-05-15 | 2023-12-20 | 株式会社Nttドコモ | information processing system |
US20230065847A1 (en) * | 2021-08-31 | 2023-03-02 | International Business Machines Corporation | Network bandwidth conservation during video conferencing |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3530591B2 (en) | 1994-09-14 | 2004-05-24 | キヤノン株式会社 | Speech recognition apparatus, information processing apparatus using the same, and methods thereof |
ATE196560T1 (en) | 1994-12-23 | 2000-10-15 | Siemens Ag | METHOD FOR CONVERTING VOICE ENTRED INFORMATION INTO MACHINE READABLE DATA |
US5799279A (en) | 1995-11-13 | 1998-08-25 | Dragon Systems, Inc. | Continuous speech recognition of text and commands |
DE50104533D1 (en) | 2000-01-27 | 2004-12-23 | Siemens Ag | SYSTEM AND METHOD FOR VIEWPOINTED LANGUAGE PROCESSING |
2000
- 2000-09-20 US US09/665,939 patent/US6795806B1/en not_active Expired - Lifetime
2001
- 2001-08-14 TW TW90119955A patent/TW521262B/en not_active IP Right Cessation
- 2001-09-13 AU AU2001286090A patent/AU2001286090A1/en not_active Abandoned
- 2001-09-13 DE DE2001622352 patent/DE60122352T2/en not_active Expired - Lifetime
- 2001-09-13 CN CNB018146899A patent/CN1205602C/en not_active Expired - Lifetime
- 2001-09-13 CA CA 2420093 patent/CA2420093A1/en not_active Abandoned
- 2001-09-13 ES ES01965449T patent/ES2269449T3/en not_active Expired - Lifetime
- 2001-09-13 EP EP01965449A patent/EP1320848B1/en not_active Expired - Lifetime
- 2001-09-13 WO PCT/GB2001/004092 patent/WO2002025637A1/en active IP Right Grant
- 2001-09-13 IL IL15485201A patent/IL154852A0/en unknown
- 2001-09-13 JP JP2002529757A patent/JP3943492B2/en not_active Expired - Lifetime
- 2001-09-13 KR KR1020037003790A patent/KR100586286B1/en not_active IP Right Cessation
- 2001-09-13 AT AT01965449T patent/ATE336779T1/en not_active IP Right Cessation
2004
- 2004-01-31 HK HK04100682A patent/HK1057940A1/en not_active IP Right Cessation
- 2004-05-20 US US10/849,663 patent/US20040216049A1/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5777614A (en) * | 1994-10-14 | 1998-07-07 | Hitachi, Ltd. | Editing support system including an interactive interface |
US6078310A (en) * | 1996-06-26 | 2000-06-20 | Sun Microsystems, Inc. | Eyetracked alert messages |
US6351273B1 (en) * | 1997-04-30 | 2002-02-26 | Jerome H. Lemelson | System and methods for controlling automatic scrolling of information on a display or screen |
US6393136B1 (en) * | 1999-01-04 | 2002-05-21 | International Business Machines Corporation | Method and apparatus for determining eye contact |
Cited By (138)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US10381016B2 (en) | 2008-01-03 | 2019-08-13 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9865248B2 (en) | 2008-04-05 | 2018-01-09 | Apple Inc. | Intelligent text-to-speech conversion |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US10108612B2 (en) | 2008-07-31 | 2018-10-23 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US10475446B2 (en) | 2009-06-05 | 2019-11-12 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US20100315482A1 (en) * | 2009-06-15 | 2010-12-16 | Microsoft Corporation | Interest Determination For Auditory Enhancement |
US8416715B2 (en) | 2009-06-15 | 2013-04-09 | Microsoft Corporation | Interest determination for auditory enhancement |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US9548050B2 (en) | 2010-01-18 | 2017-01-17 | Apple Inc. | Intelligent automated assistant |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10049675B2 (en) | 2010-02-25 | 2018-08-14 | Apple Inc. | User profiling for voice input processing |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US20120116748A1 (en) * | 2010-11-08 | 2012-05-10 | Sling Media Pvt Ltd | Voice Recognition and Feedback System |
US8600732B2 (en) * | 2010-11-08 | 2013-12-03 | Sling Media Pvt Ltd | Translating programming content to match received voice command language |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10102359B2 (en) | 2011-03-21 | 2018-10-16 | Apple Inc. | Device access using voice authentication |
WO2012167276A1 (en) * | 2011-06-03 | 2012-12-06 | Apple Inc. | Automatically creating a mapping between text data and audio data |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10672399B2 (en) | 2011-06-03 | 2020-06-02 | Apple Inc. | Switching between text data and audio data based on a mapping |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9443510B2 (en) * | 2012-07-09 | 2016-09-13 | Lg Electronics Inc. | Speech recognition apparatus and method |
US20150161992A1 (en) * | 2012-07-09 | 2015-06-11 | Lg Electronics Inc. | Speech recognition apparatus and method |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US9265458B2 (en) | 2012-12-04 | 2016-02-23 | Sync-Think, Inc. | Application of smooth pursuit cognitive testing paradigms to clinical drug development |
US9380976B2 (en) | 2013-03-11 | 2016-07-05 | Sync-Think, Inc. | Optical neuroinformatics |
US9966060B2 (en) | 2013-06-07 | 2018-05-08 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10657961B2 (en) | 2013-06-08 | 2020-05-19 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10497365B2 (en) | 2014-05-30 | 2019-12-03 | Apple Inc. | Multi-command single utterance input method |
US11133008B2 (en) | 2014-05-30 | 2021-09-28 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10904611B2 (en) | 2014-06-30 | 2021-01-26 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US10431204B2 (en) | 2014-09-11 | 2019-10-01 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US11556230B2 (en) | 2014-12-02 | 2023-01-17 | Apple Inc. | Data detection |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US10311871B2 (en) | 2015-03-08 | 2019-06-04 | Apple Inc. | Competing devices responding to voice triggers |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US11069347B2 (en) | 2016-06-08 | 2021-07-20 | Apple Inc. | Intelligent automated assistant for media exploration |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple Inc. | Intelligent automated assistant for media exploration |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10553215B2 (en) | 2016-09-23 | 2020-02-04 | Apple Inc. | Intelligent automated assistant |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
WO2019013517A1 (en) | 2017-07-11 | 2019-01-17 | Samsung Electronics Co., Ltd. | Apparatus and method for voice command context |
EP3616050A4 (en) * | 2017-07-11 | 2020-03-18 | Samsung Electronics Co., Ltd. | Apparatus and method for voice command context |
US11830289B2 (en) | 2017-12-11 | 2023-11-28 | Analog Devices, Inc. | Multi-modal far field user interfaces and vision-assisted audio processing |
Also Published As
Publication number | Publication date |
---|---|
JP2004510239A (en) | 2004-04-02 |
ATE336779T1 (en) | 2006-09-15 |
JP3943492B2 (en) | 2007-07-11 |
AU2001286090A1 (en) | 2002-04-02 |
CN1205602C (en) | 2005-06-08 |
US6795806B1 (en) | 2004-09-21 |
DE60122352D1 (en) | 2006-09-28 |
TW521262B (en) | 2003-02-21 |
EP1320848A1 (en) | 2003-06-25 |
WO2002025637A1 (en) | 2002-03-28 |
HK1057940A1 (en) | 2004-04-23 |
KR20030046453A (en) | 2003-06-12 |
EP1320848B1 (en) | 2006-08-16 |
KR100586286B1 (en) | 2006-06-07 |
IL154852A0 (en) | 2003-10-31 |
CA2420093A1 (en) | 2002-03-28 |
ES2269449T3 (en) | 2007-04-01 |
DE60122352T2 (en) | 2007-09-06 |
CN1449558A (en) | 2003-10-15 |
Similar Documents
Publication | Title |
---|---|
US6795806B1 (en) | Method for enhancing dictation and command discrimination |
US6314397B1 (en) | Method and apparatus for propagating corrections in speech recognition software |
US5950160A (en) | Method and system for displaying a variable number of alternative words during speech recognition |
US5829000A (en) | Method and system for correcting misrecognized spoken words or phrases |
EP0867857B1 (en) | Enrolment in speech recognition |
EP0965978B9 (en) | Non-interactive enrollment in speech recognition |
JP4570176B2 (en) | An extensible speech recognition system that gives users audio feedback |
US6910012B2 (en) | Method and system for speech recognition using phonetically similar word alternatives |
US5794189A (en) | Continuous speech recognition |
US6792409B2 (en) | Synchronous reproduction in a speech recognition system |
US5884258A (en) | Method and system for editing phrases during continuous speech recognition |
US5899976A (en) | Method and system for buffering recognized words during speech recognition |
EP1346343B1 (en) | Speech recognition using word-in-phrase command |
US7447635B1 (en) | Natural language interface control system |
US6591236B2 (en) | Method and system for determining available and alternative speech commands |
US6345249B1 (en) | Automatic analysis of a speech dictated document |
US6963834B2 (en) | Method of speech recognition using empirically determined word candidates |
JP2006189730A (en) | Speech interactive method and speech interactive device |
EP0840287A2 (en) | Method and system for selecting recognized words when correcting recognized speech |
Legal Events
Code | Title | Description |
---|---|---|
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |