MXPA98002754A - System and method for providing remote automatic voice recognition services via a network - Google Patents
- Publication number
- MXPA98002754A
- Authority
- MX
- Mexico
- Prior art keywords
- client
- voice
- information
- grammar
- asr
- Prior art date
Abstract
A system and method for operating an automatic speech recognition (ASR) service using a client-server architecture is used to make ASR services accessible to a client located remotely from the site of the main ASR engine. The present invention uses client-server communications over a packet network, such as the Internet, wherein the ASR server receives a grammar from the client, receives information representing the client's speech, performs speech recognition, and returns information based on the recognized speech to the client.
Description
SYSTEM AND METHOD FOR PROVIDING REMOTE AUTOMATIC VOICE RECOGNITION SERVICES VIA A PACKET NETWORK
TECHNICAL FIELD
This invention relates to speech recognition in general and, more particularly, to providing automatic speech recognition services that are remotely accessible via a packet network.
BACKGROUND OF THE INVENTION
Techniques for achieving automatic speech recognition (ASR) are well known. Among the known ASR techniques are those that use grammars. A grammar is a representation of the language or phrases expected to be used or spoken in a given context. In a sense, then, ASR grammars typically constrain the speech recognizer to a vocabulary that is a subset of the universe of potentially spoken words; and grammars can include subgrammars. An ASR grammar rule can then be used to represent the set of "phrases," or combinations of words from one or more grammars or subgrammars, that can be expected in a
given context. "Grammar" can also refer generally to a statistical language model (where the model represents phrases), such as those used in language-understanding systems. Products and services that use some form of automatic speech recognition ("ASR") methodology have recently been introduced commercially. For example, AT&T has developed a grammar-based ASR engine called WATSON that enables the development of complex ASR services. Desirable attributes of complex ASR services that could use such ASR technology include high recognition accuracy; robustness, to permit recognition where speakers have different accents or dialects and/or in the presence of background noise; the ability to handle large vocabularies; and natural language understanding. To achieve these attributes for complex ASR services, ASR techniques and engines typically require computer-based systems having significant processing capability. Processing capability as used herein refers to processor speed, memory, and disk space, as well as access to application databases. Such requirements have restricted the availability of complex ASR services at one's desktop, because the processing requirements exceed the capabilities of most desktop systems, which are typically based on personal computer (PC) technology.

Packet networks are general-purpose data networks well suited to sending stored data of various types, including voice or audio. The Internet, the largest and best known of the existing packet networks, connects over 4 million computers in some 140 countries. The global and exponential growth of the Internet is common knowledge today. Typically, one accesses a packet network, such as the Internet, through a client program running on a computer, such as a PC, and thus packet networks are inherently client/server oriented.
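As a rough illustration of how a grammar rule built from subgrammars constrains a recognizer to a small set of phrases, the following sketch enumerates the phrases licensed by a rule. The rule and subgrammar names anticipate the pizza-ordering example used later in this document; the data-structure representation itself is an invented simplification, not the patent's format.

```python
# Hypothetical sketch: a grammar rule expands its subgrammars into the
# closed set of phrases the recognizer is constrained to accept.
from itertools import product

SUBGRAMMARS = {
    "SIZE": ["small", "medium", "large"],
    "COVERAGE": ["sausage", "pepperoni", "mushrooms"],
}

# Rule: ORDER = SIZE "pizza" "with" COVERAGE
ORDER_RULE = ["SIZE", "pizza", "with", "COVERAGE"]

def expand(rule, subgrammars):
    """Expand a rule into every phrase it licenses."""
    # Each token is either a subgrammar name or a literal word.
    slots = [subgrammars.get(token, [token]) for token in rule]
    return {" ".join(words) for words in product(*slots)}

phrases = expand(ORDER_RULE, SUBGRAMMARS)
# 3 sizes x 3 toppings = 9 licensed phrases
```

A recognizer restricted to this rule would only ever output one of the nine enumerated orders, which is what makes grammar-constrained ASR tractable.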
One way to access information over a packet network is through a web browser (such as Netscape Navigator, available from Netscape Communications, Inc., or Internet Explorer, available from Microsoft Corp.), which allows a client to interact with servers on the network. Web servers and the information available on them are typically identified and addressed through a Uniform Resource Locator (URL)-compatible address. URL addressing is widely used in Internet and intranet applications and is well known to those skilled in the art (an "intranet" is a packet network modeled on Internet functionality and used, for example, internally within a company). What is desired is a way to make ASR services available to a user in one place, such as at his desktop, i.e., remote from the system hosting the ASR engine.
BRIEF DESCRIPTION OF THE INVENTION
A system and method for operating an automatic speech recognition service using a client-server architecture is used to make ASR services accessible to a client located remotely from the site of the main ASR engine. In accordance with the present invention, using client-server communications over a packet network, such as the Internet, the ASR server receives a grammar from the client, receives information representing the client's speech, performs speech recognition, and returns information based on the recognized speech to the client. Alternative embodiments of the present invention include a variety of ways to access the desired grammar, the use of compression or feature extraction as a processing step in the ASR client before transferring the speech information to the ASR server, establishing a dialogue between the client and the server, and operating a form-filling service.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGURE 1 is a diagram showing a client-server relationship of a system providing remote ASR services in accordance with the present invention.
FIGURE 2 is a diagram showing an installation process for enabling remote ASR services in accordance with the present invention.
FIGURE 3 is a diagram showing an alternative installation process for enabling remote ASR services in accordance with the present invention.
FIGURE 4 is a diagram showing a grammar rule selection process in accordance with the present invention.
FIGURE 5 is a diagram showing a process for enabling remote automatic speech recognition in accordance with the present invention.
FIGURE 6 is a diagram showing an alternative process for enabling remote automatic speech recognition in accordance with the present invention.
FIGURE 7 is a diagram showing another alternative process for enabling remote automatic speech recognition in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention is directed to a system based on a client-server architecture for providing remotely available ASR services. In accordance with the present invention, ASR services can be provided to a user (for example, at the user's desktop) over a packet network, such as the Internet, without the user needing to obtain computer equipment having the extensive processing capability required to execute full ASR techniques.

A basic client-server architecture used in accordance with the present invention is shown in FIGURE 1. ASR server 100 is an ASR software engine that runs on a system, denoted as server node 110, which can be linked through packet network 120 (such as the Internet) to other computers. Server node 110 may typically be a computer having sufficient processing capability to run complex ASR-based applications, such as the AT&T WATSON system. Packet network 120 may, illustratively, be the Internet or an intranet.

ASR client 130 is a relatively small program (compared to ASR server 100) that runs on client PC 140. Client PC 140 is a computer, such as a personal computer (PC), having sufficient processing capability to run client applications, such as a web browser. The client PC includes hardware, such as a microphone, and software for the input and capture of audio sounds, such as speech. Methods for connecting microphones to a PC and capturing audio sounds, such as speech, on the PC are well known. Examples of speech-handling capabilities for PCs include the Microsoft Speech Application Programming Interface (SAPI) and the AT&T Advanced Speech Application Programming Interface (ASAPI). Details of Microsoft's SAPI are found in, for example, a publication entitled "Speech API Developers Guide, Windows 95 Edition," Version 1.0, Microsoft Corporation (1995), and details of the AT&T ASAPI are provided in a publication entitled "Advanced Speech API Developers Guide," Version 1.0, AT&T Corporation (1996); each of these publications is incorporated herein by reference. An alternative embodiment of the present invention may use an interface between ASR client 130 and one or more voice channels, so that speech input may be provided by audio sources other than a microphone.

Client PC 140 also has the capability to communicate with other computers over a packet network (such as the Internet). Methods for establishing a communication link with other computers over a packet network (such as the Internet) are well known and include, for example, the use of a modem to dial into an Internet service provider over a telephone line. ASR server 100, through server node 110, and ASR client 130, through client PC 140, can communicate with each other over packet network 120 using known methods suitable for communicating information (including the transmission of data) over a packet network, for example a standard communications protocol such as the Transmission Control Protocol/Internet Protocol (TCP/IP). A TCP/IP connection is analogous to a "pipeline" through which information can be transmitted over the packet network from one point to another. Establishing a TCP/IP connection between ASR server 100 and ASR client 130 permits the transfer of data between ASR server 100 and ASR client 130 over packet network 120 necessary to enable ASR services in accordance with the present invention.

ASR client 130 also interfaces with the audio/speech input and output capabilities and the text/graphics display capabilities of client PC 140. Methods and interfaces for handling audio and speech input and output are well known, and methods and interfaces for handling the display of text and graphics are likewise well known. ASR client 130 can be installed to run on client PC 140 in various ways.
For example, ASR client 130 may be loaded onto client PC 140 from a permanent data storage medium, such as a magnetic disk or CD-ROM. Alternatively, ASR client 130 can be downloaded from an information or data source locatable over the packet network, such as the Internet. The download of ASR client 130 can, for example, be performed once, with the client residing permanently on client PC 140; alternatively, ASR client 130 can be downloaded for a single use or for limited purposes. ASR client 130 can be implemented, for example, as a small software plug-in module for another program, such as a web browser, running on client PC 140. One way to accomplish this is to make ASR client 130 an ActiveX software component in accordance with the Microsoft ActiveX standard. In this way, ASR client 130 can, for example, be loaded onto client PC 140 in conjunction with a web browser session as follows: a user browsing the World Wide Web using client PC 140 enters a web site that has ASR capability; the web site asks the user for permission to download an ASR client module onto client PC 140 in accordance with the ActiveX control; upon the user's authorization, ASR client 130 is downloaded onto client PC 140.

Similarly, ASR server 100 can be installed to run on server node 110 in various ways; for example, the ASR server can be loaded onto server node 110 from a permanent data storage medium, such as a magnetic disk or CD-ROM, or, alternatively, ASR server 100 can be downloaded from an information or data source locatable over the packet network, such as the Internet.

Additional details of providing remote ASR services in accordance with the present invention will now be described with reference to FIGURES 2-7.
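Before walking through FIGURES 2-7, the TCP/IP "pipeline" described above can be pictured as a stream of framed messages between client and server. The sketch below uses a simple length-prefixed framing scheme; the framing format and the message names are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch: length-prefixed message framing over a TCP/IP
# byte stream. Each message is a 4-byte big-endian length followed by
# the payload; the receiver can split the stream back into messages.
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a payload with its 4-byte length for transmission."""
    return struct.pack(">I", len(payload)) + payload

def unframe(stream: bytes):
    """Yield each payload found in a stream of framed messages."""
    offset = 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        yield stream[offset:offset + length]
        offset += length

# Two illustrative client-to-server messages sent down one connection.
wire = frame(b"LOAD_GRAMMAR pizza.sgf") + frame(b"ACTIVATE_RULE ORDER")
messages = list(unframe(wire))
```

In a real deployment the same framing would be written to and read from a connected socket; here plain bytes stand in for the connection so the round trip can be shown end to end.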
It is assumed for the following discussion, with respect to each of these figures, that the client-server relationship is as shown in FIGURE 1. An installation phase is used to prepare ASR server 100 and ASR client 130 to perform an automatic speech recognition task as part of an ASR application. For convenience, items shown in FIGURE 1 that appear in other figures are identified by the same reference numbers as in FIGURE 1.

Referring now to FIGURE 2, an installation phase in a process for providing remote ASR services will now be described. In step 201, ASR client 130 receives a request from the application to load a client grammar. The client grammar is, illustratively, a data file containing information representing the language (for example, words and phrases) expected to be spoken in the context of the particular ASR application. The data file may be in a known format, such as the Standard Grammar Format (SGF), which is part of the Microsoft SAPI. For purposes of illustration, an ASR application for taking a pizza order will be used to describe the present invention. An ASR service application, such as an application for ordering a pizza, would typically include a program that interfaces with and uses ASR client 130 as a resource for performing the tasks of the ASR application. Such an ASR application could reside and run, in whole or in part, on client PC 140.

Considering the pizza-ordering example, the client grammar PIZZA could include information representing words one may use in ordering a pizza, such as "pizza," "pepperoni," etc. In effect, subgrammars can be used to build an appropriate grammar. For the pizza-ordering example, the subgrammars for the PIZZA grammar could include SIZE and COVERAGE. The SIZE subgrammar could consist of words used to describe the size of the desired pizza, such as "small," "medium," and "large."
The COVERAGE subgrammar could consist of words used to describe the different toppings that can be ordered with a pizza, for example, "sausage," "pepperoni," "mushrooms," and the like. ASR client 130 can be given the desired grammar by the application or, alternatively, ASR client 130 can choose the grammar from a predetermined set based on information provided by the application. Either way, ASR client 130 then, in step 202, sends the desired grammar file to ASR server 100 over a TCP/IP connection. A new TCP/IP connection can be made as part of establishing a new communication session between client PC 140 and server node 110, or the TCP/IP connection may already exist as a result of a previously established communication session between client PC 140 and server node 110 that has not been terminated. In the pizza-ordering illustration, ASR client 130 would transmit a file containing the PIZZA grammar to ASR server 100 over a TCP/IP connection.

In step 203, ASR server 100 receives the client grammar sent from ASR client 130 and, in step 204, the ASR server loads the transmitted client grammar. As used herein, "loading" the client grammar means making the grammar accessible for use by ASR server 100, for example by storing the grammar in the RAM of server node 110. In step 205, ASR server 100 returns a grammar "handle" to ASR client 130. A grammar "handle" is a marker, such as a pointer to the memory containing the loaded grammar, that allows the ASR client to refer easily to the grammar during the remainder of the communication session or application execution. ASR client 130 receives the grammar handle from ASR server 100 in step 206 and returns the handle to the application in step 207.
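The load-and-handle exchange of steps 201-207, together with the rule activation described later with respect to FIGURE 4, might be sketched in-process as follows. The class name, handle format, and method signatures are invented for illustration; a real server would perform these operations across the TCP/IP connection.

```python
# Hypothetical sketch of the server side of grammar loading: the client
# sends a grammar, the server makes it accessible ("loads" it) and
# returns an opaque handle for later reference.
class AsrServer:
    def __init__(self):
        self._grammars = {}   # handle -> loaded grammar
        self._active = None   # (handle, rule name) currently activated
        self._next = 1

    def load_grammar(self, grammar_data):
        """Steps 203-205: receive, load, and return a grammar handle."""
        handle = f"g{self._next}"
        self._next += 1
        self._grammars[handle] = grammar_data  # "loading" = make accessible
        return handle

    def activate_rule(self, handle, rule_name):
        """Steps 403-405: activate a rule of the identified grammar."""
        if rule_name not in self._grammars[handle]:
            raise KeyError(rule_name)
        self._active = (handle, rule_name)
        return True  # notification of activation

server = AsrServer()
pizza_grammar = {"ORDER": ["SIZE", "pizza", "with", "COVERAGE"]}
handle = server.load_grammar(pizza_grammar)   # client's step 202 / 206
ok = server.activate_rule(handle, "ORDER")    # client's step 402 / 406
```

The handle lets every later request name the grammar without retransmitting it, which is the point of the exchange.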
For the pizza-ordering example, ASR server 100 would receive and load the transmitted PIZZA grammar file and transmit back to ASR client 130 a handle pointing to the loaded PIZZA grammar. The ASR client, in turn, would receive the PIZZA handle from ASR server 100 and return the PIZZA handle to the pizza-ordering application. In this way, the application can simply refer to the PIZZA handle when carrying out or initiating an ASR task as part of the pizza-ordering application.

An alternative installation method will now be described with reference to FIGURE 3. It is assumed for the remainder of the description herein that transmission or communication of information or data between ASR server 100 and ASR client 130 takes place over an established TCP/IP connection. In step 301, ASR client 130 receives a request from the application to load a client grammar. Instead of sending the client grammar as a data file to ASR server 100, however, in step 302 ASR client 130 instead sends ASR server 100 an identifier representing a "canned" grammar; a "canned" grammar could, for example, be a common grammar, such as TIME-OF-DAY or DATE, that ASR server 100 might already have stored. Alternatively, ASR client 130 could send ASR server 100 an address, such as a URL-compatible address, where ASR server 100 could find the desired grammar file. ASR server 100 in step 303 receives the grammar identifier or grammar URL address from ASR client 130, locates and loads the requested client grammar in step 304, and in step 305 returns a grammar handle to ASR client 130. Similarly to the steps described above with respect to FIGURE 2, ASR client 130 receives the grammar handle from ASR server 100 in step 306 and returns the handle to the application in step 307.
For the pizza-ordering example, the steps described above in relation to FIGURE 2 would be the same, except that ASR client 130 would send ASR server 100 either a grammar identifier for the PIZZA grammar (if it were a "canned" grammar) or a URL for the location of a file containing the PIZZA grammar; ASR server 100 would, in turn, retrieve a PIZZA grammar file based on the grammar identifier or URL sent by the ASR client, and then load the requested PIZZA grammar.

After the grammar has been loaded and the grammar handle returned to ASR client 130, an ASR service application needs to select a grammar rule to be activated. FIGURE 4 shows a process for grammar rule selection in accordance with the present invention. ASR client 130 receives from the application a request to activate a grammar rule in step 401. In step 402, the ASR client sends the rule-activation request to ASR server 100; as shown in FIGURE 4, ASR client 130 in step 402 also sends the previously returned grammar handle to ASR server 100 (which allows the ASR server to activate the appropriate rule for the particular grammar identified by the grammar handle). ASR server 100 in step 403 receives the rule-activation request and the grammar handle (if one was sent). In step 404, ASR server 100 activates the requested rule and, in step 405, returns to ASR client 130 notification that the requested rule has been activated. ASR client 130 receives the rule-activation notification in step 406 and notifies the application in step 407 that the rule has been activated. Once the application receives notice of the rule activation, speech recognition can then begin. For purposes of illustrating the process shown in FIGURE 4, consider again the pizza-ordering example.
A rule that can be used for recognizing a pizza order may define the desired phrasing of an order to include the subgrammars SIZE and COVERAGE along with the word "pizza," and may be denoted as follows: {ORDER = SIZE "pizza" "with" COVERAGE}. Referring again to FIGURE 4, ASR client 130 would receive from the application a request to activate a pizza-ordering rule and would send the ORDER rule set out above to ASR server 100, along with the handle for the PIZZA grammar. The ASR server receives the rule-activation request together with the PIZZA grammar handle and activates the ORDER rule, so that the recognizer is constrained to recognize words from the SIZE subgrammar, the word "pizza," the word "with," and words from the COVERAGE subgrammar. After activating the ORDER rule, ASR server 100 sends notification of the rule activation to ASR client 130, which, in turn, notifies the application.

Once a grammar rule has been activated, speech processing for purposes of recognizing speech within the grammar, in accordance with the rule, can take place. Referring to FIGURE 5, in step 501 ASR client 130 receives a request from the application to initiate a speech recognition task. In step 502, ASR client 130 requests streaming audio from the audio input of client PC 140. Streaming audio refers to audio being processed "on the fly" while it is still being spoken; the system does not wait for the entire audio (i.e., the entire utterance) to be input before sending the audio on for digital processing. Streaming audio can also refer to transmission of part of the audio signal while additional audio is still being input.
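The notion of streaming audio just described can be sketched as a producer that yields digitized chunks as they are captured and a forwarder that transmits each chunk immediately, without waiting for the utterance to finish. The chunk contents and the `send` callback are simulated stand-ins for the microphone and the TCP/IP connection.

```python
# Hypothetical sketch of streaming (propagated) audio: chunks are
# forwarded to the server as they arrive, not after capture completes.
def microphone(chunks):
    """Simulated audio source yielding digitized chunks as captured."""
    for chunk in chunks:
        yield chunk

def stream_to_server(audio_source, send):
    """Forward each chunk on the fly; return the number of chunks sent."""
    sent = 0
    for chunk in audio_source:
        send(chunk)   # transmitted while later audio is still being spoken
        sent += 1
    return sent

received = []  # stands in for what the server has received so far
n = stream_to_server(
    microphone([b"\x01\x02", b"\x03", b"\x04\x05"]),
    received.append,
)
```

Because each chunk is sent as soon as it exists, the server can begin recognition on early audio while the speaker is still talking, which is what the following steps rely on.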
Illustratively, a streaming-audio request can be made through an appropriate software call to the operating system running on client PC 140, so that streaming audio from the input microphone is digitized by the sound processor of client PC 140. The digitized streaming audio from the microphone input is then passed along to ASR client 130. ASR client 130 then initiates transmission of the streaming digitized audio to ASR server 100 in step 503; like the microphone audio input, the digitized audio is sent to ASR server 100 "on the fly" even as speech input continues. In step 504, ASR server 100 performs speech recognition on the streaming digitized audio as the audio is received from ASR client 130. Speech recognition is performed using known recognition algorithms, such as those employed by the AT&T WATSON speech recognition engine, and is carried out within the constraints of the selected grammar as defined by the activated rule.
In step 505, ASR server 100 returns streaming text (i.e., partially recognized speech) as the input speech is recognized. Thus, as ASR server 100 reaches its initial results, it returns those results to ASR client 130 even while ASR server 100 continues to process the additional streaming audio being sent by ASR client 130. This process of returning recognized text "on the fly" allows ASR client 130 (or the application interfacing with ASR client 130) to provide feedback to the speaker. As ASR server 100 continues to process the additional streaming input audio, it may correct its initial speech recognition results, so that the returned text may actually update (or correct) portions of the text already returned to ASR client 130 as part of the speech recognition task. Once all of the streaming audio has been received from ASR client 130, the ASR server completes its speech recognition processing and, in step 506, returns a final version of the recognized text (including corrections). In step 507, ASR client 130 receives the recognized text from ASR server 100 and returns the text to the application in step 508. Again, this can be done "on the fly" as the recognized text comes in, with the ASR client passing along to the application any corrections of the recognized text received from ASR server 100.

Referring to the pizza-ordering example, once the ORDER rule has been activated and the application notified, ASR client 130 will receive the request to initiate speech recognition and will initiate streaming of the microphone's input audio. The speaker may be prompted to order the pizza, and once he starts speaking, ASR client 130 sends the digitized streaming audio to ASR server 100.
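The on-the-fly return of partial text, including later corrections to words already sent, might be sketched as follows. The `(position, word)` result format is an invented simplification: a repeated position means the server has revised an earlier hypothesis, and the client always keeps the latest word for each position.

```python
# Hypothetical sketch of streaming recognition results with corrections:
# the server emits (position, word) pairs as it recognizes speech; a
# repeated position replaces (corrects) a word already returned.
def recognize_streaming(word_hypotheses):
    """Simulated server: yield results as they become available."""
    for pos, word in word_hypotheses:
        yield pos, word

hypothesis = {}
results = [
    (0, "big"), (1, "pizza"),
    (0, "large"),                 # correction of the first word
    (2, "with"), (3, "sausage"), (4, "pepperoni"),
]
for pos, word in recognize_streaming(results):
    hypothesis[pos] = word        # client updates the displayed text

final_text = " ".join(hypothesis[i] for i in sorted(hypothesis))
```

Here the client first displays "big pizza" and later revises it, ending with the corrected final transcript once the stream is exhausted.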
Thus, when the speaker states, for example, that he wants to order a "large pizza with sausage and pepperoni," ASR client 130 will have sent the digitized streaming audio for the first words of the order along to ASR server 100 even while later words are being spoken. ASR server 100, as the order is spoken, will return the first word as the text "large" while the rest of the order is still being spoken. Finally, once the speaker stops talking, the final recognized text for the order, "large pizza with sausage and pepperoni," can be returned to ASR client 130 and, in turn, to the application.

An alternative embodiment for carrying out the speech recognition process in accordance with the present invention is shown in FIGURE 6. In a manner similar to the speech recognition process shown in FIGURE 5, in step 601 ASR client 130 receives the application's request to initiate a speech recognition task, and in step 602 ASR client 130 requests streaming audio from the audio input of client PC 140. The digitized streaming audio from the microphone input is then passed along to ASR client 130. In step 603, ASR client 130 compresses the digitized audio "on the fly" and then initiates transmission of the compressed streaming digitized audio to ASR server 100, while the speech input continues. In step 604, ASR server 100 decompresses the compressed audio received from ASR client 130 before performing speech recognition on the streaming digitized audio. As described above with reference to FIGURE 5, speech recognition is carried out within the constraints of the selected grammar as defined by the activated rule. In step 605, ASR server 100 returns streaming text (i.e., partially recognized speech) as the input speech is recognized.
Thus, ASR server 100 returns initial results to ASR client 130 even while ASR server 100 continues to process the additional compressed streaming audio being sent by ASR client 130, and it may update or correct portions of the text already returned to ASR client 130 as part of the speech recognition task. Once all of the streaming audio has been received from ASR client 130, the ASR server completes its speech recognition processing and returns the final version of the recognized text (including corrections) in step 606. ASR client 130 receives the recognized text from ASR server 100 in step 607 as it comes in and returns the text to the application in step 608.

Another alternative embodiment for carrying out the speech recognition process in accordance with the present invention is shown in FIGURE 7. Similarly to the speech recognition processes shown in FIGURES 5 and 6, in step 701 ASR client 130 receives the application's request to initiate a speech recognition task and, in step 702, ASR client 130 requests streaming audio from the audio input of client PC 140. The digitized streaming audio from the microphone input is then passed along to ASR client 130. In step 703, ASR client 130 processes the digitized audio "on the fly" to extract features useful for the speech recognition process and then initiates transmission of the extracted features to ASR server 100, while the speech input continues. Extracting the relevant features of the speech involves a grammar-independent process that is typically part of the algorithms used for speech recognition, and it may be carried out using methods known to those skilled in the art, such as those based on linear predictive coding (LPC) or Mel filter bank processing.
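A toy version of client-side feature extraction might look like the following: the audio is cut into fixed-size frames, one feature per frame is computed, and near-silent frames are simply not sent. The log-energy feature is an assumption standing in for the LPC or Mel filter bank features named above; frame size and threshold are likewise illustrative.

```python
# Hypothetical sketch of client-side feature extraction (FIGURE 7):
# frame the samples, compute a simple log-energy feature per frame,
# and drop near-silent frames so nothing is sent during silence.
import math

def extract_features(samples, frame_size=4, silence_threshold=1.0):
    features = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        energy = math.log1p(sum(s * s for s in frame))
        if energy > silence_threshold:   # silence: nothing worth sending
            features.append(energy)
    return features

# Simulated digitized samples: silence, a burst of speech, silence.
speech = [0, 0, 0, 0, 50, -60, 40, -30, 0, 0, 0, 0]
feats = extract_features(speech)
```

Of the three frames in the simulated input, only the middle (voiced) frame produces a feature, so only that frame's data would cross the network.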
Feature extraction provides information obtained from characteristics of the speech signal while eliminating unnecessary information, such as volume. After receiving the features extracted by ASR client 130, ASR server 100 in step 704 performs speech recognition on the incoming features as they arrive "on the fly" (i.e., in a manner analogous to streaming audio). Speech recognition takes place within the constraints of the selected grammar as defined by the activated rule. As with the embodiments discussed above with reference to FIGURES 5 and 6, in step 705 ASR server 100 returns streaming text (i.e., partially recognized speech) to ASR client 130 as the input features are recognized. ASR server 100 continues to process the additional extracted features being sent by ASR client 130, and it may update or correct portions of the text already returned to ASR client 130. The ASR server completes its speech recognition processing upon receipt of all of the features extracted by ASR client 130, and returns a final version of the recognized text (including corrections) in step 706. ASR client 130 receives the recognized text from ASR server 100 in step 707 as it comes in and returns the text to the application in step 708.

The alternative embodiments described above with respect to FIGURES 6 and 7 each provide additional processing at the client end. For the embodiment in FIGURE 6, this comprises compression of the streaming audio (with decompression of the audio at the server end); for the embodiment in FIGURE 7, this comprises part of the speech recognition processing, in the form of feature extraction. Using such additional processing at the client end significantly reduces the amount of data transmitted from ASR client 130 to ASR server 100: less data is required to represent the speech signal being transmitted.
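A back-of-envelope comparison makes the data reduction concrete. All the rates below are illustrative assumptions (telephone-quality PCM audio versus a small per-frame feature vector), not figures from the patent.

```python
# Hypothetical arithmetic: bytes needed to represent one second of
# speech as raw digitized audio versus as compact feature vectors.
SAMPLE_RATE = 8000        # samples/second (telephone-quality audio)
BYTES_PER_SAMPLE = 2      # 16-bit linear PCM

FRAMES_PER_SECOND = 100   # one feature vector every 10 ms
FEATURES_PER_FRAME = 13   # e.g., a small cepstral vector
BYTES_PER_FEATURE = 4     # 32-bit float per feature

raw_bytes = SAMPLE_RATE * BYTES_PER_SAMPLE
feature_bytes = FRAMES_PER_SECOND * FEATURES_PER_FRAME * BYTES_PER_FEATURE
reduction = raw_bytes / feature_bytes   # before any silence skipping
```

Under these assumptions, features cut the per-second payload roughly threefold even before silent frames are dropped, which is where the bandwidth and transmission-time benefits discussed next come from.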
Where feature extraction takes place at the client end, such benefits potentially increase markedly, because the extracted features (as opposed to digitized speech signals) require less data, and no features need be sent during periods of silence. This data reduction produces a desirable double benefit: (1) it permits a reduction in the bandwidth required to achieve a given level of performance, and (2) it reduces the transmission time needed to send the speech data from the ASR client to the ASR server across the TCP/IP connection. Although a grammar rule will typically be activated prior to the start of transmission of speech information from the ASR client 130 to the ASR server 100, activation of the rule could take place after some or all of the speech information to be recognized has been sent from the ASR client 130 to the ASR server 100. In such circumstances, the ASR server 100 could postpone speech recognition efforts until the grammar rule has been activated. The speech sent by the ASR client 130 before activation of a grammar rule could be stored temporarily by the ASR server 100 to be processed by the recognizer or, alternatively, such speech could be ignored. In addition, multiple speech recognition tasks can be performed using the techniques of the present invention. For example, an ASR application could request that the ASR client 130 instruct the ASR server 100 to load a canned grammar for a telephone number (for example, "TELEPHONE NUMBER") and then request activation of a rule covering spoken numbers. 
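The control exchange described above, in which the client asks the server to load a canned grammar, activate a rule, and then stream speech, might be framed as a short sequence of client-to-server messages. The verbs, the rule name `SPOKEN-NUMBER`, and the JSON framing below are all assumptions made for illustration; the text specifies only the operations themselves.

```python
import json

def make_request(verb, payload):
    """Frame one client-to-server control message as a JSON string.
    The wire format is hypothetical; the patent text only requires
    that the client can convey these operations to the server."""
    return json.dumps({"verb": verb, "payload": payload})

# A session combining two recognition tasks, as in the text: a
# telephone-number prompt followed by a pizza order.
session = [
    make_request("LOAD_CANNED_GRAMMAR", "TELEPHONE NUMBER"),
    make_request("ACTIVATE_RULE", "SPOKEN-NUMBER"),  # illustrative rule name
    make_request("LOAD_GRAMMAR", "PIZZA"),
    make_request("ACTIVATE_RULE", "ORDER"),
]
```

Between the two ACTIVATE_RULE messages the client would stream the digitized or feature-extracted speech for each task, and the server would answer with recognized text as in the figures.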
After a telephone number is spoken and recognized in accordance with the present invention (for example, in response to a prompt to speak the telephone number, the ASR client 130 sends the digitized spoken numbers to the ASR server 100 for recognition), the ASR application could then ask the ASR client 130 to set up and initiate recognition of a pizza order (for example, load the PIZZA grammar, activate the ORDER rule, and initiate speech recognition) in accordance with the examples described above with reference to FIGURES 2-5. In addition to the simple pizza-ordering example used above for illustration, a wide array of potential ASR services can be provided over a packet network in accordance with the present invention. An example of an ASR application enabled by the present invention is a form-filling service for completing a form in response to spoken answers providing the information required for each of a number of blanks in the form. In accordance with the present invention, a form-filling service can be implemented wherein the ASR client 130 sends to the ASR server 100 grammars representing the possible choices for each of the blanks. For each blank, the ASR client 130 requests activation of the appropriate grammar rule and sends a corresponding spoken answer given in response to a prompt for the information needed to complete the blank. The ASR server 100 applies an appropriate speech recognition algorithm in accordance with the selected grammar and rule, and returns the text to be inserted in the form. Other ASR services may involve an exchange of information (e.g., a dialogue) between the server and the client. For example, an ASR service application for handling flight reservations may, in accordance with the present invention as described herein, use a dialogue between the ASR server 100 and the ASR client 130 to accomplish the ASR task. 
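The form-filling service described above reduces to a loop over blanks: for each blank, activate the field's grammar rule, send the spoken answer, and insert the returned text. The sketch below substitutes a toy dictionary lookup for the server round trip; the field names, rules, and vocabularies are invented for illustration.

```python
def fill_form(fields, recognize):
    """For each blank, obtain text via one recognition round trip and
    insert it into the form. `recognize(rule, spoken)` stands in for
    the client's exchange with the ASR server."""
    completed = {}
    for name, rule, spoken in fields:
        completed[name] = recognize(rule, spoken)  # one round trip per blank
    return completed

# Toy stand-in for the server: recognition is constrained to the
# choices allowed by the activated grammar rule, as in the text.
GRAMMARS = {"CITY": {"boston", "los angeles"}, "DAY": {"tuesday", "friday"}}

def toy_recognize(rule, spoken):
    return spoken if spoken in GRAMMARS[rule] else "<unrecognized>"
```

Because each blank has its own rule, the server can reject answers outside the expected vocabulary rather than guessing at open-ended speech.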
A dialogue could proceed as follows:

Speaker (through the ASR client 130 to the ASR server 100): "I want a flight to Los Angeles."

Response of the ASR server to the ASR client (in the form of text or, alternatively, speech returned by the ASR server 100 to the ASR client 130): "What city will you be leaving from?"

Speaker (through the ASR client to the ASR server): "Washington, DC."

Response of the ASR server to the ASR client: "What day do you want to leave?"

Speaker (through the ASR client to the ASR server): "Tuesday."

Response of the ASR server to the ASR client: "What time do you want to leave?"

Speaker (through the ASR client to the ASR server): "At 4 o'clock in the afternoon."

Response of the ASR server to the ASR client: "I can book you on XYZ Airline flight 4567 from Washington, DC to Los Angeles on Tuesday at 4 o'clock PM. Would you like to reserve a seat on this flight?"

In this case, the information received from the ASR server 100 is not literally the text of the recognized speech, but rather information based on the recognized speech (which will depend on the application). Each part of the dialogue can be carried out in accordance with the client-server ASR method described above. As can be seen from this example, such an ASR service application requires that the ASR client and the ASR server not only have the ability to handle natural language, but also have access to a large database that is constantly changing. To achieve this, it may be desirable to have the ASR service application actually installed and running on a server node 110, rather than on the client PC 140. The client PC 140 would, in this case, merely have to execute a relatively small "agent" program that, under the control of the application program running on the server node 110, starts the ASR client 130 and shepherds the speech input through the ASR client 130 along to the ASR server 100. An example of such an "agent" program may be, for example, one that places a "talking head" on the screen of the client PC 140 to assist the interaction between an individual using the ASR service application on the client PC 140 and, through the ASR client 130 and the ASR server 100, sends the person's spoken information along to the ASR server 100 for recognition. In summary, the present invention provides a way of providing ASR services that can be made available to users over a packet network, such as the Internet, from a location remote from the system hosting an ASR engine, using a client-server architecture. What has been described is merely illustrative of the application of the principles of the present invention. Other arrangements and methods can be implemented by those skilled in the art without departing from the spirit and scope of the present invention.
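On the server side, the flight-reservation dialogue above amounts to slot filling: ask for each missing piece of information in turn, then confirm the assembled reservation. A minimal sketch, with invented slot names and prompts loosely mirroring the example:

```python
# Prompts asked in order until every slot is filled; names are illustrative.
PROMPTS = {
    "destination": "What city are you flying to?",
    "origin": "What city will you be leaving from?",
    "day": "What day do you want to leave?",
    "time": "What time do you want to leave?",
}

def next_prompt(slots):
    """Return the next question in the dialogue, or a confirmation
    once every slot has been filled by recognized speech."""
    for slot, prompt in PROMPTS.items():
        if slot not in slots:
            return prompt
    return ("I can book you on a flight from {origin} to {destination} "
            "on {day} at {time}. Would you like to reserve a seat?").format(**slots)
```

Each answer is recognized under a grammar rule appropriate to the pending slot (a city grammar, a day grammar, and so on), so the exchange is a sequence of the activate-rule/recognize cycles described earlier.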
It is noted that, as of this date, the best method known to the applicant for carrying out the aforementioned invention is that which is clear from the present description of the invention. Having thus described the invention, property is claimed as contained in the following claims:
Claims (59)
1. A method for operating an automatic speech recognition service accessible by a client over a packet network, characterized in that it comprises the steps of: a. receiving from the client over the packet network information corresponding to a grammar used for speech recognition; b. receiving from the client over the network information representing the voice; c. recognizing the received voice information by applying an automatic speech recognition algorithm in accordance with the grammar; and d. sending information based on the recognized voice over the packet network to the client.
2. The method according to claim 1, characterized in that it further comprises the step of, if the information corresponding to a grammar is an address corresponding to the location of a grammar, accessing the grammar located at the corresponding grammar address.
3. The method according to claim 2, characterized in that the address corresponding to the location of a grammar is an address compatible with the uniform resource locator.
4. The method according to claim 1, characterized in that the information representing the voice arrives from the client in the form of a stream.
5. The method according to claim 1, characterized in that the information representing the voice received from the client comprises digitized voice.
6. The method according to claim 1, characterized in that the information representing the voice received from the client comprises compressed digitized speech.
7. The method according to claim 1, characterized in that the information representing the voice received from the client comprises features extracted by the client from the digitized voice.
8. The method according to claim 1, characterized in that the step of recognizing the received voice information is repeated as new voice information is received from the client.
9. The method according to claim 1, characterized in that the information based on the recognized voice comprises text information.
10. The method according to claim 1, characterized in that the information based on the recognized voice comprises additional voice.
11. The method according to claim 1, characterized in that the step of sending information based on the recognized voice is repeated as additional voice information is recognized.
12. The method according to claim 11, characterized in that it further comprises the step of sending to the client revised information based on the previously recognized voice sent to the client.
13. The method according to claim 1, characterized in that steps b, c and d are repeated to create an exchange of information between the client and the server.
14. The method according to claim 13, characterized in that the information based on the recognized voice comprises text information.
15. The method according to claim 13, characterized in that the information based on the recognized voice comprises additional voice.
16. The method according to claim 1, characterized in that it also comprises the step of activating a grammar rule in response to a request received from the client on the packet network.
17. The method according to claim 1, characterized in that it further comprises the step of sending a handle corresponding to the grammar to the client over the network.
18. A system for operating an automatic voice recognition service accessible by a client over a packet network, characterized in that it comprises: a. a programmable processor; b. memory; c. an audio input device; and d. a communication interface for establishing a communication link with the client over the packet network; wherein the processor is programmed to execute the steps of: i. receiving from the client over the packet network information corresponding to a grammar used for speech recognition; ii. receiving from the client over the network information representing the voice; iii. recognizing the received voice information by applying an automatic speech recognition algorithm in accordance with the grammar; and iv. sending information based on the recognized voice over the packet network to the client.
19. The system according to claim 18, characterized in that the processor is further programmed to execute the step of, if the information corresponding to a grammar is an address corresponding to the location of a grammar, accessing the grammar located at the corresponding grammar address.
20. The system according to claim 19, characterized in that the address corresponding to the location of a grammar is an address compatible with the uniform resource locator.
21. The system according to claim 18, characterized in that the information representing the voice arrives from the client in the form of a stream.
22. The system according to claim 18, characterized in that the information representing the voice received from the client comprises digitized voice.
23. The system according to claim 18, characterized in that the information representing the voice received from the client comprises compressed digitized speech.
24. The system according to claim 18, characterized in that the information representing the voice received from the client comprises features extracted by the client from the digitized voice.
25. The system according to claim 18, characterized in that the processor is further programmed to repeat the step of recognizing the received voice information as new voice information is received from the client.
26. The system according to claim 18, characterized in that the information based on the recognized voice comprises text information.
27. The system according to claim 18, characterized in that the information based on the recognized voice comprises additional voice.
28. The system according to claim 18, characterized in that the processor is further programmed to repeat the step of sending information based on the recognized voice as additional voice information is recognized.
29. The system according to claim 28, characterized in that the processor is further programmed to execute the step of sending to the client revised information based on the previously recognized voice sent to the client.
30. The system according to claim 18, wherein the processor is further programmed to repeat steps ii, iii and iv to create an exchange of information between the client and the server.
31. The system according to claim 30, characterized in that the information based on the recognized voice comprises text information.
32. The system according to claim 30, characterized in that the information based on the recognized voice comprises additional voice.
33. The system according to claim 18, characterized in that the processor is further programmed to execute the step of activating a grammar rule in response to a request received from the client on the packet network.
34. The system according to claim 18, characterized in that the processor is further programmed to execute the step of sending over the packet network to the client a handle corresponding to the grammar.
35. An article of manufacture, comprising a computer-readable medium having instructions stored thereon for operating an automatic voice recognition service accessible by a client over a packet network, which instructions, when executed by a processor, cause the processor to execute a series of steps characterized in that they include: a. receiving from the client over the packet network information corresponding to a grammar used for speech recognition; b. receiving from the client over the packet network information representing the voice; c. recognizing the received voice information by applying an automatic speech recognition algorithm in accordance with the grammar; and d. sending information based on the recognized voice over the packet network to the client.
36. The article of manufacture according to claim 35, characterized in that the instructions, when executed by a processor, further cause the processor to execute the step of, if the information corresponding to a grammar is an address corresponding to the location of a grammar, accessing the grammar located at the corresponding grammar address.
37. The article of manufacture according to claim 36, characterized in that the address corresponding to the location of a grammar is an address compatible with the uniform resource locator.
38. The article of manufacture according to claim 35, characterized in that the information representing the voice arrives from the client in the form of a stream.
39. The article of manufacture according to claim 35, characterized in that the information representing the voice received from the client comprises digitized voice.
40. The article of manufacture according to claim 35, characterized in that the information representing the voice received from the client comprises compressed digitized speech.
41. The article of manufacture according to claim 35, characterized in that the information representing the voice received from the client comprises features extracted by the client from the digitized voice.
42. The article of manufacture according to claim 35, characterized in that the instructions, when executed by a processor, further cause the processor to repeat the step of recognizing the received voice information as new voice information is received from the client.
43. The article of manufacture according to claim 35, characterized in that the information based on the recognized voice comprises text information.
44. The article of manufacture according to claim 35, characterized in that the information based on the recognized voice comprises additional voice.
45. The article of manufacture according to claim 35, characterized in that the instructions, when executed by a processor, further cause the processor to repeat the step of sending information based on the recognized voice as additional voice information is recognized.
46. The article of manufacture according to claim 45, characterized in that the instructions, when executed by a processor, further cause the processor to execute the step of sending to the client revised information based on the previously recognized voice sent to the client.
47. The article of manufacture according to claim 35, characterized in that the instructions, when executed by a processor, further cause the processor to repeat steps b, c and d to create an exchange of information between the client and the server.
48. The article of manufacture according to claim 47, characterized in that the information based on the recognized voice comprises text information.
49. The article of manufacture according to claim 47, characterized in that the information based on the recognized voice comprises additional voice.
50. The article of manufacture according to claim 35, characterized in that the instructions, when executed by a processor, further cause the processor to execute the step of activating a grammar rule in response to a request received from the client over the packet network.
51. The article of manufacture according to claim 35, characterized in that the instructions, when executed by a processor, further cause the processor to execute the step of sending over the packet network to the client a handle corresponding to the grammar.
52. A method for operating an automatic form-filling service accessible by a client over a packet network, characterized in that it comprises the steps of: a. receiving from the client over the packet network information corresponding to a grammar used for speech recognition, wherein the grammar corresponds to words associated with the text information to be inserted in the form; b. receiving from the client over the network information representing the voice; c. recognizing the received voice information by applying an automatic speech recognition algorithm in accordance with the grammar; and d. sending the text corresponding to the recognized voice over the packet network to the client for insertion in the form.
53. The method according to claim 52, characterized in that it further comprises the step of, if the information corresponding to a grammar is an address corresponding to the location of a grammar, accessing the grammar located at the corresponding grammar address.
54. The method according to claim 53, characterized in that the address corresponding to the location of a grammar is an address compatible with the uniform resource locator.
55. The method according to claim 52, characterized in that the information representing the voice received from the client comprises digitized voice.
56. The method according to claim 52, characterized in that the information representing the voice received from the client comprises compressed digitized voice.
57. The method according to claim 52, characterized in that the information representing the voice received from the client comprises features extracted by the client from the digitized voice.
58. The method according to claim 52, characterized in that it also comprises the step of activating a grammar rule in response to a request received from a client on the packet network.
59. The method according to claim 52, characterized in that it further comprises the step of sending a handle corresponding to the grammar to the client over the packet network.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/833,210 US6078886A (en) | 1997-04-14 | 1997-04-14 | System and method for providing remote automatic speech recognition services via a packet network |
US08833210 | 1997-04-14 |
Publications (2)
Publication Number | Publication Date |
---|---|
MX9802754A MX9802754A (en) | 1998-12-31 |
MXPA98002754A true MXPA98002754A (en) | 1999-02-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6366886B1 (en) | System and method for providing remote automatic speech recognition services via a packet network | |
US6856960B1 (en) | System and method for providing remote automatic speech recognition and text-to-speech services via a packet network | |
US9065914B2 (en) | System and method of providing generated speech via a network | |
CA2345660C (en) | System and method for providing network coordinated conversational services | |
US7519536B2 (en) | System and method for providing network coordinated conversational services | |
US6192338B1 (en) | Natural language knowledge servers as network resources | |
US7496516B2 (en) | Open architecture for a voice user interface | |
US20050091057A1 (en) | Voice application development methodology | |
US20050033582A1 (en) | Spoken language interface | |
US8027839B2 (en) | Using an automated speech application environment to automatically provide text exchange services | |
WO2002091364A1 (en) | Dynamic generation of voice application information from a web server | |
MXPA98002754A (en) | System and method for providing remote automatic voice recognition services via a network | |
Pargellis et al. | A language for creating speech applications | |