US20060143015A1 - System and method for facilitating call routing using speech recognition - Google Patents

System and method for facilitating call routing using speech recognition

Info

Publication number
US20060143015A1
Authority
US
United States
Prior art keywords
user
action
user utterances
routing
utterances
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/363,456
Inventor
Benjamin Knott
Robert Bushey
John Martin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Labs Inc
Interactions LLC
AT&T Alex Holdings LLC
Original Assignee
SBC Technology Resources Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SBC Technology Resources Inc
Priority to US11/363,456
Publication of US20060143015A1
Assigned to SBC KNOWLEDGE VENTURES, L.P. Assignment of assignors interest. Assignors: BUSHEY, ROBERT R.; KNOTT, BENJAMIN A.; MARTIN, JOHN M.
Assigned to AT&T KNOWLEDGE VENTURES, L.P. Change of name. Assignor: SBC KNOWLEDGE VENTURES, L.P.
Priority to US11/834,520 (US7653549B2)
Priority to US12/634,434 (US8112282B2)
Assigned to AT&T INTELLECTUAL PROPERTY I, L.P. Change of name. Assignor: AT&T KNOWLEDGE VENTURES, L.P.
Assigned to AT&T ALEX HOLDINGS, LLC. Assignment of assignors interest. Assignor: AT&T INTELLECTUAL PROPERTY I, L.P.
Assigned to INTERACTIONS LLC. Assignment of assignors interest. Assignor: AT&T ALEX HOLDINGS, LLC
Assigned to ORIX VENTURES, LLC. Security interest. Assignor: INTERACTIONS LLC
Assigned to SILICON VALLEY BANK. First amendment to intellectual property security agreement. Assignor: INTERACTIONS LLC
Assigned to SILICON VALLEY BANK. Intellectual property security agreement. Assignor: INTERACTIONS LLC
Assigned to INTERACTIONS LLC and INTERACTIONS CORPORATION. Termination and release of security interest in intellectual property. Assignor: ORIX GROWTH CAPITAL, LLC
Assigned to INTERACTIONS LLC. Release of security interest in intellectual property recorded at reel/frame 049388/0082. Assignor: SILICON VALLEY BANK
Assigned to INTERACTIONS LLC. Release of security interest in intellectual property recorded at reel/frame 036100/0925. Assignor: SILICON VALLEY BANK
Current legal status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/487 Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M 3/493 Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H04M 3/4936 Speech interaction details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M 2201/40 Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/487 Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M 3/493 Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/54 Arrangements for diverting calls for one subscriber to another predetermined subscriber

Definitions

  • the present invention relates generally to speech-enabled applications and, more particularly, to a system and method for optimizing prompts for speech-enabled applications.
  • In order for an Automatic Call Routing (ACR) application to properly route calls, the ACR generally must interpret the intent of the customer, identify the type or category of the customer call, and identify the correct routing destination for the call type.
  • An ACR application may attempt to match one or more words in a statement by a customer to a particular pre-defined action to be taken by the ACR application.
  • a computer-implemented method for optimizing prompts for a speech-enabled application.
  • the speech-enabled application is operable to receive communications from a number of users and communicate one or more prompts to each user to elicit a response from the user that indicates the purpose of the user's communication.
  • the method includes determining a number of prompt alternatives (each including one or more prompts) to evaluate and determining an evaluation period for each prompt alternative.
  • the method also includes automatically presenting each prompt alternative to users during the associated evaluation period and automatically recording the results of user responses to each prompt alternative.
  • the method includes automatically analyzing the recorded results for each prompt alternative based on one or more performance criteria and automatically implementing one of the prompt alternatives based on the analysis of the recorded results.
  • Technical advantages of particular embodiments of the present invention include a method and system for optimizing prompts for speech-enabled applications that improve the operation of such applications. For example, particular embodiments automate the evaluation of various prompts or other user instructions for speech-enabled applications and then automatically implement the most effective prompt(s). Such embodiments can automatically present numerous prompt variations to users, evaluate the impact of each prompt on some measure of system performance, and adopt the prompt(s) that lead to the best system performance. This automation of prompt evaluation and implementation can reduce development time and ensure high system performance.
  • FIG. 1 is a block diagram depicting an example embodiment of a service center system according to one embodiment of the present invention
  • FIG. 2 is a flow diagram depicting an example automatic call routing method according to one embodiment of the present invention
  • FIG. 3 is a diagram depicting an example embodiment of an automatic call router action-object matrix according to one embodiment of the present invention.
  • FIG. 4 is a flow diagram depicting an example method for optimizing prompts for speech-enabled applications according to one embodiment of the present invention.
  • FIG. 1 is a block diagram depicting an example embodiment of a service center system 100 according to one embodiment of the present invention.
  • System 100 enables users to conduct transactions via a service center 102 .
  • service center 102 may be a customer service call center for a telephone services company.
  • the present invention may be used in conjunction with any other types of call centers, as well as with any systems that use speech recognition to perform an action or otherwise facilitate an action in response to speech input of a user.
  • the term “transaction” or its variants refers to any action that a user desires to perform in conjunction with or have performed by service center 102 .
  • the example service center 102 includes one or more computing apparatuses 104 that are operably coupled to one or more transaction processing service solutions 106 . Included in computing apparatus 104 is a processor 108 . Operably coupled to processor 108 of computing apparatus 104 is a memory 110 . Computing apparatus 104 employs processor 108 and memory 110 to execute and store, respectively, one or more instructions of a program of instructions (i.e., software).
  • Communication interface 112 is preferably operable to couple computing apparatus 104 and/or service center 102 to an internal and/or external communication network 114 .
  • Communication network 114 may be the public-switched telephone network (PSTN), a cable network, an internet protocol (IP) network, a wireless network, a hybrid cable/PSTN network, a hybrid IP/PSTN network, a hybrid wireless/PSTN network, the Internet, and/or any other suitable communication network or combination of communication networks.
  • Communication interface 112 preferably cooperates with communication network 114 and one or more user communication devices 116 to permit a user associated with each user communication device 116 to conduct transactions via service center 102 .
  • User communication device 116 may be a wireless or wireline telephone, a dial-up modem, a cable modem, a DSL modem, a phone set, fax equipment, an answering machine, a set-top box, a television, POS (point-of-sale) equipment, a PBX (private branch exchange) system, a personal computer, a laptop computer, a personal digital assistant (PDA), other nascent technologies, or any other appropriate type or combination of communication equipment available to a user.
  • Communication device 116 may be equipped for connectivity to communication network 114 via a PSTN, DSL, cable network, wireless network, or any other appropriate communications channel.
  • service center 102 permits a user to request, using speech, processing or performance of one or more transactions by service solutions 106 .
  • computing apparatus 104 may include or have access to one or more storage devices 118 including one or more programs of instructions operable to interpret user intent from the user's speech, identify a solution sought by the user, and route the user to an appropriate service solution 106 .
  • storage 118 includes an action-object matrix 120 , a look-up table 122 , utterance storage 124 , a prompt library 126 , one or more speech recognition modules (such as a statistical language modeling engine 128 ), and one or more dialog modules 129 . Furthermore, to analyze and optimize the performance of the prompts used by service center 102 , storage 118 also includes a prompt test control module 144 and a prompt test analysis module 146 . Additional details regarding the operation and cooperation of the various components included in storage 118 will be discussed in greater detail below.
  • computing apparatus 104 is communicatively coupled to one or more connection switches or redirect devices 130 .
  • Connection switch or redirect device 130 enables computing apparatus 104 , upon determining an appropriate destination for the processing of a user-selected transaction, to route the user via communication network 132 and, optionally, one or more switches 134 , to an appropriate service agent or module of service solutions 106 .
  • Service solutions 106 preferably include a plurality of service agents or modules operable to perform one or more operations in association with the processing of a selected user transaction.
  • service solutions 106 may include one or more service agents or modules operable to perform billing service solutions 136 , repair service solutions 138 , options service solutions 140 , how-to-use service solutions 142 , as well as any other appropriate service solutions.
  • the service agents or modules implemented in or associated with service solutions 106 may include, but are not limited to, automated or self-service data processing apparatuses, live technician support (human support), or combinations thereof.
  • FIG. 2 illustrates an example method 150 for speech-enabled call routing using an action-object matrix according to one embodiment of the present invention.
  • Method 150 of FIG. 2 may be implemented in one or more computing apparatuses 104 of one or more service centers 102 . As such, the method will be described with reference to the operation of service center 102 of FIG. 1 .
  • Upon initialization of service center 102 at step 152 , method 150 proceeds to step 154 where service center 102 provides for and awaits an incoming communication from a user communication device 116 via communication network 114 .
  • a user may connect with service center 102 in any other suitable manner.
  • Upon detection of an incoming contact at step 154 , method 150 preferably proceeds to step 156 where a communication connection with the user communication device 116 is established.
  • establishing a communication connection with an incoming contact from a user at step 156 may include, but is not limited to, receiving a user phone call via a PSTN or other wireline network, a wireless network, or any of numerous other communication networks.
  • At step 158 , one or more prompts, announcements, or other instructions to the user (collectively referred to herein as “prompts”) are communicated to the user of user communication device 116 .
  • the communication of one or more prompts is aimed at eliciting a request from the user for the processing of one or more transactions or operations.
  • dialog module 129 may access prompt library 126 of storage 118 to generate a user transaction selection prompt such as, “Thank you for calling our service center. Please tell me how we may help you today.”
  • any other suitable prompts designed to elicit a response from the user regarding a transaction that the user desires to be performed may be used.
  • a prompt will preferably serve to elicit an unambiguous statement of the user's intent. If the user utterance is ambiguous or incomplete, service center 102 will need to engage in additional dialog to clarify the user's intentions. For example, in response to an initial prompt from service center 102 , the most desirable outcome is for the user's response to the prompt to result in a “direct route” to the appropriate destination. However, if the user's response requires additional clarification (further user responses) before service center 102 can determine the appropriate destination, service center 102 will need to employ dialog module 129 to provide additional prompts to the user in an attempt to elicit an unambiguous statement from the user. This additional prompting increases costs (for example, by occupying incoming communication channels) and reduces customer satisfaction.
  • service center 102 awaits a user response to the communicated prompt.
  • method 150 preferably proceeds to step 162 where a natural language response (a user “utterance”) from the user responsive to the communicated prompt is preferably received.
  • Receipt of an utterance from a user may include storage of the user's utterance in utterance storage 124 of computing apparatus storage 118 . Permanent or temporary storage of a user utterance may enable and/or simplify the performance of speech recognition analysis thereon.
  • At step 164 , the user utterance is evaluated to interpret or identify an intent of the user and a requested transaction to be performed.
  • evaluation of a user utterance at step 164 may include the use of one or more speech recognition technologies, such as that available from statistical language modeling engine 128 of computing apparatus 104 .
  • statistical language modeling engine 128 may cooperate with utterance storage 124 in the evaluation of the user utterance.
  • statistical language modeling engine 128 may evaluate the user utterance received at step 162 in cooperation with action-object matrix 120 , which defines a number of different action-objects (and which is described in greater detail below in conjunction with FIG. 3 ).
  • the speech recognition technology preferably employed by computing apparatus 104 seeks to identify an action, an object or an action-object combination from the user utterance.
  • By creating a finite number of transaction options (i.e., action-objects) via action-object matrix 120 , proper routing of a user to a service agent or module 136 , 138 , 140 or 142 may be accomplished with improved efficiency (for example, substantially eliminating user routing errors and, therefore, user re-routing).
  • Each action-object in action-object matrix 120 defines a particular action to be taken and an object that is the subject of the action (in other words, a transaction to be performed).
  • the action-object “pay/bill” defines an action “pay” to be carried out on an object “bill.”
  • the assignment of an action-object to a user utterance enables efficient routing of the user to enable performance of a desired transaction.
  • statistical language modeling engine 128 may store and associate one or more salient action terms and one or more salient object terms with each action-object. The statistical language modeling engine 128 can then search for these salient terms in a user utterance to assign the user utterance to a particular action-object.
  • the salient terms may be the actual action and object of the action-object and/or the salient terms may be different from the action and object.
  • the action-object “pay/bill” may be associated with the salient action term “pay” and the salient object term “bill.”
  • the “pay/bill” action-object may be associated with the salient object terms “account” and “invoice.” Therefore, any user utterance including the term “pay” and at least one of the terms “bill,” “account” or “invoice” would preferably be associated with the “pay/bill” action-object. Multiple salient action terms could also or alternatively be associated with this action-object.
  • At least a portion of the user utterance evaluation performed at step 164 may include determining whether the user utterance includes a salient action term, a salient object term, or both a salient action term and a salient object term.
  • At step 176 , one or more additional prompts may be communicated to the user using dialog module 129 , using a different dialog module 129 than the module that communicated the initial prompt, or using any other suitable component.
  • the prompts presented at step 176 are preferably designed to elicit the selection of an object (via a salient object term) in a subsequent user utterance. For example, referring to the action-object matrix depicted in FIG. 3 , it may have been determined from the initial user utterance that the user desires to “inquire” about something.
  • computing apparatus 104 may cooperate with dialog module 129 , prompt library 126 and action-object matrix 120 to prompt the user for selection of an object associated with the “inquire” action.
  • objects associated with the “inquire” action include, in one embodiment, optional services, basic service, billing, cancellation, repair, payment, specials, and name and number.
  • the action-object matrix depicted generally in FIG. 3 is included primarily for purposes of illustration. As such, alternate embodiments of an action-object matrix (or embodiments not using an action-object matrix) may be implemented without departing from the spirit and scope of teachings of the present invention.
  • At step 178 , one or more prompts designed to elicit the selection of an action (via a salient action term) in a subsequent user utterance are communicated to the user.
  • computing apparatus 104 may cooperate with dialog module 129 , action-object matrix 120 and prompt library 126 to generate one or more prompts directed to eliciting user selection of an “action” associated with the bill “object”.
  • examples of actions associated with a “bill” object may include, in one embodiment, inquiry, information, fixing or repairing, and paying.
  • Method 150 may loop through steps 176 or 178 one or more times in an attempt to elicit an appropriate salient action term or an appropriate salient object term, respectively, for any desired number of loops. If evaluation of the user utterances does not lead to the utterance of a salient action term 168 or a salient object term 170 after a predetermined number of loops, if neither a salient action term 168 nor a salient object term 170 is identified (an “other” utterance 174 ), or if salient action terms 168 and/or salient object terms 170 associated with multiple action-objects are identified, then method 150 proceeds to step 180 where a disambiguation dialogue may be initiated and performed by dialog module 129 . In such an event, method 150 preferably provides for additional appropriate dialogue to be performed with the user in an effort to elicit a usable “action-object” combination from the user (for example, asking the user to be more specific in his or her request).
  • method 150 preferably returns to step 160 where a response may be awaited as described above. Method 150 then preferably proceeds through the operations at steps 162 and 164 until an “action-object” combination 172 has been elicited from the user in a user utterance. An escape sequence may also be included in method 150 where it has been determined that a user requires human assistance, for example.
  • At step 182 , computing apparatus 104 preferably cooperates with action-object matrix 120 and look-up table 122 to identify a preferred or proper routing destination for processing the user-selected transaction.
  • the routing destinations identified at step 182 may include routing destinations associated with the service agents or modules available in service solutions 106 .
  • service agents or modules 136 , 138 , 140 and 142 may include automated transaction processing available via computing apparatus 104 or a similar device, live support, or combinations thereof, as well as other suitable transaction processing options.
  • method 150 preferably proceeds to step 184 where the user connection is preferably routed to the appropriate destination indicated in look-up table 122 .
  • method 150 preferably proceeds to step 186 where one or more aspects of the user utterance or utterances are optionally forwarded to the service agent or module destination to which the caller and/or user connection is routed. For example, in particular embodiments, method 150 provides for the identified action-object to be forwarded to the service agent associated with the selected routing destination. In yet other embodiments, no information is forwarded and the user is simply routed to the appropriate destination. Following the routing of the user connection (and any forwarding of information), method 150 preferably returns to step 154 where another user connection is awaited.
  • an action-object matrix 120 according to one embodiment of the present invention is shown.
  • the example action-object matrix 120 shown in FIG. 3 includes a number of columns of actions 202 and a number of rows of objects 204 .
  • the intersection of an action column with an object row generally defines an action-object pair identifying a transaction available via service center 102 (for example, using one or more service modules or agents 136 , 138 , 140 and 142 ).
  • action-object matrix 120 is used in association with other components of service center 102 to interpret user intent and identify a desired transaction from a user utterance. For example, using actions 202 and objects 204 of action-object matrix 120 , in conjunction with the method 150 described above, a user utterance such as “How much do I owe on my bill?” may be evaluated to relate to the action-object “inquire/bill” 206 . In a further example, the user utterance, “I have a problem with a charge on my bill” may be associated with the action-object “fix-repair/bill” 208 .
  • the user utterance, “Where can I go to pay my phone bill?” may be associated with the action-object “where/payment” 210 .
  • the user utterance, “How do I set up Call Forwarding?” may be associated with the action-object “how-to-use/optional services” 212 .
  • the user utterance, “I'd like to get CallNotes” may be associated with the action-object “acquire/optional services” 214 .
  • service center 102 uses one or more salient action terms and one or more salient object terms associated with each action-object to associate a user utterance with the action-object.
  • the salient terms may be stored in association with action-object matrix 120 or elsewhere in service center 102 (or at a location remote to service center 102 ). If stored in association with action-object matrix 120 , the salient terms may be linked to particular action-objects, to particular actions (for salient action terms), or to particular objects (for salient object terms).
  • look-up table 122 is used to identify the routing destination associated with an identified action-object. For example, upon identifying action-object “inquire/bill” 206 from a user utterance, computing apparatus 104 may utilize action-object matrix 120 and look-up table 122 to determine that the appropriate routing destination for the “inquire/bill” action-object 206 is “Bill” service agent or module 136 .
  • computing apparatus 104 may determine that an appropriate routing destination for the user connection includes “Repair” service agent or module 138 . Additional implementations of associating a look-up table with an action-object matrix may be utilized without departing from the spirit and scope of teachings of the present invention.
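  • As an illustration only (the patent publishes no code), the cooperation of action-object matrix 120 and look-up table 122 at this step might be modeled as a simple dictionary lookup; the pairs and destination names below are hypothetical stand-ins built from the FIG. 3 examples:

```python
# Illustrative sketch: a hypothetical rendering of look-up table 122, keyed
# by (action, object) pairs from action-object matrix 120 and mapping to the
# service agents or modules of FIG. 1 (reference numerals in comments).
ROUTING_TABLE = {
    ("inquire", "bill"): "billing service",                      # module 136
    ("fix-repair", "bill"): "repair service",                    # module 138
    ("acquire", "optional services"): "options service",         # module 140
    ("how-to-use", "optional services"): "how-to-use service",   # module 142
}

def route_destination(action_object, default="live agent"):
    """Resolve an identified action-object to a routing destination."""
    return ROUTING_TABLE.get(action_object, default)

print(route_destination(("inquire", "bill")))  # -> billing service
```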
  • FIG. 4 is a flow diagram depicting an example method 300 for optimizing prompts for speech-enabled applications according to one embodiment of the present invention.
  • the example method 300 may be applied to any speech-enabled application that uses prompts (again, this term refers to prompts, announcements, or any other instructions provided to a user) to elicit a response to determine a user's intent (regardless of how the user's intent is determined).
  • At step 304 , a particular dialog module 129 to evaluate is selected.
  • a speech-enabled application, such as service center 102 , may include multiple dialog modules. For example, an application may have one dialog module 129 that provides the initial prompt (and any associated announcements) and may include other dialog modules that provide additional prompts to obtain more detailed or unambiguous responses from a user.
  • control of the prompt testing process may be performed by test control module 144 and thus module 144 may select a dialog module to evaluate.
  • test control module 144 may serially select each dialog module 129 for testing at particular intervals. This selection may be performed automatically based on a pre-determined configuration or may be based on input from a person configuring the test procedure.
  • Method 300 continues at step 306 where the prompt alternatives to evaluate for the selected dialog module are determined.
  • prompt library 126 may include multiple alternative prompts for the initial prompt provided by service center 102 .
  • Test control module 144 may access prompt library 126 to retrieve these alternative prompts and may determine which prompts are to be evaluated.
  • prompt library 126 may initially include several alternative initial prompts and test control module 144 may initially select all the alternative prompts for testing.
  • test control module 144 may eliminate certain prompts that performed poorly relative to the other prompts and may repeat the testing process on the remaining prompts if necessary.
  • the particular prompts to be evaluated may be determined based on input from a person configuring the test procedure.
  • one or more of the prompt alternatives may include a combination of prompts (including announcements, etc.) to be evaluated.
  • method 300 could be used to test an announcement followed by a prompt or a series of prompts. Therefore, at step 308 it is determined whether prompt combinations are being evaluated. If so, method 300 continues to step 310 where the order and particular combinations of the prompts to be tested are determined. For example, the same initial prompt could be tested with three different preceding announcements, or an announcement could be tested with three different prompts following the announcement. As another example, an initial prompt could be followed by pauses of different lengths (or no pause) before a series of example responses are provided to the user.
  • the order of two or more prompts may be tested, with each different order being a different alternative. Any particular combination and/or order of prompts may be tested. Furthermore, other variations of the way in which multiple prompts are played may be evaluated. The information regarding the different combinations to be tested may be provided to and stored by test control module 144 for use in executing the testing.
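  • As a brief sketch of how the combinations and orderings of steps 308 - 310 might be enumerated ahead of time and handed to test control module 144 , consider the following; the file names, pause values, and sequence representation are all hypothetical assumptions:

```python
from itertools import product

# Hypothetical building blocks for combination testing (steps 308-310):
# one initial prompt tested with different preceding announcements and
# different pause lengths before example responses are offered.
announcements = ["welcome_a.wav", "welcome_b.wav", None]  # None = no announcement
prompts = ["how_may_i_help.wav"]
pauses_seconds = [0.0, 1.0, 2.0]

# Each alternative is an ordered sequence of parts to play.
prompt_alternatives = [
    [part for part in (ann, pr, ("pause", pause)) if part is not None]
    for ann, pr, pause in product(announcements, prompts, pauses_seconds)
]
print(len(prompt_alternatives))  # -> 9 candidate sequences to evaluate
```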
  • At step 312 , the number of prompt cycles, length of time, and/or other characteristics of the evaluation period during which each prompt alternative is to be evaluated are determined. For example, a first prompt alternative “How may I help you?” may be used as the initial prompt for a one week period and then a second prompt alternative “What is the purpose of your call?” may be used for the following one week period. Alternatively, as an example, the first prompt alternative may be tested for the first one thousand prompt cycles (for example, calls from users) and the second prompt alternative may be tested for the next thousand prompt cycles.
  • prompt alternatives may be tested in an alternating fashion (for example, a first prompt may be tested for a single user, a second prompt may be tested on the next user, and the process may be repeated any desired number of times).
  • one or more of the alternative prompts may be tested for differing lengths of times, number of prompt cycles, or other evaluation periods as desired. Information about the testing period(s) to be used for each prompt alternative may be provided to and stored by test control module 144 for use in executing the testing.
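  • The evaluation-period schemes of step 312 can be sketched as small schedulers that a component such as test control module 144 might consult on each incoming call; this is one reading of the step, with illustrative names and values, not the patent's implementation:

```python
# Sketch of two evaluation-period schemes from step 312.
def by_cycle_count(call_index, alternatives, cycles_per_alternative=1000):
    """First N calls hear alternative 0, the next N alternative 1, and so on."""
    return alternatives[(call_index // cycles_per_alternative) % len(alternatives)]

def alternating(call_index, alternatives):
    """Each successive caller hears the next alternative in turn."""
    return alternatives[call_index % len(alternatives)]

alts = ["How may I help you?", "What is the purpose of your call?"]
print(by_cycle_count(999, alts))   # -> How may I help you?
print(by_cycle_count(1000, alts))  # -> What is the purpose of your call?
print(alternating(1, alts))        # -> What is the purpose of your call?
```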
  • Method 300 continues at step 314 , where a first prompt alternative is selected from the multiple prompt alternatives determined to be part of the evaluation at step 306 .
  • This step may be performed automatically by test control module 144 . For example, if the two prompt alternatives to be evaluated are “How may I help you?” and “What is the purpose of your call?”, then test control module 144 may select one of these alternatives to begin testing.
  • the selected prompt alternative is retrieved from prompt library 126 (or any other suitable location).
  • the prompt alternative may be an audio file (such as a .wav file) that is retrieved from prompt library 126 .
  • the retrieved prompt alternative is presented to a user.
  • a .wav file may be played for the user.
  • the playing or other presentation of the prompt alternative (which, again, may include a combination of prompts, announcements, pauses, etc.) is repeated for each user (such as callers calling into service center 102 ) during the evaluation period determined at step 312 .
  • the selected prompt may be retrieved and presented by the associated dialog module 129 , test control module 144 , or any combination thereof (and reference to dialog module 129 performing this task is meant to include any of these options).
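  • A minimal sketch of retrieving and presenting a prompt alternative (steps 316 through 318), assuming the sequence representation from the combination sketch above and a caller-supplied audio player; none of these names are prescribed by the patent:

```python
import time

def present_alternative(alternative, play_audio):
    """Play each part of a prompt alternative in order (steps 316-318).

    An alternative is a sequence of audio file names (e.g., .wav files
    retrieved from prompt library 126) and ("pause", seconds) tuples;
    play_audio is whatever platform-specific player the host system uses.
    """
    for part in alternative:
        if isinstance(part, tuple) and part[0] == "pause":
            time.sleep(part[1])
        else:
            play_audio(part)

# Usage with a stand-in player that only logs what would be played:
present_alternative(
    ["welcome_a.wav", "how_may_i_help.wav", ("pause", 0.5)],
    play_audio=lambda name: print("playing", name),
)
```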
  • test analysis module 146 or any other suitable component records the results of each user's actions taken in response to the prompt alternative. Any suitable results of the user's interaction with service center 102 or other application presenting a prompt can be recorded for later evaluation. For example, the user's actual response(s) to the prompt alternatives may be recorded. In addition or alternatively, whether the user's response included a salient action term and/or a salient object term (for example, based on an evaluation of the response by statistical language modeling engine 128 ) may be recorded.
  • test analysis module 146 or another suitable component may record whether the user's response resulted in a direct route or whether it required additional dialog from a dialog module 129 to clarify the user's intent.
  • the component such as test analysis module 146 , recording this information for evaluation can cooperate with any other suitable components of the system being analyzed to obtain this information.
  • test analysis module 146 may communicate with statistical language modeling engine 128 , action-object matrix 120 , dialog module(s) 129 , or any other appropriate components of service center 102 to record suitable information for evaluating the performance of a particular prompt alternative.
  • suitable performance criteria may be unrelated to the routing of users/callers to a destination.
  • a variety of performance measures may be evaluated, such as the percentage of timeouts (for example, when no response is received from a user), the percentage of “too much speech” (for example, the user says more than the speech engine can process), the percentage of utterances that are “in grammar” (for example, the utterance matches a defined grammar item in an engine that recognizes types of grammar), or the number of times a caller asks for “help.”
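  • Those measures suggest a simple per-call record; the following dataclass is a hypothetical sketch of what a component such as test analysis module 146 might store for each interaction:

```python
from dataclasses import dataclass

@dataclass
class CallResult:
    """One caller's interaction with a prompt alternative (hypothetical)."""
    alternative_id: str
    direct_route: bool      # routed without additional clarifying dialog
    timed_out: bool         # no response was received from the user
    too_much_speech: bool   # utterance exceeded what the speech engine can process
    in_grammar: bool        # utterance matched a defined grammar item
    help_requests: int      # number of times the caller asked for "help"

sample = CallResult("prompt_a", direct_route=True, timed_out=False,
                    too_much_speech=False, in_grammar=True, help_requests=0)
```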
  • At step 322 , it is determined whether there are additional prompt alternatives to evaluate. For example, test control module 144 can determine whether all the prompt alternatives selected at step 306 have been evaluated. If there are additional prompt alternatives to evaluate, method 300 returns to step 314 , where the next prompt alternative to evaluate is selected. Steps 316 through 320 are then performed for that prompt alternative, as described above.
  • test analysis module 146 or any other suitable component analyzes the information recorded at step 320 for each of the prompt alternatives. For example, test analysis module 146 may determine which prompt alternative resulted in the most direct routes (for example, the highest percentage of direct routes) or which alternative had the most initial responses that resulted in a match with an action-object (for example, responses that included both a salient action term and a salient object term). Any number of additional or alternative performance criteria that assist in identifying the most effective prompt alternative may be analyzed by analysis module 146 , as desired. This analysis may be performed in “real-time” while service center 102 or other system being evaluated is continuing to interact with users. Furthermore, some or all of analysis step 324 may occur for each prompt alternative while that prompt alternative or other prompt alternatives are being tested.
  • test analysis module 146 compares the results of the analysis and determines the “best” prompt alternative at step 326 . Which prompt alternative is the best depends on the criteria being evaluated. As examples only, the prompt alternative resulting in the most direct routes or resulting in the most action-object matches (either initially or after further prompting) may be selected as the best prompt alternative at step 326 . Once the best prompt alternative is selected, test analysis module 146 or any other suitable component automatically adjusts the dialog module at step 328 to implement the chosen prompt alternative. This automatic evaluation and adjustment of prompts is advantageous since manually testing and adjusting various prompts is time consuming and may interrupt or impede the performance of the system being tested.
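  • Under the assumption that direct-route rate is the chosen performance criterion, the analysis and selection of steps 324 through 326 might reduce to an aggregation like the following; the records are the CallResult sketch above, and dialog_module.use_prompt is an invented stand-in for the automatic adjustment at step 328:

```python
from collections import defaultdict

def best_alternative(results):
    """Rank alternatives by direct-route rate and return the winner (steps 324-326)."""
    totals = defaultdict(lambda: [0, 0])  # alternative_id -> [direct, total]
    for r in results:
        totals[r.alternative_id][0] += int(r.direct_route)
        totals[r.alternative_id][1] += 1
    rates = {alt: direct / total for alt, (direct, total) in totals.items()}
    return max(rates, key=rates.get)

# winner = best_alternative(recorded_results)   # recorded CallResult objects
# dialog_module.use_prompt(winner)              # step 328: adjust the dialog module
```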
  • method 300 continues to step 330 where it is determined whether there are additional dialog modules 129 to evaluate. If so, method 300 returns to step 304 where a new dialog module 129 is selected for evaluation. If not, method 300 ends.
  • Method 300 may be repeated as often as desired during the operation of a service center or other speech-enabled application to which the method might apply. For example, method 300 may be performed when a service center or other speech-enabled application is initially put into operation. Thereafter, the method could be performed periodically as desired (for example, every six months). Such periodic testing may be helpful as users become experienced with the prompts used in a system. The users' behavior can change with such experience and periodic updating of the prompts can be beneficial to “tune” the prompts to take into account the users' changed behavior.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)

Abstract

A computer-implemented method is described for optimizing prompts for a speech-enabled application. The speech-enabled application is operable to receive communications from a number of users and communicate one or more prompts to each user to elicit a response from the user that indicates the purpose of the user's communication. The method includes determining a number of prompt alternatives (each including one or more prompts) to evaluate and determining an evaluation period for each prompt alternative. The method also includes automatically presenting each prompt alternative to users during the associated evaluation period and automatically recording the results of user responses to each prompt alternative. Furthermore, the method includes automatically analyzing the recorded results for each prompt alternative based on one or more performance criteria and automatically implementing one of the prompt alternatives based on the analysis of the recorded results.

Description

    TECHNICAL FIELD OF THE INVENTION
  • The present invention relates generally to speech-enabled applications and, more particularly, to a system and method for optimizing prompts for speech-enabled applications.
  • BACKGROUND OF THE INVENTION
  • Developments in speech recognition technologies support more natural language interaction between services, systems and customers than was previously possible. One of the most promising applications of speech recognition technology, Automatic Call Routing (ACR), seeks to determine why a customer has called a service center and to route the customer to an appropriate service agent for customer request servicing. Speech recognition technology generally allows an ACR application to recognize natural language statements from the customer, thus minimizing reliance on conventional menu systems. This permits a customer to state the purpose of their call “in their own words”.
  • In order for an ACR application to properly route calls, the ACR generally must interpret the intent of the customer, identify the type or category of customer call, and identify the correct routing destination for the call type. An ACR application may attempt to match one or more words in a statement by a customer to a particular pre-defined action to be taken by the ACR application.
  • Although speech recognition technology has been improving over the years, speech recognition systems are limited by the quality and robustness of the statistical language models or other techniques used to recognize speech. Given these limits, developers of these systems strive to develop prompts, announcements, and other instructions to the users of such systems that guide these users to provide speech input that conforms with the capabilities of the particular speech recognition technology used by the system. Subtle differences in the way prompts or other instructions are worded may result in substantial differences in system performance.
  • SUMMARY OF THE INVENTION
  • In accordance with a particular embodiment of the present invention, a computer-implemented method is provided for optimizing prompts for a speech-enabled application. The speech-enabled application is operable to receive communications from a number of users and communicate one or more prompts to each user to elicit a response from the user that indicates the purpose of the user's communication. The method includes determining a number of prompt alternatives (each including one or more prompts) to evaluate and determining an evaluation period for each prompt alternative. The method also includes automatically presenting each prompt alternative to users during the associated evaluation period and automatically recording the results of user responses to each prompt alternative. Furthermore, the method includes automatically analyzing the recorded results for each prompt alternative based on one or more performance criteria and automatically implementing one of the prompt alternatives based on the analysis of the recorded results.
  • Technical advantages of particular embodiments of the present invention include a method and system for optimizing prompts for speech-enabled applications that improve the operation of such applications. For example, particular embodiments automate the evaluation of various prompts or other user instructions for speech-enabled applications and then automatically implement the most effective prompt(s). Such embodiments can automatically present numerous prompt variations to users, evaluate the impact of each prompt on some measure of system performance, and adopt the prompt(s) that lead to the best system performance. This automation of prompt evaluation and implementation can reduce development time and ensure high system performance.
  • Other technical advantages will be readily apparent to one skilled in the art from the following figures, descriptions and claims. Moreover, while specific advantages have been enumerated above, various embodiments may include all, some or none of the enumerated advantages.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • A more complete understanding of the present embodiments and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features, and wherein:
  • FIG. 1 is a block diagram depicting an example embodiment of a service center system according to one embodiment of the present invention;
  • FIG. 2 is a flow diagram depicting an example automatic call routing method according to one embodiment of the present invention;
  • FIG. 3 is a diagram depicting an example embodiment of an automatic call router action-object matrix according to one embodiment of the present invention; and
  • FIG. 4 is a flow diagram depicting an example method for optimizing prompts for speech-enabled applications according to one embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram depicting an example embodiment of a service center system 100 according to one embodiment of the present invention. System 100 enables users to conduct transactions via a service center 102. For example, as referred to herein, service center 102 may be a customer service call center for a telephone services company. However, as described below, the present invention may be used in conjunction with any other types of call centers, as well as with any systems that use speech recognition to perform an action or otherwise facilitate an action in response to speech input of a user. As used herein, the term “transaction” or its variants refers to any action that a user desires to perform in conjunction with or have performed by service center 102.
  • The example service center 102 includes one or more computing apparatuses 104 that are operably coupled to one or more transaction processing service solutions 106. Included in computing apparatus 104 is a processor 108. Operably coupled to processor 108 of computing apparatus 104 is a memory 110. Computing apparatus 104 employs processor 108 and memory 110 to execute and store, respectively, one or more instructions of a program of instructions (i.e., software).
  • Also included in computing apparatus 104 is communication interface 112. Communication interface 112 is preferably operable to couple computing apparatus 104 and/or service center 102 to an internal and/or external communication network 114. Communication network 114 may be the public-switched telephone network (PSTN), a cable network, an internet protocol (IP) network, a wireless network, a hybrid cable/PSTN network, a hybrid IP/PSTN network, a hybrid wireless/PSTN network, the Internet, and/or any other suitable communication network or combination of communication networks.
  • Communication interface 112 preferably cooperates with communication network 114 and one or more user communication devices 116 to permit a user associated with each user communication device 116 to conduct transactions via service center 102. User communication device 116 may be a wireless or wireline telephone, a dial-up modem, a cable modem, a DSL modem, a phone set, fax equipment, an answering machine, a set-top box, a television, POS (point-of-sale) equipment, a PBX (private branch exchange) system, a personal computer, a laptop computer, a personal digital assistant (PDA), other nascent technologies, or any other appropriate type or combination of communication equipment available to a user. Communication device 116 may be equipped for connectivity to communication network 114 via a PSTN, DSL, cable network, wireless network, or any other appropriate communications channel.
  • In operation, service center 102 permits a user to request, using speech, processing or performance of one or more transactions by service solutions 106. To enable such processing, computing apparatus 104 may include or have access to one or more storage devices 118 including one or more programs of instructions operable to interpret user intent from the user's speech, identify a solution sought by the user, and route the user to an appropriate service solution 106.
  • To aid in the interpretation, identification and routing operations of service center 102, storage 118 includes an action-object matrix 120, a look-up table 122, utterance storage 124, a prompt library 126, one or more speech recognition modules (such as a statistical language modeling engine 128), and one or more dialog modules 129. Furthermore, to analyze and optimize the performance of the prompts used by service center 102, storage 118 also includes a prompt test control module 144 and a prompt test analysis module 146. Additional details regarding the operation and cooperation of the various components included in storage 118 will be discussed in greater detail below.
  • In the illustrated embodiment, computing apparatus 104 is communicatively coupled to one or more connection switches or redirect devices 130. Connection switch or redirect device 130 enables computing apparatus 104, upon determining an appropriate destination for the processing of a user-selected transaction, to route the user via communication network 132 and, optionally, one or more switches 134, to an appropriate service agent or module of service solutions 106.
  • Service solutions 106 preferably include a plurality of service agents or modules operable to perform one or more operations in association with the processing of a selected user transaction. For example, if service center 102 is a telephone services call center, service solutions 106 may include one or more service agents or modules operable to perform billing service solutions 136, repair service solutions 138, options service solutions 140, how-to-use service solutions 142, as well as any other appropriate service solutions. The service agents or modules implemented in or associated with service solutions 106 may include, but are not limited to, automated or self-service data processing apparatuses, live technician support (human support), or combinations thereof.
  • FIG. 2 illustrates an example method 150 for speech-enabled call routing using an action-object matrix according to one embodiment of the present invention. However, it should be emphasized that embodiments of the present invention may be used in association with any speech-enabled application that uses prompts and are certainly not limited to service centers routing users using action-objects. Method 150 of FIG. 2 may be implemented in one or more computing apparatuses 104 of one or more service centers 102. As such, the method will be described with reference to the operation of service center 102 of FIG. 1.
  • Upon initialization of service center 102 at step 152, method 150 proceeds to step 154 where service center 102 provides for and awaits an incoming communication from a user communication device 116 via communication network 114. However, a user may connect with service center 102 in any other suitable manner.
  • Upon detection of an incoming contact at step 154, method 150 preferably proceeds to step 156 where a communication connection with the user communication device 116 is established. As suggested above, establishing a communication connection with an incoming contact from a user at step 156 may include, but is not limited to, receiving a user phone call via a PSTN or other wireline network, a wireless network, or any of numerous other communication networks.
  • Once a communication connection has been established at step 156, method 150 proceeds to step 158 where one or more prompts, announcements, or other instructions to the user (collectively referred to herein as “prompts”) are communicated to the user of user communication device 116. In particular embodiments, the communication of one or more prompts is aimed at eliciting a request from the user for the processing of one or more transactions or operations. For example, at step 158, dialog module 129 may access prompt library 126 of storage 118 to generate a user transaction selection prompt such as, “Thank you for calling our service center. Please tell me how we may help you today.” Furthermore, any other suitable prompts designed to elicit a response from the user regarding a transaction that the user desires to be performed may be used.
  • Preferably, a prompt will serve to elicit an unambiguous statement of the user's intent. If the user utterance is ambiguous or incomplete, service center 102 will need to engage in additional dialog to clarify the user's intentions. For example, in response to an initial prompt from service center 102, the most desirable outcome is for the user's response to the prompt to result in a “direct route” to the appropriate destination. However, if the user's response requires additional clarification (further user responses) before service center 102 can determine the appropriate destination, service center 102 will need to employ dialog module 129 to provide additional prompts to the user in an attempt to elicit an unambiguous statement from the user. This additional prompting increases costs (for example, by occupying incoming communication channels) and reduces customer satisfaction.
  • At step 160 of method 150, service center 102 awaits a user response to the communicated prompt. Upon detection of a user response at step 160, method 150 preferably proceeds to step 162 where a natural language response (a user “utterance”) from the user responsive to the communicated prompt is preferably received. Receipt of an utterance from a user may include storage of the user's utterance in utterance storage 124 of computing apparatus storage 118. Permanent or temporary storage of a user utterance may enable and/or simplify the performance of speech recognition analysis thereon.
  • Following receipt of a user utterance at step 162, method 150 proceeds to step 164 where the user utterance is evaluated to interpret or identify an intent of the user and a requested transaction to be performed. In particular embodiments, evaluation of a user utterance at step 164 may include the use of one or more speech recognition technologies, such as that available from statistical language modeling engine 128 of computing apparatus 104. As suggested above, statistical language modeling engine 128 may cooperate with utterance storage 124 in the evaluation of the user utterance.
  • In certain embodiments, statistical language modeling engine 128 may evaluate the user utterance received at step 162 in cooperation with action-object matrix 120, which defines a number of different action-objects (and which is described in greater detail below in conjunction with FIG. 3). In the evaluation of a user utterance at step 164, the speech recognition technology preferably employed by computing apparatus 104 seeks to identify an action, an object, or an action-object combination from the user utterance. By creating a finite number of transaction options (i.e., action-objects) via action-object matrix 120, proper routing of a user to a service agent or module 136, 138, 140 or 142 may be accomplished with improved efficiency (for example, substantially eliminating user routing errors and, therefore, user re-routing).
  • Each action-object in action-object matrix 120 defines a particular action to be taken and an object that is the subject of the action (in other words, a transaction to be performed). For example, the action-object “pay/bill” defines an action “pay” to be carried out on an object “bill.” As described below, the assignment of an action-object to a user utterance enables efficient routing of the user to enable performance of a desired transaction.
  • To assist in assigning a particular action-object to a user utterance in particular embodiments, statistical language modeling engine 128 may store and associate one or more salient action terms and one or more salient object terms with each action-object. The statistical language modeling engine 128 can then search for these salient terms in a user utterance to assign the user utterance to a particular action-object. The salient terms may be the actual action and object of the action-object and/or the salient terms may be different from the action and object. For example, the action-object “pay/bill” may be associated with the salient action term “pay” and the salient object term “bill.” In addition, the “pay/bill” action-object may be associated with the salient object terms “account” and “invoice.” Therefore, any user utterance including the term “pay” and at least one of the terms “bill,” “account” or “invoice” would preferably be associated with the “pay/bill” action-object. Multiple salient action terms could also or alternatively be associated with this action-object. At least a portion of the user utterance evaluation performed at step 164 may include determining whether the user utterance includes a salient action term, a salient object term, or both a salient action term and a salient object term.
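  • Although the patent publishes no implementation, the salient-term lookup just described can be sketched as a naive keyword match; the data below (including “how much” as a salient action term) and all names are hypothetical illustrations, and a real system would rely on statistical language modeling engine 128 rather than substring tests:

```python
# Hypothetical salient terms: each action-object carries salient action
# terms and salient object terms, as described for engine 128.
SALIENT_TERMS = {
    ("pay", "bill"): {
        "actions": {"pay"},
        "objects": {"bill", "account", "invoice"},
    },
    ("inquire", "bill"): {
        "actions": {"inquire", "how much"},
        "objects": {"bill", "account", "invoice"},
    },
}

def match_utterance(utterance):
    """Return fully matched action-objects, plus bare action/object hits."""
    text = utterance.lower()
    matched, actions_only, objects_only = [], set(), set()
    for action_object, terms in SALIENT_TERMS.items():
        has_action = any(term in text for term in terms["actions"])
        has_object = any(term in text for term in terms["objects"])
        if has_action and has_object:
            matched.append(action_object)   # both terms present
        elif has_action:
            actions_only.add(action_object[0])
        elif has_object:
            objects_only.add(action_object[1])
    return matched, actions_only, objects_only

print(match_utterance("How much do I owe on my bill?"))
# -> ([('inquire', 'bill')], set(), {'bill'})
```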
  • If it is determined that the user utterance contains only a salient action term(s) 168, method 150 proceeds to step 176 where one or more additional prompts may be communicated to the user using dialog module 129, using a different dialog module 129 than the module that communicated the initial prompt, or using any other suitable component. The prompts presented at step 176 are preferably designed to elicit the selection of an object (via a salient object term) in a subsequent user utterance. For example, referring to the action-object matrix depicted in FIG. 3, it may have been determined from the initial user utterance that the user desires to “inquire” about something. Having identified that the user wishes to make an “inquiry” (the action), computing apparatus 104 may cooperate with dialog module 129, prompt library 126 and action-object matrix 120 to prompt the user for selection of an object associated with the “inquire” action. As illustrated in FIG. 3, examples of objects associated with the “inquire” action include, in one embodiment, optional services, basic service, billing, cancellation, repair, payment, specials, and name and number. It should be understood that the action-object matrix depicted generally in FIG. 3 is included primarily for purposes of illustration. As such, alternate embodiments of an action-object matrix (or embodiments not using an action-object matrix) may be implemented without departing from the spirit and scope of teachings of the present invention.
  • Similarly, if it is determined that the user utterance contains only a salient object term 170, method 150 preferably proceeds to step 178 where one or more prompts designed to elicit the selection of an action (via a salient action term) in a subsequent user utterance may be communicated to the user. For example, referring again to the action-object matrix generally depicted in FIG. 3, if it is determined from the initial user utterance that the user desires some sort of action associated with a "bill", computing apparatus 104 may cooperate with dialog module 129, action-object matrix 120 and prompt library 126 to generate one or more prompts directed to eliciting user selection of an "action" associated with the bill "object". As shown in FIG. 3, examples of actions associated with a "bill" object may include, in one embodiment, inquiry, information, fixing or repairing, and paying.
  • Method 150 may loop through steps 176 or 178 one or more times, for any desired number of loops, in an attempt to elicit an appropriate salient action term or an appropriate salient object term, respectively. If evaluation of the user utterances does not yield a salient action term 168 or a salient object term 170 after a predetermined number of loops, if neither a salient action term 168 nor a salient object term 170 is identified (an "other" utterance 174), or if salient action terms 168 and/or salient object terms 170 associated with multiple action-objects are identified, then method 150 proceeds to step 180 where a disambiguation dialogue may be initiated and performed by dialog module 129. In such an event, method 150 preferably provides for additional appropriate dialogue to be performed with the user in an effort to elicit a usable "action-object" combination from the user (for example, asking the user to be more specific in his or her request).
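  • The branching just described (a lone action leads to step 176, a lone object to step 178, and ambiguous or "other" utterances to step 180) might be sketched as follows, reusing the hypothetical classify() helper above. The prompting helpers here are text stand-ins for spoken dialog, and the loop limit is an assumption, since the predetermined number of loops is left configurable:

        MAX_LOOPS = 3  # assumed value; the predetermined loop limit is configurable

        def prompt_for_object(actions):   # step 176: elicit an object
            return input("What would you like to %s? " % ", or ".join(actions))

        def prompt_for_action(objects):   # step 178: elicit an action
            return input("What about your %s? " % ", ".join(objects))

        def disambiguate():               # step 180: disambiguation dialogue
            return input("Could you be more specific about your request? ")

        def resolve_action_object(utterance):
            for _ in range(MAX_LOOPS):
                matches, actions, objects = classify(utterance)
                if len(matches) == 1:
                    return matches[0]           # usable action-object combination
                if len(matches) > 1:
                    utterance = disambiguate()  # terms from multiple action-objects
                elif actions and not objects:
                    utterance = prompt_for_object(actions)
                elif objects and not actions:
                    utterance = prompt_for_action(objects)
                else:
                    utterance = disambiguate()  # "other" utterance
            return None  # escape sequence, e.g. to human assistance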
  • Following prompting for an “object” at step 176, prompting for an “action” at step 178, or initiation and performance of disambiguation dialogue at 180, method 150 preferably returns to step 160 where a response may be awaited as described above. Method 150 then preferably proceeds through the operations at steps 162 and 164 until an “action-object” combination 172 has been elicited from the user in a user utterance. An escape sequence may also be included in method 150 where it has been determined that a user requires human assistance, for example.
  • After identification of an “action-object” combination 172 (either from the initial utterance or from the repeated prompting described above), method 150 preferably proceeds to step 182. At step 182, computing apparatus 104 preferably cooperates with action-object matrix 120 and look-up table 122 to identify a preferred or proper routing destination for processing the user-selected transaction. As suggested above, the routing destinations identified at step 182 may include routing destinations associated with the service agents or modules available in service solutions 106. As mentioned above, service agents or modules 136, 138, 140 and 142 may include automated transaction processing available via computing apparatus 104 or a similar device, live support, or combinations thereof, as well as other suitable transaction processing options.
  • Following identification of a preferred or proper routing destination at step 182, method 150 preferably proceeds to step 184 where the user connection is preferably routed to the appropriate destination indicated in look-up table 122. Following the routing of the user connection, method 150 preferably proceeds to step 186 where one or more aspects of the user utterance or utterances are optionally forwarded to the service agent or module destination to which the caller and/or user connection is routed. For example, in particular embodiments, method 150 provides for the identified action-object to be forwarded to the service agent associated with the selected routing destination. In yet other embodiments, no information is forwarded and the user is simply routed to the appropriate destination. Following the routing of the user connection (and any forwarding of information), method 150 preferably returns to step 154 where another user connection is awaited.
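  • One minimal way to sketch the cooperation of action-object matrix 120 and look-up table 122 at steps 182 through 186 is shown below; the destination names and the session object (standing in for the user connection) are hypothetical:

        # Hypothetical fragment of look-up table 122: action-object -> destination.
        ROUTING_TABLE = {
            ("inquire", "bill"): "bill_agent_or_module",       # e.g. module 136
            ("fix-repair", "bill"): "repair_agent_or_module",  # e.g. module 138
            ("pay", "bill"): "payment_agent_or_module",
        }

        def route_connection(session, action_object, forward=True):
            destination = ROUTING_TABLE[action_object]  # step 182: identify destination
            session.transfer(destination)               # step 184: route the connection
            if forward:                                 # step 186: optional forwarding
                session.send(destination, {"action_object": action_object})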
  • It should be understood that some of the steps illustrated in FIG. 2 may be combined, modified or deleted where appropriate, and additional steps may also be added to the method. Additionally, as indicated above, the steps may be performed in any suitable order without departing from the scope of the present invention. Furthermore, it should be understood that although embodiments of the present invention are described in conjunction with a service center using action-objects and salient terms, the present invention may be used in conjunction with any speech-enabled application that uses prompts to elicit a response to determine the user's intent.
  • Referring again to FIG. 3, an action-object matrix 120 according to one embodiment of the present invention is shown. The example action-object matrix 120 shown in FIG. 3 includes a number of columns of actions 202 and a number of rows of objects 204. The intersection of an action column with an object row generally defines an action-object pair identifying a transaction available via service center 102 (for example, using one or more service modules or agents 136, 138, 140 and 142).
  • As described above, action-object matrix 120 is used in association with other components of service center 102 to interpret user intent and identify a desired transaction from a user utterance. For example, using actions 202 and objects 204 of action-object matrix 120, in conjunction with the method 150 described above, a user utterance such as "How much do I owe on my bill?" may be evaluated to relate to the action-object "inquire/bill" 206. In a further example, the user utterance, "I have a problem with a charge on my bill" may be associated with the action-object "fix-repair/bill" 208. In still another example, the user utterance, "Where can I go to pay my phone bill?" may be associated with the action-object "where/payment" 210. In yet another example, the user utterance, "How do I set up Call Forwarding?" may be associated with the action-object "how-to-use/optional services" 212. In a further example, the user utterance, "I'd like to get CallNotes" may be associated with the action-object "acquire/optional services" 214.
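  • Using the hypothetical classify() sketch above (whose toy table covers only a small fraction of the matrix in FIG. 3), the first of these examples would resolve as follows; the remaining examples would resolve similarly once their salient terms were stored:

        matches, _, _ = classify("How much do I owe on my bill?")
        print(matches)  # -> [('inquire', 'bill')], because "how much" and "bill"
                        #    are stored as salient terms for that action-object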
  • As mentioned above, service center 102 uses one or more salient action terms and one or more salient object terms associated with each action-object to associate a user utterance with the action-object. The salient terms may be stored in association with action-object matrix 120 or elsewhere in service center 102 (or at a location remote to service center 102). If stored in association with action-object matrix 120, the salient terms may be linked to particular action-objects, to particular actions (for salient action terms), or to particular objects (for salient object terms).
  • After an action-object has been identified through the use of action-object matrix 120 and other components of service center 102, look-up table 122 is used to identify the routing destination associated with an identified action-object. For example, upon identifying action-object "inquire/bill" 206 from a user utterance, computing apparatus 104 may utilize action-object matrix 120 and look-up table 122 to determine that the appropriate routing destination for the "inquire/bill" action-object 206 is "Bill" service agent or module 136. In another example, upon identifying action-object "fix-repair/bill" 208 from a user utterance, computing apparatus 104 cooperating with action-object matrix storage 120 and look-up table 122 may determine that an appropriate routing destination for the user connection includes "Repair" service agent or module 138. Additional implementations of associating a look-up table with an action-object matrix may be utilized without departing from the spirit and scope of teachings of the present invention.
  • FIG. 4 is a flow diagram depicting an example method 300 for optimizing prompts for speech-enabled applications according to one embodiment of the present invention. As described above, although the example method 300 is described with respect to service center 102, the method may be applied to any speech-enabled application that uses prompts (again, this term refers to prompts, announcements, or any other instructions provided to a user) to elicit a response to determine a user's intent (regardless of how the user's intent is determined).
  • Upon initialization at step 302, method 300 proceeds to step 304 where a particular dialog module 129 to evaluate is selected. A speech-enabled application, such as service center 102, may include multiple dialog modules 129 and each dialog module may be tested separately. For example, an application may have one dialog module 129 that provides the initial prompt (and any associated announcements) and may include other dialog modules that provide additional prompts to obtain more detailed or unambiguous responses from a user. Using service center 102 as an example, control of the prompt testing process may be performed by test control module 144 and thus module 144 may select a dialog module to evaluate. If service center 102 includes multiple dialog modules 129, test control module 144 may serially select each dialog module 129 for testing at particular intervals. This selection may be performed automatically based on a pre-determined configuration or may be based on input from a person configuring the test procedure.
  • Method 300 continues at step 306 where the prompt alternatives to evaluate for the selected dialog module are determined. For example, prompt library 126 may include multiple alternative prompts for the initial prompt provided by service center 102. Test control module 144 may access prompt library 126 to retrieve these alternative prompts and may determine which prompts are to be evaluated. For example, prompt library 126 may initially include several alternative initial prompts and test control module 144 may initially select all the alternative prompts for testing. As an example, after testing all of the alternative initial prompts, test control module 144 may eliminate certain prompts that performed poorly relative to the other prompts and may repeat the testing process on the remaining prompts if necessary. Alternatively, the particular prompts to be evaluated may be determined based on input from a person configuring the test procedure.
  • In some cases, one or more of the prompt alternatives may include a combination of prompts (including announcements, etc.) to be evaluated. For example, method 300 could be used to test an announcement followed by a prompt or a series of prompts. Therefore, at step 308 it is determined whether prompt combinations are being evaluated. If so, method 300 continues to step 310 where the order and particular combinations of the prompts to be tested are determined. For example, the same initial prompt could be tested with three different preceding announcements, or an announcement could be tested with three different prompts following the announcement. As another example, an initial prompt could be followed by pauses of different lengths (or no pause) before a series of example responses are provided to the user. As yet another example, the order of two or more prompts may be tested, with each different order being a different alternative. Any particular combination and/or order of prompts may be tested. Furthermore, other variations of the way in which multiple prompts are played may be evaluated. The information regarding the different combinations to be tested may be provided to and stored by test control module 144 for use in executing the testing.
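  • A prompt combination of this kind might be represented, purely for illustration, as an ordered list of play and pause elements; the file names and the Play/Pause types below are hypothetical:

        from dataclasses import dataclass

        @dataclass
        class Play:            # play one prompt or announcement from the library
            audio_file: str    # e.g. a .wav file name

        @dataclass
        class Pause:           # silent gap inserted between prompts
            seconds: float

        # Three alternatives varying the preceding announcement and pause length.
        ALTERNATIVES = [
            [Play("announcement_a.wav"), Play("initial_prompt.wav")],
            [Play("initial_prompt.wav"), Pause(1.0), Play("example_responses.wav")],
            [Play("initial_prompt.wav"), Pause(2.0), Play("example_responses.wav")],
        ]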
  • Once the order of the prompts in each of the prompt alternatives has been determined (or if it is determined at step 308 that the prompt alternatives do not include any prompt combinations), method 300 proceeds to step 312 where the number of prompt cycles, the length of time, and/or other characteristics of the evaluation period during which each prompt alternative is to be evaluated are determined. For example, a first prompt alternative "How may I help you?" may be used as the initial prompt for a one week period and then a second prompt alternative "What is the purpose of your call?" may be used for the following one week period. Alternatively, as an example, the first prompt alternative may be tested for the first one thousand prompt cycles (for example, calls from users) and the second prompt alternative may be tested for the next thousand prompt cycles. Furthermore, prompt alternatives may be tested in an alternating fashion (for example, a first prompt may be tested for a single user, a second prompt may be tested on the next user, and the process may be repeated any desired number of times). Moreover, one or more of the alternative prompts may be tested for differing lengths of time, numbers of prompt cycles, or other evaluation periods as desired. Information about the testing period(s) to be used for each prompt alternative may be provided to and stored by test control module 144 for use in executing the testing.
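  • The evaluation-period choices described above (a fixed length of time, a fixed number of prompt cycles, or per-call alternation) might be captured in a small configuration structure such as the hypothetical one below:

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class EvaluationPeriod:
            days: Optional[int] = None           # time-based period, e.g. one week
            prompt_cycles: Optional[int] = None  # call-count period, e.g. 1000 calls
            alternate_per_call: bool = False     # round-robin, one caller at a time

        PERIODS = {
            "how_may_i_help.wav": EvaluationPeriod(days=7),
            "purpose_of_call.wav": EvaluationPeriod(prompt_cycles=1000),
        }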
  • Method 300 continues at step 314, where a first prompt alternative is selected from the multiple prompt alternatives determined to be part of the evaluation at step 306. This step may be performed automatically by test control module 144. For example, if the two prompt alternatives to be evaluated are “How may I help you?” and “What is the purpose of your call?”, then test control module 144 may select one of these alternatives to begin testing.
  • At step 316, the selected prompt alternative is retrieved from prompt library 126 (or any other suitable location). For example, the prompt alternative may be an audio file (such as a .wav file) that is retrieved from prompt library 126. At step 318, the retrieved prompt alternative is presented to a user. For example, a .wav file may be played for the user. The playing or other presentation of the prompt alternative (which, again, may include a combination of prompts, announcements, pauses, etc.) is repeated for each user (such as callers calling into service center 102) during the evaluation period determined at step 312. The selected prompt may be retrieved and presented by the associated dialog module 129, test control module 144, or any combination thereof (and reference to dialog module 129 performing this task is meant to include any of these options).
  • At step 320, test analysis module 146 or any other suitable component (such as test control module 144) records the results of each user's actions taken in response to the prompt alternative. Any suitable results of the user's interaction with service center 102 or other application presenting a prompt can be recorded for later evaluation. For example, the user's actual response(s) to the prompt alternatives may be recorded. In addition or alternatively, whether the user's response included a salient action term and/or a salient object term (for example, based on an evaluation of the response by statistical language modeling engine 128) may be recorded. As another example, test analysis module 146 or another suitable component may record whether the user's response resulted in a direct route or whether it required additional dialog from a dialog module 129 to clarify the user's intent. The component, such as test analysis module 146, recording this information for evaluation can cooperate with any other suitable components of the system being analyzed to obtain this information. For example, test analysis module 146 may communicate with statistical language modeling engine 128, action-object matrix 120, dialog module(s) 129, or any other appropriate components of service center 102 to record suitable information for evaluating the performance of a particular prompt alternative.
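  • The per-caller results recorded at step 320 might, as one hypothetical arrangement, take the form of rows like the following; the field names are illustrative and track the measures discussed above:

        from dataclasses import dataclass

        @dataclass
        class PromptResult:                 # one row per caller response
            prompt_id: str
            response_text: str
            had_salient_action: bool
            had_salient_object: bool
            direct_route: bool              # routed with no clarifying dialog

        results = []

        def record_result(prompt_id, text, actions, objects, routed_directly):
            results.append(PromptResult(prompt_id, text, bool(actions),
                                        bool(objects), routed_directly))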
  • Furthermore, it is again emphasized that the present invention may apply to any type of speech-enabled application, and is not limited to the service center example provided herein. Therefore, suitable performance criteria may be unrelated to the routing of users/callers to a destination. For a given prompt, or set of prompts, a variety of performance measures may be evaluated, such as the percentage of timeouts (for example, when no response is received from a user), the percentage of "too much speech" (for example, the user says more than the speech engine can process), the percentage of utterances that are "in grammar" (for example, the utterance matches a defined grammar item in an engine that recognizes types of grammar), or the number of times a caller asks for "help." These are but a few examples; a wide range of measures of prompt performance other than action-objects and direct routes may be used.
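  • Such routing-independent measures could, for instance, be tallied from a hypothetical log holding one outcome label per prompt cycle:

        def performance_measures(events):
            """events: list of outcome labels, one per prompt cycle."""
            n = len(events) or 1  # guard against an empty log
            return {
                "timeout_pct": 100.0 * events.count("timeout") / n,
                "too_much_speech_pct": 100.0 * events.count("too_much_speech") / n,
                "in_grammar_pct": 100.0 * events.count("in_grammar") / n,
                "help_requests": events.count("help"),
            }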
  • At the end of the evaluation period for the first prompt alternative, method 300 proceeds to step 322 where it is determined whether there are additional prompt alternatives to evaluate. For example, test control module 144 can determine whether all the prompt alternatives selected at step 306 have been evaluated. If there are additional prompt alternatives to evaluate, method 300 returns to step 314, where the next prompt alternative to evaluate is selected. Steps 316 through 320 are then performed for that prompt alternative, as described above.
  • Once it is determined at step 322 that no further prompt alternatives remain to be evaluated (for the selected dialog module), method 300 proceeds to step 324 where test analysis module 146 or any other suitable component analyzes the information recorded at step 320 for each of the prompt alternatives. For example, test analysis module 146 may determine which prompt alternative resulted in the most direct routes (for example, the highest percentage of direct routes) or which alternative had the most initial responses that resulted in a match with an action-object (for example, responses that included both a salient action term and a salient object term). Any number of additional or alternative performance criteria that assist in identifying the most effective prompt alternative may be analyzed by analysis module 146, as desired. This analysis may be performed in “real-time” while service center 102 or other system being evaluated is continuing to interact with users. Furthermore, some or all of analysis step 324 may occur for each prompt alternative while that prompt alternative or other prompt alternatives are being tested.
  • Based on the analysis performed at step 324, test analysis module 146 compares the results of the analysis and determines the “best” prompt alternative at step 326. Which prompt alternative is the best depends on the criteria being evaluated. As examples only, the prompt alternative resulting in the most direct routes or resulting in the most action-object matches (either initially or after further prompting) may be selected as the best prompt alternative at step 326. Once the best prompt alternative is selected, test analysis module 146 or any other suitable component automatically adjusts the dialog module at step 328 to implement the chosen prompt alternative. This automatic evaluation and adjustment of prompts is advantageous since manually testing and adjusting various prompts is time consuming and may interrupt or impede the performance of the system being tested.
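  • Selecting the "best" alternative under a direct-route criterion, and adjusting the dialog module accordingly, might be sketched as follows using the hypothetical PromptResult rows above; set_prompt stands in for whatever adjustment hook the dialog module exposes:

        from collections import defaultdict

        def best_alternative(results):
            """Pick the prompt with the highest fraction of direct routes."""
            totals, direct = defaultdict(int), defaultdict(int)
            for r in results:
                totals[r.prompt_id] += 1
                direct[r.prompt_id] += int(r.direct_route)
            return max(totals, key=lambda p: direct[p] / totals[p])

        def adjust_dialog_module(dialog_module, results):
            dialog_module.set_prompt(best_alternative(results))  # step 328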
  • After the dialog module 129 being tested is appropriately adjusted, method 300 continues to step 330 where it is determined whether there are additional dialog modules 129 to evaluate. If so, method 300 returns to step 304 where a new dialog module 129 is selected for evaluation. If not, method 300 ends. Method 300 may be repeated as often as desired during the operation of a service center or other speech-enabled application to which the method might apply. For example, method 300 may be performed when a service center or other speech-enabled application is initially put into operation. Thereafter, the method could be performed periodically as desired (for example, every six months). Such periodic testing may be helpful as users become experienced with the prompts used in a system. The users' behavior can change with such experience and periodic updating of the prompts can be beneficial to "tune" the prompts to take into account the users' changed behavior.
  • It should be understood that some of the steps illustrated in FIG. 4 may be combined, modified or deleted where appropriate, and additional steps may also be added to the method. Additionally, as indicated above, the steps may be performed in any suitable order without departing from the scope of the present invention.
  • Although the present invention has been described in detail with reference to particular embodiments, it should be understood that various other changes, substitutions, and alterations may be made hereto without departing from the spirit and scope of the present invention. For example, although the present invention has been described with reference to a number of components included within service center 102, other and different components may be utilized to accommodate particular needs. The present invention contemplates great flexibility in the arrangement of these elements as well as their internal components. Moreover, speech-enabled applications or systems other than service centers may also be used in conjunction with embodiments of the present invention.
  • Furthermore, numerous other changes, substitutions, variations, alterations and modifications may be ascertained by those skilled in the art and it is intended that the present invention encompass all such changes, substitutions, variations, alterations and modifications as falling within the spirit and scope of the appended claims. Moreover, the present invention is not intended to be limited in any way by any statement in the specification that is not otherwise reflected in the claims.

Claims (24)

1-28. (canceled)
29. A computer-implemented method for facilitating call routing, the method comprising:
receiving one or more user utterances from a user during a communication session;
automatically identifying an action and an object from the one or more user utterances using one or more speech recognition techniques;
automatically accessing an action-object data set defining, for each of a plurality of action-object combinations, a transaction corresponding to that action-object combination;
automatically identifying from the action-object data set the transaction corresponding to the action and object identified from the user utterance; and
routing the communication session to a routing destination based on the identified transaction.
30. A method according to claim 29, wherein routing the communication session based on the identified transaction comprises:
automatically accessing a routing table associating a plurality of different routing destinations with a plurality of different transactions; and
automatically identifying from the routing table the routing destination corresponding with the identified transaction.
31. A method according to claim 29, wherein the action-object data set comprises an action-object matrix.
32. A method according to claim 29, wherein receiving one or more user utterances and automatically analyzing the one or more user utterances to identify an action and an object comprises:
automatically analyzing one or more first user utterances to attempt to identify an action and an object from the one or more first user utterances; and
if at least one of the action and the object is not identified from the analysis of the one or more first user utterances, automatically prompting the user for one or more second user utterances for determining the unidentified action or object.
33. A method according to claim 29, wherein receiving one or more user utterances and automatically analyzing the one or more user utterances to identify an action and an object comprises:
automatically analyzing one or more first user utterances to identify at least one of an action and an object from the one or more first user utterances; and
if an action, but not an object, is identified from the analysis of the one or more first user utterances:
automatically prompting the user for an object;
receiving one or more second user utterances from the user; and
automatically analyzing the one or more second user utterances to identify an object from the one or more second user utterances.
34. A method according to claim 29, wherein receiving one or more user utterances and automatically analyzing the one or more user utterances to identify an action and an object comprises:
automatically analyzing one or more first user utterances to identify at least one of an action and an object from the one or more first user utterances; and
if an object, but not an action, is identified from the analysis of the one or more first user utterances:
automatically prompting the user for an action;
receiving one or more second user utterances from the user; and
automatically analyzing the one or more second user utterances to identify an action from the one or more second user utterances.
35. A method according to claim 29, wherein the routing destination comprises an agent associated with the identified transaction.
36. A method according to claim 29, wherein the routing destination comprises an automated transaction module for facilitating the identified transaction for the user.
37. A method according to claim 29, further comprising forwarding at least one of the identified action and object to the routing destination.
38. A computer-implemented system for facilitating call routing, the system comprising:
data storage including an action-object data set defining, for each of a plurality of different action-object combinations, a transaction corresponding to that action-object combination;
a dialog module that receives one or more user utterances from a user during a communication session;
a language modeling engine coupled to the dialog module and configured to automatically identify an action and an object from the one or more user utterances using one or more speech recognition techniques, and identify from the action-object data set the transaction corresponding to the action and object identified from the user utterance; and
a routing module coupled to the language modeling engine and configured to route the communication session to a routing destination based on the identified transaction.
39. A system according to claim 38, further comprising a routing table associating a plurality of different routing destinations with a plurality of different transactions; and
wherein the routing module is configured to identify from the routing table the routing destination corresponding with the identified transaction.
40. A system according to claim 38, wherein the language modeling engine is configured to automatically analyze one or more first user utterances to attempt to identify an action and an object from the one or more first user utterances, and if at least one of the action and the object is not identified from the analysis of the one or more first user utterances, automatically prompt the user for one or more second user utterances for determining the unidentified action or object.
41. A system according to claim 38, wherein the language modeling engine is configured to:
automatically analyze one or more first user utterances to identify at least one of an action and an object from the one or more first user utterances; and
if an action, but not an object, is identified from the analysis of the one or more first user utterances:
automatically prompt the user for an object; and
automatically analyze one or more second user utterances received from the user to identify an object from the one or more second user utterances.
42. A system according to claim 38, wherein the language modeling engine is configured to:
automatically analyze one or more first user utterances to identify at least one of an action and an object from the one or more first user utterances; and
if an object, but not an action, is identified from the analysis of the one or more first user utterances:
automatically prompt the user for an action; and
automatically analyze one or more second user utterances received from the user to identify an action from the one or more second user utterances.
43. A system according to claim 38, wherein the routing destination comprises an agent associated with the identified transaction.
44. A system according to claim 38, wherein the routing destination comprises an automated transaction module for facilitating the identified transaction for the user.
45. A system according to claim 38, wherein the routing module is configured to forward at least one of the identified action and object to the routing destination.
46. A computer-readable medium including computer-executable instructions for facilitating call routing, comprising:
instructions for receiving one or more user utterances from a user during a communication session;
instructions for identifying an action and an object from the one or more user utterances using one or more speech recognition techniques;
instructions for accessing an action-object data set defining, for each of a plurality of action-object combinations, a transaction corresponding to that action-object combination;
instructions for identifying from the action-object data set the transaction corresponding to the action and object identified from the user utterance; and
instructions for routing the communication session to a routing destination based on the identified transaction.
47. A computer-readable medium according to claim 46, wherein the instructions for routing the communication session based on the identified transaction comprise:
instructions for accessing a routing table associating a plurality of different routing destinations with a plurality of different transactions; and
instructions for identifying from the routing table the routing destination corresponding with the identified transaction.
48. A computer-readable medium according to claim 46, comprising:
instructions for analyzing one or more first user utterances to attempt to identify an action and an object from the one or more first user utterances; and
instructions for, if at least one of the action and the object is not identified from the analysis of the one or more first user utterances, prompting the user for one or more second user utterances for determining the unidentified action or object.
49. A computer-readable medium according to claim 46, comprising:
instructions for analyzing one or more first user utterances to identify at least one of an action and an object from the one or more first user utterances; and
instructions for, if an action, but not an object, is identified from the analysis of the one or more first user utterances:
prompting the user for an object;
receiving one or more second user utterances from the user; and
analyzing the one or more second user utterances to identify an object from the one or more second user utterances.
50. A computer-readable medium according to claim 46, comprising:
instructions for analyzing one or more first user utterances to identify at least one of an action and an object from the one or more first user utterances; and
instructions for, if an object, but not an action, is identified from the analysis of the one or more first user utterances:
prompting the user for an action;
receiving one or more second user utterances from the user; and
analyzing the one or more second user utterances to identify an action from the one or more second user utterances.
51. A computer-readable medium according to claim 46, further comprising instructions for forwarding at least one of the identified action and object to the routing destination.
US11/363,456 2004-09-16 2006-02-27 System and method for facilitating call routing using speech recognition Abandoned US20060143015A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US11/363,456 US20060143015A1 (en) 2004-09-16 2006-02-27 System and method for facilitating call routing using speech recognition
US11/834,520 US7653549B2 (en) 2004-09-16 2007-08-06 System and method for facilitating call routing using speech recognition
US12/634,434 US8112282B2 (en) 2004-09-16 2009-12-09 Evaluating prompt alternatives for speech-enabled applications

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/942,605 US7043435B2 (en) 2004-09-16 2004-09-16 System and method for optimizing prompts for speech-enabled applications
US11/363,456 US20060143015A1 (en) 2004-09-16 2006-02-27 System and method for facilitating call routing using speech recognition

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/942,605 Continuation US7043435B2 (en) 2004-09-16 2004-09-16 System and method for optimizing prompts for speech-enabled applications

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/834,520 Division US7653549B2 (en) 2004-09-16 2007-08-06 System and method for facilitating call routing using speech recognition

Publications (1)

Publication Number Publication Date
US20060143015A1 true US20060143015A1 (en) 2006-06-29

Family

ID=36100360

Family Applications (4)

Application Number Title Priority Date Filing Date
US10/942,605 Expired - Lifetime US7043435B2 (en) 2004-09-16 2004-09-16 System and method for optimizing prompts for speech-enabled applications
US11/363,456 Abandoned US20060143015A1 (en) 2004-09-16 2006-02-27 System and method for facilitating call routing using speech recognition
US11/834,520 Active 2025-05-28 US7653549B2 (en) 2004-09-16 2007-08-06 System and method for facilitating call routing using speech recognition
US12/634,434 Active 2024-11-10 US8112282B2 (en) 2004-09-16 2009-12-09 Evaluating prompt alternatives for speech-enabled applications

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/942,605 Expired - Lifetime US7043435B2 (en) 2004-09-16 2004-09-16 System and method for optimizing prompts for speech-enabled applications

Family Applications After (2)

Application Number Title Priority Date Filing Date
US11/834,520 Active 2025-05-28 US7653549B2 (en) 2004-09-16 2007-08-06 System and method for facilitating call routing using speech recognition
US12/634,434 Active 2024-11-10 US8112282B2 (en) 2004-09-16 2009-12-09 Evaluating prompt alternatives for speech-enabled applications

Country Status (1)

Country Link
US (4) US7043435B2 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060116877A1 (en) * 2004-12-01 2006-06-01 Pickering John B Methods, apparatus and computer programs for automatic speech recognition
US20060136222A1 (en) * 2004-12-22 2006-06-22 New Orchard Road Enabling voice selection of user preferences
US20070143099A1 (en) * 2005-12-15 2007-06-21 International Business Machines Corporation Method and system for conveying an example in a natural language understanding application
US20080151886A1 (en) * 2002-09-30 2008-06-26 Avaya Technology Llc Packet prioritization and associated bandwidth and buffer management techniques for audio over ip
US20080208594A1 (en) * 2007-02-27 2008-08-28 Cross Charles W Effecting Functions On A Multimodal Telephony Device
US20080208586A1 (en) * 2007-02-27 2008-08-28 Soonthorn Ativanichayaphong Enabling Natural Language Understanding In An X+V Page Of A Multimodal Application
US7978827B1 (en) 2004-06-30 2011-07-12 Avaya Inc. Automatic configuration of call handling based on end-user needs and characteristics
US8218751B2 (en) 2008-09-29 2012-07-10 Avaya Inc. Method and apparatus for identifying and eliminating the source of background noise in multi-party teleconferences
US8571869B2 (en) 2005-02-28 2013-10-29 Nuance Communications, Inc. Natural language system and method based on unisolated performance metric
US8593959B2 (en) 2002-09-30 2013-11-26 Avaya Inc. VoIP endpoint call admission
US10991368B2 (en) * 2018-06-25 2021-04-27 Hyundai Motor Company Dialogue system and dialogue processing method

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7729918B2 (en) * 2001-03-14 2010-06-01 At&T Intellectual Property Ii, Lp Trainable sentence planning system
WO2002073449A1 (en) 2001-03-14 2002-09-19 At & T Corp. Automated sentence planning in a task classification system
US7574362B2 (en) 2001-03-14 2009-08-11 At&T Intellectual Property Ii, L.P. Method for automated sentence planning in a task classification system
US20030115062A1 (en) * 2002-10-29 2003-06-19 Walker Marilyn A. Method for automated sentence planning
US7580837B2 (en) 2004-08-12 2009-08-25 At&T Intellectual Property I, L.P. System and method for targeted tuning module of a speech recognition system
US7043435B2 (en) * 2004-09-16 2006-05-09 Sbc Knowledgfe Ventures, L.P. System and method for optimizing prompts for speech-enabled applications
US7242751B2 (en) * 2004-12-06 2007-07-10 Sbc Knowledge Ventures, L.P. System and method for speech recognition-enabled automatic call routing
US8332226B1 (en) * 2005-01-07 2012-12-11 At&T Intellectual Property Ii, L.P. System and method of dynamically modifying a spoken dialog system to reduce hardware requirements
US7751551B2 (en) 2005-01-10 2010-07-06 At&T Intellectual Property I, L.P. System and method for speech-enabled call routing
US8260617B2 (en) * 2005-04-18 2012-09-04 Nuance Communications, Inc. Automating input when testing voice-enabled applications
US7657020B2 (en) 2005-06-03 2010-02-02 At&T Intellectual Property I, Lp Call routing system and method of using the same
US7853453B2 (en) * 2005-06-30 2010-12-14 Microsoft Corporation Analyzing dialog between a user and an interactive application
US7873523B2 (en) * 2005-06-30 2011-01-18 Microsoft Corporation Computer implemented method of analyzing recognition results between a user and an interactive application utilizing inferred values instead of transcribed speech
US20070006082A1 (en) * 2005-06-30 2007-01-04 Microsoft Corporation Speech application instrumentation and logging
US7773731B2 (en) * 2005-12-14 2010-08-10 At&T Intellectual Property I, L. P. Methods, systems, and products for dynamically-changing IVR architectures
US7577664B2 (en) 2005-12-16 2009-08-18 At&T Intellectual Property I, L.P. Methods, systems, and products for searching interactive menu prompting system architectures
US8457973B2 (en) * 2006-03-04 2013-06-04 AT&T Intellectual Propert II, L.P. Menu hierarchy skipping dialog for directed dialog speech recognition
US7961856B2 (en) * 2006-03-17 2011-06-14 At&T Intellectual Property I, L. P. Methods, systems, and products for processing responses in prompting systems
US7930183B2 (en) * 2006-03-29 2011-04-19 Microsoft Corporation Automatic identification of dialog timing problems for an interactive speech dialog application using speech log data indicative of cases of barge-in and timing problems
US8386248B2 (en) * 2006-09-22 2013-02-26 Nuance Communications, Inc. Tuning reusable software components in a speech application
US20080215342A1 (en) * 2007-01-17 2008-09-04 Russell Tillitt System and method for enhancing perceptual quality of low bit rate compressed audio data
US8150020B1 (en) 2007-04-04 2012-04-03 At&T Intellectual Property Ii, L.P. System and method for prompt modification based on caller hang ups in IVRs
US8027835B2 (en) * 2007-07-11 2011-09-27 Canon Kabushiki Kaisha Speech processing apparatus having a speech synthesis unit that performs speech synthesis while selectively changing recorded-speech-playback and text-to-speech and method
JP2011033680A (en) * 2009-07-30 2011-02-17 Sony Corp Voice processing device and method, and program
US9634855B2 (en) 2010-05-13 2017-04-25 Alexander Poltorak Electronic personal interactive device that determines topics of interest using a conversational agent
US9378505B2 (en) 2010-07-26 2016-06-28 Revguard, Llc Automated multivariate testing technique for optimized customer outcome
KR20140013950A (en) * 2012-07-26 2014-02-05 삼성전자주식회사 Electric apparatus controlling method and interactive server
US11722598B2 (en) * 2015-01-06 2023-08-08 Cyara Solutions Pty Ltd System and methods for an automated chatbot testing platform
WO2016168661A1 (en) * 2015-04-17 2016-10-20 Level 3 Communications, Llc Illicit route viewing system and method of operation
KR102379753B1 (en) * 2017-03-29 2022-03-29 삼성전자주식회사 Device and method for performing payment using utterance
US10924605B2 (en) 2017-06-09 2021-02-16 Onvocal, Inc. System and method for asynchronous multi-mode messaging
US10477022B2 (en) 2017-11-22 2019-11-12 Repnow Inc. Automated telephone host system interaction
CN111199732B (en) * 2018-11-16 2022-11-15 深圳Tcl新技术有限公司 Emotion-based voice interaction method, storage medium and terminal equipment
CN110933239A (en) * 2019-12-30 2020-03-27 秒针信息技术有限公司 Method and apparatus for detecting dialect

Family Cites Families (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5241580A (en) * 1990-12-18 1993-08-31 Bell Communications Research, Inc. Method for validating customized telephone services
US5390232A (en) * 1992-12-28 1995-02-14 At&T Corp. System for control of subscriber progragmmability
US5724406A (en) * 1994-03-22 1998-03-03 Ericsson Messaging Systems, Inc. Call processing system and method for providing a variety of messaging services
US5493606A (en) * 1994-05-31 1996-02-20 Unisys Corporation Multi-lingual prompt management system for a network applications platform
US5572570A (en) * 1994-10-11 1996-11-05 Teradyne, Inc. Telecommunication system tester with voice recognition capability
US6368177B1 (en) * 1995-11-20 2002-04-09 Creator, Ltd. Method for using a toy to conduct sales over a network
US6192108B1 (en) * 1997-09-19 2001-02-20 Mci Communications Corporation Performing automated testing using automatically generated logs
US6606598B1 (en) * 1998-09-22 2003-08-12 Speechworks International, Inc. Statistical computing and reporting for interactive speech applications
US6671672B1 (en) * 1999-03-30 2003-12-30 Nuance Communications Voice authentication system having cognitive recall mechanism for password verification
GB9926134D0 (en) * 1999-11-05 2000-01-12 Ibm Interactive voice response system
US6724864B1 (en) * 2000-01-20 2004-04-20 Comverse, Inc. Active prompts
US6744885B1 (en) * 2000-02-24 2004-06-01 Lucent Technologies Inc. ASR talkoff suppressor
US20040006473A1 (en) * 2002-07-02 2004-01-08 Sbc Technology Resources, Inc. Method and system for automated categorization of statements
US7068643B1 (en) * 2000-11-03 2006-06-27 Intervoice Limited Partnership Extensible interactive voice response
US20020077819A1 (en) * 2000-12-20 2002-06-20 Girardo Paul S. Voice prompt transcriber and test system
US6810111B1 (en) * 2001-06-25 2004-10-26 Intervoice Limited Partnership System and method for measuring interactive voice response application efficiency
US7711570B2 (en) * 2001-10-21 2010-05-04 Microsoft Corporation Application abstraction with dialog purpose
US20030210139A1 (en) * 2001-12-03 2003-11-13 Stephen Brooks Method and system for improved security
US7493259B2 (en) * 2002-01-04 2009-02-17 Siebel Systems, Inc. Method for accessing data via voice
US6804330B1 (en) * 2002-01-04 2004-10-12 Siebel Systems, Inc. Method and system for accessing CRM data via voice
US7729915B2 (en) * 2002-06-12 2010-06-01 Enterprise Integration Group, Inc. Method and system for using spatial metaphor to organize natural language in spoken user interfaces
US7249321B2 (en) * 2002-10-03 2007-07-24 At&T Knowlege Ventures, L.P. System and method for selection of a voice user interface dialogue
US6847711B2 (en) * 2003-02-13 2005-01-25 Sbc Properties, L.P. Method for evaluating customer call center system designs
US7263173B2 (en) * 2003-06-30 2007-08-28 Bellsouth Intellectual Property Corporation Evaluating performance of a voice mail system in an inter-messaging network
US7379535B2 (en) * 2003-06-30 2008-05-27 At&T Delaware Intellectual Property, Inc. Evaluating performance of a voice mail sub-system in an inter-messaging network
US7224776B2 (en) * 2003-12-15 2007-05-29 International Business Machines Corporation Method, system, and apparatus for testing a voice response system
US7308079B2 (en) * 2003-12-15 2007-12-11 International Business Machines Corporation Automating testing path responses to external systems within a voice response system
US7512545B2 (en) * 2004-01-29 2009-03-31 At&T Intellectual Property I, L.P. Method, software and system for developing interactive call center agent personas
US7460650B2 (en) 2004-05-24 2008-12-02 At&T Intellectual Property I, L.P. Method for designing an automated speech recognition (ASR) interface for a customer call center
US7580837B2 (en) * 2004-08-12 2009-08-25 At&T Intellectual Property I, L.P. System and method for targeted tuning module of a speech recognition system
US7110949B2 (en) * 2004-09-13 2006-09-19 At&T Knowledge Ventures, L.P. System and method for analysis and adjustment of speech-enabled systems

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4914590A (en) * 1988-05-18 1990-04-03 Emhart Industries, Inc. Natural language understanding system
US5434777A (en) * 1992-05-27 1995-07-18 Apple Computer, Inc. Method and apparatus for processing natural language
US5581600A (en) * 1992-06-15 1996-12-03 Watts; Martin O. Service platform
US5642518A (en) * 1993-06-18 1997-06-24 Hitachi, Ltd. Keyword assigning method and system therefor
US5633909A (en) * 1994-06-17 1997-05-27 Centigram Communications Corporation Apparatus and method for generating calls and testing telephone equipment
US6202043B1 (en) * 1996-11-12 2001-03-13 Invention Machine Corporation Computer based system for imaging and analyzing a process system and indicating values of specific design changes
US5924105A (en) * 1997-01-27 1999-07-13 Michigan State University Method and product for determining salient features for use in information searching
US6138098A (en) * 1997-06-30 2000-10-24 Lernout & Hauspie Speech Products N.V. Command parsing and rewrite system
US6212517B1 (en) * 1997-07-02 2001-04-03 Matsushita Electric Industrial Co., Ltd. Keyword extracting system and text retrieval system using the same
US6335964B1 (en) * 1997-09-19 2002-01-01 International Business Machines Corp. Voice processing system
US6434524B1 (en) * 1998-09-09 2002-08-13 One Voice Technologies, Inc. Object interactive user interface using speech recognition and natural language processing
US6405170B1 (en) * 1998-09-22 2002-06-11 Speechworks International, Inc. Method and system of reviewing the behavior of an interactive speech recognition application
US6314402B1 (en) * 1999-04-23 2001-11-06 Nuance Communications Method and apparatus for creating modifiable and combinable speech objects for acquiring information from a speaker in an interactive voice response system
US6792086B1 (en) * 1999-08-24 2004-09-14 Microstrategy, Inc. Voice network access provider system and method
US6598022B2 (en) * 1999-12-07 2003-07-22 Comverse Inc. Determining promoting syntax and parameters for language-oriented user interfaces for voice activated services
US20030091163A1 (en) * 1999-12-20 2003-05-15 Attwater David J Learning of dialogue states and language model of spoken information system
US6516051B2 (en) * 2000-06-01 2003-02-04 International Business Machines Corporation Testing voice message applications
US20020072914A1 (en) * 2000-12-08 2002-06-13 Hiyan Alshawi Method and apparatus for creation and user-customization of speech-enabled services
US20050033582A1 (en) * 2001-02-28 2005-02-10 Michael Gadd Spoken language interface
US20030195739A1 (en) * 2002-04-16 2003-10-16 Fujitsu Limited Grammar update system and method
US20030212561A1 (en) * 2002-05-08 2003-11-13 Williams Douglas Carter Method of generating test scripts using a voice-capable markup language
US20040083092A1 (en) * 2002-09-12 2004-04-29 Valles Luis Calixto Apparatus and methods for developing conversational applications
US20050132262A1 (en) * 2003-12-15 2005-06-16 Sbc Knowledge Ventures, L.P. System, method and software for a speech-enabled call routing application using an action-object matrix
US20050254632A1 (en) * 2004-05-12 2005-11-17 Sbc Knowledge Ventures, L.P. System, method and software for transitioning between speech-enabled applications using action-object matrices
US7043435B2 (en) * 2004-09-16 2006-05-09 Sbc Knowledgfe Ventures, L.P. System and method for optimizing prompts for speech-enabled applications

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8593959B2 (en) 2002-09-30 2013-11-26 Avaya Inc. VoIP endpoint call admission
US20080151886A1 (en) * 2002-09-30 2008-06-26 Avaya Technology Llc Packet prioritization and associated bandwidth and buffer management techniques for audio over ip
US7877500B2 (en) 2002-09-30 2011-01-25 Avaya Inc. Packet prioritization and associated bandwidth and buffer management techniques for audio over IP
US7877501B2 (en) 2002-09-30 2011-01-25 Avaya Inc. Packet prioritization and associated bandwidth and buffer management techniques for audio over IP
US8015309B2 (en) 2002-09-30 2011-09-06 Avaya Inc. Packet prioritization and associated bandwidth and buffer management techniques for audio over IP
US8370515B2 (en) 2002-09-30 2013-02-05 Avaya Inc. Packet prioritization and associated bandwidth and buffer management techniques for audio over IP
US7978827B1 (en) 2004-06-30 2011-07-12 Avaya Inc. Automatic configuration of call handling based on end-user needs and characteristics
US8694316B2 (en) 2004-12-01 2014-04-08 Nuance Communications, Inc. Methods, apparatus and computer programs for automatic speech recognition
US20060116877A1 (en) * 2004-12-01 2006-06-01 Pickering John B Methods, apparatus and computer programs for automatic speech recognition
US9502024B2 (en) 2004-12-01 2016-11-22 Nuance Communications, Inc. Methods, apparatus and computer programs for automatic speech recognition
US20060136222A1 (en) * 2004-12-22 2006-06-22 New Orchard Road Enabling voice selection of user preferences
US9083798B2 (en) 2004-12-22 2015-07-14 Nuance Communications, Inc. Enabling voice selection of user preferences
US8571869B2 (en) 2005-02-28 2013-10-29 Nuance Communications, Inc. Natural language system and method based on unisolated performance metric
US8977549B2 (en) 2005-02-28 2015-03-10 Nuance Communications, Inc. Natural language system and method based on unisolated performance metric
US9384190B2 (en) 2005-12-15 2016-07-05 Nuance Communications, Inc. Method and system for conveying an example in a natural language understanding application
US8612229B2 (en) * 2005-12-15 2013-12-17 Nuance Communications, Inc. Method and system for conveying an example in a natural language understanding application
US10192543B2 (en) 2005-12-15 2019-01-29 Nuance Communications, Inc. Method and system for conveying an example in a natural language understanding application
US20070143099A1 (en) * 2005-12-15 2007-06-21 International Business Machines Corporation Method and system for conveying an example in a natural language understanding application
US20080208586A1 (en) * 2007-02-27 2008-08-28 Soonthorn Ativanichayaphong Enabling Natural Language Understanding In An X+V Page Of A Multimodal Application
US20080208594A1 (en) * 2007-02-27 2008-08-28 Cross Charles W Effecting Functions On A Multimodal Telephony Device
US8218751B2 (en) 2008-09-29 2012-07-10 Avaya Inc. Method and apparatus for identifying and eliminating the source of background noise in multi-party teleconferences
US10991368B2 (en) * 2018-06-25 2021-04-27 Hyundai Motor Company Dialogue system and dialogue processing method

Also Published As

Publication number Publication date
US8112282B2 (en) 2012-02-07
US20060069569A1 (en) 2006-03-30
US20080040118A1 (en) 2008-02-14
US7653549B2 (en) 2010-01-26
US20100088101A1 (en) 2010-04-08
US7043435B2 (en) 2006-05-09

Similar Documents

Publication Publication Date Title
US7043435B2 (en) System and method for optimizing prompts for speech-enabled applications
US8117030B2 (en) System and method for analysis and adjustment of speech-enabled systems
US8254534B2 (en) Method and apparatus for automatic telephone menu navigation
US7346151B2 (en) Method and apparatus for validating agreement between textual and spoken representations of words
US20050055216A1 (en) System and method for the automated collection of data for grammar creation
US20170302797A1 (en) Computer-Implemented System And Method For Call Response Processing
US7609829B2 (en) Multi-platform capable inference engine and universal grammar language adapter for intelligent voice application execution
US7242752B2 (en) Behavioral adaptation engine for discerning behavioral characteristics of callers interacting with a VXML-compliant voice application
US7590542B2 (en) Method of generating test scripts using a voice-capable markup language
US20140341361A1 (en) Method for selecting interactive voice response modes using human voice detection analysis
US9288320B2 (en) System and method for servicing a call
EP2781079B1 (en) System and method for servicing a call
KR20060012601A (en) System and method for automated customer feedback
US20130156165A1 (en) System and method for servicing a call
US9210264B2 (en) System and method for live voice and voicemail detection
US20050171792A1 (en) System and method for language variation guided operator selection
US20050246177A1 (en) System, method and software for enabling task utterance recognition in speech enabled systems
Basson et al. User participation and compliance in speech automated telecommunications applications
KR20230149217A (en) Generating method of voice classification model for telemarketing and telemarketing system using the same

Legal Events

Date Code Title Description

AS Assignment
Owner name: SBC KNOWLEDGE VENTURES, L.P., NEVADA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KNOTT, BENJAMIN A.;BUSHEY, ROBERT R.;MARTIN, JOHN M.;REEL/FRAME:018673/0228
Effective date: 20040915

AS Assignment
Owner name: AT&T KNOWLEDGE VENTURES, L.P., TEXAS
Free format text: CHANGE OF NAME;ASSIGNOR:SBC KNOWLEDGE VENTURES, L.P.;REEL/FRAME:018908/0355
Effective date: 20060224

STCB Information on status: application discontinuation
Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION

AS Assignment
Owner name: AT&T ALEX HOLDINGS, LLC, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AT&T INTELLECTUAL PROPERTY I, L.P.;REEL/FRAME:034482/0831
Effective date: 20141210
Owner name: AT&T INTELLECTUAL PROPERTY I, L.P., GEORGIA
Free format text: CHANGE OF NAME;ASSIGNOR:AT&T KNOWLEDGE VENTURES, L.P.;REEL/FRAME:034611/0616
Effective date: 20071001

AS Assignment
Owner name: INTERACTIONS LLC, MASSACHUSETTS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AT&T ALEX HOLDINGS, LLC;REEL/FRAME:034642/0640
Effective date: 20141210

AS Assignment
Owner name: ORIX VENTURES, LLC, NEW YORK
Free format text: SECURITY INTEREST;ASSIGNOR:INTERACTIONS LLC;REEL/FRAME:034677/0768
Effective date: 20141218

AS Assignment
Owner name: SILICON VALLEY BANK, MASSACHUSETTS
Free format text: FIRST AMENDMENT TO INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:INTERACTIONS LLC;REEL/FRAME:036100/0925
Effective date: 20150709

AS Assignment
Owner name: SILICON VALLEY BANK, MASSACHUSETTS
Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:INTERACTIONS LLC;REEL/FRAME:049388/0082
Effective date: 20190603

AS Assignment
Owner name: INTERACTIONS LLC, MASSACHUSETTS
Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN INTELLECTUAL PROPERTY;ASSIGNOR:ORIX GROWTH CAPITAL, LLC;REEL/FRAME:061749/0825
Effective date: 20190606
Owner name: INTERACTIONS CORPORATION, MASSACHUSETTS
Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN INTELLECTUAL PROPERTY;ASSIGNOR:ORIX GROWTH CAPITAL, LLC;REEL/FRAME:061749/0825
Effective date: 20190606

AS Assignment
Owner name: INTERACTIONS LLC, MASSACHUSETTS
Free format text: RELEASE OF SECURITY INTEREST IN INTELLECTUAL PROPERTY RECORDED AT REEL/FRAME: 049388/0082;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:060558/0474
Effective date: 20220624

AS Assignment
Owner name: INTERACTIONS LLC, MASSACHUSETTS
Free format text: RELEASE OF SECURITY INTEREST IN INTELLECTUAL PROPERTY RECORDED AT REEL/FRAME: 036100/0925;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:060559/0576
Effective date: 20220624