WO2017065770A1 - System and method for multi-language communication sequencing - Google Patents

System and method for multi-language communication sequencing

Info

Publication number
WO2017065770A1
Authority
WO
WIPO (PCT)
Prior art keywords
sequence
language
communication
prompt
text
Prior art date
Application number
PCT/US2015/055686
Other languages
French (fr)
Inventor
Scott P. BAUER
James R. ULLYOT
Original Assignee
Interactive Intelligence Group, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Interactive Intelligence Group, Inc. filed Critical Interactive Intelligence Group, Inc.
Priority to EP15906376.7A priority Critical patent/EP3363016A4/en
Priority to CA3005710A priority patent/CA3005710C/en
Priority to AU2015411582A priority patent/AU2015411582B2/en
Priority to CN201580085355.8A priority patent/CN108475503B/en
Priority to PCT/US2015/055686 priority patent/WO2017065770A1/en
Priority to KR1020187013755A priority patent/KR20180082455A/en
Publication of WO2017065770A1 publication Critical patent/WO2017065770A1/en

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 - Speech synthesis; Text to speech systems
    • G10L13/02 - Methods for producing synthetic speech; Speech synthesisers
    • G10L13/033 - Voice editing, e.g. manipulating the voice of the synthesiser
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 - Speech synthesis; Text to speech systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 - Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/685 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using automatically derived transcript of audio data, e.g. lyrics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/451 - Execution arrangements for user interfaces
    • G06F9/454 - Multi-language systems; Localisation; Internationalisation
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 - Speech synthesis; Text to speech systems
    • G10L13/02 - Methods for producing synthetic speech; Speech synthesisers
    • G10L13/04 - Details of speech synthesis systems, e.g. synthesiser structure or memory management
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M3/00 - Automatic or semi-automatic exchanges
    • H04M3/42 - Systems providing special services or facilities to subscribers
    • H04M3/487 - Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M3/493 - Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M2203/00 - Aspects of automatic or semi-automatic exchanges
    • H04M2203/35 - Aspects of automatic or semi-automatic exchanges related to information services provided via a voice call
    • H04M2203/355 - Interactive dialogue design tools, features or methods

Definitions

  • the present invention generally relates to telecommunications systems and methods, as well as business environments. More particularly, the present invention pertains to audio playback in interactions within the business environments.
  • Communication flows may support one or more languages, which may need to be created, removed, or edited.
  • prompts, data, expressions, pauses, and text-to-speech may be added. This may be done through the use of inline selectors, which comprise a prompt or TTS, or through the use of dialogs, which may also provide error feedback.
  • a main sequence may be capable of handling multiple languages which are supported and managed independently of each other.
  • a method for sequencing communication to a party utilizing a plurality of languages in an interactive voice response system comprising the steps of:
  • the communication is in the at least one supported language; enabling, for editing to the sequence, one or more of: prompts, data, expressions, pauses, and text-to-speech; enabling an alternate language for the communication, wherein the alternate language comprises an alternate sequence.
  • a method for sequencing communication to a party utilizing a plurality of languages in an interactive voice response system, the method comprising the steps of: selecting, through a graphical user interface, by a user, a prompt; and creating, by a computer processor, at run-time, a communication sequence using the prompt.
  • a method for sequencing communication to a party utilizing a plurality of languages in an interactive voice response system, the method comprising the steps of: entering, by a user, text into a graphical user interface, wherein the text is transformed into text-to-speech by a computer processor; and creating, by the computer processor, a communication sequence using the text-to-speech.
  • Figures 1a-1d are diagrams illustrating embodiments of inline selectors.
  • Figures 2a-2e are diagrams illustrating embodiments of sequence selectors.
  • Figures 3a-3b are diagrams illustrating embodiments of audio sequences.
  • Figures 4a-4e are diagrams illustrating embodiments of multi-language sequences.
  • Figures 5a-5b are diagrams illustrating embodiments of audio sequence editing.
  • Figure 6 is a diagram illustrating an embodiment of an error.
  • a business environment such as a contact center or enterprise environment
  • interactive voice response systems are often utilized, particularly for inbound and outbound interactions (e.g., calls, web interactions, video chat, etc.).
  • the communication flows for different media types may be designed to automatically answer communications, present parties to interactions with menu choices, and provide routing of the interaction according to a party's choice.
  • Options present may be based on the industry or business in which the flow is used. For example, a bank may offer a customer the option to enter an account number, while another business may ask the communicant their name. Another company may simply have the customer select a number correlated to an option.
  • Systems may also be required to support many languages. In an embodiment, consolidated multi-language support for automatic runtime data playback, speech recognition, and text-to-speech (TTS), may be used.
  • the call flows, or logic for the handling of a communication, that an IVR uses to accomplish interactions may comprise several different languages.
  • a main sequence provides an audio sequence for all supported languages in a flow with the ability for a system user (e.g., flow author) to specify alternate sequences on a per language basis.
  • the main sequence may also be comprised of one or more items.
  • the main sequence may be capable of handling multiple languages which are supported in the IVR flow.
  • the languages may be managed independently of each other in the event an alternate sequence is triggered.
  • error feedback may be triggered by the system and provided to a user for the correction of issues that arise.
  • flows may comprise multiple sequences.
  • the initial greeting in a flow comprises a sequence
  • a communicant may be presented with a menu at which point they may be provided with another sequence, such as 'press 1 for sales', 'press 2 for Jim', etc.
  • the selection of an option in this example, triggers another sequence for presentation to the communicant.
  • Prompts such as "hello” may be created for greeting, for example, and stored within a database which is accessed by a run-time engine, such as a media server like Interactive Intelligence Group, Inc.'s Interaction Edge® product, that executes the IVR logic.
  • a prompt may have one or more resources attached to it.
  • Resources may comprise audio (e.g., a spoken "hello"), TTS (e.g., a synthesized "hello"), or a language (e.g., en-US).
  • the resource may comprise TTS and Audio and a language tag.
  • the resource may comprise TTS or Audio, and a language tag.
  • the language tag may comprise an IETF language tag (or other means for tagging a language) and may be used to identify a resource within a prompt.
  • the language tag may also provide the grouping that is used for audio and TTS.
  • a prompt may only have one prompt resource per language. For example, two resources may not be associated with the German language.
  • audio sequences may be edited where a prompt is followed by TTS or vice versa.
  • a user may decide to specify a prompt or to specify TTS.
  • the prompt or TTS may be turned into a sequence later as business needs dictate. For example, during the development of a flow, TTS may be initially used and at some later time converted to a sequence.
  • Audio sequences comprise an ordered list of indexed items to play back to a communicant interacting with the IVR.
  • the items may include, in no particular order, TTS, data playback, prompts, pauses or breaks, and embedded audio expressions.
  • a main sequence may be designated, with that designated sequence applying to all supported languages set on a flow. Alternate sequences may also be present in the flow. These alternate sequences may be enabled for specific languages, such that when an interaction exits the main sequences, such as by the selection of a new language, the alternate sequence for that new language takes over.
  • the alternate sequence may be duplicated from the main sequence initially and further edited by a flow author. The main sequence may then be used for all supported languages in the flow with the exception of the alternate sequence enabled by the flow author. If alternate sequences are enabled for each supported language in the flow, the main sequence no longer applies since each alternate language overrides the main sequence.
  • the sequencing of wording in prompts can be language specific. In an embodiment, one prompt may be sufficient for all languages, such as a "thanks for calling" prompt.
  • Audio sequences may be configured through a dialog (e.g., a modal dialog or a window) or an inline selector.
  • inline selectors provide an easy means of configuration for a TTS or a prompt.
  • Figures 1a-1d are diagrams illustrating embodiments of inline selectors, indicated generally at 100.
  • an inline selector comprises a one-item sequence, such as a TTS or a prompt.
  • an author may detail languages for the flow to support.
  • an initial greeting may be made using TTS or a previously created prompt.
  • the author may enter TTS for the initial greeting or select a pre-existing prompt, without having to open the sequence editor for configuration.
  • the inline selectors comprise TTS that will be played as an initial greeting.
  • the inline selectors comprise a prompt selection that will be played as an initial greeting.
  • Figure 1a is an example of a one-item sequence utilizing TTS.
  • Figure 1b is an example of a one-item sequence utilizing prompts.
  • the inline selector, such as in Figure 1a and Figure 1b, comprises the "audio" 105.
  • An audio expression may also be included 106.
  • an icon 107 may be present where upon selecting the icon, a window for editing the audio sequence opens.
  • a window may also open for the addition of prompts.
  • errors and their descriptions 108 may be displayed for items, such as in Figure lc, where the error indicates that there is a problem with an audio sequence ("1 or more audio sequences are in error", for example). Attention may be called to the error by highlighting or by a font color change to the error and/or error descriptions, for example.
  • Figure 1d is an embodiment of an audio sequence without an error, indicating that '1 audio sequence is set' 109.
  • An icon, such as the dialogue clouds 110 exemplified in Figure 1d, may also indicate that this entry is not an inline entry of TTS or a prompt.
  • the user may have manually entered the sequence through a dialog as opposed to selecting a TTS or a prompt.
  • Figures 2a-2d are diagrams generally illustrating embodiments of sequence selectors. For simplicity, each of Figures 2a-2d illustrates a single supported language. These windows generally indicate examples for configuring the dialog and sequence editing of audio expressions.
  • the window illustrates the audio expression is a TTS 201.
  • a user may decide to add additional dialog, such as "Add Prompt", "Add Data", "Add TTS", "Add Expression", and "Add Blank Audio", to name a few non-limiting examples. These options may be displayed in a task bar 202.
  • “Add TTS” has been selected.
  • an additional item in the sequence may be created.
  • this is identified as second in the sequence and is "Text to Speech" 203. Any number of items may be added to the sequence with the order of items editable.
  • a TTS string may additionally be promoted to a prompt and audio added in one or more languages, as further described in Figure 2c.
  • Blank Audio has initially been selected 204.
  • Blank audio may allow a user to configure the system to delay or pause in playback for a specified duration. In an embodiment, this may be performed from a drop-down menu 205, such as seen in Figure 2b. Different durations may be presented for selection, such as 100 ms, 250 ms, 500 ms, etc.
  • simple TTS may be promoted to managed prompts that include audio and TTS for multiple languages, such as illustrated in Figure 2c.
  • a flow author may specify the prompt name 206 and description 207 in order to create the prompt.
  • the name is "ThanksforContacting” and the description "Used at the end of an interaction to say thanks for contacting us”.
  • the TTS is set on each of the prompt resources, which are determined by the supported languages set on the flow 208.
  • in Figure 2c, English (United States) has been designated.
  • a flow author may specify the audio to be included as "thank you for contacting us" 209.
  • in an embodiment, two resources may be presented as prompt resources, if the supported languages are English and Spanish, for example.
  • Additional data may also be included in the main sequence.
  • in Figure 2d, for example, four items have been included in the main sequence.
  • Each item may be created by selecting the dialog "Add Data" from the task bar 202.
  • Different types of data may be added, such as: dates and/or times, currencies, numbers that may represent customer information, etc.
  • different options may become available from the system for a user to choose.
  • data in item 1, 208 may comprise currency.
  • a user may decide to accept major units only from the options available.
  • options may also include selecting between feminine, masculine, neuter, articles, etc., 210.
  • a sequence may also be altered/reordered/removed dependent on the language.
  • a veterinary clinic has an IVR with a call flow running in Spanish - United States (es-US). Confirmation with a caller is being performed automatically as to what pets the caller has on file. For this particular customer they have one female cat on file, which needs confirmation.
  • in this es-US (Spanish - United States) example, the sequence comprises the TTS "Usted tiene" ("you have"), a number played with feminine gender, and the TTS "gata" ("female cat").
  • the generated expression comprises: Append(ToAudioTTS("Usted tiene"), ToAudioNumber(1, Language.Gender.Feminine), ToAudioTTS("gata")).
  • Articles may also be supported for languages. Meta-data may be retained about a language regarding whether or not it supports gender, which gender types exist (e.g., masculine, feminine, neuter), or case. If one of those options is specified by a flow author and the runtime has a special audio handler set up for that option, the audio from that handler will be played back to the communicant.
  • case and gender may also be combined together on playback and are not exclusive of each other. For example, using "ToAudioNumber(1, Language.Gender.masculine, Language.Case.article)", the gender options are grouped together and then the case options are grouped together. In an embodiment, the case and gender may be supported in the same dropdown menu of a user interface.
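The Append/ToAudioNumber expression above can be illustrated with a minimal Python sketch. The function names mirror the patent's expression language, but the list-of-tuples audio representation and the evaluation behavior are assumptions for illustration only, not the disclosed runtime.

```python
def to_audio_tts(text):
    # One TTS segment to speak verbatim.
    return [("tts", text)]

def to_audio_number(value, gender=None, case=None):
    # The runtime may pick gender/case-specific audio if a handler exists;
    # here we simply carry the requested options alongside the value.
    return [("number", value, gender, case)]

def append(*segments):
    # Concatenate audio segments in order, as in Append(...).
    out = []
    for segment in segments:
        out.extend(segment)
    return out

# Append(ToAudioTTS("Usted tiene"), ToAudioNumber(1, Feminine), ToAudioTTS("gata"))
playback = append(
    to_audio_tts("Usted tiene"),
    to_audio_number(1, gender="feminine"),
    to_audio_tts("gata"),
)
```

The resulting list is an ordered playback plan: the media server would speak "Usted tiene", render the number 1 with a feminine-gendered recording if one is configured, then speak "gata".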
  • Errors may also be automatically indicated by the system during sequence editing.
  • an example is provided of an in-line error, 211.
  • In-line errors may be indicated by means such as a color change, a warning, high-lighting, icons, etc.
  • the item entry field is highlighted.
  • a user has added an item to the sequence, but did not specify expression text in the dialog.
  • the system recognizes an error has occurred and provides an indication, such as feedback, to allow the user to correct the error in a quick edit form.
  • an editor may be opened which provides more detailed feedback, such as converting audio to numbers, for example.
  • an indication is being made that "There is no expression defined" 212, allowing the user to quickly pinpoint the error and, in this example, define an expression.
  • the expression may become: "ToAudioTTS(Substring(Flow.CustomerSSN, Length(Flow.CustomerSSN)-4, 4), Format.String.PlayChars)".
  • the expression in this example is being used to extract part of the data.
  • the data comprises the social security number of the customer with the last four characters picked to be read back to the customer as spoken integers in the language in which the flow is running.
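The last-four-characters playback described above can be sketched as follows, assuming a 0-based Substring and a simple tuple representation of audio items (both assumptions; the patent does not specify these details):

```python
def substring(text, start, length):
    # Mirrors Substring(value, start, length); a 0-based start is assumed here.
    return text[start:start + length]

def to_audio_play_chars(text):
    # Read each character back individually, as with Format.String.PlayChars.
    return [("char", ch) for ch in text]

ssn = "123456789"                              # illustrative data only
last_four = substring(ssn, len(ssn) - 4, 4)    # "6789"
playback = to_audio_play_chars(last_four)
```

Each ("char", ...) item would be rendered as a spoken integer in the language in which the flow is running.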
  • Expressions may be used to also perform mathematical calculations and text manipulation, such as adding orders together or calculating a delivery date.
  • Expressions may also comprise grammars that return a type of audio to provide more control with the type of data played back. In an embodiment, this may also be applied to communications and/or to flows that run while a communicant (e.g., caller) is waiting on hold for an agent (e.g., In-Queue flows).
  • Audio sequences may be edited.
  • in Figures 3a and 3b, examples of audio sequences are generally provided.
  • An audio sequence may be presented and a user may decide to use the large/long expression editor.
  • index 1, 301 describes a prompt, such as "Prompt.Hello" 302, followed by an item for TTS 303.
  • a user may indicate that they want the time to be provided 304.
  • Another data item 305 may be added to provide the current time 306.
  • integrated expression help may be provided such that a user may obtain more detailed error feedback, if available.
  • the output of the audio sequencing editor comprises an expression.
  • the system may append to an audio prompt the custom audio "the time is” followed by an insert of the time, as exemplified with the expression
  • an expression may be generated for that language in addition to an expression generated for the main sequence. Items within the audio sequence editor are validated for correctness individually in order to display appropriate errors for each sequence item. In an embodiment, if one or more sequence items are in error within a sequence, either the main sequence or language specific sequence tab near the dialog will reflect that it is in error as well.
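The per-item validation described here might look like the following sketch; the item schema is an illustrative assumption, and the error strings are taken from the messages shown in the figures:

```python
def validate_sequence(items):
    """Validate each sequence item individually and collect per-item errors.

    Each item is a dict with a "type" and type-specific fields. An expression
    item with no expression text, or a prompt item with no prompt selected,
    is flagged so the editor can highlight exactly that index.
    """
    errors = {}
    for index, item in enumerate(items, start=1):
        if item.get("type") == "expression" and not item.get("text"):
            errors[index] = "There is no expression defined"
        elif item.get("type") == "prompt" and not item.get("prompt"):
            errors[index] = "Select prompt"
    return errors  # a non-empty result marks the sequence tab as in error


seq = [
    {"type": "prompt", "prompt": "Prompt.Hello"},
    {"type": "expression", "text": ""},
]
validate_sequence(seq)  # -> {2: 'There is no expression defined'}
```

Because errors are keyed by item index, the dialog can highlight the specific indexed entry, as in the in-line error examples above.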
  • Figures 4a-4d are diagrams generally illustrating multi-language sequences.
  • a plurality of language sequences may be defined such that there may be one or more main language sequences, or a main language with alternate language sequences, to name some non-limiting examples. Errors may automatically indicate if a main language sequence does not support an alternate language sequence.
  • TTS may be selected for a language in which the TTS engine may be unable to read the selected language's TTS back. A validation error may thus be generated reflecting that TTS cannot be used in that language.
  • in Figure 4a, an example of a multi-language sequence is provided.
  • the audio sequence presented comprises a prompt 404, such as "PromptHello" 405, followed by an item for TTS 406, such as "The time is" 407.
  • a third item for data 408 is also presented to provide the current time, such as "Flow.currentTime” 409.
  • a language such as es-US 403 may be designated for the main sequence, with edits to items being made.
  • the item for TTS 406 may be edited to "es el momento" 407 and the sequence reordered with the item for TTS moved into position 3 and the data item 408 moved into position 2.
  • Alternate sequences may be enabled for the main language, such as fr-CA 402, as illustrated in Figure 4c.
  • an indicator may confirm with the user that they want to enable alternate sequences for French (Canada) 410.
  • Each language may have different pieces of information associated with it, as generally exemplified in Figure 4d.
  • information such as "Supports runtime data playback” 411, "Supports speech recognition” 412, and “Supports text to speech” 413, may be included to allow for more information about what the system supports.
  • a "yes" after each piece of information indicates that these are supported in the desired language.
  • indications may be made as to whether that language sequence supports certain features or not.
  • the main audio sequence may not be designated to play at run time, whether by error or intentionally.
  • an indicator 414 may let the user know that this sequence will not play. As a result, the system may revert to one of the alternate sequences.
  • Figures 5a-5c are general diagrams of different options available for audio sequence editing.
  • in item 3, 501, of the dialog exemplified in Figure 5a, for example, data for playback may be chosen.
  • options may include to present time as a "date", “date and time”, “month”, etc.
  • the options may be presented in a drop down menu 503, for example, or by another means such as a separate window.
  • the indexed item may be highlighted and include a tool tip indicating that an error has occurred.
  • item 1, 601 has been highlighted 602 to indicate an error.
  • the message "Select prompt" is provided 603 to the user.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Machine Translation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A system and method are presented for multi-language communication sequencing. Communication flows may support one or more languages, which may need to be created, removed, or edited. During sequence editing, prompts, data, expressions, pauses, and text-to-speech may be added. This may be done through the use of inline selectors, which comprise a prompt or TTS, or through the use of dialogs, which may also provide error feedback. A main sequence may be capable of handling multiple languages which are supported and managed independently of each other.

Description

SYSTEM AND METHOD FOR MULTI-LANGUAGE COMMUNICATION SEQUENCING
BACKGROUND
[0001] The present invention generally relates to telecommunications systems and methods, as well as business environments. More particularly, the present invention pertains to audio playback in interactions within the business environments.
SUMMARY
[0002] A system and method are presented for multi-language communication sequencing.
Communication flows may support one or more languages, which may need to be created, removed, or edited. During sequence editing, prompts, data, expressions, pauses, and text-to-speech may be added. This may be done through the use of inline selectors, which comprise a prompt or TTS, or through the use of dialogs, which may also provide error feedback. A main sequence may be capable of handling multiple languages which are supported and managed independently of each other.
[0003] In one embodiment, a method is presented for sequencing communication to a party utilizing a plurality of languages in an interactive voice response system, the method comprising the steps of:
creating, by a user of the system, a prompt, wherein the prompt has a plurality of resources attached; enabling, by the interactive voice response system, at least one supported language for the
communication, wherein the communication is in the at least one supported language; enabling, for editing to the sequence, one or more of: prompts, data, expressions, pauses, and text-to-speech; enabling an alternate language for the communication, wherein the alternate language comprises an alternate sequence.
[0004] In another embodiment, a method is presented for sequencing communication to a party utilizing a plurality of languages in an interactive voice response system, the method comprising the steps of: selecting, through a graphical user interface, by a user, a prompt; and creating, by a computer processor, at run-time, a communication sequence using the prompt.
[0005] In another embodiment, a method is presented for sequencing communication to a party utilizing a plurality of languages in an interactive voice response system, the method comprising the steps of: entering, by a user, text into a graphical user interface, wherein the text is transformed into text-to-speech by a computer processor; and creating, by the computer processor, a communication sequence using the text-to-speech.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] Figures 1a-1d are diagrams illustrating embodiments of inline selectors.
[0007] Figures 2a-2e are diagrams illustrating embodiments of sequence selectors.
[0008] Figures 3a-3b are diagrams illustrating embodiments of audio sequences.
[0009] Figures 4a-4e are diagrams illustrating embodiments of multi-language sequences.
[0010] Figures 5a-5b are diagrams illustrating embodiments of audio sequence editing.
[0011] Figure 6 is a diagram illustrating an embodiment of an error.
DETAILED DESCRIPTION
[0012] For the purposes of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. Any alterations and further modifications in the described embodiments, and any further applications of the principles of the invention as described herein are contemplated as would normally occur to one skilled in the art to which the invention relates.
[0013] In a business environment, such as a contact center or enterprise environment, interactive voice response systems are often utilized, particularly for inbound and outbound interactions (e.g., calls, web interactions, video chat, etc.). The communication flows for different media types may be designed to automatically answer communications, present parties to interactions with menu choices, and provide routing of the interaction according to a party's choice. Options present may be based on the industry or business in which the flow is used. For example, a bank may offer a customer the option to enter an account number, while another business may ask the communicant their name. Another company may simply have the customer select a number correlated to an option. Systems may also be required to support many languages. In an embodiment, consolidated multi-language support for automatic runtime data playback, speech recognition, and text-to-speech (TTS), may be used.
[0014] In an embodiment, the call flows, or logic for the handling of a communication, that an IVR uses to accomplish interactions may comprise several different languages. In the management of these flows, a main sequence provides an audio sequence for all supported languages in a flow with the ability for a system user (e.g., flow author) to specify alternate sequences on a per language basis. The main sequence may also be comprised of one or more items. The main sequence may be capable of handling multiple languages which are supported in the IVR flow. The languages may be managed independently of each other in the event an alternate sequence is triggered. During editing of the sequences, error feedback may be triggered by the system and provided to a user for the correction of issues that arise.
[0015] In an embodiment, flows may comprise multiple sequences. For example, the initial greeting in a flow comprises a sequence, a communicant may be presented with a menu at which point they may be provided with another sequence, such as 'press 1 for sales', 'press 2 for Jim', etc. The selection of an option, in this example, triggers another sequence for presentation to the communicant.
[0016] Because business environments are not always consistent, changes may be needed to the audio without having to deconstruct the IVR. The TTS of a new prompt on the related prompt resources will remain the same TTS set by the author in the flow, which can be modified as appropriate.
[0017] Prompts such as "hello" may be created for greeting, for example, and stored within a database which is accessed by a run-time engine, such as a media server like Interactive Intelligence Group, Inc.'s Interaction Edge® product, that executes the IVR logic. A prompt may have one or more resources attached to it. Resources may comprise audio (e.g., a spoken "hello"), TTS (e.g., a synthesized "hello"), or a language (e.g., en-US). In an embodiment, the resource may comprise TTS and Audio and a language tag. In another example, the resource may comprise TTS or Audio, and a language tag. The language tag may comprise an IETF language tag (or other means for tagging a language) and may be used to identify a resource within a prompt. The language tag may also provide the grouping that is used for audio and TTS. In an embodiment, a prompt may only have one prompt resource per language. For example, two resources may not be associated with the German language.
[0018] In an embodiment, audio sequences may be edited where a prompt is followed by TTS or vice versa. A user may decide to specify a prompt or to specify TTS. The prompt or TTS may be turned into a sequence later as business needs dictate. For example, during the development of a flow, TTS may be initially used and at some later time converted to a sequence.
[0019] Audio sequences comprise an ordered list of indexed items to play back to a communicant interacting with the IVR. The items may include, in no particular order, TTS, data playback, prompts, pauses or breaks, and embedded audio expressions. A main sequence may be designated, with that designated sequence applying to all supported languages set on a flow. Alternate sequences may also be present in the flow. These alternate sequences may be enabled for specific languages, such that when an interaction exits the main sequence, such as by the selection of a new language, the alternate sequence for that new language takes over. The alternate sequence may initially be duplicated from the main sequence and further edited by a flow author. The main sequence may then be used for all supported languages in the flow, with the exception of any alternate sequence enabled by the flow author. If alternate sequences are enabled for every supported language in the flow, the main sequence no longer applies, since each alternate sequence overrides it. Thus, the sequencing of wording in prompts can be language specific. In an embodiment, one prompt may be sufficient for all languages, such as a
"thanks for calling" prompt. Within that prompt, each language has the appropriate audio for use in the prompt, which is utilized in the main sequence.
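The main/alternate sequence selection just described can be approximated with a simple lookup; the function and variable names below are hypothetical, and the sequence items are abbreviated strings rather than full sequence objects.

```python
def resolve_sequence(language, main_sequence, alternate_sequences):
    """Return the alternate sequence enabled for `language` if one exists,
    otherwise fall back to the main sequence shared by all supported languages."""
    return alternate_sequences.get(language, main_sequence)

main = ["Prompt.Hello", 'TTS:"The time is"', "Data:Flow.currentTime"]
# An alternate sequence enabled only for es-US (reordered, as in Figure 4b).
alternates = {"es-US": ["Prompt.Hello", "Data:Flow.currentTime", 'TTS:"es el momento"']}

print(resolve_sequence("en-US", main, alternates))  # main sequence applies
print(resolve_sequence("es-US", main, alternates))  # alternate overrides main
```

Languages without an enabled alternate (en-US here) keep playing the main sequence; es-US switches to its alternate.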
[0020] Audio sequences may be configured through a dialog (e.g., a modal dialog or a window) or an inline selector. In an embodiment, inline selectors provide an easy means of configuring TTS or a prompt. Figures 1a-1d are diagrams illustrating embodiments of inline selectors, indicated generally at 100. In an embodiment, an inline selector comprises a one-item sequence, such as a TTS or a prompt.
[0021] With regard to interactions, an author may detail languages for the flow to support. In an embodiment, an initial greeting may be made using TTS or a previously created prompt. For example, the author may enter TTS for the initial greeting or select a pre-existing prompt, without having to open the sequence editor for configuration. In an embodiment, the inline selectors comprise TTS that will be played as an initial greeting. In another embodiment, the inline selectors comprise a prompt selection that will be played as an initial greeting.
[0022] Figure 1a is an example of a one-item sequence utilizing TTS, while Figure 1b is an example of a one-item sequence utilizing prompts. The inline selector, such as in Figure 1a and Figure 1b, comprises the "audio" 105. An audio expression may also be included 106. Along with the audio expression, an icon 107 may be present; upon selecting the icon, a window for editing the audio sequence opens. A window may also open for the addition of prompts. These editing windows are described in greater detail in Figures 2a-2e below.
[0023] In an embodiment, errors and their descriptions 108 may be displayed for items, such as in Figure 1c, where the error indicates that there is a problem with an audio sequence ("1 or more audio sequences are in error", for example). Attention may be called to the error by highlighting or by a font color change to the error and/or error descriptions, for example.
[0024] Figure 1d is an embodiment of an audio sequence without an error, indicating that '1 audio sequence is set' 109. An icon, such as the dialogue clouds 110 exemplified in Figure 1d, may also indicate that this entry is not an inline entry of TTS or a prompt. In an embodiment, the user may have manually entered the sequence through a dialog as opposed to selecting a TTS or a prompt.
[0025] Figures 2a-2d are diagrams generally illustrating embodiments of sequence selectors. Each of Figures 2a-2d illustrates a single supported language, for simplicity. These windows generally indicate examples for configuring the dialog and sequence editing of audio expressions. In Figure 2a, the window illustrates that the audio expression is TTS 201. A user may decide to add additional dialog items, such as "Add Prompt", "Add Data", "Add TTS", "Add Expression", and "Add Blank Audio", to name a few non-limiting examples. These options may be displayed in a task bar 202. In Figure 2a, "Add TTS" has been selected. As a result, an additional item in the sequence may be created. In Figure 2a, this is identified as second in the sequence and is "Text to Speech" 203. Any number of items may be added to the sequence, with the order of items editable. In an embodiment, a TTS string may additionally be promoted to a prompt and audio added in one or more languages, as further described in Figure 2c.
[0026] In Figure 2b, "Add Blank Audio" has initially been selected 204. Blank audio may allow a user to configure the system to delay or pause in playback for a specified duration. In an embodiment, this may be performed from a drop-down menu 205, such as seen in Figure 2b. Different durations may be presented for selection, such as 100 ms, 250 ms, 500 ms, etc.
[0027] Further, simple TTS may be promoted to managed prompts that include audio and TTS for multiple languages, such as illustrated in Figure 2c. A flow author may specify the prompt name 206 and description 207 in order to create the prompt. Here, the name is "ThanksforContacting" and the description is "Used at the end of an interaction to say thanks for contacting us". After the prompt has been created in the user interface, the TTS is set on each of the prompt resources, which are determined by the supported languages set on the flow 208. In Figure 2c, English, United States, has been designated. A flow author may specify the audio to be included as "thank you for contacting us" 209. In an
embodiment, two resources may be presented as prompt resources, if the supported languages are English and Spanish, for example.
[0028] Additional data may also be included in the main sequence. In Figure 2d, for example, four items have been included in the main sequence. Each item may be created by selecting the dialog "Add Data" from the task bar 202. Different types of data may be added, such as: dates and/or times, currencies, numbers that may represent customer information, etc. Depending on the type of data selected, different options may become available from the system for a user to choose. For example, data in item 1, 208, may comprise currency. A user may decide to accept major units only from the options available. For item 2, 209, a decimal has been selected. A user may decide that they want the system to speak each digit, speak the entire value, etc.
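The per-type playback options above (currency accepted as major units only, a decimal spoken digit by digit or as the entire value) can be sketched as follows; the option names and rendering rules are invented for illustration only.

```python
def render_data_item(value, data_type, option):
    """Render a data item's spoken form according to a user-selected option."""
    if data_type == "currency" and option == "major_units_only":
        return str(int(value))  # drop minor units: 12.99 -> "12"
    if data_type == "decimal" and option == "speak_each_digit":
        # each digit (and the decimal point) is spoken separately
        return " ".join(ch for ch in str(value) if ch.isdigit() or ch == ".")
    return str(value)  # "entire value" and other defaults

print(render_data_item(12.99, "currency", "major_units_only"))  # "12"
print(render_data_item(3.14, "decimal", "speak_each_digit"))    # "3 . 1 4"
```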
[0029] In certain languages that utilize gender and/or case, options may also include selecting between feminine, masculine, neuter, articles, etc., 210. A sequence may also be altered/reordered/removed depending on the language.

[0030] In an example of gender utilization, a veterinary clinic has an IVR with a call flow running in Spanish - United States (es-US). Confirmation with a caller is being performed automatically as to which pets the caller has on file. This particular caller has one female cat on file, which needs confirmation. An example sequence follows, such that:
[0031] TTS: "Usted tiene" (you have)
[0032] Data: 1, Female
[0033] TTS: "gata"
[0034] At runtime the IVR would return: "Usted tiene una gata".
[0035] The generated expression comprises: Append(ToAudioTTS("Usted tiene"), ToAudioNumber(1, Language.Gender.Feminine), ToAudioTTS("gata")).
[0036] In an embodiment, where numbers submitted to 'ToAudioNumber' have gender-specific representations, the runtime playback will play the correct prompt. For the example of the veterinary clinic above, "una" is used since the number '1' must agree with the gender of the noun (the female cat) that follows.
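One way the gender-aware lookup behind 'ToAudioNumber' might work is sketched below. The lookup table covers only the es-US example from the veterinary clinic and is an assumption; a real runtime would presumably resolve gendered forms from its language resources.

```python
# Gender-specific representations, keyed by (language, number, gender).
GENDERED_NUMBERS = {
    ("es-US", 1, "feminine"): "una",
    ("es-US", 1, "masculine"): "un",
}

def to_audio_number(number, language, gender=None):
    # Play the gender-specific form when one is defined; otherwise the plain number.
    return GENDERED_NUMBERS.get((language, number, gender), str(number))

# The veterinary clinic sequence: TTS, gendered number, TTS.
parts = ["Usted tiene", to_audio_number(1, "es-US", "feminine"), "gata"]
print(" ".join(parts))  # "Usted tiene una gata"
```

Numbers without a gendered form (e.g. 2 in this toy table) fall back to their plain rendering.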
[0037] Articles may also be supported for languages. Meta-data may be retained about a language regarding whether or not it supports gender, which gender types exist (e.g., masculine, feminine, neuter), and whether it supports case. If one of those options is specified by a flow author and the runtime has a special audio handler set up for that option, that handler's audio will be played back to the communicant. In an embodiment, case and gender may also be combined together on playback and are not exclusive of each other. For example, using "ToAudioNumber(1, Language.Gender.Masculine, Language.Case.Article)", the gender options are grouped together and then the case options are grouped together. In an embodiment, case and gender may be supported in the same dropdown menu of a user interface.
[0038] Errors may also be automatically indicated by the system during sequence editing. In Figure 2e, an example is provided of an in-line error 211. In-line errors may be indicated by means such as a color change, a warning, highlighting, icons, etc. In Figure 2e, the item entry field is highlighted. In this example, a user has added an item to the sequence but did not specify expression text in the dialog. The system recognizes that an error has occurred and provides an indication, such as feedback, to allow the user to correct the error in a quick-edit form. In embodiments with longer expressions, an editor may be opened which provides more detailed feedback, such as converting audio to numbers, for example. In Figure 2e, an indication is being made that "There is no expression defined" 212, allowing the user to quickly pinpoint the error and, in this example, define an expression.
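The in-line validation above can be sketched as a per-item check; the item shape and error string are assumptions modeled on the Figure 2e example.

```python
def validate_item(item):
    """Return a list of error messages for a single sequence item."""
    errors = []
    # An expression item must have non-empty expression text (Figure 2e).
    if item.get("type") == "expression" and not item.get("text", "").strip():
        errors.append("There is no expression defined")
    return errors

ok = {"type": "expression", "text": 'ToAudioTTS("Good Morning")'}
bad = {"type": "expression", "text": ""}
print(validate_item(ok))   # []
print(validate_item(bad))  # ['There is no expression defined']
```

An editor could run this check on each indexed item and highlight any item whose error list is non-empty.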
[0039] Expressions may also be included in the sequence graphical user interface, which allow for greater flexibility, such as, for example, 'ToAudioTTS(If(Hour(GetCurrentDateTimeUTC())>=12, "Good Afternoon", "Good Morning"))'. If a caller is in Greenwich, England, the expression would play TTS of "Good Morning" if running before 12:00 PM and "Good Afternoon" otherwise. Expressions may also allow for dynamic playback within a sequence, such as following the TTS of "the last four digits of your social security number are". The expression may become: "ToAudioTTS(Substring(Flow.CustomerSSN, Length(Flow.CustomerSSN)-4, 4), Format.String.PlayChars)". The expression in this example is being used to extract part of the data. The data comprises the social security number of the customer, with the last four characters selected to be read back to the customer as spoken integers in the language in which the flow is running. Expressions may also be used to perform mathematical calculations and text manipulation, such as adding orders together or calculating a delivery date.
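The two quoted expressions can be approximated in Python using the standard library in place of the flow expression language; the helper names below are hypothetical stand-ins for the flow functions.

```python
from datetime import datetime, timezone

def greeting_tts(now=None):
    """Equivalent of If(Hour(GetCurrentDateTimeUTC()) >= 12, ...)."""
    now = now or datetime.now(timezone.utc)
    return "Good Afternoon" if now.hour >= 12 else "Good Morning"

def last_four_chars(ssn):
    """Equivalent of Substring(Flow.CustomerSSN, Length(...) - 4, 4):
    the last four characters, to be read back character by character."""
    return ssn[-4:]

print(greeting_tts(datetime(2015, 10, 15, 9, 0, tzinfo=timezone.utc)))  # "Good Morning"
print(last_four_chars("123-45-6789"))                                   # "6789"
```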
[0040] Expressions may also comprise grammars that return a type of audio to provide more control with the type of data played back. In an embodiment, this may also be applied to communications and/or to flows that run while a communicant (e.g., caller) is waiting on hold for an agent (e.g., In-Queue flows).
[0041] Audio sequences may be edited. In Figures 3a and 3b, examples of audio sequences are generally provided. An audio sequence may be presented and a user may decide to use the large/long expression editor. In Figure 3a, for example, index 1, 301, describes a prompt, such as "Prompt.Hello" 302, followed by an item for TTS 303. A user may indicate that they want the time to be provided 304. Another data item 305 may be added to provide the current time 306. In Figure 3b, integrated expression help may be provided such that a user may obtain more detailed error feedback, if available. The output of the audio sequencing editor comprises an expression. Here, the system may append to an audio prompt the custom audio "the time is" followed by an insert of the time, as exemplified by the expression
"Append(ToAudio(Prompt.Hello), ToAudioTTS("The time is"), ToAudioTime(Flow.currentTime))" 307.
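The editor's output expression could plausibly be assembled from the ordered items like so; this is an illustrative sketch of the generation step, not the actual generator, and the item tuple shape is an assumption.

```python
def to_expression(items):
    """Build an Append(...) expression string from ordered (kind, value) items."""
    rendered = []
    for kind, value in items:
        if kind == "prompt":
            rendered.append(f"ToAudio({value})")
        elif kind == "tts":
            rendered.append(f'ToAudioTTS("{value}")')
        elif kind == "data":
            rendered.append(f"ToAudioTime({value})")
    return f"Append({', '.join(rendered)})"

sequence = [("prompt", "Prompt.Hello"),
            ("tts", "The time is"),
            ("data", "Flow.currentTime")]
print(to_expression(sequence))
# Append(ToAudio(Prompt.Hello), ToAudioTTS("The time is"), ToAudioTime(Flow.currentTime))
```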
[0042] In embodiments where an alternate language is enabled, for example, an expression may be generated for that language in addition to an expression generated for the main sequence. Items within the audio sequence editor are validated for correctness individually in order to display appropriate errors for each sequence item. In an embodiment, if one or more sequence items are in error within a sequence, either the main sequence or the language-specific sequence tab near the dialog will reflect that it is in error as well.
[0043] Figures 4a-4d are diagrams generally illustrating multi-language sequences. A plurality of language sequences may be defined such that there may be one or more main language sequences, or a main language with alternate language sequences, to name some non-limiting examples. Errors may be automatically indicated if a main language sequence does not support an alternate language sequence. For example, TTS may be selected for a language for which the TTS engine is unable to read the selected language's TTS back. A validation error may thus be generated reflecting that TTS cannot be used in that language. In Figure 4a, an example of a multi-language sequence is provided. Languages which may be supported include US English (en-US) 401, Canadian French (fr-CA) 402, and US Spanish (es-US) 403, to name a few non-limiting examples. The audio sequence presented comprises a prompt 404, such as "Prompt.Hello" 405, followed by an item for TTS 406, such as "The time is" 407. A third item for data 408 is also presented to provide the current time, such as "Flow.currentTime" 409.
[0044] In Figure 4b, a language, such as es-US 403, may be designated for the main sequence, with edits to items being made. In this example, the item for TTS 406 may be edited to "es el momento" 407 and the sequence reordered, with the item for TTS moved into position 3 and the data item 408 moved into position 2. Alternate sequences may be enabled for the main language, such as fr-CA 402, as illustrated in Figure 4c. In an embodiment, an indicator may confirm with the user that they want to enable alternate sequences for French (Canada) 410.

[0045] Each language may have different pieces of information associated with it, as generally exemplified in Figure 4d. For example, information such as "Supports runtime data playback" 411, "Supports speech recognition" 412, and "Supports text to speech" 413 may be included to allow for more information about what the system supports. In this non-limiting example, a "yes" after each piece of information indicates that these are supported in the desired language. Thus, indications may be made as to whether that language sequence supports certain features or not.
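The per-language capability metadata of Figure 4d might be consulted during validation roughly as follows. The capability values below are invented purely to exercise the error path described in [0043] (they do not reflect the figure's all-"yes" example), and the dictionary keys are assumed names.

```python
# Assumed shape for per-language capability metadata (cf. Figure 4d).
LANGUAGE_CAPS = {
    "en-US": {"runtime_data_playback": True, "speech_recognition": True, "tts": True},
    "fr-CA": {"runtime_data_playback": True, "speech_recognition": True, "tts": False},
}

def validate_tts_use(language):
    """Return validation errors if TTS is used in a language that lacks TTS support."""
    caps = LANGUAGE_CAPS.get(language, {})
    if not caps.get("tts"):
        return [f"TTS cannot be used in {language}"]
    return []

print(validate_tts_use("en-US"))  # []
print(validate_tts_use("fr-CA"))  # ['TTS cannot be used in fr-CA']
```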
[0046] In another embodiment, the main audio sequence may not be designated to play at run time, whether by error or intentionally. In this scenario, as generally indicated in Figure 4e, an indicator 414 may let the user know that this sequence will not play. As a result, the system may revert to one of the alternate sequences.
[0047] Figures 5a-5c are general diagrams of different options available for audio sequence editing. In item 3, 501, of the dialog exemplified in Figure 5a, for example, data for playback may be chosen. In an embodiment, if the current time is indicated (e.g., "Flow.currentTime") in an item, options may include presenting the time as a "date", "date and time", "month", etc. The options may be presented in a drop-down menu 503, for example, or by another means such as a separate window.
[0048] In an embodiment, such as illustrated generally in Figure 5b, if an integer is indicated in the data item 504 (e.g., "Flow.decimal" 505), options may be presented 506 which include having the synthesized speech "speak each digit", provide the "entire value", provide the value "as percentage", etc.
[0049] In embodiments where errors arise, these may be indicated to a user, such as generally presented in Figure 6. In an embodiment, the indexed item may be highlighted and include a tool tip indicating that an error has occurred. In this example, item 1, 601, has been highlighted 602 to indicate an error. Within the item, the message "Select prompt" is provided 603 to the user.
[0050] Application of the embodiments described herein is not limited to calls. Communications in general may be applied, such as text-based interactions like web chat, to name a non-limiting example. In the case of a web chat, the runtime might utilize the TTS component of a prompt resource instead of trying to pick up audio. As such, TTS of "Hello" on a web chat would be the text 'Hello'.

[0051] While the invention has been illustrated and described in detail in the drawings and foregoing description, the same is to be considered as illustrative and not restrictive in character, it being understood that only the preferred embodiment has been shown and described and that all equivalents, changes, and modifications that come within the spirit of the invention as described herein and/or by the following claims are desired to be protected.
[0052] Hence, the proper scope of the present invention should be determined only by the broadest interpretation of the appended claims so as to encompass all such modifications as well as all relationships equivalent to those illustrated in the drawings and described in the specification.

Claims

1. A method for sequencing communication to a party utilizing a plurality of languages in an
interactive voice response system, the method comprising the steps of:
a. creating, by a user of the system, a prompt, wherein the prompt has a plurality of resources attached;
b. enabling, by the interactive voice response system, at least one supported language for the communication, wherein the communication is in the at least one supported language;
c. enabling, for editing to the sequence, one or more of: prompts, data, expressions, pauses, and text-to-speech; and
d. enabling an alternate language for the communication, wherein the alternate language comprises an alternate sequence.
2. The method of claim 1, wherein the plurality of resources comprise a language tag, wherein the language tag comprises text-to-speech.
3. The method of claim 1, wherein the plurality of resources comprise a language tag, wherein the language tag comprises audio.
4. The method of claim 1, wherein the alternate language belongs to an alternate sequence that overrides the main sequence in the event the alternate language is selected.
5. The method of claim 1, wherein the data comprises: dates, times, currencies, numbers, and
database lookups.
6. The method of claim 1, wherein the pause comprises a delay of audio playback.
7. The method of claim 1, wherein the editing comprises addition, deletion, or re-arranging.
8. The method of claim 7, wherein validation is provided in real-time for editing.
9. The method of claim 8, wherein the validation comprises errors placed adjacent to a sequence step in error.
10. The method of claim 1, wherein the enabling, for editing to the sequence, comprises enabling the raw source of a sequence for editing.
11. The method of claim 1, wherein the text-to-speech is capable of automatic conversion into a prompt for a supported language resource.
12. The method of claim 1, wherein the enabling an alternate language for the communication
comprises saving a snapshot of the main sequence and applying the snapshot as a starting point for the alternate sequence.
13. A method for sequencing communication to a party utilizing a plurality of languages in an
interactive voice response system, the method comprising the steps of:
a. selecting, through a graphical user interface, by a user, a prompt; and
b. creating, by a computer processor, at run-time, a communication sequence using the prompt.
14. The method of claim 13, wherein the prompt has a plurality of resources attached.
15. The method of claim 13, wherein the communication sequence comprises: a sequence item, wherein the sequence item comprises the prompt.
16. The method of claim 13, wherein the creating comprises: replacing existing sequence items with the created communication sequence.
17. A method for sequencing communication to a party utilizing a plurality of languages in an
interactive voice response system, the method comprising the steps of:
a. entering, by a user, text into a graphical user interface, wherein the text is transformed into text-to-speech by a computer processor; and
b. creating, by the computer processor, a communication sequence using the text-to-speech.
18. The method of claim 17, wherein the text-to-speech has a plurality of characters attached.
19. The method of claim 18, wherein the plurality of characters comprise words.
20. The method of claim 17, wherein the communication sequence comprises: a sequence item, wherein the sequence item comprises text-to-speech.
21. The method of claim 17, wherein the creating comprises: replacing existing sequence items with the created communication sequence.
Publications (1)

Publication Number Publication Date
WO2017065770A1 2017-04-20
