EP3370164A1 - Artificial intelligence digital agent - Google Patents

Artificial intelligence digital agent

Info

Publication number
EP3370164A1
EP3370164A1 (Application EP18154752.2A)
Authority
EP
European Patent Office
Prior art keywords
data
processors
intent
text
entity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP18154752.2A
Other languages
German (de)
French (fr)
Inventor
Matteo Luca Maga
Tariq Mohammad Salameh
Federica Rossi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Accenture Global Solutions Ltd
Original Assignee
Accenture Global Solutions Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Accenture Global Solutions Ltd filed Critical Accenture Global Solutions Ltd
Publication of EP3370164A1
Legal status: Ceased

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/338 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/903 Querying
    • G06F16/9032 Query formulation
    • G06F16/90332 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295 Named entity recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Machine Translation (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Implementations are directed to receiving communication data from a device, the communication data including data input by a user of the device, receiving text data based on the communication data, providing an intent set and an entity set based on processing the text data through an artificial intelligence service, the intent set including one or more intents indicated in the text data, the entity set including one or more entities indicated in the text data, the artificial intelligence service implementing a convolutional neural network (CNN), identifying a set of actions based on one or more of the text data, the intent set, and the entity set, receiving a set of results including at least one result from executing an action of the set of actions, providing result data, and transmitting the result data to the device.

Description

    BACKGROUND
  • Users (e.g., customers of an enterprise) can call into a call center in an effort to address issues, gather information, and/or use services. Call centers have introduced automated services that enable users to drill-down through menus, for example, in an effort to focus resources to attend to a particular user (e.g., identify a particular department, and/or customer service representative that may be best suited to address the user's needs). Example automated services can include artificial intelligence that processes the user's spoken words to route the call to particular resources. Such automated services, however, can have disadvantages. For example, although an automated service may be able to route a call, the automated service is limited in other functionality (e.g., performing requested services).
  • SUMMARY
  • Implementations of the present disclosure are generally directed to a computer-implemented platform for an artificial intelligence (AI) -based digital agent. More particularly, implementations of the present disclosure are directed to an AI-based digital agent that can audibly interact with users, and that can execute one or more actions based on user interactions.
  • In some implementations, actions include receiving communication data from a device, the communication data including data input by a user of the device, receiving text data based on the communication data, providing an intent set and an entity set based on processing the text data through an artificial intelligence service, the intent set including one or more intents indicated in the text data, the entity set including one or more entities indicated in the text data, the artificial intelligence service implementing one or more convolutional neural networks (CNNs), identifying a set of actions based on one or more of the text data, the intent set, and the entity set, the set of actions including one or more actions to be executed by one or more computer-implemented services, receiving a set of results including at least one result from a computer-implemented service executing an action of the set of actions, providing result data including data describing the at least one result, and transmitting the result data to the device. Other implementations of this aspect include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices.
  • These and other implementations can each optionally include one or more of the following features: the artificial intelligence service comprises an intent classification model using natural language processing (NLP) to provide the intent set; the NLP includes word embedding; the artificial intelligence service comprises an entity extraction model using named entity recognition (NER) to provide the entity set; actions further include determining that one or both of the intent set and the entity set is empty, and in response, transmitting at least one disambiguation question to the device; actions further include determining that an expected entity is absent from the entity set based on an intent of the intent set, and in response, transmitting at least one disambiguation question to the device; actions further include determining that the set of results includes a deficiency, and in response, transmitting at least one disambiguation question to the device; the communication data includes audio data, and the result data includes audio result data; the communication data includes text data, and the result data includes text result data; and the result data includes audio data that is provided by a voice response composition module based on text result data.
  • The present disclosure also provides a computer-readable storage medium coupled to one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations in accordance with implementations of the methods provided herein.
  • The present disclosure further provides a system for implementing the methods provided herein. The system includes one or more processors, and a computer-readable storage medium coupled to the one or more processors having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations in accordance with implementations of the methods provided herein.
  • It is appreciated that methods in accordance with the present disclosure can include any combination of the aspects and features described herein. That is, methods in accordance with the present disclosure are not limited to the combinations of aspects and features specifically described herein, but also include any combination of the aspects and features provided.
  • The details of one or more implementations of the present disclosure are set forth in the accompanying drawings and the description below. Other features and advantages of the present disclosure will be apparent from the description and drawings, and from the claims.
  • BRIEF DESCRIPTION OF DRAWINGS
    • FIG. 1 depicts an example high-level architecture in accordance with implementations of the present disclosure.
    • FIG. 2 depicts an example architecture in accordance with implementations of the present disclosure.
    • FIG. 3 depicts an example process that can be executed in accordance with implementations of the present disclosure.
    DETAILED DESCRIPTION
  • Implementations of the present disclosure are generally directed to a computer-implemented platform for an artificial intelligence (AI) -based digital agent. More particularly, implementations of the present disclosure are directed to an AI-based digital agent that can audibly interact with users, and that can execute one or more actions based on user interactions.
  • As described in further detail herein, implementations of the present disclosure include actions of receiving communication data from a device, the communication data including data input by a user of the device, receiving text data based on the communication data, providing an intent set and an entity set based on processing the text data through an artificial intelligence service, the intent set including one or more intents indicated in the text data, the entity set including one or more entities indicated in the text data, the artificial intelligence service implementing one or more convolutional neural networks (CNNs), identifying a set of actions based on one or more of the text data, the intent set, and the entity set, the set of actions including one or more actions to be executed by one or more computer-implemented services, receiving a set of results including at least one result from a computer-implemented service executing an action of the set of actions, providing result data including data describing the at least one result, and transmitting the result data to the device.
  • FIG. 1 depicts an example high-level architecture 100 in accordance with implementations of the present disclosure. The example architecture 100 includes a device 102, a back-end system 108, and a network 110. In some examples, the network 110 includes a local area network (LAN), wide area network (WAN), the Internet, a cellular telephone network, a public switched telephone network (PSTN), a private branch exchange (PBX), or any appropriate combination thereof, and connects web sites, devices (e.g., the device 102), and back-end systems (e.g., the back-end system 108). In some examples, the network 110 can be accessed over a wired and/or a wireless communications link. For example, mobile devices, such as smartphones can utilize a cellular network to access the network 110.
  • In the depicted example, the back-end system 108 includes at least one server system 112, and data store 114 (e.g., database). In some examples, at least one server system 112 hosts one or more computer-implemented services that users can interact with using devices. For example, the server system 112 can host an AI-based digital agent in accordance with implementations of the present disclosure. In some examples, the device 102 can include any appropriate type of computing device such as a desktop computer, a laptop computer, a handheld computer, a tablet computer, a personal digital assistant (PDA), a cellular telephone, a network appliance, a camera, a smartphone, a telephone, a mobile phone, an enhanced general packet radio service (EGPRS) mobile phone, a media player, a navigation device, an email device, a game console, or an appropriate combination of any two or more of these devices, or other data processing devices.
  • In the depicted example, the device 102 is used by a user 120. In accordance with the present disclosure, the user 120 uses the device 102 to audibly interact with the AI-based digital assistant of the present disclosure. In some examples, the user 120 can include a customer of an enterprise that provides the AI-based digital agent, or on behalf of which the AI-based digital assistant is provided. For example, the user 120 can include a customer that calls into a call center of the enterprise using the device 102, and is connected to the AI-based digital assistant (e.g., hosted on the back-end system 108). In accordance with implementations of the present disclosure, and as described in further detail herein, the user 120 can provide verbal input (e.g., speech) to the AI-based digital assistant, which can process the verbal input to request additional information (e.g., disambiguate), perform one or more actions, and/or provide one or more audible responses.
  • FIG. 2 depicts an example architecture 200 in accordance with implementations of the present disclosure. In some examples, components of the example architecture 200 can be hosted on one or more back-end systems (e.g., the back-end system 108 of FIG. 1). In the depicted example, the example architecture 200 includes an interaction manager 202, an action handler 204, a speech-to-text service 206, an artificial intelligence (machine intelligence) service 208, and a training data service 210. In some examples, each component of the example architecture 200 is provided as one or more computer-executable programs executed by one or more computing devices. In some examples, the interaction manager 202, and the action handler 204 are operated by, or on behalf of an enterprise (e.g., hosted on the back-end system 108 of FIG. 1, which is operated by, or on behalf of the enterprise).
  • In some examples, the speech-to-text service 206, the artificial intelligence service 208, and/or the training data service 210 are operated by, or on behalf of the enterprise (e.g., hosted on the back-end system 108 of FIG. 1, which is operated by, or on behalf of the enterprise), or are provided by one or more third-party service providers (e.g., hosted on a back-end system other than the back-end system 108, operated by, or on behalf of the one or more third-party service providers). An example speech-to-text service 206 includes Google Cloud Speech provided by Google, Inc. of Mountain View, CA. In some examples, Google Cloud Speech converts audio data to text data by processing the audio data through neural network models. Although an example speech-to-text service 206 is referenced herein, implementations of the present disclosure can be realized using any appropriate speech-to-text service. An example artificial intelligence service 208 includes TensorFlow provided by Google, Inc. of Mountain View, CA. In some examples, TensorFlow can be described as an open source software library for numerical computation using data flow graphs.
  • In the depicted example, the interaction manager 202 includes a text classification module 220, an action identification module 222, a disambiguation question module 224, a voice response composition module 226, and a text response composition module 228. The action handler 204 includes a parameter extraction module 230 (optional), an action orchestration module 232, and an action results module 234. The artificial intelligence service 208 includes an intent classification model 240 (e.g., based on natural language processing (NLP)), and an entity extraction model 242 (e.g., based on named entity recognition (NER)). The training data service 210 includes a text labeling/classifying module 250, and a training data preparation module 252.
  • In accordance with implementations of the present disclosure, the artificial intelligence service 208 implements a convolutional neural network (CNN). In some examples, the CNN enables more efficient and faster processing of the text data than other types of AI networks. In general, a CNN can be described as a neural network having overlapping "receptive fields" that perform convolution tasks. More particularly, a CNN is a type of feed-forward artificial neural network, which includes connectivity patterns between neurons, where receptive fields of different neurons partially overlap. In a CNN, a response of an individual neuron to data (stimuli) within its receptive field is mathematically approximated by a convolution operation.
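  • By way of illustration only, the following is a minimal sketch of the kind of CNN text classifier described above, written against the Keras API of TensorFlow (the example artificial intelligence service referenced herein); the vocabulary size, layer dimensions, and number of intent classes are assumptions, not values taken from the present disclosure.

```python
import tensorflow as tf

# Illustrative hyperparameters (assumptions, not values from the disclosure).
VOCAB_SIZE = 10000   # size of the word vocabulary
EMBED_DIM = 128      # dimension of the word-embedding vectors
MAX_LEN = 50         # maximum utterance length, in tokens
NUM_INTENTS = 12     # number of intent classes in the domain

# A CNN text classifier: the overlapping receptive fields described above are
# realized as 1-D convolutions sliding over the embedded token sequence.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.Conv1D(filters=64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_INTENTS, activation="softmax"),  # one score per intent
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```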
  • In contrast, other neural networks, such as a recurrent neural network (RNN), implement recurrent connections, which form cycles in the RNN's topology. In some examples, an RNN can be described as being sequential, and not stateless. An RNN can suffer from the so-called vanishing (or exploding) gradient problem, where information is (rapidly) lost over time. Consequently, whatever the model learned in the past might be lost in the future, if it is overridden by intensive new information, for example.
  • In accordance with implementations of the present disclosure, the artificial intelligence service 208 implements word embedding in the NLP. In some examples, word embedding can be described as the collective name for a set of language modeling and feature learning techniques within the NLP, where words and/or phrases from a vocabulary are mapped to vectors of real numbers. Conceptually, word embedding involves a mathematical embedding from a space with one dimension per word to a continuous vector space with a much lower dimension. In general, word embedding enables the model to understand different words having the same meaning (synonyms), without the need to actually teach the machine each word individually.
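  • As an illustrative sketch of word embedding, the toy vectors below show how near-synonyms map to nearby points in the vector space; the vectors and words are invented for illustration, whereas a deployed model would learn them from data.

```python
import numpy as np

# Toy embedding table; a trained model would learn these vectors from data.
embeddings = {
    "flight":  np.array([0.90, 0.10, 0.30]),
    "plane":   np.array([0.85, 0.15, 0.32]),  # near-synonym of "flight"
    "jewelry": np.array([0.10, 0.90, 0.70]),
}

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synonyms land close together in the vector space, so the model can treat
# them alike without being taught each word individually.
print(cosine(embeddings["flight"], embeddings["plane"]))    # high (~1.0)
print(cosine(embeddings["flight"], embeddings["jewelry"]))  # much lower
```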
  • In accordance with implementations of the present disclosure, and as described in further detail herein, the interaction manager 202 receives communication data, and processes the communication data to provide a response, and/or to initiate execution of one or more actions. In some implementations, the communication data is provided as audio data. In some examples, the audio data corresponds to speech of a user that is recorded (e.g., during a user telephone call). Accordingly, the response can include an audio response. In this manner, the AI-based digital assistant of the present disclosure can operate as a voice-based agent. In some implementations, the communication data is provided as text data. In some examples, the text data corresponds to a message transmitted by a user (e.g., a text message, a chat message). Accordingly, the response can include a text response. In this manner, the AI-based digital assistant of the present disclosure can operate as a chat bot, for example.
  • The example architecture 200 is described in further detail herein with reference to processing communication data including audio data, and providing an audio response. It is contemplated, however, that the communication data can include text data, as introduced above.
  • In the depicted example, the user 120 can audibly communicate with the interaction manager 202 using the device 102. For example, the user 120 can establish a communication path (e.g., telephone call) to communicate data from the device 102 to the interaction manager 202 (e.g., over the network 110 of FIG. 1). In some examples, the user 120 can speak to the device 102, which records the speech as audio data 260 that is transmitted to the interaction manager 202 (e.g., as streaming audio data; in one or more audio data files). The audio data 260 can be provided in any appropriate format (e.g., .wav, .mp3, .wma).
  • The interaction manager 202 provides the audio data 260 to the speech-to-text service 206 (e.g., through an application program interface (API) of the speech-to-text service 206). The speech-to-text service 206 processes the audio data 260 to provide text data 262. The text data 262 can be provided in any appropriate format (e.g., .txt, .csv). The text classification module 220 receives the text data 262, and processes the text data in coordination with the artificial intelligence service 208. In some examples, the text classification module 220 provides a request to the artificial intelligence service 208 (e.g., through an API of the artificial intelligence service 208), the request including at least a portion of the text data 262. In some examples, the text classification module 220 can inject one or more actions based on one or more classification rules. An example classification rule can include filtering curse words.
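  • The following is a minimal sketch of invoking a speech-to-text service through its API, here using the Google Cloud Speech Python client referenced above; the file name, sample rate, and language code are illustrative assumptions.

```python
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,   # illustrative; must match the recording
    language_code="en-US",
)

# "utterance.wav" is a hypothetical recording of the audio data 260.
with open("utterance.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

response = client.recognize(config=config, audio=audio)
text_data = " ".join(r.alternatives[0].transcript for r in response.results)
```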
  • The artificial intelligence service 208 processes the received text data to provide an intent set, and an entity set. More particularly, the artificial intelligence service 208 processes the received text data through the intent classification model 240 using NLP to determine one or more intents of the text data, the one or more intents being included in the intent set. In some examples, an intent indicates a reason as to why the user is communicating with the AI-based digital assistant. For example, the text data can include "How many miles are in my frequent flier account," and example intents can be determined to be AccountQuery, and StatusQuery by the intent classification model 240. In some examples, an intent might not be determined from the text data. Consequently, the intent set can be empty. The artificial intelligence service 208 processes the received text data through the entity extraction model 242 using NER to determine one or more entities implicated within the text data, the one or more entities being included in the entity set. In some examples, an entity indicates a person, place, or thing (e.g., persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc.) implicated in the text data. For example, the text data can include "I would like to book travel from Austin to Frankfurt," and example entities can be determined to be LocationAustin, LocationFrankfurt, ThingTravel by the entity extraction model 242. In some examples, an entity might not be determined from the text data. Consequently, the entity set can be empty.
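  • A minimal sketch of the intent set and entity set as data structures is provided below; the class names and fields are illustrative assumptions that follow the examples in the text.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    name: str     # e.g., "AccountQuery"
    score: float  # classifier confidence

@dataclass
class Entity:
    type: str   # e.g., "Location"
    value: str  # e.g., "Austin"

# Illustrative output for "I would like to book travel from Austin to Frankfurt".
intent_set = [Intent("TravelBooking", 0.91)]
entity_set = [
    Entity("Location", "Austin"),
    Entity("Location", "Frankfurt"),
    Entity("Thing", "Travel"),
]
```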
  • In some implementations, the text classification module 220 provides feedback for machine-learning. For example, the text classification module 220 can determine that some of the text data 262 was improperly, or poorly classified by the artificial intelligence service 208. For example, the artificial intelligence service 208 can provide intent classification, as well as a score indicative of how accurately the class was identified (e.g., a confidence index). In some examples, the scores (one score for each classification) can be compared to respective, customizable thresholds (e.g., per class). If the score of a class does not exceed the threshold, it can be determined that the class is poor/improper.
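  • The per-class score comparison can be sketched as follows; the threshold values and intent names are illustrative assumptions.

```python
# Per-class confidence thresholds (illustrative, customizable values).
THRESHOLDS = {"AccountQuery": 0.75, "TravelBooking": 0.80}
DEFAULT_THRESHOLD = 0.70

def flag_poor_classifications(scored_intents):
    """scored_intents: list of (intent_name, confidence) pairs. Returns the
    pairs whose confidence falls below the class threshold, so the underlying
    text can be sent to the training data service for further training."""
    return [
        (name, score)
        for name, score in scored_intents
        if score < THRESHOLDS.get(name, DEFAULT_THRESHOLD)
    ]

# The second classification would be flagged and routed for retraining.
print(flag_poor_classifications([("AccountQuery", 0.92), ("TravelBooking", 0.41)]))
```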
  • In some examples, the text classification module 220 provides at least a portion of text data 264 to the training data service 210, which processes the text data 264 using the text labeling/classifying module 250, and the training data preparation module 252 to provide training data 266. The training data 266 is provided to the artificial intelligence service 208 to further train one or both of the intent classification model 240, and the entity extraction model 242. Although the training data service 210 is depicted as a separate service, the training data service 210 can be included as part of another service (e.g., the training data service 210 can be included in the artificial intelligence service 208).
  • It is determined whether disambiguation 270 is required. Although the disambiguation 270 is schematically depicted as an independent function, the text classification module 220, and/or the action identification module 222 can determine whether disambiguation is required. In some examples, disambiguation can be described as clarification of the text data 262, one or more entities identified in the text data 262, and/or one or more intents determined from the text data 262.
  • In some examples, disambiguation is required if the intent set and/or the entity set is empty. For example, if an intent cannot be determined from the text data 262, disambiguation can be required (e.g., request that the user repeat or clarify their question). In some examples, disambiguation is required if an intent of the intent set does not correspond to a pre-defined list of intents. In some examples, a pre-defined list of intents can be provided for a particular domain, within which the AI-based digital agent is operating (e.g., flight reservations). In some examples, multiple pre-defined lists of intents can be provided, each pre-defined list of intents corresponding to a respective domain. In some examples, each intent provided in the intent set can be compared to intents of the pre-defined list of intents. If an intent of the intent set is not included in the pre-defined list of intents, disambiguation may be required. Continuing with the example above, an example intent in the intent set can include JewelryPurchase, which is not included in a pre-defined list of intents for the domain flight reservations. Consequently, disambiguation can be required in view of the intent JewelryPurchase being included in the set of intents.
  • In some examples, disambiguation can be required if the number and/or type of entities in the entity set does not correspond to an intent of the intent set. For example, to perform an action based on an intent, two or more entities can be required (e.g., a departure city, and an arrival city are required to determine flights). If, however, only a single entity, or only a single entity of a type required for the intent (e.g., only an arrival city), is provided in the entity set, disambiguation can be required (e.g., request that the user specify a departure city). In other words, for a given intent, one or more types of entities may be expected. If an expected entity (e.g., departure city) is absent from the entity set, disambiguation can be required. In some examples, disambiguation can be required if an entity is too general. Continuing with the example above, an example entity set can include LocationAustin, LocationFrankfurt, and ThingTravel. It can be determined that travel is too general for one or more actions to be determined. Consequently, disambiguation may be required to clarify what is meant in the text data 262 (e.g., request that the user clarify whether plane, train, or automobile travel is being requested).
  • If disambiguation is required, at least a portion of one or more of the text data 262, the intent set, and the entity set is provided to the disambiguation question module 224. In some examples, the disambiguation question module 224 provides one or more disambiguation questions. In some examples, the disambiguation question module 224 includes a pre-defined list of disambiguation questions based on the use-case (domain) that the AI-based digital agent is operating in (e.g., flight reservations). In some examples, a disambiguation question can be selected based on a look-up (e.g., using an index of disambiguation questions) using one or more deficiencies of the intent set, and/or the entity set. For example, if the intent set is empty, the disambiguation question "I'm sorry, I did not understand your request, please repeat your question" can be selected. As another example, and in the example domain of flight reservations, if the entity set is empty, or only a single entity is included, example disambiguation questions can respectively include "What is the departure city, and the arrival city?" or "What is the departure city?" Continuing with the example above, in which it is determined that travel is too general for one or more actions to be identified, an example disambiguation question can include "Would you like automobile, boat, train, and/or airplane travel?"
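  • The disambiguation checks and question look-up described above can be sketched as follows for the example flight-reservations domain; the intent names, expected entities, and question index are illustrative assumptions.

```python
# Domain configuration for flight reservations (illustrative assumptions).
KNOWN_INTENTS = {"TravelBooking", "AccountQuery", "StatusQuery"}
EXPECTED_ENTITIES = {"TravelBooking": {"DepartureCity", "ArrivalCity"}}
QUESTIONS = {
    "empty_intent": "I'm sorry, I did not understand your request, "
                    "please repeat your question.",
    "missing:DepartureCity": "What is the departure city?",
    "missing:ArrivalCity": "What is the arrival city?",
}

def disambiguation_questions(intents, entities):
    """intents: set of intent names; entities: dict of entity type to value.
    Returns the disambiguation questions to ask, or [] if none are needed."""
    if not intents:
        return [QUESTIONS["empty_intent"]]
    questions = []
    for intent in intents:
        if intent not in KNOWN_INTENTS:
            questions.append(QUESTIONS["empty_intent"])
            continue
        # Any expected entity absent from the entity set triggers a question.
        for missing in EXPECTED_ENTITIES.get(intent, set()) - entities.keys():
            questions.append(QUESTIONS["missing:" + missing])
    return questions

# Only an arrival city was supplied, so the agent asks for the departure city.
print(disambiguation_questions({"TravelBooking"}, {"ArrivalCity": "Frankfurt"}))
```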
  • In some examples, the disambiguation question is provided as text data, which is provided to the voice response composition module 226. The voice response composition module 226 processes the text data to provide audio data 272. For example, the voice response composition module 226 accesses a library of audio data based on one or more segments of the text data. In some examples, an index can be searched based on a segment (e.g., a portion of the text data), and audio data can be retrieved. In some examples, audio data of respective segments can be appended together to provide the audio data 272. The audio data 272 is provided to the device 102 (e.g., over the network 110), and the device 102 plays the audio to the user 120.
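  • A minimal sketch of composing audio data by appending library segments is provided below, assuming the pydub library as one way to splice recorded clips; the segment index and file paths are illustrative.

```python
from pydub import AudioSegment  # assumption: pydub as one way to splice audio

# Index from text segments to pre-recorded clips (illustrative paths).
AUDIO_LIBRARY = {
    "what is the departure city": "clips/ask_departure_city.wav",
    "and the arrival city": "clips/ask_arrival_city.wav",
}

def compose_voice_response(text_segments):
    """Look up each segment's recorded clip and append the clips together,
    mirroring the voice response composition module 226 described above."""
    response = AudioSegment.empty()
    for segment in text_segments:
        response += AudioSegment.from_wav(AUDIO_LIBRARY[segment])
    return response

audio_272 = compose_voice_response(
    ["what is the departure city", "and the arrival city"]
)
audio_272.export("response.wav", format="wav")
```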
  • If disambiguation is not required, at least a portion of one or more of the text data 262, the intent set, and the entity set is provided to the action identification module 222. The action identification module 222 provides a set of actions that are to be performed by the action handler 204. In some examples, the action identification module 222 references a library of available actions 222a. In some examples, the action identification module 222 accesses an index of the library of available actions 222a based on the intent(s) and the entity (or entities).
  • In some examples, the set of actions includes one or more actions. Continuing with the example above, it can be determined that the user 120 is to book a flight from Austin, TX to Frankfurt, Germany, departing on February 26, 2017, and returning on March 2, 2017 (e.g., after one or more rounds of disambiguation). Consequently, an example action can include submission of a search query to a flight search engine, the search query including one or more search terms (e.g., depCity:AUS, arrCity:FRA, depDate: 2/26/17, retDate: 3/2/17). As another example, it can be determined that the user 120 is to purchase the fare using a credit card with given number, expiration date, and security code. Consequently, an example action can include submission of a payment authorization request to a payment service (e.g., the user's credit card company).
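  • The construction of such an action from the extracted entities can be sketched as follows; the entity keys, action structure, and service name are illustrative assumptions.

```python
def build_flight_search_action(entities):
    """Map extracted entities to a flight-search action; the entity keys and
    service name are illustrative assumptions, not part of the disclosure."""
    return {
        "service": "flight_search",
        "query": {
            "depCity": entities["DepartureCity"],  # e.g., "AUS"
            "arrCity": entities["ArrivalCity"],    # e.g., "FRA"
            "depDate": entities["DepartureDate"],  # e.g., "2/26/17"
            "retDate": entities["ReturnDate"],     # e.g., "3/2/17"
        },
    }

action = build_flight_search_action({
    "DepartureCity": "AUS", "ArrivalCity": "FRA",
    "DepartureDate": "2/26/17", "ReturnDate": "3/2/17",
})
```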
  • The set of actions can be provided to the parameter extraction module 230 of the action handler 204. The parameter extraction module 230 can process the set of actions to include one or more parameters. As introduced above, the response returned from the artificial intelligence service 208 to the text classification module 220 should be an intent set, and an entity set. After provision of the intent set, parameter extraction can be performed to select the proper/needed parameters to execute each action. Accordingly, the parameter extraction can eliminate any unnecessary parameters.
  • In some examples, the parameter extraction module 230 is optional. Consequently, the set of actions can be provided directly to the action orchestration module 232 from the action identification module 222 (e.g., in the case that the entity set is empty).
  • The action orchestration module 232 processes the set of actions to initiate performance of each action in the set of actions. In some examples, for each action, the action orchestration module 232 identifies one or more services 280 that are to be called for performance of the actions. In some examples, a service 280 is identified based on a type of action that is to be performed (e.g., flight search, credit card payment) from a pre-defined list of services (e.g., corresponding to the domain). One or more of the services 280 can be provided by a third-party service provider, and can be hosted on a back-end system (e.g., other than the back-end system 108 of FIG. 1). In some examples, the action orchestration module 232 transmits a request to one or more services 280 (e.g., through respective APIs of the services 280), each request including information to be processed by a respective service 280 to provide a result. Each service 280 processes a respective request, and transmits one or more results to the action orchestration module 232.
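  • A minimal sketch of this orchestration is provided below, assuming the services 280 are reached over HTTP APIs and reusing the action structure sketched earlier; the endpoint URLs are illustrative.

```python
import requests  # assumption: the services 280 expose HTTP APIs

# Pre-defined service endpoints for the domain (illustrative URLs).
SERVICE_ENDPOINTS = {
    "flight_search": "https://services.example.com/flights/search",
    "payment": "https://services.example.com/payments/authorize",
}

def orchestrate(actions):
    """Call the service responsible for each action and collect the results,
    as the action orchestration module 232 is described as doing."""
    results = []
    for action in actions:
        url = SERVICE_ENDPOINTS[action["service"]]
        response = requests.post(url, json=action["query"], timeout=30)
        results.append(response.json())
    return results
```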
  • Continuing with the example above, the action orchestration module 232 can determine that a particular search service is to be called for performing a search using the example search query [depCity:AUS, arrCity:FRA, depDate: 2/26/17, retDate: 3/2/17]. The search service can process the request, and provide search results based thereon. Example search results can include one or more flights that are responsive to the search terms of the search query. The action orchestration module 232 provides a set of results to the action results module 234. In some examples, the action results module 234 parses the results of the action orchestration (e.g., whether a result includes a set of database results, or an API (SOAP/HTTP) response) into a form that can be read by the interaction manager 202.
  • It can be determined whether disambiguation 282 is required. Although the disambiguation 282 is schematically depicted as an independent function, a module of the interaction manager 202 can determine whether disambiguation is required. If disambiguation is required, at least a portion of the set of results is provided to the disambiguation question module 224 to initiate provision of audio data 272 to the device 102, the audio data 272 providing one or more disambiguation questions, as described herein. In some examples, disambiguation can be required if it is determined that the set of results includes one or more deficiencies. Continuing with the example above, it can be determined that the set of results includes, as an example deficiency, too many results to be efficiently communicated to the user 120. Consequently, an example disambiguation question can include "Would you like direct flights?" (e.g., a question having an answer that could be used to narrow results included in the set of results).
  • If disambiguation is not required, the set of results is provided to the text response composition module 228. The text response composition module 228 provides text data based on each result in the set of results. In some examples, the text response composition module 228 references a library of text responses 228a. In some examples, the text response composition module 228 accesses an index of the library of available text responses 228a based on a type of action, and the respective results. For example, if the action included credit card payment authorization, a result can include parameters [Visa, $489.07, ABC123DEF] indicating that a Visa payment of $489.07 has been approved and assigned the confirmation number ABC123DEF. Continuing with this example, text retrieved from the library of text responses 228a can include [credit card, payment, amount, approved, confirmation].
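  • The look-up and composition of a text response can be sketched as follows; the template text follows the example above, while the template index and result keys are illustrative assumptions.

```python
# Illustrative response templates keyed by action type.
TEXT_RESPONSES = {
    "payment": ("Your {card} payment of {amount} has been approved, "
                "and your payment confirmation is {confirmation}."),
}

def compose_text_response(action_type, result):
    """Fill the template for the action type with the result parameters."""
    return TEXT_RESPONSES[action_type].format(**result)

print(compose_text_response("payment", {
    "card": "Visa", "amount": "$489.07", "confirmation": "ABC123DEF",
}))
```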
  • The text response composition module 228 provides the text data to the voice response composition module 226, which processes the text data to provide audio data 272, as described herein. Continuing with the above example, an example voice response can include "Your Visa payment of $489.07 has been approved, and your payment confirmation is ABC123DEF."
  • In accordance with implementations of the present disclosure, the voice response composition (e.g., provided by the voice response composition module 226) enables a more natural interaction between the AI-based digital assistant and the user, and also enables a better voice representation, and a better choice of words to deliver to the user. In this manner, the AI-based digital assistant of the present disclosure provides a seamless experience to the user, obviating potential user hesitation to interact with the digital agent because it is a machine. As described herein, the voice response composition composes voice responses in real-time using multiple recorded voices. In this manner, the user experiences a seamless transition, in which differences between interacting with the AI-based digital assistant and a human being are minimized.
  • FIG. 3 depicts an example process 300 that can be executed in accordance with implementations of the present disclosure. In some examples, the example process 300 is provided using one or more computer-executable programs executed by one or more computing devices (e.g., the back-end system 108 of FIG. 1). In some examples, the example process 300 can be executed to provide an AI-based digital assistant, as described herein.
  • Audio data is received (302). For example, the interaction manager 202 receives the audio data 260 from the device 102 over the network 110. Audio data is provided to a speech-to-text service (304). For example, the interaction manager 202 provides the audio data 260 to the speech-to-text service 206. Text data is received (306). For example, the interaction manager 202 receives the text data 262 from the speech-to-text service 206.
  • Text data is provided to an artificial intelligence service (308). For example, the interaction manager 202 (e.g., the text classification module 220) provides the text data 262 (or at least a portion of the text data 262) to the artificial intelligence service 208. Output of the artificial intelligence service is received (310). For example, the interaction manager 202 (e.g., the text classification module 220) receives output of the artificial intelligence service 208. The output includes an intent set, and an entity set, as described herein.
  • It is determined whether disambiguation is required (312). For example, the interaction manager 202 determines whether disambiguation is required, as described herein. If disambiguation is required, disambiguation is performed (314). For example, and as described herein, the disambiguation question module 224 provides a disambiguation question as text data, which is provided to the voice response composition module 226. The voice response composition module 226 processes the text data to provide audio data 272. The audio data 272 is provided to the device 102 (e.g., over the network 110), and the device 102 plays the audio to the user 120.
  • If disambiguation is not required, one or more actions are determined (316). For example, and as described herein, the action identification module 222 provides a set of actions that are to be performed by the action handler 204 by referencing the library of available actions 222a. Execution of each of the one or more actions is initiated (318). For example, and as described herein, the action orchestration module 232 processes the set of actions to initiate performance of each action in the set of actions, by identifying one or more services 280 that are to be called for performance of the actions, and transmitting respective requests to the one or more services 280.
  • Results of execution of the one or more actions are received (320). For example, the action orchestration module 232 receives respective results from each of the one or more services 280. It is determined whether disambiguation is required (322). For example, if the set of results includes too many results to be efficiently communicated to the user 120, disambiguation can be required. If disambiguation is required, disambiguation is performed (314), as described herein. If disambiguation is not required, one or more text responses are provided (324). For example, the text response composition module 228 provides text data based on each result in the set of results (e.g., referencing the library of text responses 228a). An audio response is provided (326). For example, the text response composition module 228 provides the text data to the voice response composition module 226, which processes the text data to provide audio data 272. The audio response is transmitted (328). For example, the interaction manager 202 transmits the audio data 272 to the device 102 over the network 110.
  • Implementations and all of the functional operations described in this specification may be realized in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations may be realized as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium may be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "computing system" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus may include, in addition to hardware, code that creates an execution environment for the computer program in question (e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or any appropriate combination of one or more thereof). A propagated signal is an artificially generated signal (e.g., a machine-generated electrical, optical, or electromagnetic signal) that is generated to encode information for transmission to suitable receiver apparatus.
  • A computer program (also known as a program, software, software application, script, or code) may be written in any appropriate form of programming language, including compiled or interpreted languages, and it may be deployed in any appropriate form, including as a stand alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program may be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program may be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • The processes and logic flows described in this specification may be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows may also be performed by, and apparatus may also be implemented as, special purpose logic circuitry (e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit)).
  • Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any appropriate kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. Elements of a computer can include a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data (e.g., magnetic, magneto optical disks, or optical disks). However, a computer need not have such devices. Moreover, a computer may be embedded in another device (e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio player, a Global Positioning System (GPS) receiver). Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices); magnetic disks (e.g., internal hard disks or removable disks); magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, special purpose logic circuitry.
  • To provide for interaction with a user, implementations may be realized on a computer having a display device (e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse, a trackball, a touch-pad), by which the user may provide input to the computer. Other kinds of devices may be used to provide for interaction with a user as well; for example, feedback provided to the user may be any appropriate form of sensory feedback (e.g., visual feedback, auditory feedback, tactile feedback); and input from the user may be received in any appropriate form, including acoustic, speech, or tactile input.
  • Implementations may be realized in a computing system that includes a back end component (e.g., as a data server), a middleware component (e.g., an application server), and/or a front end component (e.g., a client computer having a graphical user interface or a Web browser, through which a user may interact with an implementation), or any appropriate combination of one or more such back end, middleware, or front end components. The components of the system may be interconnected by any appropriate form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), e.g., the Internet.
  • The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • While this specification contains many specifics, these should not be construed as limitations on the scope of the disclosure or of what may be claimed, but rather as descriptions of features specific to particular implementations. Certain features that are described in this specification in the context of separate implementations may also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation may also be implemented in multiple implementations separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
  • Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems may generally be integrated together in a single software product or packaged into multiple software products.
  • A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. For example, various forms of the flows shown above may be used, with steps reordered, added, or removed. Accordingly, other implementations are within the scope of the following claims.

Claims (13)

  1. A computer-implemented method for providing an artificial intelligence (AI) -based digital assistant, the method being executed by one or more processors and comprising:
    receiving, by the one or more processors, communication data from a device, the communication data comprising data input by a user of the device;
    receiving, by the one or more processors, text data based on the communication data;
    providing, by the one or more processors, an intent set and an entity set based on processing the text data through an artificial intelligence service, the intent set comprising one or more intents indicated in the text data, the entity set comprising one or more entities indicated in the text data, the artificial intelligence service implementing one or more convolutional neural networks (CNNs);
    identifying, by the one or more processors, a set of actions based on one or more of the text data, the intent set, and the entity set, the set of actions comprising one or more actions to be executed by one or more computer-implemented services;
    receiving, by the one or more processors, a set of results comprising at least one result from a computer-implemented service executing an action of the set of actions;
    providing, by the one or more processors, result data comprising data describing the at least one result; and
    transmitting, by the one or more processors, the result data to the device.
  2. The method of claim 1, wherein the artificial intelligence service comprises an intent classification model using natural language processing (NLP) to provide the intent set.
  3. The method of claim 2, wherein the NLP comprises word embedding.
  4. The method of claim 1, 2 or 3, wherein the artificial intelligence service comprises an entity extraction model using named entity recognition (NER) to provide the entity set.
  5. The method of any of the preceding claims, further comprising determining that one or both of the intent set and the entity set is empty, and in response, transmitting at least one disambiguation question to the device.
  6. The method of any of the preceding claims, further comprising determining that an expected entity is absent from the entity set based on an intent of the intent set, and in response, transmitting at least one disambiguation question to the device.
  7. The method of any of the preceding claims, further comprising determining that the set of results includes a deficiency, and in response, transmitting at least one disambiguation question to the device.
  8. The method of any of the preceding claims, wherein the communication data comprises audio data, and the result data comprises audio result data.
  9. The method of any of the preceding claims, wherein the communication data comprises text data, and the result data comprises text result data.
  10. The method of any of the preceding claims, wherein the result data comprises audio data that is provided by a voice response composition module based on text result data.
  11. Computer program instructions which, when executed by one or more processors, cause the one or more processors to perform the method of any of the preceding claims.
  12. A system, comprising:
    one or more processors; and
    a computer-readable storage device coupled to the one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations for providing an artificial intelligence (AI) -based digital assistant, the operations comprising:
    receiving communication data from a device, the communication data comprising data input by a user of the device;
    receiving text data based on the communication data;
    providing an intent set and an entity set based on processing the text data through an artificial intelligence service, the intent set comprising one or more intents indicated in the text data, the entity set comprising one or more entities indicated in the text data, the artificial intelligence service implementing one or more convolutional neural networks (CNNs);
    identifying a set of actions based on one or more of the text data, the intent set, and the entity set, the set of actions comprising one or more actions to be executed by one or more computer-implemented services;
    receiving a set of results comprising at least one result from a computer-implemented service executing an action of the set of actions;
    providing result data comprising data describing the at least one result; and
    transmitting the result data to the device.
  13. The system of claim 12, wherein the instructions, when executed by the one or more processors, cause the one or more processors to perform operations according to one or more of claims 2 to 10.
EP18154752.2A 2017-03-02 2018-02-01 Artificial intelligence digital agent Ceased EP3370164A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/448,401 US20180253638A1 (en) 2017-03-02 2017-03-02 Artificial Intelligence Digital Agent

Publications (1)

Publication Number Publication Date
EP3370164A1 true EP3370164A1 (en) 2018-09-05

Family

ID=61132337

Family Applications (1)

Application Number Title Priority Date Filing Date
EP18154752.2A Ceased EP3370164A1 (en) 2017-03-02 2018-02-01 Artificial intelligence digital agent

Country Status (5)

Country Link
US (1) US20180253638A1 (en)
EP (1) EP3370164A1 (en)
CN (1) CN108536733A (en)
AU (2) AU2018201387A1 (en)
SG (1) SG10201800824PA (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11113608B2 (en) 2017-10-30 2021-09-07 Accenture Global Solutions Limited Hybrid bot framework for enterprises
US11164562B2 (en) * 2019-01-10 2021-11-02 International Business Machines Corporation Entity-level clarification in conversation services
KR102204740B1 (en) * 2019-02-28 2021-01-19 Naver Corporation Method and system for processing unclear intention query in conversation system
CN110705274B (en) * 2019-09-06 2023-03-24 电子科技大学 Fusion type word meaning embedding method based on real-time learning
US11798539B2 (en) * 2020-09-25 2023-10-24 Genesys Telecommunications Laboratories, Inc. Systems and methods relating to bot authoring by mining intents from conversation data via intent seeding

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101759009B1 (en) * 2013-03-15 2017-07-17 Apple Inc. Training an at least partial voice command system
US9436918B2 (en) * 2013-10-07 2016-09-06 Microsoft Technology Licensing, Llc Smart selection of text spans
US9836452B2 (en) * 2014-12-30 2017-12-05 Microsoft Technology Licensing, Llc Discriminating ambiguous expressions to enhance user experience
CN104615589A (en) * 2015-02-15 2015-05-13 Baidu Online Network Technology (Beijing) Co., Ltd. Named-entity recognition model training method and named-entity recognition method and device
CN105094315B (en) * 2015-06-25 2018-03-06 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for human-machine intelligent chat based on artificial intelligence
CN106407333B (en) * 2016-09-05 2020-03-03 Beijing Baidu Netcom Science and Technology Co., Ltd. Spoken language query identification method and device based on artificial intelligence
US10229680B1 (en) * 2016-12-29 2019-03-12 Amazon Technologies, Inc. Contextual entity resolution

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140316764A1 (en) * 2013-04-19 2014-10-23 Sri International Clarifying natural language input using targeted questions

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ASLI CELIKYILMAZ ET AL: "Convolutional Neural Network Based Semantic Tagging with Entity Embeddings", NIPS WORKSHOP ON MACHINE LEARNING FOR SLU & INTERACTION, December 2015 (2015-12-01), XP055491040, Retrieved from the Internet <URL:https://www.microsoft.com/en-us/research/publication/convolutional-neural-network-based-semantic-tagging-with-entity-embeddings/> [retrieved on 20180709] *
CICERO NOGUEIRA DOS SANTOS ET AL: "Boosting Named Entity Recognition with Neural Character Embeddings", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 19 May 2015 (2015-05-19), XP080802663 *
HOMA B. HASHEMI ET AL: "Query Intent Detection using Convolutional Neural Networks", ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING; WORKSHOP ON QUERY UNDERSTANDING FOR SEARCH ON ALL DEVICES, 22 February 2016 (2016-02-22), XP055491037, Retrieved from the Internet <URL:http://people.cs.pitt.edu/~hashemi/papers/QRUMS2016_HBHashemi.pdf> [retrieved on 20180709] *
JASON D. WILLIAMS ET AL: "Rapidly Scaling Dialog Systems with Interactive Learning", NATURAL LANGUAGE DIALOG SYSTEMS AND INTELLIGENT ASSISTANTS, 11 January 2015 (2015-01-11), Cham, pages 1 - 13, XP055328618, ISBN: 978-3-319-19291-8, Retrieved from the Internet <URL:https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/iwsds2015.pdf> [retrieved on 20161213], DOI: 10.1007/978-3-319-19291-8_1 *
JOHANN HAUSWALD ET AL: "DjiNN and Tonic", PROCEEDINGS OF THE 42ND ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE, ISCA '15, 17 June 2015 (2015-06-17), New York, New York, USA, pages 27 - 40, XP055336293, ISBN: 978-1-4503-3402-0, DOI: 10.1145/2749469.2749472 *
RONAN COLLOBERT ET AL: "Natural Language Processing (almost) from Scratch", JOURNAL OF MACHINE LEARNING RESEARCH, vol. 12, August 2011 (2011-08-01), US, pages 2493 - 2537, XP055273931, ISSN: 1532-4435 *

Also Published As

Publication number Publication date
SG10201800824PA (en) 2018-10-30
AU2018201387A1 (en) 2018-09-20
US20180253638A1 (en) 2018-09-06
AU2019202829A1 (en) 2019-05-16
CN108536733A (en) 2018-09-14

Similar Documents

Publication Publication Date Title
EP3413540B1 (en) Integration platform for multi-network integration of service platforms
EP3370164A1 (en) Artificial intelligence digital agent
US11735157B2 (en) Systems and methods for providing automated natural language dialogue with customers
US11956187B2 (en) Natural language processing for information extraction
US11790910B2 (en) Interacting with a user device to provide automated testing of a customer service representative
US11928611B2 (en) Conversational interchange optimization
US10162844B1 (en) System and methods for using conversational similarity for dimension reduction in deep analytics
US20180075335A1 (en) System and method for managing artificial conversational entities enhanced by social knowledge
US11784947B2 (en) System and method for proactive intervention to reduce high cost channel usage
CN112671823A (en) Optimal routing of machine learning based interactions to contact center agents
TW201933267A (en) Method and apparatus for transferring from robot customer service to human customer service
US11551143B2 (en) Reinforcement learning for chatbots
US20230063131A1 (en) Dynamic goal-oriented dialogue with virtual agents
US20210142180A1 (en) Feedback discriminator
US11960841B2 (en) Incomplete problem description determination for virtual assistant user input handling

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20190305

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20200701

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20220304