CA3064116A1 - Systems and methods for performing automated interactive conversation with a user
- Publication number
- CA3064116A1
- Authority
- CA
- Canada
- Prior art keywords
- user
- question
- intent
- answer
- natural language
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F40/00—Handling natural language data > G06F40/20—Natural language analysis
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements > G06F3/16—Sound input; Sound output > G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F40/00—Handling natural language data > G06F40/30—Semantic analysis > G06F40/35—Discourse or dialogue representation
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N3/00—Computing arrangements based on biological models > G06N3/02—Neural networks > G06N3/04—Architecture, e.g. interconnection topology > G06N3/045—Combinations of networks
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N3/00—Computing arrangements based on biological models > G06N3/02—Neural networks > G06N3/08—Learning methods > G06N3/088—Non-supervised learning, e.g. competitive learning
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR > G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes > G06Q40/02—Banking, e.g. interest calculation or account maintenance
- G—PHYSICS > G10—MUSICAL INSTRUMENTS; ACOUSTICS > G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING > G10L15/00—Speech recognition > G10L15/26—Speech to text systems
Abstract
A dialogue system is a computer system that converses with a human via a user interface. In some embodiments, a dialogue system is provided that may increase the probability of finding a satisfactory response with relatively little iteration of dialogue between a user and the dialogue system. The number of responses (e.g. in the form of alternative questions) may be progressively increased during the interaction with a user, and this may have the effect of increasing the overall robustness of the dialogue system.
Description
SYSTEMS AND METHODS FOR PERFORMING AUTOMATED INTERACTIVE CONVERSATION WITH A USER
FIELD
[1] The following relates to a computer-implemented dialogue system for conversing with a human.
BACKGROUND
[2] A dialogue system is a computer system that converses with a human via a user interface. Many dialogue systems utilize a computer program to conduct the conversation using auditory and/or textual methods. A common name for such a computer program is a 'chatbot'. A chatbot may be implemented using a natural language processing system.
[3] An organization may use a dialogue system to help support and scale its customer relations efforts. A dialogue system may be used to provide a wide variety of information to many different users. For example, the dialogue system may be used to perform automated interactive conversation with users in order to provide answers to questions posed by the users. Questions originating from different users may be very different in nature, and the questions may be received and answered at any time of day or night. The users of the dialogue system may be customers or potential customers of the organization.
[4] Current dialogue systems have a technical problem in that they are often not robust. 'Robustness' refers to the ability of the dialogue system to satisfactorily answer a question posed by a human user. Some current dialogue systems may provide less than 50% correct/satisfactory answers. If the dialogue system returns an incorrect or unsatisfactory answer too often, then the dialogue system will not be adopted by human users. Also, the organization's reputation may be negatively impacted.
SUMMARY
[5] One way to try to increase the robustness of a dialogue system is to invest significant resources in the writing of questions and answers, and/or to invest significant resources in enriching the ability of the dialogue system to recognize intentions. Technical implementations often focus less on linguistics and more on improvements to algorithms/models. Achieving satisfactory results may be expensive. In some cases, the results are not even satisfactory because of a technical challenge: the combinatory complexity of human language is boundless, and so it is difficult in a technical implementation to predict the natural language a user could use to ask a particular question. The system may be highly based on recursion, and manual manufacture of dialogue trees may not be a viable solution.
[6] Instead, in some embodiments disclosed herein, a dialogue system is provided that may increase the probability of finding a satisfactory response with relatively little iteration of dialogue between a user and the system. An interactive process is introduced to help facilitate the exchange between the user and the dialogue system to try to increase the level of robustness of the dialogue system.
[7] In some embodiments, the number of responses in the form of questions may be progressively increased during an interaction with a user. This may have the effect of increasing the overall robustness of the dialogue system. For example, the responses that are progressively increased may be questions that the system determines the user may be asking.
[8] Another problem with dialogue systems is that a response formulated by a dialogue system is not customized based on the user. For example, if two different users ask the exact same question, e.g. "What is the monthly fee for your savings account?", the answer would be the same. However, one user may actually be entitled to a preferable monthly fee compared to another user, e.g. based on the volume of monthly financial transactions associated with the user's bank accounts or based on the number of accounts held by the user.
[9] In some embodiments, a technical solution is provided in which a response returned by a dialogue system is generated based on financial information specific to the user. The response may be in reply to a user's finance related question or finance related action. The response may be a question or an answer or an action.
[10] According to a possible embodiment, a computer-implemented method for performing automated interactive conversation with a user is provided. The method comprises:
a. providing a user interface at which the user can provide a natural language input;
b. processing the natural language input with a data processing unit comprising at least one processor executing instructions, the instructions configured for:
i. deriving from the natural language input a possible question the user might be asking;
ii. conveying the possible question to the user through the user interface;
iii. processing user input provided at the user interface indicating that the possible question is incorrect;
iv. deriving a series of alternate questions that the user might be asking;
v. presenting the series of alternate questions to the user through the user interface.
[11] According to another embodiment, a computer-implemented method for performing automated interactive conversation with a user is provided. The method comprises:
a. providing a user interface at which the user can provide a natural language input;
b. processing the natural language input with a data processing unit comprising at least one processor executing instructions, the instructions configured for:
i. extracting keywords from the natural language input;
ii. determining from the keywords a user intent;
iii. deriving from the user intent a possible question the user might be asking;
iv. conveying the possible question to the user through the user interface;
v. processing user input provided at the user interface indicating that the possible question is correct, thereby confirming that the possible question has been correctly determined;
vi. determining the answer associated with the confirmed question; and
vii. presenting the answer to the user through the user interface.
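By way of illustration only, the two methods above can be sketched in a few lines of Python. Every name and value below (KNOWN_ANSWERS, derive_question, the example rates) is hypothetical and not taken from the disclosure; the sketch merely shows the confirm-then-answer shape of the claimed flows.

```python
# Minimal sketch of the claimed confirm-then-answer flows (hypothetical names).

KNOWN_ANSWERS = {
    "What is the cashback rate for the Mastercard credit card?":
        "The cashback rate for our Mastercard is 1%.",
    "What is the interest rate on the Mastercard credit card?":
        "The interest rate on our Mastercard is 19.9%.",
}

def derive_question(natural_language_input: str) -> str:
    # Stand-in for keyword extraction + intent classification (steps i-iii).
    if "cashback" in natural_language_input.lower():
        return "What is the cashback rate for the Mastercard credit card?"
    return "What is the interest rate on the Mastercard credit card?"

def converse(natural_language_input: str, user_confirms) -> str:
    possible = derive_question(natural_language_input)
    # Convey the possible question and process the user's confirmation.
    if user_confirms(possible):
        return KNOWN_ANSWERS[possible]            # answer the confirmed question
    # Otherwise derive and present a series of alternate questions.
    alternates = [q for q in KNOWN_ANSWERS if q != possible]
    return "Did you mean one of these? " + " / ".join(alternates)

# Simulated user who confirms the first reformulated question:
print(converse("What is the rate on your cashback Mastercard?", lambda q: True))
```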
[12] According to another embodiment, a system for performing automated interactive conversation with a user is provided. The system comprises:
a. a user interface configured to receive a natural language input from the user;
b. a data processing unit to process the natural language input, the data processing unit configured to:
i. derive from the natural language input a possible question the user might be asking;
ii. convey the possible question to the user through the user interface;
iii. process user input at the user interface indicating whether the possible question is incorrect;
iv. derive a series of alternate questions that the user might be asking;
v. present the series of alternate questions to the user through the user interface.
BRIEF DESCRIPTION OF THE DRAWINGS
[13] Embodiments will be described, by way of example only, with reference to the accompanying figures wherein:
[14] FIG. 1 is a block diagram of a computer implemented system for performing automated interactive conversation with a user, according to one embodiment;
[15] FIG. 1A is a block diagram of a portion of the computer implemented system relating to keyword extraction and intent classification, according to one embodiment.
[16] FIG. 1B is a block diagram of a portion of the computer implemented system relating to the response generator for generating a first response, according to an embodiment.
[17] FIG. 1C is a block diagram of a portion of the computer implemented system relating to the response generator for generating alternate responses, according to a possible embodiment.
[18] FIG. 1D illustrates a flowchart of the exchanges occurring between the front end and back end of the system, according to a possible embodiment.
[19] FIG. 2 illustrates a flowchart of a computer-implemented method for performing automated interactive conversation with a user, according to one embodiment;
[20] FIGs. 3 and 4 illustrate example message exchanges on a user interface;
[21] FIG. 5 illustrates a flowchart of a computer-implemented method for interacting with a user, according to one embodiment;
[22] FIG. 6 illustrates a flowchart of a computer-implemented method for performing automated interactive conversation with a user, according to another embodiment; and
[23] FIG. 7 illustrates a flowchart of a computer-implemented method for interacting with a user, according to another embodiment.
[24] FIG. 8 illustrates an exemplary flowchart of a portion of the computer-implemented method wherein the user's profile is taken into consideration.
DETAILED DESCRIPTION
[25] For illustrative purposes, specific embodiments and examples will be explained in greater detail below in conjunction with the figures.
[26] FIG. 1 is a block diagram of a computer implemented system 102 for performing automated interactive conversation with a user, according to one embodiment.
The system 102 implements a dialogue system, also commonly referred to as a "chatbot".
[27] The system 102 includes a user interface 104 for receiving a natural language input originating from the user, and for providing a response to the user. The attributes of the user interface 104 are implementation specific and depend on how the user is interacting with the system 102. Two examples of a user interface 104 are illustrated in FIG. 1. In one example, the user interface 104 interfaces with a telephone handset belonging to the user. The telephone handset includes a transmitter through which the user speaks and a receiver through which the user hears the response. A speech recognition module 106 is included as part of the system 102 in order to convert from speech to text. As another example, the user interface 104 may interface with a graphical user interface (GUI) on a computing device, such as on the user's mobile device. The user may use a keyboard or touchscreen to provide a text input, and the response would be presented as text on the display screen hosting the GUI. The user interface 104 is the component of the system 102 that interfaces with users and is meant to refer to the components of the interface that belong to the system 102, rather than to the user device. Throughout the description, a "response" is in reply to a communication sent to or captured through the user interface 104, while an "answer" is the resolution to the original question asked by the user. In the context of the present application, the "answer" is the information the user is looking for. A "response" is thus more generic than an "answer".
[28] The system 102 further includes a data processing unit 110, which may implement a natural language processing system. The data processing unit 110 includes a keyword extractor 112, an intent classifier 114, a response generator 116, and a learning component 118.
[29] Still referring to FIG. 1, and also to FIG. 1A, the keyword extractor 112 receives a natural language input originating from the user. The input received at the keyword extractor 112 is a string of text. In general, the string of text includes multiple words, although in some cases the string of text may be only a single word. The string of text may convey a question asked by the user, or a user instruction, or a user's response to a question that was asked by the system 102. The keyword extractor 112 attempts to extract words and/or phrases from the string of text. If any keywords are extracted, the extracted keywords are stored in a memory, e.g. memory 122.
[30] In some embodiments, the keyword extractor 112 may recognize properties that indicate a particular word in the string of text may be a keyword, such as the use of a date, capital letter, brand name, recognized phrase, etc. Examples of keyword extraction algorithms that may be implemented by the keyword extractor 112 are described in:
(1) Jean-Louis, L., Gagnon, M., and Charton, E., "A knowledge-base oriented approach for automatic keyword extraction", Computación y Sistemas, 2013, vol. 17, no. 2, pp. 187-196; and
(2) Bechet, F., and Charton, E., "Unsupervised knowledge acquisition for extracting named entities from speech", in Acoustics Speech and Signal Processing (ICASSP), International Conference on, pp. 5338-5341, March 2010.
[31] In some embodiments, a keyword extraction algorithm is used involving named entity recognition based on a knowledge representation of the semantic domain covered by the dialogue system application. For example, the semantic domain can relate to banking in general or to more specific domains such as insurance, loans, investment, trading, etc.
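As a rough, non-authoritative illustration of such property-based extraction (and not of the cited algorithms themselves), the following sketch flags tokens that are known domain terms, mid-sentence capitalized words (possible brand names), or dates; the vocabulary is an assumption.

```python
import re

# Hypothetical domain vocabulary; the cited knowledge-base oriented NER would
# replace this hand-written list in a real deployment.
DOMAIN_TERMS = {"mastercard", "visa", "cashback", "mortgage", "savings"}
DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def extract_keywords(text: str) -> list[str]:
    tokens = text.replace("?", " ").split()
    keywords = []
    for i, tok in enumerate(tokens):
        word = tok.strip(".,!").lower()
        is_domain_term = word in DOMAIN_TERMS
        is_brand_like = i > 0 and tok[:1].isupper()   # mid-sentence capital
        if is_domain_term or is_brand_like or DATE.match(word):
            keywords.append(tok.strip(".,!"))
    return keywords

print(extract_keywords("What is the rate on your cashback Mastercard?"))
# -> ['cashback', 'Mastercard']
```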
[32] The intent classifier 114 also receives the natural language input originating from the user in the form of a string of text, and analyses the string of text to determine the intent of the user. In some embodiments, the words in the string of text are compared to a library of intents and entity values. For example, if the user asked the question "What is the rate on your cashback Mastercard?", then the intent classifier 114 may match the word "rate" to an intent "get rate" that is stored in a library of intents. The intent classifier 114 may determine that the entity value relating to that intent is "cashback" by the presence of the word "cashback". In such a scenario, the intent classifier 114 therefore determines that the user is asking for a cashback rate. The presence of the word "Mastercard" may cause the intent classifier 114 to determine that the cashback rate requested by the user is the cashback rate for the MastercardTM brand credit card. The intent classifier 114 may associate a confidence value with the determined intent. The confidence value will be referred to as a "confidence score", and it quantifies how confident the intent classifier 114 is regarding the correctness of its determined intent. For example, the intent determined by the intent classifier 114 may be "get cashback rate for MastercardTM brand credit card". However, this intent is not necessarily correct, e.g. there is some ambiguity from the string of text as to whether the rate requested is the cashback rate or another type of rate instead (e.g. the interest rate for the MastercardTM brand credit card). Therefore, the confidence score may not be 100%, but may instead have a lower value, e.g. 75%.
[33] The intent classifier 114 may be implemented with a neural network. The neural network receives as input the text string from the user interface 104, and outputs a plurality of intents, each associated with a confidence value or confidence score. For example, a first intent "get cashback rate for MastercardTM brand credit card" can be associated with a confidence score of 75%, a second intent "get interest rate for MastercardTM brand credit card" can be associated with a confidence score of 60%, and so on. The confidence score is thus an estimation of the likelihood or probability that the neural network has correctly interpreted the user intent from the text string.
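A toy stand-in for such a classifier is sketched below: a bag-of-words scorer followed by a softmax, outputting every intent with a confidence score, highest first. The intents and weights are hypothetical; a trained network would replace them.

```python
import math

# Hypothetical intent library and weights; a real classifier would learn
# these by training the neural network.
INTENTS = ["get cashback rate for MastercardTM brand credit card",
           "get interest rate for MastercardTM brand credit card"]
WEIGHTS = {("cashback", 0): 2.0, ("rate", 0): 0.5,
           ("interest", 1): 2.0, ("rate", 1): 0.5}

def classify(text: str) -> list[tuple[str, float]]:
    """Return every intent paired with a confidence score, highest first."""
    tokens = text.lower().replace("?", "").split()
    logits = [sum(WEIGHTS.get((t, i), 0.0) for t in tokens)
              for i in range(len(INTENTS))]
    exps = [math.exp(x) for x in logits]       # softmax turns raw scores into
    scores = [e / sum(exps) for e in exps]     # probability-like confidences
    return sorted(zip(INTENTS, scores), key=lambda pair: -pair[1])

for intent, score in classify("What is the rate on your cashback Mastercard?"):
    print(f"{score:.0%}  {intent}")
# -> 88% for the cashback-rate intent, 12% for the interest-rate intent
```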
[34] One example of an algorithm that may be implemented by the intent classifier 114 is described in: Serban, I. V., Sordoni, A., Bengio, Y., Courville, A. C., and Pineau, J., "Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models", in Association for the Advancement of Artificial Intelligence (AAAI), Vol. 16, pp. 3776-3784, February 2016.
[35] In other embodiments, the intent classifier 114 instead works by simply looking for matches between words in the natural language input and words in prewritten questions that are stored in memory 122.
[36] Referring to FIG. 1, FIG. 1B and FIG. 1D, the response generator 116 receives one or more intents from the intent classifier 114, determines the question that the user is possibly asking based on the intent(s), and returns the possible question to the user for verification. The keyword extractor 112 and the intent classifier 114 can form part of a question analyser module 158, as illustrated in FIG. 1D. In possible embodiments, a single intent is provided as an input to the response generator 116, corresponding to the intent having the highest confidence score. In some embodiments, if the intent has a confidence score above a predetermined confidence threshold, the response generator 116 may respond with a reformulated question to obtain validation from the user, the reformulated question being an equivalent of what the question identifier 160 has determined as the initial/original question. In other embodiments, the response generator 116 may respond with a reformulated question regardless of the value of the confidence score. The answer to the possible question may also be returned at this step of the process, along with the reformulated question for validation.
[37] Still referring to FIG. 1, FIG. 1B and FIG. 1D, a first response generator module 116a is called or executed, which comprises a question identifier module 160, an answer identifier module 162 and an answer generator module 164. More specifically, the question identifier 160 processes the intent to determine a question that most likely matches the question conveyed by the text string. The reformulated question is sent to the user interface, to validate whether it has been correctly determined. The answer provided by the user through the user interface 104 is analysed by the question identifier 160, which may also require involvement of the intent classifier 114. If the user has confirmed correctness of the reformulated question, the first response generator 116a then identifies, formulates and returns the answer, via the answer identifier 162 and the answer generator module 164. The question identifier module 160 can thus generate a single question or alternate questions, and can analyse the response from the user indicating whether the question identifier 160 has correctly determined the original question.
[38] In some embodiments, answers to the verified questions may be stored in memory and simply retrieved using a mapping between the verified question and the answer. In other embodiments, the answer identifier 162 may need to send a request over a network to obtain the answer. For example, if the verified question is "what is the cashback rate for MastercardTM brand credit card", then the answer identifier 162 may query a database storing the cashback rate in order to obtain the cashback rate, and then formulate and send the response to the user, e.g. "The cashback rate for our Mastercard is 1%".
[39] Still referring to FIGs.1 and 1D, and also to FIG. 1C, if the user has indicated that the reformulated question is incorrect, alternate questions are generated by the alternate response generator 116b, which can comprise or interact with the same modules 158 (including 114), 160, 162, 164 previously described. In this case, the extracted keywords, if any, are fed to the intent classifier 114, and the question identifier module 160 generates a list of alternate questions based on the intents having the highest confidence scores. The list of alternate questions is then sent to the user interface 104 for confirmation by the user. The question identifier module 160 analyses the user's response, which can be a selection of one of the alternate questions, or an indication that none of the alternate questions matched his original question. An answer is provided by the answer generator module 164, once the original question has been validated.
Advantageously, the question identifier 160 forces the user to confirm or select the correct question from the list, before providing an answer, which not only increases the success rate of providing satisfactory/useful answers, but also allows the learning component 118 to continuously improve the process of identifying the initial question.
[40] The learning component 118 thus adapts the keyword extractor 112 and/or intent classifier 114 based on the answers provided by the user, as discussed in more detail later. For example, the learning component 118 may readjust the weights and biases applied by the neural network for determining intents based on text string inputs and/or extracted keywords.
[41] Operation of the intent classifier 114, response generators 116a or 116b, and learning component 118 will be explained in more detail below in relation to FIG. 2.
[42] The system 102 further includes a memory 122 for storing information used by the data processing unit 110. For example, the memory 122 may store a library of intents, the extracted keywords from the keyword extractor 112, responses or partial responses preprogrammed for use by the response generator 116, etc. The memory 122 can comprise a combination of RAM and ROM and can be part of a single server or distributed across several servers and/or databases, either locally or remotely on cloud-based servers.
[43] The data processing unit 110 and its components (e.g. the keyword extractor 112, intent classifier 114, response generator 116, and learning component 118) may be implemented by one or more processors that execute instructions (software) stored in memory. The memory in which the instructions are stored may be memory 122 or another memory not illustrated. The instructions, when executed, cause the data processing unit 110 and its components to perform the operations described herein, e.g. extracting keywords from the user input, classifying intent, computing a confidence score, formulating the response to send to the user, updating one or more algorithms based on input from the user, etc. In some embodiments, the one or more processors consist of a central processing unit (CPU).
[44] Alternatively, some or all of the data processing unit 110 and its components may be implemented using dedicated circuitry, such as an application specific integrated circuit (ASIC), a graphics processing unit (GPU), or a programmed field programmable gate array (FPGA) for performing the operations of the data processing unit 110 and its components.
[45] In some embodiments, in order to try to increase the robustness of the dialogue system, an interactive process is used in which the number of responses may be progressively increased during an interaction with a user. Example embodiments are provided below.
[46] FIG. 2 illustrates a flowchart of a computer-implemented method for performing automated interactive conversation with a user, e.g. in order to provide an answer to a question from the user, according to one embodiment.
[47] In step 202, a natural language input originating from a user is received via the user interface 104. The natural language input is a string of text that conveys a question.
[48] In step 204, the keyword extractor 112 attempts to extract keywords from the natural language input. If one or more keywords are extracted, then they are stored in memory 122.
[49] In step 206 the intent classifier 114 determines an intent from the natural language input. The intent classifier 114 also determines the confidence score for its determined intent. In step 207, if the confidence score is below a threshold, then the method proceeds to step 221.
Otherwise, if the confidence score is above the threshold, it indicates that the system 102 is confident enough in its determined intent to return a single question for verification, and the method proceeds to step 208.
[50] In step 208, the response generator 116a returns the question for verification by the user, via the question identifier module 160 described above.
[51] In step 209, the data processing unit 110 determines whether the question returned in step 208 was verified as correct by the user. Step 209 may include receiving a natural language input from the user, and the intent classifier 114 determining from the intent of the natural language input whether or not the user has verified the correctness of the question, via the question identifier module 160.
[52] For example, the original natural language input received in step 202 may ask "What is the rate on your cashback MasterCard?" The intent classifier 114 may determine with high enough confidence that the intent is "get cashback rate for MasterCard", and so in step 208 the response generator 116, via the question identifier module 160, returns "I think I understand your question. Can you verify for me that your question is: What is the cashback rate for the MasterCard credit card?" The user replies "Yes". The input "Yes" is determined in step 209 to be verifying that the question is correct, via the question identifier module 160.
[53] If the question is verified as correct, then in step 210 the response generator 116 returns the answer to the question, via the answer identifier 162 and answer generator 164, and the method ends. Optionally, the user's answer confirming correctness of the response may be used by the intent classifier 114 and/or the question identifier 160 to increase their effectiveness for future questions that may be similar. However, if the question is not verified as correct, then the method proceeds to step 211.
[54] When a question is derived by the intent classifier 114 from the natural language input, and the confidence score is above the threshold, then the derived question is referred to as a "likely question". A "likely question" is a question that the system 102 determines was likely conveyed by the natural language input. In step 208, it is the likely question that is returned.
However, it is only a "likely" question because it is not necessarily the actual question that was asked, e.g. if the intent determined by the intent classifier 114 does not correctly reflect the user's intent.
[55] If step 211 is reached, it means that the initial question presented to the user for verification in steps 208 and 209 is not verified as correct. In step 211, the data processing unit 110 determines whether one or more keywords were recognized and extracted by the keyword extractor 112 in step 204. If no keywords were recognized, then the method proceeds from step 211 to step 230. Step 230 is explained later. Otherwise, if one or more keywords were recognized and extracted, then the method proceeds from step 211 to 212.
[56] In step 212, the intent classifier 114 identifies n alternative intents based on the keywords extracted in step 204, where n is a natural number. n may vary depending upon how many alternative intents can be determined, and n may also be capped. For example, if only one alternative intent is determined by the intent classifier 114, then n is limited to n = 1. As another example, if five alternative intents are determined by the intent classifier 114, then n may be capped at four, e.g. only the top four alternative intents are identified.
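Assuming the intents arrive ranked by confidence score (an assumption for illustration), the selection and capping of the n alternatives reduces to a list slice, as in this sketch with hypothetical intents and scores:

```python
def alternative_intents(ranked, cap=4):
    """Step 212 (sketch): skip the most likely intent (already presented and
    rejected in step 208) and keep at most `cap` alternatives."""
    return ranked[1:1 + cap]

ranked = [("get cashback rate", 0.75), ("get interest rate", 0.65),
          ("get annual fee", 0.40), ("get foreign-exchange fee", 0.30),
          ("get credit limit", 0.20), ("get rewards info", 0.10)]
print(alternative_intents(ranked))   # n is capped at 4: four of five alternates remain
```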
[57] An alternative intent is identified by the intent classifier 114 as follows: the keywords are processed, but instead of identifying the most likely intent (identified in step 206), a different intent is identified that is determined to be less likely, e.g. has a lower confidence score. For example, the user's question may be "What is the rate on your cashback Mastercard?" The intent classifier 114 determines two possible intents: (1) the user is requesting the cashback rate for the MastercardTM brand credit card, and the confidence score of this determined intent is 75%; or (2) the user is requesting the interest rate for the MastercardTM brand credit card, and the confidence score of this determined intent is 65%. The intent identified in step 206 is the one with the higher confidence score, which in this example is the cashback rate. The (n = 1) alternative intent identified in step 212 is the one with the lower confidence score, which in this example is the interest rate. Each alternative intent corresponds to an alternative question the user might be asking, which is derived from the natural language input conveying the question that was received at step 202.
[58] The n alternative intents correspond to n alternative questions, and in step 214 the n alternative questions are generated by the question identifier module 160 and returned to the user via the user interface 104.
[59] In step 215 it is determined, via the question identifier 160, whether one of the n alternative questions is identified as correct by the user. Step 215 may be performed by determining the intent of an input received from the user after the n alternative questions are presented to the user. For example, if the user responds "First question", then the question identifier 160 determines that the first question of the n alternative questions is the correct question.
[60] If one of the n alternative questions is identified as correct, then in step 216 the response generator 116 returns, via the answer identifier 162 and the answer generator 164, the corresponding answer and the method ends. If none of the n alternative questions is identified as correct, then the method proceeds to step 230. Step 230 is described later.
The intent classifier and question identifier module can also be readjusted at this point, for example by providing the user's answer to a neural network, thereby modifying the weights and biases of the layers of the neural network.
[61] Returning to step 207, if the confidence score of the intent determined by the intent classifier 114 is below the threshold, then the method proceeds to step 221. If step 221 is reached, it means that an intent has been determined from the natural language input, but the intent classifier 114 is not particularly confident that the determined intent is correct.
[62] Therefore, in step 221, the data processing unit 110 determines whether one or more keywords were recognized and extracted by the keyword extractor 112 in step 204. If no keywords were recognized, then the method proceeds from step 221 to step 230.
Step 230 is explained later. Otherwise, if one or more keywords were recognized and extracted, then the method proceeds from step 221 to step 222. In step 222, the intent classifier 114 identifies the k most likely intents, where k is a natural number greater than or equal to one.
k does not need to have any relation to n, but in some embodiments k = n or k = n + 1. k may vary depending upon how many intents can be determined, and k may also be capped. The k intents returned may be the k intents having the highest confidence scores. For example, the user's question may be "What is the big deal about your Mastercard?" The intent classifier 114 determines two possible intents: (1) the user is requesting a summary of the features of the MastercardTM brand credit card, and the confidence score of this determined intent is 45%; or (2) the user is requesting information on promotional offers for signing up for the MastercardTM brand credit card, and the confidence score of this determined intent is 35%. Neither intent has a high enough confidence score to proceed to step 208, but in step 222 both intents (k = 2) are identified. Each intent corresponds to an alternative question the user might be asking, which is derived from the natural language input conveying the question that was received at step 202.
[63] The k intents correspond to k questions, generated by the question identifier module 160, and in step 224 the k questions are returned to the user via the user interface 104.
[64] In step 225 it is determined whether one of the k questions is identified as correct by the user, via the question identifier module 160. Step 225 may be performed by determining the intent of an input received from the user after the k questions are presented to the user. If one of the k questions is identified as correct, then in step 226 the response generator 116 returns, via the answer identifier module 162 and the answer generator module 164, the corresponding answer and the method ends. If none of the k questions is identified as correct, then the method proceeds to step 230. Again here, the user's question selection can be fed back to the intent classifier 114 and/or the question identifier 160 to improve accuracy of the validation/reformulated questions.
[65] If step 230 is reached in the method of FIG. 2, it means that the system 102 is not able to determine the question the user is asking. In step 230 the response generator 116 sends a reply to the user indicating this, e.g. "Sorry, I do not understand your question. Please try to rephrase your question".
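The control flow of FIG. 2 can be summarized in one hedged sketch. The threshold value, the ask_user callback, and the cap of four candidates are assumptions made for illustration and are not figures taken from the disclosure.

```python
CONFIDENCE_THRESHOLD = 0.70   # hypothetical value for the step 207 comparison

def dialogue_turn(ranked_intents, keywords, ask_user):
    """Sketch of the FIG. 2 control flow. `ask_user` is a hypothetical callback
    that presents candidate question(s) and returns the user's choice (or None)."""
    top_intent, top_score = ranked_intents[0]
    if top_score >= CONFIDENCE_THRESHOLD:
        if ask_user([top_intent]) == top_intent:        # steps 208-209: verify
            return f"answer to: {top_intent}"           # step 210
        if not keywords:
            return "Sorry, I do not understand your question."   # step 230
        candidates = [i for i, _ in ranked_intents[1:5]]          # steps 211-214
    else:
        if not keywords:
            return "Sorry, I do not understand your question."   # step 230
        candidates = [i for i, _ in ranked_intents[:4]]           # steps 221-224
    choice = ask_user(candidates)                       # steps 215 / 225
    if choice:
        return f"answer to: {choice}"                   # steps 216 / 226
    return "Sorry, I do not understand your question."  # step 230

ranked = [("get cashback rate", 0.75), ("get interest rate", 0.65)]
print(dialogue_turn(ranked, ["cashback"], lambda qs: qs[0]))
```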
[66] FIG. 1D thus summarizes the back-and-forth validation process described in relation to FIG. 2, occurring between the front end 120 and the back end 140 of the system 102. Questions and responses are sent to and captured via the user interface 104, and the back end modules 158, 116a and 116b process the user's responses and generate the reformulated question(s). As explained previously, the question entered by the user is first analysed by the question analyser 158, and intents, keywords and/or text strings are fed to either one of the first response generator 116a and the alternate response generator 116b, for instance depending on the confidence score of the intent. In some embodiments, when the confidence score is above a given threshold, a single validation question is returned to the user interface. In other embodiments, the validation question with the highest confidence score is returned to the user interface. In other embodiments, several alternate questions are proposed, one of which can be selected by the user via the user interface if it correctly reflects the original question asked.
[67] FIG. 3 illustrates an example message exchange on a user interface 104, according to one embodiment. The message exchange corresponds to steps 202, 204, 206, 207, 208, 209, 211, 212, 214, 215, and 216 of FIG. 2. The number of responses (in the form of questions) is progressively increased during the interaction with the user. In particular, initially only one question is presented for verification at 382. However, upon receiving user feedback indicating that the initial question is incorrect, n = 3 alternative questions are provided at 384.
The user indicates that the first one of the three alternative questions is correct at 386, and the answer corresponding to that question is returned at 388.
[68] FIG. 4 illustrates an example message exchange on a user interface 104, according to another embodiment. The message exchange corresponds to steps 202, 204, 206, 207, 221, 222, 224, 225, and 226 of FIG. 2. The confidence score relating to the most likely intent does not exceed the threshold, and so the k = 3 most likely responses are returned at 392.
The user indicates that the first one of the three questions is correct at 396, and the answer corresponding to that question is returned at 398.
[69] Returning to FIG. 2, optionally, in step 234, the learning component 118 updates the keyword extractor 112 and/or the intent classifier 114 to reflect the user's response that indicates which question is the correct question. For example, the learning component 118 may receive the output of the "Yes" branch of step 215 and/or step 225, which indicates the correct question, and the learning component 118 may use this indication to update or train the intent classifier 114 and/or the keyword extractor 112. Two examples follow.
[70] One example: The user initially asks the question "What is the big deal about your Mastercard?" The system does not determine an intent with a high enough confidence score, and so three questions are returned to the user, as shown at 392 of FIG. 4. The user replies that the first question is the correct one, i.e. the correct question is "What are the features of the Mastercard credit card?". The learning component 118 then updates the keyword extractor 112 and/or intent classifier 114 to add the vocabulary "big deal" and to indicate that "big deal" is a synonym for "features". Then, if in the future a user asks a question including "big deal", e.g. "What is the big deal regarding your savings account", the intent classifier 114 will more confidently determine that the user intent is that the user wants to learn about the features of the savings account.
[71] Another example: The user initially asks the question "What is the rate on your cashback Mastercard?" The system initially returns the incorrect question, as shown at 382 of FIG. 3, and so three alternative questions are returned to the user, as shown at 384 of FIG. 3. The user replies at 386 that the first question is correct, i.e. the correct question is "What is the interest rate on the Mastercard credit card". The learning component 118 then updates the intent classifier 114 to increase the confidence score of the entity value "interest rate" when "rate" is used in the user's question. Then, if in the future a user asks a similar question, e.g. "What is the rate on your Visa card", then the intent classifier will more confidently determine that the user intent is that the user wants to know the interest rate for the VisaTm brand credit card.
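A simplified illustration of this kind of vocabulary update, standing in for the actual readjustment of network weights, might look as follows; the synonym table is an assumption:

```python
# Hypothetical synonym table consulted by the intent classifier before matching.
SYNONYMS: dict[str, set[str]] = {}

def learn_synonym(user_wording: str, known_term: str) -> None:
    """Record that the user's wording carries the same intent as a known term,
    so future inputs containing it score higher for that intent (a stand-in
    for readjusting the neural network's weights and biases)."""
    SYNONYMS.setdefault(known_term, set()).add(user_wording)

# The user confirmed that "What is the big deal about your Mastercard?" meant
# "What are the features of the Mastercard credit card?":
learn_synonym("big deal", "features")
print(SYNONYMS)   # {'features': {'big deal'}}
```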
[72] An example of a learning algorithm that may be implemented by the learning component 118 is described in: Schatzmann, J., Weilhammer, K., Stuttle, M., and Young, S. (2006), "A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies", The Knowledge Engineering Review, 21(2), 97-126.
[73] In alternative embodiments, steps 208 and 209 of FIG. 2 may be modified to instead just return an answer to the determined question, and ask for validation that the returned answer is correct, in which case step 210 is not needed. For example, box 382 in FIG. 3 may instead be: "The cashback rate on the MasterCard credit card is 1%. Did that answer your question?". If the user answers "Yes" then the method ends, whereas if the user answers "No, that did not answer my question", then the method proceeds to step 211.
[74] In some embodiments, the original question asked in the natural language input received at step 202 may actually consist of more than one question, in which case the system 102 may extract and process each question separately, or the intent classifier 114 may try to determine an overall intent. For example, if the natural language input from the user in step 202 is "Does your bank offer multiple credit cards? What is the rate of each one?", then the intent classifier 114 may determine that the intent is that the user wants a comparison of the rate of each of the bank's credit cards.
[75] In some embodiments, the natural language input received at step 202 may not be a question, but may instead be a request or an instruction to perform an action. For example, the input may be a request for information. The reply may then be a question that confirms whether particular information is being requested. For example, the natural language input received in step 202 may be "Provide me with the rate on your cashback MasterCard", and the initial question returned in step 208 may be "Please confirm that you are asking: What is the cashback rate on the MasterCard credit card?" Similarly, the alternative questions in steps 214 and 224 may ask whether particular information is being requested.
[76] In some embodiments, the natural language input received at step 202 may be an instruction to perform an action, e.g. "open a new account", in which case the question(s) returned may relate to clarification or confirmation before proceeding, e.g. in step 214 "Do you mean any one of the following actions: (1) open a new savings account; or (2) open a new chequing account; or (3) open a new student account?".
[77] In some embodiments, when a user asks a question or requests an action, the response returned by the system 102 may be formulated based on information specific to the user. In some embodiments, the response may be in reply to a user's finance related question or finance related action. The response may be a function of the user's financial information, e.g. the user's prior financial transactions. The response may be a question or an answer or an action.
[78] FIG. 5 illustrates a flowchart of a computer-implemented method for interacting with a user, according to one embodiment. In step 452, the data processing unit 110 receives, in text form, a natural language input originating from a user via the user interface 104. The natural language input conveys a finance related question or a finance related action to be performed. As an example, the user may be asking "What is the monthly fee for your savings account?" (a finance related question), or the user may be instructing "Please open a new savings account" (a finance related action).
[79] In step 454, an intent is determined from the natural language input, possibly using keywords extracted from the natural language input. In step 456, the response generator 116 formulates a response (e.g. a question, an answer, or an action) based on the intent.
However, the response formulated by the response generator 116 is based on user-specific financial information, as explained below.
[80] Stored in memory 122 is the identity of the user. The system 102 knows and stores the identity of the user because the identity of the user has been previously provided to the system 102. As one example, the user may have previously provided their bank card number to the system 102, which is used to uniquely identify the user. As another example, the system 102 may be part of an online banking platform, and the user is signed into their online banking, such that the system 102 is aware of the identity of the user.
[81] Stored in a data structure, e.g. a database, is user-specific financial information. User-specific financial information is financial information that is specific or unique to the user. A non-exhaustive list of user-specific financial information includes any one, some, or all of the following: prior financial transactions performed by the user, e.g. a stored record of previous financial transactions; and/or quantity, quality, or type of financial transactions performed by the user; and/or user account balances; and/or number or type of accounts held by a user (examples of accounts include banking accounts, mortgage accounts, investment accounts, etc.); and/or credit information for the user; and/or information relating to banking products utilized by the user, e.g. whether the user has a mortgage, a credit card, investments, etc. The data structure may be stored in memory 122 or at another location, e.g. a database connected to the data processing unit 110 via a network.
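For illustration, the user-specific record might be modelled as follows; the field names are assumptions rather than terms from the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class UserFinancialProfile:
    """Illustrative record of user-specific financial information; field names
    are hypothetical and chosen only for this sketch."""
    user_id: str
    account_types: set[str] = field(default_factory=set)     # e.g. {"mortgage"}
    balances: dict[str, float] = field(default_factory=dict)
    monthly_transfer_count: int = 0

profile = UserFinancialProfile(
    user_id="bank-card-1234",
    account_types={"mortgage", "savings"},
    balances={"savings-A": 500.00, "savings-B": 0.00},
    monthly_transfer_count=3,
)
```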
[82] There are multiple candidate responses that may be returned to the user, which are selected or weighted based on the user-specific financial information. Some examples are provided below.
[83] Example: The natural language input originating from the user in step 452 conveys the following finance-related question: "What is the monthly fee for your savings account?" The intent determined in step 454 is that the user is requesting the monthly fee for a savings account. The response generator 116 determines the following, e.g. by querying a database: the standard monthly fee is $10 per month, but the fee is reduced to $5 per month if the user has a mortgage account or an investment account with the bank, and the fee is reduced to $0 per month if the user has both a mortgage account and an investment account with the bank.
Therefore, there are three candidate responses: $10, $5, or $0. The response generator 116 uses the user identification stored in memory 122 to query a database that lists the accounts held by the user. The accounts held by the user include a mortgage account, but not an investment account, and so the response returned to the user in step 456 is that the monthly fee is $5, or the response may be a question, e.g. "We can offer you a savings account for a monthly fee of only $5, are you interested?".
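Under the same hypothetical profile_db sketch introduced above, the selection among the three candidate responses of this example may be expressed as follows (the fee tiers are those of the example; the function name and response wording are illustrative):

```python
def monthly_fee_response(user_id: str) -> str:
    """Select among the candidate fees ($10/$5/$0) using the accounts held by the user."""
    accounts = profile_db[user_id].accounts_held
    has_mortgage = "mortgage" in accounts
    has_investment = "investment" in accounts
    if has_mortgage and has_investment:
        fee = 0   # both accounts: fee waived
    elif has_mortgage or has_investment:
        fee = 5   # one of the two accounts: reduced fee
    else:
        fee = 10  # standard monthly fee
    return f"We can offer you a savings account for a monthly fee of only ${fee}, are you interested?"

print(monthly_fee_response("user-123"))  # mortgage but no investment account -> $5
```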
[84] Another example: The natural language input originating from the user in step 452 conveys the following finance-related question: "What is this month's fee for my savings account?" The intent determined in step 454 is that the user wants to know this month's fee for the user's savings account. The fee is a function of the number of financial transactions performed by the user involving the user's savings account, e.g. $1 fee for every transfer into or out of the savings account in the month. The response generator 116 uses the user identification stored in memory 122 to query a database that lists the number of transactions that month. The database returns a value indicating that there were three transfers since the beginning of the month, and so the response returned to the user in step 456 is that the fee will be $3.
[85] Another example: The natural language input originating from the user in step 452 conveys the following finance-related action: "transfer $100 from my savings account to my chequing account". The intent determined in step 454 is that $100 is to be transferred from the user's savings account to the user's chequing account. The response generator 116 determines that the user has two savings accounts ("A" and "B"), and so there are two candidate responses:
either transfer the $100 from the user's savings account A or transfer the $100 from the user's savings account B. The response generator 116 uses the user identification stored in memory 122 to query the account balances for savings accounts A and B and determines that savings account B has no money in it. In response, the response generator 116 performs the transfer from savings account A, perhaps after sending a question to the user confirming that the money is to be transferred from savings account A.
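A minimal sketch of this disambiguation step, again reusing the hypothetical profile_db, might be (the confirmation wording is illustrative):

```python
def resolve_transfer_source(user_id: str, amount: float) -> str:
    """Rule out candidate source accounts that cannot cover the transfer."""
    balances = profile_db[user_id].account_balances
    funded = [name for name, balance in balances.items() if balance >= amount]
    if len(funded) == 1:
        # Only one account can cover the transfer; confirm before executing it.
        return f"Confirm: transfer ${amount:.2f} from {funded[0]}?"
    # Several accounts remain plausible; ask the user to choose.
    return "Which account should the transfer come from: " + ", ".join(funded) + "?"

print(resolve_transfer_source("user-123", 100.00))  # savings B is empty -> savings A
```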
[86] In some embodiments, the method of FIG. 2 may be modified to incorporate generating a response based on user-specific financial information. For example, the answer returned in step 210, 216, and/or 226 may be based on user-specific financial information.
In a variation of FIG. 2, answers (instead of questions) may be returned in steps 208/209, 214/215, and 224/225 (in which case steps 210, 216 and 226 are not needed).
The initial answer returned in step 208/209 may be formulated based on financial information specific to the user. If in step 209 the user found the answer to be unsatisfactory (e.g. incorrect), then the alternative intents or answers (e.g. of step 214/215) may or may not be based on the user's financial information.
[87] An example: The natural language input originating from the user conveys the following finance-related question: "What is the rate of your savings account?" The intent determined is that the user is requesting the interest rate of a savings account, and the confidence score is high enough to immediately supply an answer to the question. The standard interest rate for a savings account is 1% but can be offered at 1.5% if the user has a mortgage account with the bank. The response generator 116 uses the user identification stored in memory 122 to query a database that lists the accounts held by the user. The accounts held by the user include a mortgage account, and so the response returned to the user is that the interest rate is 1.5%. It is then determined that the user is not satisfied with the answer, e.g. the user actually wanted to know the fee for the savings account. Accordingly, n alternative intents are identified, and n corresponding alternative answers are returned to the user. However, the n corresponding alternative answers are not formulated based on user-specific financial information because the system 102 is now not as confident about whether the alternative answers even reflect the question actually asked by the user. This is because the confidence scores associated with the alternative intents are lower than the confidence score associated with the intent initially determined.
[88] In some embodiments, the response may only be formulated based on user-specific financial information if the confidence score of the intent associated with the response is above a particular threshold. For example, if the intent has a confidence score of 90% or above, then the corresponding response is modified based on the user-specific financial information; otherwise, the corresponding response is not modified based on the user-specific financial information.
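A sketch of such a confidence gate, assuming the hypothetical profile_db above and a single illustrative personalization rule (the 1.5% mortgage-holder rate of the preceding example):

```python
CONFIDENCE_THRESHOLD = 0.90  # example threshold from the text

def personalize(answer: str, user_id: str) -> str:
    # Hypothetical rule: mortgage holders are quoted the preferred 1.5% rate.
    if "mortgage" in profile_db[user_id].accounts_held:
        return answer.replace("1%", "1.5%")
    return answer

def formulate_response(confidence: float, generic_answer: str, user_id: str) -> str:
    """Apply user-specific financial information only when the intent is trusted enough."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return personalize(generic_answer, user_id)
    return generic_answer  # low confidence: fall back to the generic answer

print(formulate_response(0.95, "The savings account interest rate is 1%.", "user-123"))
```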
[89] FIG. 6 illustrates a flowchart of a computer-implemented method for performing automated interactive conversation with a user, according to one embodiment.
The automated interactive conversation may be performed in order to provide an answer to a question from the user.
[90] In step 502, a user interface 104 is provided at which the user can provide a natural language input. The natural language input is processed by the data processing unit 110.
The data processing unit 110 comprises at least one processor executing instructions. The instructions are configured to cause the data processing unit 110 to perform the remaining steps of FIG. 6.
[91] In step 504, the data processing unit 110 derives, from the natural language input, a possible question the user might be asking. An example of step 504 is described earlier in relation to steps 202 to 208 of FIG. 2.
[92] In step 506, the data processing unit 110 conveys the possible question to the user through the user interface 104 for verification by the user. An example of step 506 is described earlier in relation to steps 208 and 209 of FIG. 2.
[93] In step 508, the data processing unit 110 processes user input at the user interface 104 indicating that the possible question is incorrect (e.g. the "No" branch of step 209 of FIG. 2).
[94] In step 510, the data processing unit 110 derives a series of alternate questions that the user might be asking (e.g. step 212 of FIG. 2). In some embodiments, step 510 may only be performed if at least one keyword was recognized and extracted from the natural language input. In some embodiments, at least one keyword is recognized and extracted from the natural language input, and step 510 includes deriving the series of alternate questions based on the at least one keyword.
[95] In step 512, the data processing unit 110 presents the series of alternate questions to the user through the user interface 104.
[96] In some embodiments, deriving the possible question the user might be asking in step 504 includes determining a user intent from the natural language input.
In some embodiments, deriving the possible question the user might be asking is performed without an extracted keyword.
[97] In some embodiments, an algorithm for extracting at least one keyword and/or an algorithm for determining user intent is modified based on an indication from the user of which one of the alternate questions is a correct question.
[98] In some embodiments, the method further includes receiving an indication, from the user, that a particular question of the series of alternate questions is correct, and presenting to the user through the user interface an answer to the particular question. In some embodiments, the method includes generating the answer to the particular question using user-specific financial information. In some embodiments, the particular question is a finance-related question, and the user-specific financial information relates to financial transactions previously performed by the user and/or accounts held by the user.
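The flow of steps 502 to 512, together with the answer handling of this paragraph, may be sketched as follows; the derivation functions are stand-ins for the keyword extraction and intent classification described earlier, and the canned questions and answers are purely illustrative:

```python
def derive_possible_question(text: str) -> str:
    # Stand-in for step 504 (intent classification, cf. steps 202-208 of FIG. 2).
    return "What is the cashback rate for the MasterCard credit card?"

def derive_alternate_questions(text: str) -> list:
    # Stand-in for step 510 (alternate intents derived from extracted keywords).
    return ["What is the interest rate on the MasterCard credit card?",
            "What is the annual fee for the MasterCard credit card?"]

def answer_for(question: str) -> str:
    # Stand-in for answer retrieval once a question has been confirmed.
    return "Here is the answer to: " + question

def dialogue_session(nl_input: str) -> str:
    possible = derive_possible_question(nl_input)                      # step 504
    if input(f"Is your question: {possible} (y/n) ") == "y":           # steps 506/508
        return answer_for(possible)
    for i, alt in enumerate(derive_alternate_questions(nl_input), 1):  # steps 510/512
        if input(f"Did you mean ({i}): {alt} (y/n) ") == "y":
            return answer_for(alt)                                     # answer per [98]
    return "Sorry, I do not understand your question. Please try to rephrase your question."
```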
[99] FIG. 7 illustrates a flowchart of a computer-implemented method for interacting with a user, according to one embodiment, wherein the question from the user relates to finance and wherein responses returned by the system to the user interface are based on user-specific information. FIG. 8 provides an example of possible sub-steps that can be performed when the response to a user's question depends on the user's profile.
[100] In step 552 of FIG. 7, a user interface 104 is provided at which the user can provide a natural language input conveying a finance related question or a finance related action to be performed. The natural language input is processed by the data processing unit 110. The data processing unit 110 comprises at least one processor executing instructions. The instructions are configured to cause the data processing unit 110 to perform the remaining steps of FIG. 7.
[101] In step 554, the data processing unit 110 derives, from the natural language input, a possible finance related question or possible finance related action. In step 556, the data processing unit 110 obtains a series of candidate responses, for example using a query lookup table, each of which is a response to the possible finance related question or the possible finance related action. In step 558, the data processing unit 110 selects one of the candidate responses on the basis of user-specific financial information.
[102] In step 560, the data processing unit 110 presents the selected candidate response to the user through the user interface 104. Other examples of financial related questions and responses are provided earlier when describing FIG. 5 and related embodiments.
[103] In some embodiments, the candidate responses are a series of answers, each answer corresponding to a respective possible finance related question. In other embodiments, the candidate responses are a series of actions, each action corresponding to a respective possible finance related action instructed by the user. In other embodiments, the candidate responses are a series of questions, each corresponding to a possible finance related question being asked. In some embodiments, the user-specific financial information relates to financial transactions previously performed by the user and/or accounts held by the user. In some embodiments, the user-specific financial information is retrieved using an identifier of the user stored in memory.
[104] For instance, as per the example presented in FIG. 8, the confirmed or validated question can be: "What is the interest rate on the MasterCard™?". In the example of FIG. 8, the answer relates to the interest rate of a credit card, as determined by querying a lookup table, as in step 802. For some questions, once confirmed, the answer will be independent of the user's profile, and therefore the answer will be the same for all users. This scenario is reflected by the left side of the flowchart, where the answer, corresponding in the example to a 4% credit card interest rate, is the same for all users. In other cases, the answer can vary depending on the user's profile, such as based on the user's financial profile, as per step 804. For example, a user with three or more bank accounts can be offered a lower interest rate than users having a single bank account. An ID database can thus be queried, as per step 806, to uniquely identify the user with a user ID. Once the user has been uniquely identified, financial databases can in turn be queried to determine the personal or financial profile associated with the unique ID (step 808).
The financial profile can include, for example, the number of accounts, cards, financial services, amount per account, and loans associated with the user ID. In the example, for users having more than three accounts linked to their unique identifier, the system can be configured such that a lower interest rate is offered. The rules for modulating the answers based on financial information parameters can be stored, for instance, in a lookup table of answers associated with financial information, as in step 810. In the example, the user has four different accounts, and thus has access to a reduced interest rate of 3%, instead of 4%.
The answer is returned to the answer generator module 164 at step 812.
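A compact sketch of steps 802 to 812, with hypothetical in-memory lookup tables in place of the databases (the 4% and 3% rates and the more-than-three-accounts rule are those of the example; all names are illustrative):

```python
BASE_ANSWERS = {"mastercard_interest_rate": 0.04}  # step 802: 4% answer shared by all users

PROFILE_RULES = {                                  # step 810: per-question modulation rules
    "mastercard_interest_rate":
        lambda profile: 0.03 if profile["num_accounts"] > 3 else None,
}

ID_DB = {"card-5555": "uid-42"}                    # step 806: resolve the unique user ID
FINANCIAL_DB = {"uid-42": {"num_accounts": 4}}     # step 808: financial profile per user ID

def answer_with_profile(question_key: str, user_key: str) -> str:
    rate = BASE_ANSWERS[question_key]
    profile = FINANCIAL_DB[ID_DB[user_key]]
    override = PROFILE_RULES[question_key](profile)
    if override is not None:                       # profile-dependent branch of FIG. 8
        rate = override
    return f"The interest rate on your MasterCard is {rate:.0%}."  # step 812

print(answer_with_profile("mastercard_interest_rate", "card-5555"))  # four accounts -> 3%
```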
[105] The predominant modus operandi of dialogue systems consists of determining the question asked by a user, and returning an answer corresponding to the determined question.
Despite recent advances and developments in the field, the success rate of dialogue systems is still surprisingly low, wherein the success rate corresponds to the ability to correctly determine the question asked by a user and to return a satisfactory/accurate answer following a dialogue session with the user. The proposed system and method improve on existing dialogue systems and methods by prompting users to confirm the system's "understanding" of their question, such as by reformulating the possible question. In cases where the level of confidence in the determined question is low, or if the initial attempt at determining the original question has failed, the system generates several candidate questions (alternate questions) to be validated by the user, instead of simply providing what would most likely be an inaccurate answer, as a typical chatbot would. This proposed method and system have proved to significantly increase the success rate in providing accurate answers to users' questions.
[106] A prototype of the proposed automated interactive conversation/dialogue system has been tested and compared with an existing commercially available system.
In the experiment, the same 100 predetermined questions with known answers were asked of both the proposed system 102 and an existing commercially available system. With respect to the proposed system 102, with reference to FIG. 3, dialogue box 382 is considered a first attempt by the proposed system 102, and dialogue box 384 is considered a second attempt by the proposed system 102. The dialogue session of FIG. 3 would thus be attributed a score of 0.5. The existing commercially available system, on the other hand, provides an answer to the question in its first attempt and, if the first attempt is incorrect, provides one or more alternative answers. This method of responding with a first answer and then subsequently with other alternative answers is common amongst existing chatbot dialogue systems. If a system provided a correct answer at the first attempt, a score of 1 was assigned; if a correct answer was provided at the second attempt, a score of 0.5 was assigned. The assigned scores for all 100 questions were summed and the result divided by 100, for each system. The results are shown in the table below:
System tested                            Accuracy score
Proposed system and method               77.2%
Existing commercially available system   51.3%
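For clarity, the scoring rule and the resulting relative improvement reduce to the following computation (the three per-question outcomes passed to accuracy are illustrative only):

```python
def accuracy(scores):
    """Average the per-question scores (1.0 first attempt, 0.5 second attempt, 0 otherwise)."""
    return sum(scores) / len(scores)

print(accuracy([1.0, 0.5, 0.0]))           # e.g. three questions -> 0.5

# Reported aggregate scores for the two systems under test.
proposed, existing = 0.772, 0.513
relative_increase = (proposed - existing) / existing
print(f"{relative_increase:.1%}")          # roughly 50.5% relative increase in accuracy
```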
[107] As can be appreciated, a relative increase of approximately 50.5% in the accuracy of the dialogue process is achieved with the proposed system and method ((77.2 − 51.3) / 51.3 ≈ 50.5%).
[108] Although the foregoing has been described with reference to certain specific embodiments, various modifications thereof will be apparent to those skilled in the art without departing from the scope of the claims appended hereto.
[109] Moreover, any module, component, or device exemplified herein that executes instructions may include or otherwise have access to a non-transitory computer/processor readable storage medium or media for storage of information, such as computer/processor readable instructions, data structures, program modules, and/or other data. A
non-exhaustive list of examples of non-transitory computer/processor readable storage media includes magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, optical disks such as compact disc read-only memory (CD-ROM), digital video discs or digital versatile discs (DVDs), Blu-ray Disc™, or other optical storage, volatile and non-volatile and removable and non-removable media implemented in any method or technology, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory or other memory technology. Any such non-transitory computer/processor storage media may be part of a device or accessible or connectable thereto. Any application or module herein described may be implemented using computer/processor readable/executable instructions that may be stored or otherwise held by such non-transitory computer/processor readable storage media.
[76] In some embodiments, the natural language input received at step 202 may be an instruction to perform an action, e.g. "open a new account", in which case the question(s) returned may relate to clarification or confirmation before proceeding, e.g.
in step 214 "Do you mean any one of the following actions: (1) Open a new savings account?; or (2) open a new chequing account?; or (3) open a new student account?".
Robic Ref. No. 19958-0008 [77] In some embodiments, when a user asks a question or requests an action, the response returned by the system 102 may be formulated based on information specific to the user. In some embodiments, the response may be in reply to a user's finance related question or finance related action. The response may be a function of the user's financial information, e.g.
the user's prior financial transactions. The response may be a question or an answer or an action.
[78] FIG. 5 illustrates a flowchart of a computer-implemented method for interacting with a user, according to one embodiment. In step 452, the data processing unit 110 receives, in text form, a natural language input originating from a user via the user interface 104. The natural language input conveys a finance related question or a finance related action to be performed. As an example, the user may be asking "What is the monthly fee for your savings account?" (a finance related question), or the user may be instructing "Please open a new savings account" (a finance related action).
[79] In step 454, an intent is determined from the natural language input, possibly using keywords extracted from the natural language input. In step 456, the response generator 116 formulates a response (e.g. a question, an answer, or an action) based on the intent.
However, the response formulated by the response generator 116 is based on user-specific financial information, as explained below.
[80] Stored in memory 122 is the identity of the user. The system 102 knows and stores the identity of the user because the identity of the user has been previously provided to the system 102. As one example, the user may have previously provided their bank card number to the system 102, which is used to uniquely identify the user. As another example, the system 102 may be part of an online banking platform, and the user is signed into their online banking, such that the system 102 is aware of the identity of the user.
[81] Stored in a data structure, e.g. a database, is user-specific financial information.
User-specific financial information is financial information that is specific or unique to the user.
A non-exhaustive list of user-specific financial information includes any one, some, or all of the following: prior financial transactions performed by the user, e.g. a stored record of previous financial transactions; and/or quantity, quality, or type of financial transactions performed by the user; and/or user account balances; and/or number or type of accounts held by a user (examples Robic Ref. No. 19958-0008 of accounts include banking accounts, mortgage accounts, investment accounts, etc.); and/or credit information for the user; and/or information relating to banking products utilized by the user, e.g. whether the user has a mortgage, a credit card, investments, etc.
The data structure may be stored in memory 122 or at another location, e.g. a database connected to data processing unit 110 via a network.
[82] There are multiple candidate responses that may be returned to the user, which are selected or weighted based on the user-specific financial information. Some examples are provided below.
[83] Example: The natural language input originating from the user in step conveys the following finance-related question: "What is the monthly fee for your savings account?" The intent determined in step 454 is that the user is requesting the monthly fee for a savings account. The response generator 116 determines the following, e.g. by querying a database: the standard monthly fee is $10 per month, but the fee is reduced to $5 per month if the user has a mortgage account or an investment account with the bank, and the fee is reduced to $0 per month if the user has both a mortgage account and an investment account with the bank.
Therefore, there are three candidate responses: $10, $5, or $0. The response generator 116 uses the user identification stored in memory 122 to query a database that lists the accounts held by the user. The accounts held by the user include a mortgage account, but not an investment account, and so the response returned to the user in step 456 is that the monthly fee is $5, or the response may be a question, e.g. "We can offer you a savings account for a monthly fee of only $5, are you interested?".
[84] Another example: The natural language input originating from the user in step 452 conveys the following finance-related question: "What is this month's fee for my savings account?" The intent determined in step 454 is that the user wants to know this month's fee for the user's savings account. The fee is a function of the number of financial transactions performed by the user involving the user's savings account, e.g. $1 fee for every transfer into or out of the savings account in the month. The response generator 116 uses the user identification stored in memory 122 to query a database that lists the number of transactions that month. The Robic Ref. No. 19958-0008 database returns a value indicating that there were three transfers since the beginning of the month, and so the response returned to the user in step 456 is that the fee will be $3.
[85] Another example: The natural language input originating from the user in step 452 conveys the following finance-related action: "transfer $100 from my savings account to my chequing account". The intent determined in step 454 is that $100 is to be transferred from the user's savings account to the user's chequing account. The response generator 116 determines that the user has two savings accounts ("A" and "B"), and so there are two candidate responses:
either transfer the $100 from the user's savings account A or transfer the $100 from the user's savings account B. The response generator 116 uses the user identification stored in memory 122 to query the account balances for savings accounts A and B and determines that savings account B has no money in it. In response, the response generator 116 performs the transfer from savings account A, perhaps after sending a question to the user confirming that the money is to be transferred from savings account A.
[86] In some embodiments, the method of FIG. 2 may be modified to incorporate generating a response based on user-specific financial information. For example, the answer returned in step 210 and/or 216, and/or 226 may be based on user-specific financial information.
In a variation of FIG. 2, answers (instead of questions) may be returned in steps 208/209, 214/215, and 224/225 (in which case steps 210, 216 and 226 are not needed).
The initial answer returned in step 208/209 may be formulated based on financial information specific to the user. If in step 209 the user found the answer to be unsatisfactory (e.g. incorrect), then the alternative intents or answers (e.g. of step 214/215) may or may not be based on the user's financial information.
[87] An example: The natural language input originating from the user conveys the following finance-related question: "What is the rate of your savings account?" The intent determined is that the user is requesting the interest rate of a savings account, and the confidence score is high enough to immediately supply an answer to the question. The standard interest rate for a savings account is 1% but can be offered at 1.5% if the user has a mortgage account with the bank. The response generator 116 uses the user identification stored in memory 122 to query a database that lists the accounts held by the user. The accounts held by the user include a Robic Ref. No. 19958-0008 mortgage account, and so the response returned to the user is that the interest rate is 1.5%. It is then determined that the user is not satisfied with the answer, e.g. the user actually wanted to know the fee for the savings account. n alternative intents are therefore identified, and n corresponding alternative answers are returned to the user. However, the n corresponding alternative answers are not formulated based on user-specific financial information because the system 102 is now not as confident about whether the alternative answers even reflect the question actually asked by the user. This is because the confidence scores associated with the alternative intents are lower than the confidence score associated with intent initially determined.
[88] In some embodiments, the response may only be formulated based on user-specific financial information if the confidence score of the intent associated with the response is above a particular threshold. For example, if the intent has a confidence score of 90% or above, then modify the corresponding response based on the user-specific financial information;
otherwise, do not modify the corresponding response based on the user-specific financial information.
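One way to express this gating rule, assuming a 90% threshold and a caller-supplied personalization function (both illustrative):

```python
CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff for using user-specific information

def build_response(confidence: float, generic_answer: str, personalize) -> str:
    """Personalize the response only when the intent is confidently identified."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return personalize(generic_answer)  # e.g. substitute the user's own rate
    return generic_answer                   # low confidence: keep the answer generic
```

For example, `build_response(0.95, "The rate is 1%.", lambda a: "Your rate is 1.5%.")` would return the personalized answer, while a confidence of 0.6 would leave the generic one untouched.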
[89] FIG. 6 illustrates a flowchart of a computer-implemented method for performing automated interactive conversation with a user, according to one embodiment.
The automated interactive conversation may be performed in order to provide an answer to a question from the user.
[90] In step 502, a user interface 104 is provided at which the user can provide a natural language input. The natural language input is processed by the data processing unit 110.
The data processing unit 110 comprises at least one processor executing instructions. The instructions are configured to cause the data processing unit 110 to perform the remaining steps of FIG. 6.
[91] In step 504, the data processing unit 110 derives, from the natural language input, a possible question the user might be asking. An example of step 504 is described earlier in relation to steps 202 to 208 of FIG. 2.
[92] In step 506, the data processing unit 110 conveys the possible question to the user through the user interface 104 for verification by the user. An example of step 506 is described earlier in relation to steps 208 and 209 of FIG. 2.
[93] In step 508, the data processing unit 110 processes user input at the user interface 104 indicating that the possible question is incorrect (e.g. the "No" branch of step 209 of FIG. 2).
[94] In step 510, the data processing unit 110 derives a series of alternate questions that the user might be asking (e.g. step 212 of FIG. 2). In some embodiments, step 510 may only be performed if at least one keyword was recognized and extracted from the natural language input. In some embodiments, at least one keyword is recognized and extracted from the natural language input, and step 510 includes deriving the series of alternate questions based on the at least one keyword.
[95] In step 512, the data processing unit presents the series of alternate questions to the user through the user interface 104.
[96] In some embodiments, deriving the possible question the user might be asking in step 504 includes determining a user intent from the natural language input.
In some embodiments, deriving the possible question the user might be asking is performed without an extracted keyword.
[97] In some embodiments, an algorithm for extracting at least one keyword and/or an algorithm for determining user intent is modified based on an indication from the user of which one of the alternate questions is a correct question.
[98] In some embodiments, the method further includes receiving an indication, from the user, that a particular question of the series of alternate questions is correct, and presenting to the user through the user interface an answer to the particular question. In some embodiments, the method includes generating the answer to the particular question using user-specific financial information. In some embodiments, the particular question is a finance-related question, and the user-specific financial information relates to financial transactions previously performed by the user and/or accounts held by the user.
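The overall loop of FIG. 6 can be sketched as follows; the intent model, question templates, and console prompts are assumptions standing in for the components of system 102:

```python
def ask_user(prompt: str) -> bool:
    """Stand-in for the user interface 104 of steps 506 and 512."""
    return input(prompt + " (yes/no) ").strip().lower() == "yes"

def converse(nl_input: str, intent_model, templates: dict):
    # Step 504: rank candidate intents derived from the natural language input.
    intents = sorted(intent_model.predict(nl_input),
                     key=lambda it: it.confidence, reverse=True)
    top, alternates = intents[0], intents[1:]

    # Step 506: convey the most likely question for verification.
    if ask_user(f'Did you mean: "{templates[top.name]}"?'):
        return top.name

    # Steps 508-512: the possible question was incorrect, so derive and
    # present alternate questions from the remaining ranked intents.
    for alt in alternates:
        if ask_user(f'Perhaps: "{templates[alt.name]}"?'):
            return alt.name
    return None  # no candidate matched; the system may ask the user to rephrase
```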
[99] FIG. 7 illustrates a flowchart of a computer-implemented method for interacting with a user, according to one embodiment, wherein the question from the user relates to finance and wherein responses returned by the system to the user interface are based on user-specific information. FIG. 8 provides an example of possible sub-steps that can be performed when the response to a user's question depends on the user's profile.
[100] In step 552 of FIG. 7, a user interface 104 is provided at which the user can provide a natural language input conveying a finance-related question or a finance-related action to be performed. The natural language input is processed by the data processing unit 110. The data processing unit 110 comprises at least one processor executing instructions. The instructions are configured to cause the data processing unit 110 to perform the remaining steps of FIG. 7.
[101] In step 554, the data processing unit 110 derives, from the natural language input, a possible finance-related question or possible finance-related action. In step 556, the data processing unit 110 obtains a series of candidate responses, for example using a query lookup table, each of which is a response to the possible finance-related question or the possible finance-related action. In step 558, the data processing unit 110 selects one of the candidate responses on the basis of user-specific financial information.
[102] In step 560, the data processing unit 110 presents the selected candidate response to the user through the user interface 104. Other examples of finance-related questions and responses are provided earlier when describing FIG. 5 and related embodiments.
[103] In some embodiments, the candidate responses are a series of answers, each answer corresponding to a respective possible finance-related question. In other embodiments, the candidate responses are a series of actions, each action corresponding to a respective possible finance-related action instructed by the user. In other embodiments, the candidate responses are a series of questions. The questions may each correspond to a possible finance-related question being asked. In some embodiments, the user-specific financial information relates to financial transactions previously performed by the user and/or accounts held by the user. In some embodiments, the user-specific financial information is retrieved using an identifier of the user stored in memory.
[104] For instance, as per the example presented in FIG. 8, the confirmed or validated question can be: "What is the interest rate on the Master Card™?" In the example of FIG. 8, the answer relates to the interest rate of a credit card, as determined by querying a lookup table, as in step 802. For some questions, once confirmed, the answer will be independent of the user's profile, and therefore the answer will be the same for all users. This scenario is reflected by the left side of the flowchart, where the answer, corresponding in the example to a 4% credit card interest rate, is the same for all users. In other cases, the answer can vary depending on the user's profile, such as based on the user's financial profile, as per step 804. For example, a user with three or more bank accounts can be offered a lower interest rate than users having a single bank account. An ID database can thus be queried, as per step 806, to uniquely identify the user with a user ID. Once the user has been uniquely identified, financial databases can in turn be queried to determine the personal or financial profile associated with the unique ID (step 808).
The financial profile can include, for example, the number of accounts, cards, financial services, amounts per account, and loans associated with the user ID. In the example, for users having three or more accounts linked to their unique identifier, the system can be configured such that a lower interest rate is offered to these users. The rules for modulating the answers based on financial information parameters can be stored, for instance, in a lookup table for answers associated with financial information, as in step 810. In the example, the user has four different accounts, and thus has access to a reduced interest rate of 3%, instead of 4%.
The answer is returned to the answer generator module 164 at step 812.
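The two branches of FIG. 8 can be sketched as below; the lookup tables, database helpers, and the three-account rule follow the example above, while the names and signatures are assumptions:

```python
# Step 802: profile-independent answers, identical for every user.
BASE_ANSWERS = {"credit_card_interest_rate": 0.04}

def credit_card_rate(user_name: str, id_db, fin_db) -> float:
    """Return the rate, modulated by the user's financial profile when applicable."""
    base = BASE_ANSWERS["credit_card_interest_rate"]
    user_id = id_db.lookup(user_name)   # step 806: resolve the unique user ID
    profile = fin_db.profile(user_id)   # step 808: fetch the financial profile
    # Step 810: modulation rule stored alongside the answer lookup table.
    if profile.account_count >= 3:      # three or more linked accounts
        return 0.03                     # reduced rate for this profile
    return base                         # left branch: same 4% for all users
```

For the user of the example, who holds four accounts, `credit_card_rate` returns 0.03 (3%) rather than the base 4%, and the result is passed back to the answer generator module 164 (step 812).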
[105] The predominant modus operandi of dialogue systems consists of determining the question asked by a user, and returning an answer corresponding to the determined question.
Despite recent advances and developments in the field, the success rate of dialogue systems is still surprisingly low, wherein the success rate corresponds to the ability to correctly determine the question asked by a user, and to return a satisfactory/accurate answer following a dialogue session with the user. The proposed system and method improve on existing dialogue systems and methods by prompting users to confirm the system's "understanding" of their question, such as by reformulating the possible question. In cases where the level of confidence in the determined question is low, or if the initial attempt at determining the original question has failed, the system generates several candidate questions (alternate questions) to be validated by the user, instead of simply providing what would most likely be an inaccurate answer, as a typical chatbot would. The proposed method and system have proved to significantly increase the success rate in providing accurate answers to users' questions.
[106] A prototype of the proposed automated interactive conversation/dialogue system has been tested and compared with an existing commercially available system.
In the experiment, the same 100 predetermined questions with known answers were asked of both the proposed system 102 and the existing commercially available system. With respect to the proposed system 102, with reference to FIG. 3, dialogue box 382 is considered a first attempt by the proposed system 102, and dialogue box 384 is considered a second attempt by the proposed system 102. The dialogue session of FIG. 3 would thus be attributed a score of 0.5. The existing commercially available system, on the other hand, provides an answer to the question in its first attempt, and if the first attempt is incorrect, the existing commercially available system provides one or more alternative answers. This method of responding with a first answer and then subsequently with other alternative answers is common amongst existing chatbot dialogue systems. If the commercially available system provided a good answer at the first attempt, a score of 1 was assigned, and if a correct answer was provided at the second attempt, a score of 0.5 was assigned. The assigned scores for all 100 questions were summed and the result divided by 100, for each system. The results are shown in the table below:
System tested | Accuracy score
---|---
Proposed system and method | 77.2%
Existing commercially available system | 51.3%
[107] As can be appreciated, a relative increase of 50.4% in the accuracy of the dialogue process is achieved with the proposed system and method.
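For clarity, the scoring arithmetic of the comparison can be reproduced in a few lines; the per-question records below are illustrative, not the actual test data:

```python
def accuracy_score(attempts_needed: list) -> float:
    """attempts_needed[i] is 1 or 2 (the attempt that succeeded) or None (failed)."""
    points = {1: 1.0, 2: 0.5}
    return sum(points.get(a, 0.0) for a in attempts_needed) / len(attempts_needed)

# e.g. the dialogue session of FIG. 3, answered correctly on the second
# attempt, contributes 0.5 to the sum before division by the 100 questions.
print(accuracy_score([1, 2, None, 1]))  # 0.625 for this small illustrative set
```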
[108] Although the foregoing has been described with reference to certain specific embodiments, various modifications thereof will be apparent to those skilled in the art without departing from the scope of the claims appended hereto.
[109] Moreover, any module, component, or device exemplified herein that executes instructions may include or otherwise have access to a non-transitory computer/processor readable storage medium or media for storage of information, such as computer/processor readable instructions, data structures, program modules, and/or other data. A non-exhaustive list of examples of non-transitory computer/processor readable storage media includes magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, optical disks such as compact disc read-only memory (CD-ROM), digital video discs or digital versatile discs (DVDs), Blu-ray Disc™, or other optical storage, volatile and non-volatile, removable and non-removable media implemented in any method or technology, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology. Any such non-transitory computer/processor storage media may be part of a device or accessible or connectable thereto. Any application or module herein described may be implemented using computer/processor readable/executable instructions that may be stored or otherwise held by such non-transitory computer/processor readable storage media.
Claims (23)
1. A computer-implemented method for performing automated interactive conversation with a user, the method comprising:
a. providing a user interface at which the user can provide a natural language input;
b. processing the natural language input with a data processing unit comprising at least one processor executing instructions, the instructions configured for:
i. deriving from the natural language input a possible question the user might be asking;
ii. conveying the possible question to the user through the user interface;
iii. processing user input provided at the user interface indicating that the possible question is incorrect;
iv. deriving a series of alternate questions that the user might be asking;
v. presenting the series of alternate questions to the user through the user interface.
2. The computer-implemented method of claim 1, wherein the instructions are further configured for: extracting at least one keyword from the natural language input and deriving the series of alternate questions based on the at least one keyword.
3. The computer-implemented method of claim 2, wherein deriving the possible question the user might be asking comprises determining a user intent from the natural language input.
4. The computer-implemented method of claim 3, wherein deriving the possible question the user might be asking is performed without the at least one keyword.
5. The computer-implemented method of claim 3 or claim 4, wherein an algorithm for extracting the at least one keyword and/or an algorithm for determining the user intent is modified based on an indication from the user of which one of the alternate questions is a correct question.
6. The computer-implemented method of any one of claims 1 to 5, wherein the instructions are further configured for: receiving an indication, from the user, that a particular question of the series of alternate questions is correct, and presenting to the user through the user interface an answer to the particular question.
7. The computer-implemented method of claim 6, further comprising generating the answer to the particular question using user-specific financial information.
8. The computer-implemented method of claim 7, wherein the particular question is a finance-related question, and the user-specific financial information relates to financial transactions previously performed by the user and/or accounts held by the user.
9. The computer-implemented method according to claim 3, 4 or 5, wherein the user intents associated with the possible question and with the alternate questions are each associated with a confidence score, and wherein in step ii, the possible question first formulated is derived from the user intent having the highest confidence score, and wherein in step iv, the series of alternate questions are derived from user intents having the next highest confidence scores.
10. The computer-implemented method according to claim 5, wherein the algorithm for extracting the at least one keyword and/or an algorithm for determining the user intent is a neural network algorithm, and wherein weights and biases of the neural network algorithm are adjusted based on the indication that one of the alternate questions is the correct question.
11. The computer-implemented method according to claim 8, wherein the answer is modulated based on the financial transactions previously performed by the user and/or accounts held by the user, whereby different users are provided different answers in response to the possible question or alternate questions, once confirmed.
12. The computer-implemented method according to any one of claims 1 to 11, wherein steps iii to v are repeated until the user provides an indication, via the user interface, that one of the alternate questions corresponds to the original question.
13. The computer-implemented method according to any one of claims 1 to 12, wherein in step iv the number of alternate questions is progressively increased during interaction with the user.
14. A computer-implemented method for performing automated interactive conversation with a user, the method comprising:
a. providing a user interface at which the user can provide a natural language input;
b. processing the natural language input with a data processing unit comprising at least one processor executing instructions, the instructions configured for:
i. extracting keywords from the natural language input;
ii. determining from the keywords a user intent associated with the natural language input;
iii. deriving from the user intent a possible question the user might be asking;
iv. conveying the possible question to the user through the user interface;
v. processing user input provided at the user interface indicating that the possible question is correct, thereby confirming that the possible question has been correctly determined;
vi. determining the answer associated with the confirmed question;
vii. presenting the answer to the user through the user interface.
15. The computer-implemented method according to claim 14, comprising a step of determining a confidence score associated with the user intent, and wherein step iv of conveying the possible question is performed when the user intent associated therewith has a confidence score above a predetermined threshold.
16. A system for performing automated interactive conversation with a user, the system comprising:
a. a user interface configured to receive a natural language input from the user;
b. a data processing unit to process the natural language input, the data processing unit configured to:
i. derive from the natural language input a possible question the user might be asking;
ii. convey the possible question to the user through the user interface;
iii. process user input at the user interface indicating whether the possible question is incorrect;
iv. derive a series of alternate questions that the user might be asking;
v. present the series of alternate questions to the user through the user interface.
17. The system of claim 16, wherein the data processing unit comprises a keyword extractor for extracting at least one keyword from the natural language input and an intent classifier for determining, from the at least one keyword and/or the natural language input, a user intent and a confidence score associated with the user intent, wherein the data processing unit is configured to derive the series of alternate questions based on the at least one keyword.
18. The system according to claim 17, wherein the data processing unit comprises:
a response generator for performing steps i. to v., the response generator comprising:
a question identifier module deriving the possible question and alternate questions from the keywords and/or the user intent;
an answer identifier module processing user input at the user interface indicating whether the possible question or one of the alternate questions is correct;
and an answer generator module presenting the answer associated with one of the alternate questions, once confirmed as correct by the user.
19. The system of claim 17, wherein the data processing unit is configured to derive the possible question the user might be asking without using the at least one keyword.
20. The system of any one of claims 16 to 19, wherein the data processing unit is configured to modify an algorithm for extracting the at least one keyword and/or modify an algorithm for determining the user intent based on an indication from the user of which one of the alternate questions is a correct question.
21. The system of claim 20, wherein the algorithm is a neural network algorithm, and wherein weights and biases of the neural network algorithm are adjusted based on the indication that one of the alternate questions is the correct question.
22. The system of claim 18, wherein the answer generator module is configured to generate the answer to the possible or alternate question using user-specific financial information.
23. The system of claim 22, wherein the possible or alternate question is a finance-related question, and the user-specific financial information relates to financial transactions previously performed by the user and/or accounts held by the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA3171020A CA3171020A1 (en) | 2018-12-07 | 2019-12-06 | Systems and methods for performing automated interactive conversation with a user |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA3026936A CA3026936A1 (en) | 2018-12-07 | 2018-12-07 | Systems and methods for performing automated interactive conversation with a user |
CA3026936 | 2018-12-07 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA3171020A Division CA3171020A1 (en) | 2018-12-07 | 2019-12-06 | Systems and methods for performing automated interactive conversation with a user |
Publications (1)
Publication Number | Publication Date |
---|---|
CA3064116A1 true CA3064116A1 (en) | 2020-06-07 |
Family
ID=71070829
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA3026936A Abandoned CA3026936A1 (en) | 2018-12-07 | 2018-12-07 | Systems and methods for performing automated interactive conversation with a user |
CA3171020A Pending CA3171020A1 (en) | 2018-12-07 | 2019-12-06 | Systems and methods for performing automated interactive conversation with a user |
CA3064116A Pending CA3064116A1 (en) | 2018-12-07 | 2019-12-06 | Systems and methods for performing automated interactive conversation with a user |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA3026936A Abandoned CA3026936A1 (en) | 2018-12-07 | 2018-12-07 | Systems and methods for performing automated interactive conversation with a user |
CA3171020A Pending CA3171020A1 (en) | 2018-12-07 | 2019-12-06 | Systems and methods for performing automated interactive conversation with a user |
Country Status (1)
Country | Link |
---|---|
CA (3) | CA3026936A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230410191A1 (en) * | 2022-06-17 | 2023-12-21 | Truist Bank | Chatbot experience to execute banking functions |
2018
- 2018-12-07 CA CA3026936A patent/CA3026936A1/en not_active Abandoned
2019
- 2019-12-06 CA CA3171020A patent/CA3171020A1/en active Pending
- 2019-12-06 CA CA3064116A patent/CA3064116A1/en active Pending
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210237757A1 (en) * | 2020-01-31 | 2021-08-05 | Toyota Jidosha Kabushiki Kaisha | Information processing device, information processing method, and storage medium storing information processing program |
US11577745B2 (en) * | 2020-01-31 | 2023-02-14 | Toyota Jidosha Kabushiki Kaisha | Information processing device, information processing method, and storage medium storing information processing program |
US20210335357A1 (en) * | 2020-04-28 | 2021-10-28 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method for controlling intelligent speech apparatus, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CA3171020A1 (en) | 2020-06-07 |
CA3026936A1 (en) | 2020-06-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109960725B (en) | Text classification processing method and device based on emotion and computer equipment | |
CN110717514A (en) | Session intention identification method and device, computer equipment and storage medium | |
CN108021934B (en) | Method and device for recognizing multiple elements | |
CN111177359A (en) | Multi-turn dialogue method and device | |
CN108829682B (en) | Computer readable storage medium, intelligent question answering method and intelligent question answering device | |
CA3064116A1 (en) | Systems and methods for performing automated interactive conversation with a user | |
CN110175229B (en) | Method and system for on-line training based on natural language | |
CN109857846B (en) | Method and device for matching user question and knowledge point | |
US11995523B2 (en) | Systems and methods for determining training parameters for dialog generation | |
CN110264330B (en) | Credit index calculation method, apparatus, and computer-readable storage medium | |
US20220138770A1 (en) | Method and apparatus for analyzing sales conversation based on voice recognition | |
KR102186641B1 (en) | Method for examining applicant through automated scoring of spoken answer based on artificial intelligence | |
CN112287090A (en) | Financial question asking back method and system based on knowledge graph | |
CN111858854A (en) | Question-answer matching method based on historical dialogue information and related device | |
CN111177307A (en) | Test scheme and system based on semantic understanding similarity threshold configuration | |
US11314534B2 (en) | System and method for interactively guiding users through a procedure | |
US20240211571A1 (en) | Authentication Question Improvement Based on Vocal Confidence Processing | |
KR20220042103A (en) | Method and Apparatus for Providing Hybrid Intelligent Customer Consultation | |
CN111309882B (en) | Method and device for realizing intelligent customer service question and answer | |
CN116304046A (en) | Dialogue data processing method and device, storage medium and electronic equipment | |
KR20210009266A (en) | Method and appratus for analysing sales conversation based on voice recognition | |
Podgorny et al. | Conversational agents and community question answering | |
Chung et al. | A question detection algorithm for text analysis | |
Li | An evaluation of automation on misogyny identification (ami) and deep-learning approaches for hate speech-highlight on graph convolutional networks and neural networks | |
US20220319496A1 (en) | Systems and methods for training natural language processing models in a contact center |