WO2021171238A1 - A method and a system for improving response to an end-user's query - Google Patents

A method and a system for improving response to an end-user's query

Info

Publication number
WO2021171238A1
Authority
WO
WIPO (PCT)
Prior art keywords
text
query
user
actions
category
Prior art date
Application number
PCT/IB2021/051608
Other languages
French (fr)
Inventor
Madhusudan Singh
Kaushik Halder
Aritra Ghosh Dastidar
Nirmal RAMESH RAYULU VANAPALLI VENKATA
Original Assignee
L&T Technology Services Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by L&T Technology Services Limited filed Critical L&T Technology Services Limited
Publication of WO2021171238A1 publication Critical patent/WO2021171238A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/903 Querying
    • G06F16/9032 Query formulation
    • G06F16/90332 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems

Definitions

  • This disclosure relates generally to an intelligent virtual assistant, and particularly to a cognition-based intelligent virtual assistant and a method of improving response to an end-user’s query.
  • Virtual assistants or virtual agents or chatbots have assumed great importance lately, especially in handling queries of users.
  • the chatbots may not be automatically able to acknowledge queries pertaining to “Why”, “How”, “What”, “Who”, “Where”, “When” (Zachman Enterprise Framework).
  • the chatbots may face issues in acknowledging ad hoc queries which may be unique and non-repetitive. This may be due to the reason that these queries may have different semantic nature, or the queries may pertain to different sets of data.
  • These ad hoc queries may be of interest to higher management for taking strategic decisions.
  • the program may be coded based on various sources of data from the same domain as the query, and may then prepare reports.
  • the process of developing the reports is a time-intensive and effort-intensive exercise, as the process happens in iterations. For example, each iteration may have sub queries, dependent queries, and different facts based on statistics, which may not be part of slice-dice reports. Further, in some scenarios, the queries may be targeted at abstract or latent facts. Furthermore, it is observed that in some scenarios, reports based on ad hoc queries may lose value because of their long processing time.
  • FIG. 1A is a block diagram of a system for improving response to an end-user’s query, in accordance with an embodiment of the present disclosure.
  • FIG. 1B is a block diagram of a response improving device, in accordance with an embodiment of the present disclosure.
  • FIGS. 2A-B are functional block diagrams of a system for improving response to an end-user’s query, in accordance with some other embodiments of the present disclosure.
  • FIG. 3A is a process flow diagram of a process of fetching data via channel-2, in accordance with an embodiment of the present disclosure.
  • FIG. 3B is a process flow diagram of an example process of fetching data via channel-2.
  • FIG. 4A is a functional block diagram of a knowledge base of a system for improving response to an end-user’s query, in accordance with an embodiment of the present disclosure.
  • FIG. 4B illustrates a tabular representation of the knowledge base of the system for improving response to an end-user’s query, in accordance with an embodiment of the present disclosure.
  • FIGS. 5A-5B are process flow diagrams of a process of data processing by a log question repository, in accordance with an embodiment of the present disclosure.
  • FIG. 6 is a process flow diagram of an example process for determining a relevant action for a user query, in accordance with an embodiment of the present disclosure.
  • FIG. 7 is a flowchart of a method of improving response to an end-user’s query, in accordance with an embodiment of the present disclosure.
  • FIG. 8 is a flowchart of a method of improving response to an end-user’s query, in accordance with another embodiment of the present disclosure.
  • FIG. 9 is a flowchart of a method of improving response to an end-user’s query, in accordance with a yet another embodiment of the present disclosure.
  • FIG. 10 is a flowchart of a method of improving response to an end-user’s query, in accordance with another embodiment of the present disclosure.
  • the system 100 may include a response improving device 102.
  • This response improving device 102 may implement a chatbot (also called a virtual assistant or a virtual agent) for providing an interface to receive a query from an end-user and providing a response to the query from the end-user.
  • the response improving device 102 may further implement an Artificial Intelligence (AI) model.
  • the system 100 may further include one or more data sources 104.
  • the response improving device 102 may receive a text-query input corresponding to the user’s query, segregate the text-query input to obtain one or more text-query tokens, and categorize each of the one or more text-query tokens into one or more categories.
  • the one or more categories may include a query category, an attribute category, a feature category, and a data selection category.
  • the response improving device 102 may further, upon categorizing, map each of the one or more classified text-query tokens with one or more data sources, and determine a plurality of actions based on the mapping of at least one of the one or more classified text-query tokens with the one or more data sources.
  • the response improving device 102 may further identify a relevant action from the plurality of actions, and provide an output to the end-user, based on the identified relevant action.
  • the system 100 may include one or more processors 110, a computer-readable medium (for example, a memory) 112, and a display 114.
  • the computer-readable storage medium 112 may store instructions that, when executed by the one or more processors 110, cause the one or more processors 110 to improve response to the end-user’s query, in accordance with aspects of the present disclosure.
  • the computer-readable storage medium 112 may also store various data that may be captured, processed, and/or required by the system 100.
  • the system 100 may interact with a user via a user interface 116 accessible via the display 114.
  • the system 100 may also interact with one or more external devices 106 over a communication network 108 for sending or receiving various data.
  • the external devices 106 may include, but may not be limited to, a remote server, a digital device, or another computing system.
  • the response improving device 102 may include one or more modules, for example, a text-query receiving module 118, a segregation module 120, a categorization module 122, a mapping module 124, an action determination module 126, a relevant action identification module 128, and an output module 130.
  • modules for example, a text-query receiving module 118, a segregation module 120, a categorization module 122, a mapping module 124, an action determination module 126, a relevant action identification module 128, and an output module 130.
  • the text-query receiving module 118 may receive a text-query input corresponding to the user’s query.
  • the response improving device 102 may receive a voice input from a user, such that the voice input includes a query corresponding to the text-query input. Further, the response improving device 102 may convert the voice input into text (i.e., speech-to-text conversion), to obtain the text-query input.
  • the segregation module 120 may segregate the text-query input to obtain one or more text- query tokens. For example, the segregation module 120 may segregate the text-query input into its constituent tokens, i.e., words.
  • the categorization module 122 may categorize each of the one or more text-query tokens into one or more categories.
  • the one or more categories may include a query category, an attribute category, a feature category, and a data selection category.
  • the one or more text-query tokens categorized in the query category may include: “Why”, “How”, “What”, “Who”, “Where”, and “When”.
  • the user query may include at least one of tokens “Why”, “How”, “What”, “Who”, “Where”, and “When”, as part of the query.
  • the one or more text-query tokens categorized in the data selection category may correspond to system data and user data (i.e., metadata), for example, user identity, user role, etc.
  • the mapping module 124 may then map each of the one or more categorized text-query tokens with one or more data sources.
  • the mapping may be semantic-based mapping.
  • the response improving device 102 may include or interact with one or more data sources (databases/knowledge bases) which may store data about different domains to which the user queries may be related.
  • the action determination module 126 may determine a plurality of actions based on the mapping of at least one of the one or more categorized text-query tokens with the one or more data sources.
  • the plurality of actions may include “fetching data from one of one or more data sources” as an answer to the user query, “fetching one or more laddering questions to present to the user” in response to the user query, etc.
  • the relevant action identification module 128 may identify a relevant action from the plurality of actions. For example, the relevant action identification module 128 may select the most suitable action from the plurality of actions in response to the user query. The output module 130 may then provide an output to the end-user, based on the identified relevant action.
  • FIGS. 2A-2B a functional block diagram of a system 200 (corresponding to system 100) for improving response to an end-user’s query is illustrated, in accordance with another embodiment of the present disclosure.
  • the system 200 may implement a layer-1, a layer-2, a layer-3, and a layer-4.
  • the architecture of the system 200 may be divided into a channel-1, a channel-2, and a channel-3 (not shown in FIGS. 2A-2B).
  • An output from the channel-1 may be regarded as quantitative as source data is expected to be in normalized form.
  • An output from the channel-2 and channel-3 may be regarded as qualitative as source data is expected to be in unstructured format.
  • channel-1, channel-2, and channel-3 may execute either parallelly or sequentially to generate the output. Further, in some cases, an output may be received from each of the channel-1, the channel-2, and the channel-3. For example, in some embodiments, the channel-2 may fix a threshold based on a model probability, above which results may be added to the channel-1 results.
  • queries in voice format or text format may be received as input from an end-user. Further, the voice-format queries may be converted into text format to obtain text-query input.
  • the text-query input may be understood properly. Understanding the query may include segregating the text-query input to obtain one or more text-query tokens, and categorizing each of the one or more text-query tokens into one or more categories. Therefore, the layer-3 may work as a compiler, which may break (segregate) an input phrase in a way that it can be understood at the next level. For example, the text-query may be segregated into one or more text-query tokens. Further, each of these text-query tokens may be classified into one or more categories. The one or more categories may include a query category, an event selection category, an event enhancer category, and a data selection category.
  • the query category may include at least one of the words “what”, “where”, “who”, “when”, “why”, and “how”.
  • the event selection category may include attributes, and the event enhancer category may include features of an attribute.
  • the layer-3 may act as a compiler by segregating the tokens in four buckets (categories) as below:
  • Query type category: tokens “what”, “where”, “who”, “when”, “why”, and “how” are segregated;
  • Event selection category: attributes of the query are segregated;
  • Event enhancer category: features of the attributes of the query are segregated; and
  • Data selection category: tokens corresponding to system data and user data (i.e., metadata) are segregated.
  • the layer-2 may include a query module 202, an event selection module 204, an event enhancer module 206, and a data selection module 208.
  • the query module 202, the event selection module 204, the event enhancer module 206, and the data selection module 208 may semantically match the query category tokens, the event selection category tokens, the event enhancer category tokens, and the data selection category tokens in the knowledge base, to determine one or more actions for the text query. Further, log questions may be used as predicting questions/recommending questions.
  • the question may be segregated into a plurality of user-query tokens.
  • these user-query tokens may include a first token “what”, a second token “revenue”, and a third token “last month”.
  • the first token “what” may be categorized in the query category
  • the second token “revenue” may be categorized in the event selection category
  • the third token “last month” may be categorized in the event enhancer category (attribute).
  • the layer-2 may further include various generic rules, business rules, generic models, or models on specific requirements.
  • the one or more database/data sources may include one or more expert systems (also called knowledge bases), for example, knowledge base-1, knowledge base-2, ..., knowledge base-n. These knowledge bases may be accessed via the channel-1. These knowledge bases accessed via the channel-1 may include data in structured/normalized format. Further, it may be noted that in addition to the channel-1, data may be fetched from other data sources, via the channel-2 or the channel-3. These other data sources may include data in unstructured format. For example, the other data sources may include webpages.
  • the channel-2 and channel-3 are further explained in detail in conjunction with FIGS. 3A-B and FIG. 6, respectively.
  • the layer-1 may include source data, like various corpus or knowledge bases or mapping documents. This source data may help in sourcing data and in taking corrective and meaningful action, as described in layer-2. It may be noted that the layer-1 may include one or more components for extracting data. Further, the layer-1 may include various models like a predictive model and a forecasting model, that may be triggered by layer-2. It may be further noted that the layer-1 may be accessed in the form of Application Program Interfaces (APIs). Also, various domain specific models may be built in layer-1 which may be triggered by action defined in layer-2.
  • the layer-2 may include data sets.
  • the system 200 may further communicate with a log question repository 210.
  • the log question repository 210 may log questions along with user details and a timestamp.
  • the log question repository 210 may include log metadata, such as system details and user details (e.g., user role, department, etc.).
  • the log question repository 210 may flag a question, based on whether the question is acknowledged or not. Further, the log question repository 210 may provide input to the knowledge base in selecting a particular action. If a question is not acknowledged, then that question may be flagged as “Zero”, which may be taken offline as a requirement.
  • the above process helps in determining which actions need to be addressed in the future, based on the status attribute in the log question repository 210. For example, if the value is “0”, then the question was not acknowledged based on the present knowledge of the knowledge base (layer-2), and the knowledge base needs to be enhanced in the future.
  • the system 200 may further communicate with a recommendation module 212.
  • the recommendation module 212 may help in providing an intelligent decision support system, and recommend other questions based on the context of the question asked by the end-user, which may help in determining actions.
  • a process 300A of fetching data via the channel-2 is illustrated, in accordance with an embodiment of the present disclosure.
  • the response improving device 102 may not be able to determine the plurality of actions or identify the relevant action from the plurality of actions via the channel-1.
  • the response improving device 102 may attempt to provide an output to the user-query via the channel-2.
  • the channel-2 may implement an artificial intelligence (AI) or a machine learning model.
  • a query may be received at layer-4 and converted to text format.
  • the text query may be tokenized (segregated) at layer-3 to obtain token data.
  • the tokenized query (token data) may be sent to the layer-2.
  • the token data may be fed to an AI model in vectorized format.
  • text data (input data) may be obtained from raw source files.
  • the raw source files may be stored in the one or more data sources, and may include data relating to particular domains (topics), such as technical domains, to which the user query may be directed at.
  • this text data (input data) may be converted into vectorized format for the AI model.
  • one or more clusters based on like statements may be created from the text data.
  • one of the one or more clusters may be selected based on the input data and the token data.
  • the AI model may be trained with training data to perform relevant text extraction from the input data.
  • the AI model may extract relevant text from the input data based on the training (i.e., training provided using the training data).
  • an output may be generated using the extracted relevant text and this output may be provided to the user.
  • Text data (input data) 320 may be obtained from raw source files in response to receiving a user query.
  • one or more clusters 322 (322A, 322B, 322C, 322D) based on like statements (i.e., similar statements) may be created from the text data.
  • a Cosine Similarity model may be applied on the text data to select one of the one or more clusters, based on an input query 324, to obtain a relevant cluster 326.
  • the cluster 322D may be selected as the relevant cluster 326.
  • a relevant text portion 328 may be extracted from the text data.
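  • By way of a hedged illustration, the cluster-selection step of FIG. 3B may be sketched in Python as below, assuming scikit-learn is available; the cluster contents and the input query are invented for illustration and are not part of the disclosure:

```python
# Sketch of FIG. 3B: vectorize the clusters and the input query, then pick
# the relevant cluster by cosine similarity. Assumes scikit-learn; the
# cluster texts and the query below are illustrative stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

clusters = {
    "322A": "billing invoice payment overdue charges",
    "322B": "installation setup mounting first use",
    "322C": "warranty replacement return policy",
    "322D": "motor speed vibration abnormal noise fault",
}
input_query = "why is the motor not moving at normal speed"

texts = list(clusters.values())
matrix = TfidfVectorizer().fit_transform(texts + [input_query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
relevant_cluster = list(clusters)[scores.argmax()]   # expected: "322D"
print(relevant_cluster, scores.round(2))
```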
  • FIG. 4A a functional block diagram of a knowledge base (layer-2) 400A of the system 100 for improving response to an end-user’s query is illustrated, in accordance with yet another embodiment of the present disclosure.
  • the layer-2 may act as a knowledge base which may help in determining one or more actions, based on the query asked by the user.
  • the layer-2 may act as a middle layer between a question compiler (layer-3) and the defined actions.
  • the four bucketed tokens may output a particular action. However, if the output includes multiple actions, then the multiple actions may be narrowed down to a single action, for example, by using the log questions and event enhancers (features of attributes). Further, if no single action could be determined, then the log questions may be updated with a status “0”.
  • the knowledge base (layer-2) 400A may receive (text) input 402 to determine mapped action(s).
  • the input 402 may be segregated into tokens, which may be later categorized into one or more categories, such as a query category, an attribute category, a feature category, and a data selection category.
  • the one or more text-query tokens categorized in the query category may include the tokens “Why”, “How”, “What”, “Who”, “Where”, and “When”. Based on this categorization, one or more actions may be determined. It may be noted that the event enhancer tokens may be optional and multiple in number.
  • the event selection and event enhancer may be matched semantically in the knowledge base to determine the actions 410. If multiple actions are determined, the multiple actions 410 may be narrowed down to a single (relevant) action, based on event enhancers 404. Further, metadata 414 may be used to narrow down the actions 410 to the relevant action 412.
  • the event enhancers 404 may include one or more parameters. It may be noted that the multiple actions 410 may be narrowed down to the single relevant action 412 based on weightages.
  • the weightages may be provided by a weightage module 406.
  • the weightage module 406 may receive input data from log questions repository 408. If the weightage module 406 fails in narrowing down to relevant action, the log questions, stored in a log question repository 408, may be used.
  • an end-user acknowledgement may be received depending on whether action(s) could be determined for the query or not. In some examples, this end-user acknowledgement may also be treated as an event enhancer. Further, the end-user acknowledgement may also be logged in the log question repository 408. The log questions may also be used as predicting questions.
  • a threshold value may be defined with respect to narrowing down the multiple actions. If the threshold value is reached for a particular action, that action may be selected as a relevant action. If the threshold value is not reached, i.e., no action could be determined, then the log question repository 408 may be triggered to invoke the recommendations module 212 (this is explained in greater detail in conjunction with FIG. 5B).
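  • As a minimal sketch of this weightage-based narrowing, assuming illustrative weight values and a hypothetical threshold of 0.6 (the disclosure leaves the exact threshold value open):

```python
# Sketch of narrowing multiple candidate actions 410 down to the single
# relevant action 412. Weightages would come from the weightage module 406
# (using event enhancers, metadata, and log questions); the weights and the
# 0.6 threshold below are assumptions for illustration.

THRESHOLD = 0.6  # assumed threshold value

def narrow_actions(weighted_actions: dict[str, float]) -> str | None:
    action, weight = max(weighted_actions.items(), key=lambda kv: kv[1])
    if weight >= THRESHOLD:
        return action    # relevant action identified
    return None          # no action reached the threshold

candidates = {"fetch_from_motor_db": 0.82, "ask_laddering_question": 0.35}
relevant = narrow_actions(candidates)
# If None, the log question repository 408 triggers the recommendation module.
print(relevant or "invoke recommendation module 212")
```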
  • FIG. 4B a tabular representation of the knowledge base 400A of the system 100 for improving response to an end-user’s query is illustrated, via a Table 400B.
  • column 418 of the Table 400B includes example queries (with corresponding query serial numbers shown in column 416).
  • Column 420 includes query category tokens
  • column 422 includes event selection category tokens
  • column 424 includes event enhancer category tokens.
  • details for the actions determined for these queries are included in column 428 (with corresponding action serial numbers shown in column 426).
  • column 430 includes actions details specific to the domain.
  • a relevant action identified for this query is: perform (Sum of contents / Number of contents).
  • the actions detail specific to the (revenue) domain is: calculate [(f1+f2+f3)/3] (here, f1, f2, and f3 signify revenue for the first financial year, second financial year, and third financial year).
  • tabular data for other example queries is shown in the rest of the columns of Table 400B.
  • the system for improving response to an end-user’s query may include a layer-1, a layer-2, a layer-3, and a layer-4.
  • the system 100 may further include the log question repository 210 including a plurality of log questions.
  • the log question repository 210 may log a question, when the question is asked by a user.
  • the log question repository 210 may further log user details, along with the log questions.
  • a text query may be received, at layer-4.
  • in case of a voice input, speech-to-text conversion may be performed to obtain the text query.
  • the text query may be segregated into one or more tokens, at layer-3.
  • the one or more tokens may be categorized into one or more categories, for example, using Python or any Natural Language Processing (NLP) technique.
  • one or more actions may be determined. For example, one or more actions may include presenting/asking the end-user a question (laddering question) in return. Further, at layer-2, the log question repository 210 may log questions, when questions are asked (in layer-4).
  • the questions may be asked either in verbal format or text format.
  • the logging of questions may help in understanding requirements.
  • the questions may be acknowledged with a successful or an unsuccessful status, based on the processing in layer-2.
  • a weightage calculating module (not shown in FIG. 5A) may be present inside the knowledge base. It may be noted that weightage may be calculated based on all inputs including user details.
  • the log question repository may perform the following functions:
  • the log question repository 210 may receive an input (a text-query) from the layer-4. Along with the input, the log question repository may further receive user details. Further, if a user question is not matched in the knowledge base, then the log question repository 210 may help the recommendation module 212 (shown in FIG. 5B) to come up with the best possible question as a recommendation to the end-user. Furthermore, if the knowledge base ends up with multiple actions, then the log question repository 210 may help in narrowing the multiple actions to a single action, for example, based on the weightage assigned to each action. The log question repository 210 may further log the question as well as user details. In some embodiments, the log question repository 210 may calculate the weightage based on existing log questions and user details. The log question repository 210 may further provide inputs to the recommendation module 212.
  • the log question repository 210 may provide inputs to the recommendation module 212. It may be noted that the recommendation module 212 may recommend questions to the layer-4, that may be asked to the end-user. It may be further noted that, in some embodiments, responses to these recommended questions may help in improving response to an end-user’s query, and further train the AI model.
  • the system 100 may also implement a channel-3.
  • a process flow diagram of an example process 600 for determining a relevant action for a user query, via channel-3 is illustrated.
  • a data source may include a Table storing multiple questions, for example, as shown in FIG. 6, questions 602-616. The Table may further include an answer corresponding to each of the multiple questions (shown beside the questions 602-616).
  • a user query 620: “Can I perform my own housing wiring?” is received from a user.
  • a vector of the user query and a vector of each of the multiple questions may be created. Further, a distance between the vector of the user query and the vector of each of the multiple questions may be determined. In other words, the distance of the vector of the user query from the vector of question 602 is d1, from the vector of question 604 is d2, from the vector of question 606 is d3, from the vector of question 608 is d4, from the vector of question 610 is d5, from the vector of question 612 is d6, from the vector of question 614 is d7, and from the vector of question 616 is d8.
  • the vector of the question with minimum distance from the vector of the user query may be selected, i.e., using the function [min (d1, d2, d3, d4, d5, d6, d7, d8)]. Further, the answer corresponding to the question with minimum distance may be selected, and provided to the user. As shown in FIG. 6, the distance of the vector of the user query from the vector of question 606 (d3) may be minimum. Therefore, the question 606: “Can I do my own wiring?” may be selected, and its corresponding answer “In most places, homeowners are allowed to do their own wiring. In some, they're not. Check with your local electrical inspector. Most places won't permit you to do wiring on other's homes for money without a license. Nor are you permitted to do wiring in "commercial" buildings” may be provided to the user.
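  • A minimal sketch of this channel-3 lookup, assuming scikit-learn and using cosine distance as an illustrative stand-in for the distances d1 to d8 (the disclosure does not fix a particular distance measure); the stored questions and answers are abbreviated:

```python
# Sketch of channel-3: vectorize the stored questions and the user query,
# then return the answer whose question vector is nearest to the query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

qa_table = {
    "Can I do my own wiring?": "In most places, homeowners are allowed to do their own wiring...",
    "How do I replace a breaker?": "Switch off the main supply first, then...",
    "What wire gauge suits a 20A circuit?": "Typically 12 AWG copper...",
}
user_query = "Can I perform my own housing wiring?"

questions = list(qa_table)
vecs = TfidfVectorizer().fit_transform(questions + [user_query])
dists = cosine_distances(vecs[-1], vecs[:-1]).ravel()   # stands in for d1..dn
best_question = questions[dists.argmin()]               # min(d1, ..., dn)
print(best_question, "->", qa_table[best_question])
```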
  • a service-providing company wants to support its service engineers in acknowledging issues relating to an appliance, within a given time window.
  • history data may be used.
  • the history data may include manual data in the form of a portable document format (PDF) file, an incident logger (which may include a history of incidents), and sensor data obtained from sensors attached to the appliance.
  • Input data in the form of a query from an end-user may be received.
  • the query may include one or more questions like “Why the motor in engine not moving at normal speed?”, “What is the normal speed of motor of the engine?”, “How it can be fixed?”, “Who fixed earlier?”, and “When this issue happens?”.
  • “Why the motor in engine not moving at normal speed?” may be segregated into tokens. Further, the tokens may be categorized in various categories, as:
  • the manual data in the form of PDF files, history data, sensor data, and incident tracker may be obtained. Thereafter, based on various components of the input statement, the system may decide the following:
  • “motor in engine” may determine the data source for fetching data.
  • the relevant action for this query that is determined may be “fetch reason from motor database on what conditions it doesn’t work at normal speed”. Accordingly, a relevant answer may be extracted from the data sources, and the relevant answer may then be provided to the user. However, if the relevant answer is not found, then an alternate model, i.e., channel-2 or channel-3, may be used to find the answer.
  • the channel-2 may be based on an Artificial Intelligence (AI) or Deep Learning (DL) model.
  • the relevant answer may be extracted from the unstructured data sources, such as Portable Document Format (PDF) files, manuals, contextual data sources, etc.
  • Metadata may be obtained from the relevant data source, such as other expected failure reasons, which engineers had earlier fixed the issue, and when in the day the issue usually happens, which may be shared with users, such as field engineers.
  • Use Case 2: A user query is received: “What is the normal speed of motor of the engine?”.
  • the system may refer to history data (including manual data in the form of a PDF file), an incident logger (including a history of incidents), and sensor data obtained from sensors attached to the appliance.
  • the question may be segregated as follows:
  • the manual data in the form of PDF files, history data, sensor data, and incident tracker may be obtained. Thereafter, based on various components of the input statement, a system may decide the following:
  • “motor of the engine” may determine the data source for fetching data.
  • the relevant action for this query that is determined may be “fetch data from motor database on what is the normal speed of the motor”. Accordingly, a relevant answer may be extracted from the data sources, and the relevant answer may then be provided to the user. Further, similar to use case 1, if the relevant answer is not found, then an alternate model, i.e., channel-2 or channel-3, may be used to find the answer.
  • a topic may be selected based on the trained literature and text that comes under the topics, and the output may be shared with relevant users (e.g., field engineers).
  • a text-query input corresponding to the user’s query may be received.
  • the text-query input may correspond to a query regarding a particular domain, such as revenue domain of a company.
  • receiving the text-query input may include step 702A at which a voice input may be received from a user.
  • the voice input may include a query corresponding to the text-query input.
  • Receiving the text-query input may further include step 702B at which the voice input may be converted into text, to obtain the text-query input.
  • a speech-to-text conversion may be performed for the (voice) query received from the user.
  • the text-query input may include one or more words (i.e., tokens).
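  • One possible realization of steps 702A-702B, assuming the third-party SpeechRecognition package and a hypothetical audio file name (the disclosure does not name a speech-to-text library):

```python
# Sketch of steps 702A-702B (voice input -> text-query input), assuming
# the third-party SpeechRecognition package; "query.wav" is hypothetical.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("query.wav") as source:        # step 702A: receive voice input
    audio = recognizer.record(source)
text_query = recognizer.recognize_google(audio)  # step 702B: speech-to-text
print(text_query)
```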
  • the text-query input may be segregated to obtain one or more text-query tokens.
  • the text-query input may be segregated into its constituent tokens, i.e., words.
  • each of the one or more text-query tokens may be categorized into one or more categories.
  • the one or more categories may include a query category, an attribute category, a feature category, and a data selection category.
  • the one or more text-query tokens categorized in the query category may include: “Why”, “How”, “What”, “Who”, “Where”, and “When”.
  • the user query may include at least one of tokens “Why”, “How”, “What”, “Who”, “Where”, and “When”, as part of the query.
  • the one or more text-query tokens categorized in the data selection category may correspond to system data and user data, for example, user identity, user role, etc.
  • each of the one or more categorized text-query tokens may be mapped with one or more data sources, for example, by way of semantic mapping.
  • the response improving device 102 may include one or more data sources (databases/knowledge bases) which may store data about different domains to which the user queries may be related.
  • a plurality of actions may be determined based on the mapping of at least one of the one or more categorized text-query tokens with the one or more data sources. For example, the plurality of actions may include “fetching data from one of one or more data sources” as an answer to the user query, “fetching one or more laddering questions to present to the user”, in response to the user query, etc.
  • a check may be performed to determine whether the plurality of actions is determined successfully or not, for the text-query input. If the plurality of actions is successfully determined, the method may proceed to step 714 (“Yes” path).
  • a relevant action may be identified from the plurality of actions. For example, the most suitable action from the plurality of actions may be selected in response to the user query.
  • a weightage value may be assigned to each of the plurality of actions. It may be noted that the weightage value may be assigned based on relevance of each of the plurality of actions with the user’s query. The weightage value assigned to each of the plurality of actions may be compared with a predetermined threshold value. Thereafter, the relevant action may be selected from the plurality of actions, based on this comparison. As it will be understood, the action having the highest weightage value or the weightage value higher than the threshold value may be selected. By way of an example, the most suitable action may include fetching data from one or more data sources, as part of the response to the user’s query.
  • the most suitable action may include generating one or more laddering questions to be presented to the user. This is further explained in conjunction with FIG. 10.
  • an output may be provided to the end-user, based on the identified relevant action.
  • Otherwise, if the plurality of actions is not determined successfully at step 712, the method may proceed to step 718 (“No” path).
  • At step 718, the user may be notified to update the one or more data sources, for the text-query input.
  • if the response improving device 102 is unable to determine any action in response to the text-query input (corresponding to the user’s query), for example, due to lack of data relevant to the user’s query in the one or more data sources, no output may be provided to the user.
  • the response improving device 102 may, therefore, notify a user about the need to update the one or more data sources, i.e., to add data relevant to the text-query input (corresponding to the user’s query).
  • the method 800 may include steps similar to steps 702-706 of the method 700.
  • a text-query input corresponding to the user’s query may be received.
  • the text-query input may be “Why the motor in engine not moving at normal speed?”.
  • the text-query input may be segregated to obtain one or more text-query tokens.
  • the above text-query input may be segregated to obtain one or more text-query tokens: “Why”, “the”, “motor”, “in”, “engine”, “not”, “moving”, “at”, “normal”, and “speed”.
  • each of the one or more text-query tokens may be categorized into one or more categories. For example, for the above text-query input, the token “Why” may be categorized in the query category, combination of tokens “motor in engine” may be categorized in the attribute category, and combination of tokens “normal speed” may be categorized in the feature category. It may be understood that “normal speed” (feature category) acts as a feature of “motor in engine” (attribute category).
  • once each of the one or more text-query tokens is categorized into one or more categories, at step 808, at least one text-query token of the text-query tokens of the attribute category may be mapped with one or more data sources. Therefore, the combination of tokens “motor in engine” may be mapped with data sources storing data relevant to “motor in engine”.
  • a plurality of actions may be determined based on the mapping of the at least one text-query token of the text-query tokens of the attribute category with the one or more data sources, based on the text-query tokens of the query category.
  • the plurality of actions may be determined by mapping “motor in engine” (attribute category) with the one or more relevant data sources, based on “Why” (query category) and “normal speed” (feature category).
  • data pertaining to “Why the motor in engine not moving at normal speed?” may be extracted from the one or more data sources, as part of the plurality of actions. For example, one or more reasons for “Why the motor in engine not moving at normal speed?” may be extracted.
  • a relevant action from the plurality of actions may be identified. As such, a most relevant reason from the one or more reasons for “Why the motor in engine not moving at normal speed?” may be identified. Further, an output may be provided to the end-user, based on the identified relevant action. The identified relevant reason from the one or more reasons may be provided to the user.
  • the relevant action identified through method 700 or method 800 may include fetching text data from one or more data sources.
  • the fetched text data may be further processed to extract a relevant portion most relevant to the user’s query.
  • a suitable cosine-similarity model may be used to extract this relevant portion.
  • a text excerpt may be fetched from the one or more data sources.
  • a plurality of text clusters may be generated from the text excerpt. It may be noted that any known-in-art clustering techniques may be used to generate the plurality of text clusters.
  • a vector corresponding to each of the plurality of text clusters may be created, using a clustering model. It may be further noted that any known-in-art vector forming techniques may be used to create the vectors corresponding to the plurality of text clusters.
  • a relevant text cluster may be identified from the plurality of text clusters, using a cosine- similarity model. In other words, the cosine-similarity model may be applied to the vectors corresponding to the plurality of text clusters to identify the relevant vector, therefore, the cluster most relevant to the user’s query.
  • a text portion may be extracted from this relevant text cluster, corresponding to the identified relevant action. Additionally, the extracted text portion may be normalized. For example, normalizing the extracted text portion may include sentence generation using the extracted text portion, so as to present the extracted text portion in a format which is easily comprehensible for the user.
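  • Continuing the cluster-selection sketch given for FIG. 3B, the extraction and normalization steps might look as below; the sentence scoring and the output phrasing are illustrative assumptions, since the disclosure leaves the extraction model open:

```python
# Sketch of the extraction/normalization steps of method 900: within the
# relevant text cluster, pick the portion closest to the user's query, then
# wrap it into a user-facing sentence. Assumes scikit-learn; the cluster
# contents and the final phrasing are illustrative choices.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

relevant_cluster = [
    "The motor stalls when the bearing is worn.",
    "Low supply voltage reduces the motor speed.",
    "The warranty covers motor replacement.",
]
query = "why is the motor not moving at normal speed"

vecs = TfidfVectorizer().fit_transform(relevant_cluster + [query])
scores = cosine_similarity(vecs[-1], vecs[:-1]).ravel()
portion = relevant_cluster[scores.argmax()]   # extracted text portion
print(f"A likely reason: {portion}")          # normalized, user-facing output
```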
  • FIG. 10 a flowchart of a method 1000 of improving response to an end- user’s query is illustrated, in accordance with yet another embodiment of the present disclosure.
  • the most suitable action may include generating one or more laddering questions to be presented to the user.
  • one or more laddering questions may be generated.
  • the one or more laddering questions may be based on one or more question tags including: “Why”, “How”, “What”, “Who”, “Where” and “When”.
  • the one or more laddering questions may be presented to the user.
  • a secondary text-query input corresponding to the user’s response to the one or more laddering questions may be received. In other words, the user’s input to the one or more laddering questions may be received.
  • the method 700 or the method 800 may be performed once again, with the secondary text-query input acting as the text-query input, to thereby identify the relevant action and provide an output to the user, based on the identified relevant action.
  • a laddering question such as “For which plant do you want to know there was an issue in?” may be fetched and presented to the user. Thereafter, the user may provide a response, such as “What was the issue in Plant A?” to this laddering question.
  • the method 700 or the method 800 may be performed once again, with “What was the issue in Plant A?” (the secondary text-query input) acting as the text-query input, to thereby identify the relevant action and provide an output to the user.
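  • A minimal control-flow sketch of this laddering loop; handle_query and ask_user are hypothetical placeholders for methods 700/800 and the layer-4 user interface, and the laddering question is the example from above:

```python
# Sketch of method 1000: if no relevant action is found, present a
# laddering question, collect the user's reply, and re-run the pipeline
# with the reply acting as the new text-query input.

def handle_query(text_query: str):
    """Placeholder for methods 700/800: segregate, categorize, map, act."""
    return None  # assume no single relevant action was identified

def ladder(initial_query: str, ask_user):
    action = handle_query(initial_query)
    if action is not None:
        return action
    laddering_q = "For which plant do you want to know there was an issue in?"
    secondary_query = ask_user(laddering_q)  # e.g., "What was the issue in Plant A?"
    return handle_query(secondary_query)     # secondary input acts as text-query

result = ladder("What was the issue?",
                ask_user=lambda q: "What was the issue in Plant A?")
```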
  • the response improving device 102 may generate one or more laddering questions and provide these one or more laddering questions to the user.
  • these one or more laddering questions may be provided to help the user gain more information other than what the user is initially interested in.
  • the one or more laddering questions may be as follows:
  • an acknowledgement may be received.
  • the log question may be updated with Status “1” in the layer-2.
  • the one or more data sources may be notified for updating.
  • the log questions may be updated with Status “0”.
  • the one or more techniques are able to automatically acknowledge queries pertaining to “Why”, “How”, “What”, “Who”, “Where”, and “When”, even when faced with ad hoc queries which are unique and non-repetitive. Further, the one or more techniques do away with the requirement of developing programs for each type of query, thereby providing a quick and effective solution for answering user queries and preparing reports.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method and a system for improving response to an end-user's query is disclosed. In an embodiment, the method may include receiving a text-query, wherein the text-query is associated with a query created by an end-user. The method may further include segregating the text-query into one or more text-query tokens, and categorizing the one or more text-query tokens into one or more categories. The one or more categories may include a query category, an attribute category, a feature category, and a data selection category. The method may further include, upon categorizing, mapping each of the one or more classified text-query tokens with one or more data sources, and determining a plurality of actions based on the mapping. The method may further include identifying a most suitable action from the plurality of actions, and providing an output to the end-user, based on the identified most suitable action.

Description

A METHOD AND A SYSTEM FOR IMPROVING RESPONSE TO AN END-USER’S
QUERY
DESCRIPTION
Technical Field
[001] This disclosure relates generally to an intelligent virtual assistant, and particularly to a cognition-based intelligent virtual assistant and a method of improving response to an end-user’s query.
Background
[002] Virtual assistants or virtual agents or chatbots have assumed great importance lately, especially in handling queries of users. However, the chatbots may not be automatically able to acknowledge queries pertaining to “Why”, “How”, “What”, “Who”, “Where”, “When” (Zachman Enterprise Framework). As such, the chatbots may face issues in acknowledging ad hoc queries which may be unique and non-repetitive. This may be due to the reason that these queries may have different semantic nature, or the queries may pertain to different sets of data. These ad hoc queries may be of interest to higher management for taking strategic decisions.
[003] Therefore, in order to acknowledge these queries, developers may need to code a program. The program may be coded based on various sources of data from the same domain as the query, and may then prepare reports. The process of developing the reports is a time-intensive and effort-intensive exercise, as the process happens in iterations. For example, each iteration may have sub queries, dependent queries, and different facts based on statistics, which may not be part of slice-dice reports. Further, in some scenarios, the queries may be targeted at abstract or latent facts. Furthermore, it is observed that in some scenarios, reports based on ad hoc queries may lose value because of their long processing time.
BRIEF DESCRIPTION OF THE DRAWINGS
[004] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles. [005] FIG. 1A is a block diagram of a system for improving response to an end-user’s query, in accordance with an embodiment of the present disclosure.
[006] FIG. 1B is a block diagram of a response improving device, in accordance with an embodiment of the present disclosure.
[007] FIGS. 2A-B are functional block diagrams of a system for improving response to an end-user’s query, in accordance with some other embodiments of the present disclosure.
[008] FIG. 3A is a process flow diagram of a process of fetching data via channel-2, in accordance with an embodiment of the present disclosure.
[009] FIG. 3B is a process flow diagram of an example process of fetching data via channel-2. [010] FIG. 4A is a functional block diagram of a knowledge base of a system for improving response to an end-user’s query, in accordance with an embodiment of the present disclosure. [011] FIG. 4B illustrates a tabular representation of the knowledge base of the system for improving response to an end-user’s query, in accordance with an embodiment of the present disclosure.
[012] FIGS. 5A-5B are process flow diagrams of a process of data processing by a log question repository, in accordance with an embodiment of the present disclosure.
[013] FIG. 6 is a process flow diagram of an example process for determining a relevant action for a user query, in accordance with an embodiment of the present disclosure.
[014] FIG. 7 is a flowchart of a method of improving response to an end-user’s query, in accordance with an embodiment of the present disclosure.
[015] FIG. 8 is a flowchart of a method of improving response to an end-user’s query, in accordance with another embodiment of the present disclosure.
[016] FIG. 9 is a flowchart of a method of improving response to an end-user’s query, in accordance with a yet another embodiment of the present disclosure.
[017] FIG. 10 is a flowchart of a method of improving response to an end-user’s query, in accordance with another embodiment of the present disclosure.
DETAILED DESCRIPTION
[018] Exemplary embodiments are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
[019] Referring now to FIG. 1A, a block diagram of an exemplary system 100 for improving response to an end-user’s query is illustrated, in accordance with an embodiment of the present disclosure. The system 100 may include a response improving device 102. This response improving device 102 may implement a chatbot (also called a virtual assistant or a virtual agent) for providing an interface to receive a query from an end-user and providing a response to the query from the end-user. In some embodiments, the response improving device 102 may further implement an Artificial Intelligence (AI) model. The system 100 may further include one or more data sources 104.
[020] In some embodiments, in order to improve response to an end-user’s query, the response improving device 102 may receive a text-query input corresponding to the user’s query, segregate the text-query input to obtain one or more text-query tokens, and categorize each of the one or more text-query tokens into one or more categories. For example, the one or more categories may include a query category, an attribute category, a feature category, and a data selection category. The response improving device 102 may further, upon categorizing, map each of the one or more classified text-query tokens with one or more data sources, and determine a plurality of actions based on the mapping of at least one of the one or more classified text-query tokens with the one or more data sources. The response improving device 102 may further identify a relevant action from the plurality of actions, and provide an output to the end-user, based on the identified relevant action.
[021] The system 100 may include one or more processors 110, a computer-readable medium (for example, a memory) 112, and a display 114. The computer-readable storage medium 112 may store instructions that, when executed by the one or more processors 110, cause the one or more processors 110 to improve response to the end-user’s query, in accordance with aspects of the present disclosure. The computer-readable storage medium 112 may also store various data that may be captured, processed, and/or required by the system 100. The system 100 may interact with a user via a user interface 116 accessible via the display 114. The system 100 may also interact with one or more external devices 106 over a communication network 108 for sending or receiving various data. The external devices 106 may include, but may not be limited to, a remote server, a digital device, or another computing system.
[022] Referring now to FIG. 1B, a functional block diagram of the response improving device 102 is illustrated, in accordance with some embodiments of the present disclosure. The response improving device 102 may include one or more modules, for example, a text-query receiving module 118, a segregation module 120, a categorization module 122, a mapping module 124, an action determination module 126, a relevant action identification module 128, and an output module 130.
[023] In some embodiments, the text-query receiving module 118 may receive a text-query input corresponding to the user’s query. By way of an example, in order to receive the text-query input, the response improving device 102 may receive a voice input from a user, such that the voice input includes a query corresponding to the text-query input. Further, the response improving device 102 may convert the voice input into text (i.e., speech-to-text conversion), to obtain the text-query input. The segregation module 120 may segregate the text-query input to obtain one or more text-query tokens. For example, the segregation module 120 may segregate the text-query input into its constituent tokens, i.e., words.
[024] The categorization module 122 may categorize each of the one or more text-query tokens into one or more categories. The one or more categories may include a query category, an attribute category, a feature category, and a data selection category. By way of an example, the one or more text-query tokens categorized in the query category may include: “Why”, “How”, “What”, “Who”, “Where”, and “When”. As it will be understood, the user query may include at least one of the tokens “Why”, “How”, “What”, “Who”, “Where”, and “When”, as part of the query. Further, it may be noted that the one or more text-query tokens categorized in the data selection category may correspond to system data and user data (i.e., metadata), for example, user identity, user role, etc. [025] The mapping module 124 may then map each of the one or more categorized text-query tokens with one or more data sources. For example, the mapping may be semantic-based mapping. It may be noted that the response improving device 102 may include or interact with one or more data sources (databases/knowledge bases) which may store data about different domains to which the user queries may be related. The action determination module 126 may determine a plurality of actions based on the mapping of at least one of the one or more categorized text-query tokens with the one or more data sources. For example, the plurality of actions may include “fetching data from one of one or more data sources” as an answer to the user query, “fetching one or more laddering questions to present to the user” in response to the user query, etc.
[026] The relevant action identification module 128 may identify a relevant action from the plurality of actions. For example, the relevant action identification module 128 may select the most suitable action from the plurality of actions in response to the user query. The output module 130 may then provide an output to the end-user, based on the identified relevant action.
[027] Referring now to FIGS. 2A-2B, a functional block diagram of a system 200 (corresponding to system 100) for improving response to an end-user’s query is illustrated, in accordance with another embodiment of the present disclosure. In some embodiments, the system 200 may implement a layer-1, a layer-2, a layer-3, and a layer-4. Further, it may be noted that, in some embodiments, the architecture of the system 200 may be divided into a channel-1, a channel-2, and a channel-3 (not shown in FIGS. 2A-2B). An output from the channel-1 may be regarded as quantitative, as source data is expected to be in normalized form. An output from the channel-2 and channel-3 may be regarded as qualitative, as source data is expected to be in unstructured format. It may be further noted that channel-1, channel-2, and channel-3 may execute either parallelly or sequentially to generate the output. Further, in some cases, an output may be received from each of the channel-1, the channel-2, and channel-3. For example, in some embodiments, the channel-2 may fix a threshold based on a model probability, above which results may be added to the channel-1 results.
[028] Referring now to FIGS. 2A-2B, the system 200 with active channel-1 is shown. At the layer-4, queries in voice format or text format may be received as input from an end-user. Further, the voice-format queries may be converted into text format to obtain text-query input. At the layer-3 (inference phase), the text-query input may be understood properly. Understanding the query may include segregating the text-query input to obtain one or more text-query tokens, and categorizing each of the one or more text-query tokens into one or more categories. Therefore, the layer-3 may work as a compiler, which may break (segregate) an input phrase in a way that it can be understood at the next level. For example, the text-query may be segregated into one or more text-query tokens. Further, each of these text-query tokens may be classified into one or more categories. The one or more categories may include a query category, an event selection category, an event enhancer category, and a data selection category. It may be understood that the query category may include at least one of the words “what”, “where”, “who”, “when”, “why”, and “how”. It may be noted that the event selection category may include attributes, and the event enhancer category may include features of an attribute. The layer-3 may act as a compiler by segregating the tokens in four buckets (categories) as below:
(i) Query type category: tokens of “what”, “where”, “who”, “when”, “why”, and “how” are segregated;
(ii) Event selection category: attributes of the query are segregated;
(iii) Event enhancer category: features of the attributes of the query are segregated; and
(iv) Data selection category: tokens corresponding to system data and user data (i.e., metadata, such as user identity and user role) are segregated.
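The following is a minimal sketch of such a bucketing compiler, assuming a hand-built lookup table; the table contents, function name, and matching strategy are illustrative only and would, in the described system, be populated from the layer-2 data sources:

```python
# Minimal sketch of the layer-3 "compiler" bucketing; the lookup table is a
# hypothetical stand-in for the data source lookup table described below.
QUERY_WORDS = {"what", "where", "who", "when", "why", "how"}

LOOKUP = {
    "revenue": "event_selection",
    "normal speed": "event_selection",
    "last month": "event_enhancer",
    "average": "event_enhancer",
    "motor in engine": "data_selection",
}

def bucket_tokens(text_query: str) -> dict:
    """Segregate a text query into the four buckets used by layer-3."""
    buckets = {"query": [], "event_selection": [], "event_enhancer": [], "data_selection": []}
    lowered = text_query.lower().rstrip("?")
    # Match multi-word phrases from the lookup table first.
    for phrase, bucket in LOOKUP.items():
        if phrase in lowered:
            buckets[bucket].append(phrase)
            lowered = lowered.replace(phrase, " ")
    for token in lowered.split():
        if token in QUERY_WORDS:
            buckets["query"].append(token)
    return buckets

print(bucket_tokens("What was revenue in the last month of company ABC?"))
# {'query': ['what'], 'event_selection': ['revenue'], 'event_enhancer': ['last month'], 'data_selection': []}
```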
[029] Once the questions are properly understood, then at a next phase, one or more actions may be taken. It may be noted that the above four buckets may be populated based on a data source lookup table, which may be updated by the one or more data sources or databases in the layer-2. In some embodiments, the layer-2 may include a query module 202, an event selection module 204, an event enhancer module 206, and a data selection module 208. It may be further noted that in the layer-2, the query module 202, the event selection module 204, the event enhancer module 206, and the data selection module 208 may semantically match the query category tokens, the event selection category tokens, the event enhancer category tokens, and the data selection category tokens in the knowledge base, to determine one or more actions for the text query. Further, logged questions may be used for predicting and recommending questions.
[030] For example, for a question: “what was revenue in the last month of company ABC?”, the question may be segregated into a plurality of user-query tokens. For example, these user-query tokens may include a first token “what”, a second token “revenue”, and a third token “last month”. Further, the first token “what” may be categorized in the query category, the second token “revenue” may be categorized in the event selection category, and the third token “last month” may be categorized in the event enhancer category (attribute).
[031] The layer-2 (knowledge base) may further include various generic rules, business rules, generic models, or models on specific requirements. At the layer-2, based on an indication from the layer-3, one or more actions pertaining to the rules or generic models may be triggered. It may be noted that, in some embodiments, the one or more databases/data sources may include one or more expert systems (also called knowledge bases), for example, knowledge base-1, knowledge base-2, ..., knowledge base-n. These knowledge bases may be accessed via the channel-1. The knowledge bases accessed via the channel-1 may include data in structured/normalized format. Further, it may be noted that in addition to the channel-1, data may be fetched from other data sources, via the channel-2 or the channel-3. These other data sources may include data in unstructured format. For example, the other data sources may include webpages. The channel-2 and the channel-3 are further explained in detail in conjunction with FIGS. 3A-3B and FIG. 6, respectively.
[032] The layer-1 may include source data, like various corpora, knowledge bases, or mapping documents. This source data may help in sourcing data and in taking corrective and meaningful action, as described in the layer-2. It may be noted that the layer-1 may include one or more components for extracting data. Further, the layer-1 may include various models, like a predictive model and a forecasting model, that may be triggered by the layer-2. It may be further noted that the layer-1 may be accessed in the form of Application Program Interfaces (APIs). Also, various domain-specific models may be built in the layer-1, which may be triggered by actions defined in the layer-2. The layer-2, on the other hand, may include data sets.
[033] The system 200 may further communicate with a log question repository 210. The log question repository 210 may log questions along with user details and a time stamp. In some embodiments, the log question repository 210 may include log metadata, like system details and user details, such as user role, department, etc. The log question repository 210 may flag a question based on whether the question is acknowledged or not. Further, the log question repository 210 may provide input to the knowledge base in selecting a particular action. If a question is not acknowledged, then that question may be flagged as “Zero”, and may be taken offline as a requirement. The above process helps in determining which actions need to be addressed in the future, based on the status attribute in the log question repository 210. For example, if the value is “0”, it means the question was not acknowledged based on the present knowledge of the knowledge base (layer-2), and that the knowledge base needs to be enhanced in the future.
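A minimal sketch of such a log question repository follows, assuming a simple in-memory store; the field names mirror the description above (user details, time stamp, 0/1 status flag), while the class and method names are illustrative:

```python
# Sketch of the log question repository with the 0/1 acknowledgement flag.
import time
from dataclasses import dataclass, field

@dataclass
class LogEntry:
    question: str
    user_id: str
    user_role: str
    timestamp: float = field(default_factory=time.time)
    status: int = 0  # 0 = not acknowledged, 1 = acknowledged

class LogQuestionRepository:
    def __init__(self):
        self.entries: list[LogEntry] = []

    def log(self, question: str, user_id: str, user_role: str) -> LogEntry:
        entry = LogEntry(question, user_id, user_role)
        self.entries.append(entry)
        return entry

    def acknowledge(self, entry: LogEntry) -> None:
        entry.status = 1

    def unacknowledged(self) -> list[LogEntry]:
        # Questions flagged "0" are taken offline as future requirements.
        return [e for e in self.entries if e.status == 0]

repo = LogQuestionRepository()
e = repo.log("Why the motor in engine not moving at normal speed?", "u42", "field engineer")
repo.acknowledge(e)
```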
[034] The system 200 may further communicate with a recommendation module 212. The recommendation module 212 may help in providing an intelligent decision support system, and may recommend other questions, based on the context of the question asked by the end-user, that may help in determining actions.
[035] Referring now to FIG. 3A, a process 300A of fetching data via the channel-2 is illustrated, in accordance with an embodiment of the present disclosure. It may be noted that in some scenarios, the response improving device 102 may not be able to determine the plurality of actions, or identify the relevant action from the plurality of actions, via the channel-1. In such scenarios, the response improving device 102 may attempt to provide an output to the user-query via the channel-2 (a minimal sketch of this fallback appears below). In some embodiments, the channel-2 may implement an artificial intelligence (AI) or a machine learning model.
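The following is a minimal sketch of the channel fallback logic just described; the channel functions are hypothetical placeholders for the channel-1 knowledge-base lookup and the channel-2/channel-3 AI models:

```python
# Try the structured channel first, then fall back to the AI channels.
from typing import Callable, Optional

def answer_query(query: str,
                 channel_1: Callable[[str], Optional[str]],
                 channel_2: Callable[[str], Optional[str]],
                 channel_3: Callable[[str], Optional[str]]) -> Optional[str]:
    for channel in (channel_1, channel_2, channel_3):
        result = channel(query)
        if result is not None:
            return result
    return None  # no relevant action found; log the question with status "0"
```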
[036] At step 302, a query may be received at the layer-4 and converted to text format. At step 304, the text query may be tokenized (segregated) at the layer-3 to obtain token data. The tokenized query (token data) may be sent to the layer-2. At step 306, in the layer-2, the token data may be fed to an AI model in vectorized format. At step 308, text data (input data) may be obtained from raw source files. The raw source files may be stored in the one or more data sources, and may include data relating to particular domains (topics), such as technical domains, to which the user query may be directed. At step 310, this text data (input data) may be converted into vectorized format for the AI model. At step 312, one or more clusters based on like statements (i.e., similar statements) may be created from the text data. At step 314, one of the one or more clusters may be selected based on the input data and the token data. It may be understood that the AI model may be trained with training data to perform relevant text extraction from the input data. At step 316, the AI model may extract relevant text from the input data based on the training (i.e., the training provided using the training data). At step 318, an output may be generated using the extracted relevant text, and this output may be provided to the user.
[037] Referring now to FIG. 3B, an example process 300B (corresponding to the process 300A) of fetching data via the channel-2 is illustrated. Text data (input data) 320 may be obtained from raw source files in response to receiving a user query. One or more clusters 322 (322A, 322B, 322C, 322D) of like statements (i.e., similar statements) may be created from the text data 320. A Cosine Similarity model may be applied on the text data to select one of the one or more clusters, based on an input query 324, to obtain a relevant cluster 326. For example, as shown in FIG. 3B, the cluster 322D may be selected as the relevant cluster 326. Thereafter, using a Deep Learning model (which may be first trained using training data), a relevant text portion 328 may be extracted from the text data.
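A minimal sketch of this channel-2 flow follows, using TF-IDF vectors, k-means clustering, and cosine similarity from scikit-learn; the corpus and cluster count are illustrative, and the final similarity ranking is a simple stand-in for the Deep Learning extractor of FIG. 3B:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

statements = [
    "Revenue grew in the last quarter.",
    "The motor overheats at high load.",
    "Average revenue is computed over three years.",
    "Motor speed drops when the bearing wears out.",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(statements)             # text data in vectorized format

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # clusters of like statements

query_vec = vectorizer.transform(["Why is the motor not moving at normal speed?"])

# Select the relevant cluster: the centroid most similar to the query vector.
cluster_id = cosine_similarity(query_vec, kmeans.cluster_centers_).argmax()

# Extract the most relevant text from the selected cluster (a trained
# deep-learning extractor could replace this similarity ranking).
members = [i for i, c in enumerate(kmeans.labels_) if c == cluster_id]
best = max(members, key=lambda i: cosine_similarity(query_vec, X[i])[0, 0])
print(statements[best])
```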
[038] Referring now to FIG. 4A, a functional block diagram of a knowledge base (layer-2) 400A of the system 100 for improving response to an end-user’s query is illustrated, in accordance with yet another embodiment of the present disclosure. As mentioned earlier, the layer-2 may act as a knowledge base which may help in determining one or more actions, based on the query asked by the user. The layer-2 may act as a middle layer between the question compiler (layer-3) and the defined actions.
[039] It may be noted that upon receiving an input from the layer-3, the four buckets of tokens may map to a particular action. However, if the output includes multiple actions, then the multiple actions may be narrowed down to a single action, for example, by using the log questions and the event enhancers (features of attributes). Further, if no single action can be determined, then the log questions may be updated with a status “0”.
[040] The knowledge base (layer-2) 400A may receive (text) input 402 to determine mapped action(s). As explained above, the input 402 may be segregated into tokens, which may be later categorized into one or more categories, such as a query category, an attribute category, a feature category, and a data selection category. As shown in FIG. 4A, the one or more text-query tokens categorized in the query category may include the tokens “Why”, “How”, “What”, “Who”, “Where”, and “When”. Based on this categorization, one or more actions may be determined. It may be noted that the event enhancer tokens may be optional and multiple in number. It may be further noted that the event selection and event enhancer tokens may be matched semantically in the knowledge base to determine the actions 410. If multiple actions are determined, the multiple actions 410 may be narrowed down to a single (relevant) action, based on event enhancers 404. Further, metadata 414 may be used to narrow down the actions 410 to the relevant action 412. The event enhancers 404 may include one or more parameters. It may be noted that the multiple actions 410 may be narrowed down to the single relevant action 412 based on weightages. The weightages may be provided by a weightage module 406. The weightage module 406 may receive input data from the log question repository 408. If the weightage module 406 fails in narrowing down to a relevant action, the log questions, stored in the log question repository 408, may be used.
[041] In some embodiments, an end-user acknowledgement may be received, depending on whether action(s) could be determined for the query or not. In some examples, this end-user acknowledgement may also be treated as an event enhancer. Further, the end-user acknowledgement may also be logged in the log question repository 408. The log questions may also be used as predicting questions.
[042] In some embodiments, a threshold value may be defined with respect to narrowing down the multiple actions. If the threshold value is reached for a particular action, that action may be selected as the relevant action. If the threshold value is not reached, i.e., no action could be determined, then the log question repository 408 may be triggered to invoke the recommendation module 212 (this is explained in greater detail in conjunction with FIG. 5B).
[043] Referring now to FIG. 4B, a tabular representation of the knowledge base 400A of the system 100 for improving response to an end-user’s query is illustrated, via a Table 400B. As shown in FIG. 4B, column 418 of the Table 400B includes example queries (with corresponding query serial numbers shown in column 416). Column 420 includes query category tokens, column 422 includes event selection category tokens, and column 424 includes event enhancer category tokens. Further, details of the actions determined for these queries are included in column 428 (with corresponding action serial numbers shown in column 426). Further, column 430 includes action details specific to the domain.
[044] For example, for the first query: “what is the average revenue”, the query category token includes “What”, the event selection category tokens include “revenue”, and the event enhancer category token includes “average”. Accordingly, a relevant action identified for this query is: perform (sum of contents / number of contents). As such, the action detail specific to the (revenue) domain is: calculate [(f1+f2+f3)/3] (here, f1, f2, and f3 signify the revenue for the first, second, and third financial years). Similarly, tabular data for other example queries is shown in the rest of the columns of Table 400B.
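A minimal sketch of such an action mapping follows, assuming a dictionary keyed on the (query type, event selection, event enhancer) triple from Table 400B; the revenue figures and the dispatch structure are illustrative:

```python
# Sketch of the Table 400B mapping from bucketed tokens to a domain action.
def average_revenue(revenues: list[float]) -> float:
    # Action detail for ("what", "revenue", "average"):
    # perform (sum of contents / number of contents), i.e. (f1 + f2 + f3) / 3.
    return sum(revenues) / len(revenues)

ACTIONS = {
    ("what", "revenue", "average"): average_revenue,
}

token_key = ("what", "revenue", "average")   # from the four buckets of layer-3
action = ACTIONS[token_key]
print(action([120.0, 150.0, 180.0]))          # 150.0
```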
[045] Referring now to FIGS. 5A-5B, a process flow diagram of a process 500 of data processing by the log question repository 210 (of the system for improving response to an end-user’s query) is illustrated, in accordance with an embodiment of the present disclosure. As mentioned earlier, the system for improving response to an end-user’s query may include a layer-1, a layer-2, a layer-3, and a layer-4. The system 100 may further include the log question repository 210 including a plurality of log questions. The log question repository 210 may log a question, when the question is asked by a user. The log question repository 210 may further log user details, along with the log questions.
[046] For example, a text query may be received at the layer-4. For example, a voice input (speech) may be received from a user, which may be converted into text using a suitable speech-to-text technique. The text query may be segregated into one or more tokens, at the layer-3. Further, the one or more tokens may be categorized into one or more categories, for example, using Python or any Natural Language Processing (NLP) technique. [047] At the layer-2, based on the categorized tokens, one or more actions may be determined. For example, the one or more actions may include presenting/asking the end-user a question (laddering question) in return. Further, at the layer-2, the log question repository 210 may log questions, when questions are asked (in the layer-4). The questions may be asked either in verbal format or text format. The logging of questions may help in understanding of requirements. The questions may be acknowledged with a successful or an unsuccessful status, based on the processing in the layer-2. In some embodiments, a weightage calculating module (not shown in FIG. 5A) may be present inside the knowledge base. It may be noted that the weightage may be calculated based on all inputs, including user details. The log question repository may perform the following functions:
(i) Archive questions for future usage,
(ii) Recommend like questions, and
(iii) Help the knowledge base in narrowing down the multiple actions to a single action, by providing weightage to each option.
[048] The log question repository 210 may receive an input (a text-query) from the layer-4. Along with the input, the log question repository 210 may further receive user details. Further, if a user question is not matched in the knowledge base, then the log question repository 210 may help the recommendation module 212 (shown in FIG. 5B) come up with a best possible question as a recommendation to the end-user. Furthermore, if the knowledge base ends up with multiple actions, then the log question repository 210 may help in narrowing the multiple actions down to a single action, for example, based on the weightage assigned to each action. The log question repository 210 may further log the question as well as the user details. In some embodiments, the log question repository 210 may calculate the weightage based on the existing log questions and user details. The log question repository 210 may further provide inputs to the recommendation module 212.
[049] As shown in FIG. 5B, the log question repository 210 may provide inputs to the recommendation module 212. It may be noted that the recommendation module 212 may recommend questions to the layer-4, that may be asked to the end-user. It may be further noted that, in some embodiments, responses to these recommended questions may help in improving response to an end-user’s query, and further train the AI model.
[050] In some embodiments, in addition to the channel-1 and the channel-2 for determining an output for a user in response to the user query, the system 100 may also implement a channel-3. Referring now to FIG. 6, a process flow diagram of an example process 600 for determining a relevant action for a user query, via the channel-3, is illustrated. A data source may include a Table storing multiple questions, for example, as shown in FIG. 6, questions 602-616. The Table may further include an answer corresponding to each of the multiple questions (shown beside the questions 602-616). [051] A user query 620: “Can I perform my own housing wiring?” is received from a user. By way of an example, once the user query is received, a vector of the user query and a vector of each of the multiple questions may be created. Further, a distance between the vector of the user query and the vector of each of the multiple questions may be determined. In other words, the distance of the vector of the user query from the vector of question 602 is d1, from the vector of question 604 is d2, from the vector of question 606 is d3, from the vector of question 608 is d4, from the vector of question 610 is d5, from the vector of question 612 is d6, from the vector of question 614 is d7, and from the vector of question 616 is d8. Thereafter, the vector of the question with the minimum distance from the vector of the user query may be selected, i.e., using the function [min(d1, d2, d3, d4, d5, d6, d7, d8)]. Further, the answer corresponding to the question with the minimum distance may be selected and provided to the user. As shown in FIG. 6, the distance of the vector of the user query from the vector of question 606 (d3) may be minimum. Therefore, the question 606: “Can I do my own wiring?” may be selected, and its corresponding answer “In most places, homeowners are allowed to do their own wiring. In some, they're not. Check with your local electrical inspector. Most places won't permit you to do wiring on other's homes for money without a license. Nor are you permitted to do wiring in "commercial" buildings” may be provided to the user.
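A minimal sketch of this channel-3 nearest-question lookup follows, assuming TF-IDF vectors and cosine distance (the disclosure does not fix a particular vectorization); the question/answer table is abbreviated and illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

qa_table = {
    "Can I do my own wiring?": "In most places, homeowners are allowed to do their own wiring.",
    "How do I replace a breaker?": "Switch off the main supply before replacing a breaker.",
    "What size wire do I need?": "Wire gauge depends on the circuit amperage.",
}

questions = list(qa_table)
vectorizer = TfidfVectorizer()
Q = vectorizer.fit_transform(questions)         # vectors of the stored questions

user_query = "Can I perform my own housing wiring?"
q_vec = vectorizer.transform([user_query])

distances = cosine_distances(q_vec, Q)[0]       # d1, d2, ..., dn
best = distances.argmin()                       # min(d1, d2, ..., dn)
print(qa_table[questions[best]])
```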
Use Case 1:
[052] A service-providing company wants to support its service engineers in acknowledging issues relating to an appliance within a given time window. In such a case, history data may be used. The history data may include manual data in the form of a portable document format (PDF) file, an incident logger (which may include a history of incidents), and sensor data obtained from sensors attached to the appliance.
[053] An input data in the form of a query from an end-user may be received. The query may include one or more questions like “Why the motor in engine not moving at normal speed?”, “What is the normal speed of motor of the engine?”, “How it can be fixed?”, “Who fixed earlier?”, and “When this issue happens?”. For example, the question “Why the motor in engine not moving at normal speed?” may be segregated into tokens. Further, the tokens may be categorized into various categories, as:
(i) Why the (“Why” categorized in query category)
(ii) motor in engine (“motor in engine” categorized in data selection category)
(iii) not moving at (“not moving” categorized in event enhancer category)
(iv) normal speed? (“normal speed” categorized in event selection category)
[054] Once the query is segregated, the manual data in the form of PDF files, history data, sensor data, and the incident tracker may be obtained. Thereafter, based on various components of the input statement, the system may decide the following:
(i) a data source;
(ii) an action based on query;
(iii) a relevant extraction; and
(iv) a relevant model that needs to be executed.
[055] For example, in the above example, “motor in engine” may determine the data source for fetching data. The relevant action determined for this query may be “fetch reason from motor database on what conditions it doesn’t work at normal speed”. Accordingly, a relevant answer may be extracted from the data sources, and the relevant answer may then be provided to the user. However, if the relevant answer is not found, then an alternate model, i.e., the channel-2 or the channel-3, may be used as an alternative to find the answer. As mentioned above, the channel-2 may be based on an Artificial Intelligence (AI) or Deep Learning (DL) model. Through the channel-2 or the channel-3, the relevant answer may be extracted from unstructured data sources, such as Portable Document Format (PDF) files, manuals, contextual data sources, etc.
[056] Further, metadata may be obtained from the relevant data source, such as other expected failure reasons, which engineers had earlier fixed the issues, at what time of day the issue usually happens, etc., and may be shared with the user, such as a field engineer.
Use Case 2:
[057] A user query is received: “What is the normal speed of motor of the engine?”. For this query, the system may refer to history data (including manual data in the form of a PDF file), an incident logger (including a history of incidents), and sensor data obtained from sensors attached to the appliance. The question may be segregated as follows:
(i) What is the (“What” categorized in query category)
(ii) normal speed (“normal speed” categorized in event selection category)
(iii) of motor of the engine? (“motor of the engine” categorized in event enhancer category)
[058] Once the question is segregated, the manual data in the form of PDF files, history data, sensor data, and the incident tracker may be obtained. Thereafter, based on various components of the input statement, the system may decide the following:
(i) a data source;
(ii) an action based on query;
(iii) a relevant extraction; and
(iv) a relevant model that needs to be executed.
[059] In the above example, “motor of the engine” may determine the data source for fetching data. The relevant action determined for this query may be “fetch data from motor database on what is the normal speed of the motor”. Accordingly, a relevant answer may be extracted from the data sources, and the relevant answer may then be provided to the user. Further, similar to Use Case 1, if the relevant answer is not found, then an alternate model, i.e., the channel-2 or the channel-3, may be used to find the answer. As such, a topic may be selected based on the trained literature and the text that comes under the topics, and the output may be shared with relevant users (e.g., field engineers).
[060] Referring now to FIG. 7, a flowchart of a method 700 of improving response to an end-user’s query is illustrated, in accordance with an embodiment of the present disclosure. At step 702, a text-query input corresponding to the user’s query may be received. For example, the text-query input may correspond to a query regarding a particular domain, such as the revenue domain of a company. In some embodiments, receiving the text-query input may include step 702A, at which a voice input may be received from a user. The voice input may include a query corresponding to the text-query input. Receiving the text-query input may further include step 702B, at which the voice input may be converted into text, to obtain the text-query input. In other words, via steps 702A and 702B, a speech-to-text conversion may be performed for the (voice) query received from the user. As such, the text-query input may include one or more words (i.e., tokens).
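One possible realization of steps 702A-702B is sketched below using the open-source SpeechRecognition package; the package choice and its Google Web Speech backend are assumptions, not part of the disclosure, and any suitable speech-to-text technique would serve:

```python
# Sketch of the voice-to-text conversion of steps 702A-702B.
import speech_recognition as sr

def voice_to_text_query(audio_path: str) -> str:
    """Convert a recorded voice query into a text-query input."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:
        audio = recognizer.record(source)   # read the entire audio file
    return recognizer.recognize_google(audio)

# text_query = voice_to_text_query("user_query.wav")
```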
[061] At step 704, the text-query input may be segregated to obtain one or more text-query tokens. For example, the text-query input may be segregated into its constituent tokens, i.e., words. At step 706, each of the one or more text-query tokens may be categorized into one or more categories. By way of an example, the one or more categories may include a query category, an attribute category, a feature category, and a data selection category. Further, the one or more text-query tokens categorized in the query category may include: “Why”, “How”, “What”, “Who”, “Where”, and “When”. It will be understood that the user query may include at least one of the tokens “Why”, “How”, “What”, “Who”, “Where”, and “When” as part of the query. Further, it may be noted that the one or more text-query tokens categorized in the data selection category may correspond to system data and user data, for example, user identity, user role, etc.
[062] At step 708, each of the one or more categorized text-query tokens may be mapped with one or more data sources, for example, by way of semantic mapping. As mentioned earlier, the response improving device 102 may include one or more data sources (databases/knowledge bases) which may store data about the different domains to which the user queries may relate. At step 710, a plurality of actions may be determined based on the mapping of at least one of the one or more categorized text-query tokens with the one or more data sources. For example, the plurality of actions may include “fetching data from one of one or more data sources” as an answer to the user query, “fetching one or more laddering questions to present to the user” in response to the user query, etc.
[063] In some embodiments, at step 712, a check may be performed to determine whether the plurality of actions is determined successfully or not, for the text-query input. If the plurality of actions is successfully determined, the method may proceed to step 714 (“Yes” path). At step 714, a relevant action may be identified from the plurality of actions. For example, the most suitable action from the plurality of actions may be selected in response to the user query.
[064] In some embodiments, in order to identify the relevant action from the plurality of actions, a weightage value may be assigned to each of the plurality of actions. It may be noted that the weightage value may be assigned based on the relevance of each of the plurality of actions to the user’s query. The weightage value assigned to each of the plurality of actions may be compared with a predetermined threshold value. Thereafter, the relevant action may be selected from the plurality of actions, based on this comparison. As will be understood, the action having the highest weightage value, or a weightage value higher than the threshold value, may be selected. [065] By way of an example, the most suitable action may include fetching data from one or more data sources, as part of the response to the user’s query. By way of another example, when the response improving device 102 is unable to identify data from the one or more data sources relevant to the user’s query, the most suitable action may include generating one or more laddering questions to be presented to the user. This is further explained in conjunction with FIG. 10. At step 716, an output may be provided to the end-user, based on the identified relevant action.
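A minimal sketch of this weightage-based selection follows; the scoring values and threshold are illustrative assumptions (in the described system the weightage also draws on the log questions and metadata):

```python
# Sketch of steps 712-716: pick the highest-weighted action above a threshold.
from typing import Optional

THRESHOLD = 0.6  # predetermined threshold value (assumed)

def select_relevant_action(actions: list, weightages: dict) -> Optional[str]:
    best = max(actions, key=lambda a: weightages.get(a, 0.0))
    if weightages.get(best, 0.0) >= THRESHOLD:
        return best
    return None  # no relevant action; fall back to laddering questions

actions = ["fetch data from motor database", "fetch laddering questions"]
weightages = {"fetch data from motor database": 0.8, "fetch laddering questions": 0.3}
print(select_relevant_action(actions, weightages))  # fetch data from motor database
```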
[066] However, if at step 712 it is found that the plurality of actions is not successfully determined, the method may proceed to step 718 (“No” path). At step 718, the user may be notified to update the one or more data sources, for the text-query input. In other words, in scenarios when the response improving device 102 is unable to determine any action in response to the text-query input (corresponding to the user’s query), for example, due to a lack of data in the one or more data sources relevant to the user’s query, no output may be provided to the user. In such scenarios, the response improving device 102 may, therefore, notify a user about the need to update the one or more data sources, i.e., to add data relevant to the text-query input (corresponding to the user’s query). [067] Referring now to FIG. 8, a flowchart of a method 800 of improving response to an end-user’s query is illustrated, in accordance with another embodiment of the present disclosure. It may be noted that the method 800 may include steps similar to steps 702-706 of the method 700. As such, at step 802, a text-query input corresponding to the user’s query may be received. For example, the text-query input may be “Why the motor in engine not moving at normal speed?”. At step 804, the text-query input may be segregated to obtain one or more text-query tokens. As such, the above text-query input may be segregated to obtain the text-query tokens: “Why”, “the”, “motor”, “in”, “engine”, “not”, “moving”, “at”, “normal”, and “speed”. At step 806, each of the one or more text-query tokens may be categorized into one or more categories. For example, for the above text-query input, the token “Why” may be categorized in the query category, the combination of tokens “motor in engine” may be categorized in the attribute category, and the combination of tokens “normal speed” may be categorized in the feature category. It may be understood that “normal speed” (feature category) acts as a feature of “motor in engine” (attribute category).
[068] Once each of the one or more text-query tokens is categorized into one or more categories, at step 808, at least one text-query token of the text-query tokens of the attribute category may be mapped with one or more data sources. Therefore, the combination of tokens “motor in engine” may be mapped with data sources relevant to, and storing data relevant to, “motor in engine”. At step 810, a plurality of actions may be determined based on the mapping of the at least one text-query token of the text-query tokens of the attribute category with the one or more data sources, based on the text-query tokens of the query category. As such, the plurality of actions may be determined by mapping “motor in engine” (attribute category) with the one or more relevant data sources, based on “Why” (query category) and “normal speed” (feature category). As such, by way of the method 800, data pertaining to “Why the motor in engine not moving at normal speed?” may be extracted from the one or more data sources, as part of the plurality of actions. For example, one or more reasons for “Why the motor in engine not moving at normal speed?” may be extracted.
[069] At step 812, a relevant action from the plurality of actions may be identified. As such, a most relevant reason from the one or more reasons for “Why the motor in engine not moving at normal speed?” may be identified. Further, an output may be provided to the end-user, based on the identified relevant action. The identified relevant reason from the one or more reasons may be provided to the user.
[070] Referring now to FIG. 9, a flowchart of a method 900 of improving response to an end-user’s query is illustrated, in accordance with yet another embodiment of the present disclosure. It may be noted that in some scenarios, the relevant action identified through the method 700 or the method 800 may include fetching text data from one or more data sources. In such cases, the fetched text data may be further processed to extract the portion most relevant to the user’s query. By way of an example, a suitable cosine-similarity model may be used to extract this relevant portion. [071] Referring to the method 900, by way of an example, at step 902, upon identifying the relevant action from the plurality of actions, a text excerpt may be fetched from the one or more data sources. At step 904, a plurality of text clusters may be generated from the text excerpt. It may be noted that any clustering technique known in the art may be used to generate the plurality of text clusters. At step 906, a vector corresponding to each of the plurality of text clusters may be created, using a clustering model. It may be further noted that any vector-forming technique known in the art may be used to create the vectors corresponding to the plurality of text clusters. At step 908, a relevant text cluster may be identified from the plurality of text clusters, using a cosine-similarity model. In other words, the cosine-similarity model may be applied to the vectors corresponding to the plurality of text clusters to identify the relevant vector, and therefore the cluster most relevant to the user’s query. At step 910, a text portion may be extracted from this relevant text cluster, corresponding to the identified relevant action. Additionally, the extracted text portion may be normalized. For example, normalizing the extracted text portion may include sentence generation using the extracted text portion, so as to present the extracted text portion in a format which is easily comprehensible for the user.
[072] Referring now to FIG. 10, a flowchart of a method 1000 of improving response to an end- user’s query is illustrated, in accordance with yet another embodiment of the present disclosure. As mentioned earlier, in scenarios when the response improving device 102 is unable to identify relevant data from the one or more data sources relevant to the user’s query, the most suitable action may include generating one or more laddering questions to be presented to the user.
[073] Therefore, in such scenarios, at step 1002, one or more laddering questions may be generated. In some embodiments, the one or more laddering questions may be based on one or more question tags including: “Why”, “How”, “What”, “Who”, “Where”, and “When”. At step 1004, the one or more laddering questions may be presented to the user. At step 1006, a secondary text-query input corresponding to the user’s response to the one or more laddering questions may be received. In other words, the user’s input to the one or more laddering questions may be received. It may be noted that upon receiving the user’s input to the laddering questions, the method 700 or the method 800 may be performed once again, with the secondary text-query input acting as the text-query input, to thereby identify the relevant action and provide an output to the user, based on the identified relevant action.
[074] By way of an example, for a text-query input: “What was the issue in Plant?”, if the response improving device 102 is unable to identify relevant data, a laddering question, such as “For which plant do you want to know there was an issue in?” may be fetched and presented to the user. Thereafter, the user may provide a response, such as “What was the issue in Plant A?” to this laddering question. Upon receiving the user’s input to this laddering question, the method 700 or the method 800 may be performed once again, with “What was the issue in Plant A?” (secondary text-query input) acting as the text-query input, to thereby identify the relevant action and providing an output to the user.
[075] In some embodiments, irrespective of whether the response improving device 102 is able to identify relevant data or not, the response improving device 102 may generate one or more laddering questions and provide these one or more laddering questions to the user. For example, these one or more laddering questions may be provided to help the user gain more information beyond what the user is initially interested in. By way of an example, in the above example, for the subject “Plant A”, the one or more laddering questions may be as follows (a minimal generation sketch follows the list):
1) What was the issue in Plant A?
2) Why the issue occurred in Plant A?
3) How the issue occurred in Plant A?
4) When next the issue occurs in Plant A?
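The sketch below generates such laddering questions from simple templates keyed on the question tags; the templates themselves are illustrative assumptions, not the disclosed generation logic:

```python
# Sketch of template-based laddering-question generation for a subject.
def laddering_questions(subject: str, topic: str) -> list:
    templates = {
        "What": "What was the {topic} in {subject}?",
        "Why": "Why did the {topic} occur in {subject}?",
        "How": "How did the {topic} occur in {subject}?",
        "When": "When might the {topic} next occur in {subject}?",
    }
    return [t.format(topic=topic, subject=subject) for t in templates.values()]

for q in laddering_questions("Plant A", "issue"):
    print(q)
```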
[076] Further, upon successfully performing the action, an acknowledgement may be received. For example, the log question may be updated with status “1” in the layer-2. However, if no acknowledgement is received, the one or more data sources may be notified for updating. For example, the log questions may be updated with status “0”.
[077] The present disclosure describes one or more techniques for improving response to users’ queries via a chatbot. The one or more techniques are able to automatically acknowledge queries pertaining to “Why”, “How”, “What”, “Who”, “Where”, and “When”, even when faced with ad hoc queries which are unique and non-repetitive. Further, the one or more techniques do away with the requirement of developing programs for each type of query, thereby providing a quick and effective solution for answering user queries and preparing reports.
[078] It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.

Claims

We Claim:
1. A method of improving response to a user’s query, the method comprising: receiving a text-query input corresponding to the user’s query; segregating the text-query input to obtain one or more text-query tokens; categorizing each of the one or more text-query tokens into one or more categories, wherein the one or more categories comprise: a query category, an attribute category, a feature category, and a data selection category; upon categorizing, mapping each of the one or more categorized text-query tokens with one or more data sources; determining a plurality of actions based on the mapping of at least one of the one or more categorized text-query tokens with the one or more data sources; identifying a relevant action from the plurality of actions; and providing an output to the end-user, based on the identified relevant action.
2. The method as claimed in claim 1, comprising: mapping at least one text-query token of the text-query tokens of the attribute category with one or more data sources; determining the plurality of actions based on the mapping of the at least one text-query token of the text-query tokens of the attribute category with the one or more data sources, based on the text-query tokens of the query category; and identifying the relevant action from the plurality of actions, based on at least one of the text-query tokens of the feature category and the text-query tokens of the data selection category.
3. The method of claim 1, further comprising: upon identifying the relevant action from the plurality of actions, fetching a text excerpt from the one or more data sources; generating a plurality of text clusters from the text excerpt; creating a vector corresponding to each of the plurality of text clusters, using a clustering model; identifying a relevant text cluster from the plurality of text clusters, based on cosine-similarity; and extracting a text portion from the relevant text cluster, corresponding to the identified relevant action.
4. The method as claimed in claim 3 further comprising normalizing the extracted text portion.
5. The method as claimed in claim 1, wherein identifying the relevant action from the plurality of actions comprises: assigning a weightage value to each of the plurality of actions, based on the relevance of each of the plurality of actions with the user’s query; comparing the weightage value assigned to each of the plurality of actions with a predetermined threshold value; and selecting the relevant action from the plurality of actions, based on the comparison.
6. The method of claim 1, wherein one or more text-query tokens categorized in the query category comprise “Why”, “How”, “What”, “Who”, “Where”, and “When”.
7. The method as claimed in claim 1, wherein receiving the text-query input comprises: receiving a voice input from a user, wherein the voice input comprises a query corresponding to the text-query input; and converting the voice input into text, to obtain the text-query input.
8. The method as claimed in claim 1 further comprising: upon successfully determining the plurality of actions, tagging the text-query input with an acknowledgement tag and storing the text-query input along with the acknowledgement tag.
9. The method as claimed in claim 8 further comprising: notifying a user to update the one or more data sources, upon not successfully determining the plurality of actions, for the text-query input.
10. The method as claimed in claim 9, wherein the relevant action comprises: generating one or more laddering questions, wherein the one or more laddering questions are based on one or more question tags comprising “Why”, “How”, “What”, “Who”, “Where” and “When”; presenting the one or more laddering questions to the user; and receiving a secondary text-query input corresponding to the user’s response to one or more laddering questions.
11. A system for improving response to a user’s query, the system comprising: a processor; and a memory communicatively coupled to the processor, wherein the memory stores processor-executable instructions, which, on execution by the processor, cause the processor to: receive a text-query input corresponding to the user’s query; segregate the text-query input to obtain one or more text-query tokens; categorize each of the one or more text-query tokens into one or more categories, wherein the one or more categories comprise: a query category, an attribute category, a feature category, and a data selection category; map each of the one or more categorized text-query tokens with one or more data sources; upon mapping, determine a plurality of actions based on the mapping of at least one of the one or more categorized text-query tokens with the one or more data sources; identify a relevant action from the plurality of actions; and provide an output to the end-user, based on the identified relevant action.
12. The system as claimed in claim 11, wherein the processor-executable instructions, on execution by the processor, cause the processor to: map at least one text-query token of the text-query tokens of the attribute category with one or more data sources; determine the plurality of actions based on the mapping of the at least one text-query token of the text-query tokens of the attribute category with the one or more data sources, based on the text-query tokens of the query category; and identify the relevant action from the plurality of actions, based on at least one of the text- query tokens of the feature category and the text-query tokens of the data selection category.
13. The system as claimed in claim 11, wherein the processor-executable instructions, on execution by the processor, cause the processor to: upon identifying the relevant action from the plurality of actions, fetch a text excerpt from the one or more data sources; generate a plurality of text clusters from the text excerpt; create a vector corresponding to each of the plurality of text clusters, using a clustering model; identify a relevant text cluster from the plurality of text clusters, based on cosine-similarity; extract a text portion from the relevant text cluster, corresponding to the identified relevant action; and normalize the extracted text portion.
14. The system as claimed in claim 11, wherein the processor-executable instructions, on execution by the processor, cause the processor to perform at least one of: upon successfully determining the plurality of actions, tagging the text-query input with an acknowledgement tag and storing the text-query input along with the acknowledgement tag, or notifying a user to update the one or more data sources, upon not successfully determining the plurality of actions, for the text-query input.
15. A non-transitory computer-readable medium, for improving response to a user’s query, having stored thereon, a set of computer-executable instructions causing a computer comprising one or more processors to perform steps comprising: receiving a text-query input corresponding to the user’s query; segregating the text-query input to obtain one or more text-query tokens; categorizing each of the one or more text-query tokens into one or more categories, wherein the one or more categories comprise: a query category, an attribute category, a feature category, and a data selection category; upon categorizing, mapping each of the one or more categorized text-query tokens with one or more data sources; determining a plurality of actions based on the mapping of at least one of the one or more categorized text-query tokens with the one or more data sources; identifying a relevant action from the plurality of actions; and providing an output to the end-user, based on the identified relevant action.
PCT/IB2021/051608 2020-02-28 2021-02-26 A method and a system for improving response to an end-user's query WO2021171238A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202041008563 2020-02-28
IN202041008563 2020-02-28

Publications (1)

Publication Number Publication Date
WO2021171238A1 true WO2021171238A1 (en) 2021-09-02

Family

ID=77490745

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2021/051608 WO2021171238A1 (en) 2020-02-28 2021-02-26 A method and a system for improving response to an end-user's query

Country Status (1)

Country Link
WO (1) WO2021171238A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190147880A1 (en) * 2016-06-06 2019-05-16 Apple Inc. Intelligent list reading
US20190214024A1 (en) * 2010-01-18 2019-07-11 Apple Inc. Intelligent automated assistant



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21761820

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21761820

Country of ref document: EP

Kind code of ref document: A1