EP4348450A1 - User discussion environment interaction and editing via system-generated responses - Google Patents

User discussion environment interaction and editing via system-generated responses

Info

Publication number
EP4348450A1
Authority
EP
European Patent Office
Prior art keywords
user
answer
generated
question
answers
Prior art date
Legal status
Pending
Application number
EP22743953.6A
Other languages
German (de)
English (en)
Inventor
Oleg Gennadievich SHEVELEV
Alberto Polleri
Marc Michiel BRON
Current Assignee
Oracle International Corp
Original Assignee
Oracle International Corp
Priority date
Filing date
Publication date
Priority claimed from US17/470,179 (US20220391595A1)
Application filed by Oracle International Corp
Publication of EP4348450A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems

Definitions

  • the present disclosure relates to a system that interacts with community members in user discussion environments.
  • the present disclosure relates to a system that identifies questions in an electronic discussion environment and determines a course of action for generating answers to the questions.
  • the system provides the answer to the discussion environment and updates answers based on feedback of community members.
  • Figure 1 illustrates a system in accordance with one or more embodiments
  • Figures 2A, 2B, and 2C illustrate an example set of operations for developing a knowledge base using automated interactions with community members in accordance with one or more embodiments
  • Figure 3 illustrates an example set of operations for training a machine learning model to associate an expert with a question in accordance with one or more embodiments
  • Figure 4 illustrates a graphical user interface (GUI) of an example embodiment
  • Figure 5 shows a block diagram that illustrates a computer system in accordance with one or more embodiments.
  • One or more embodiments interact with users in a discussion environment.
  • the system contributes to the generation and growth of a knowledge base by composing responses to user questions.
  • Upon identifying a question in the discussion environment, the system determines: (a) whether a stored answer has already been associated with the question, (b) whether an answer can be generated by the system using existing information accessible to the system, or (c) whether to contact an expert to answer the question.
  • the system updates the knowledge base by storing the questions and answers, along with user feedback to the questions and answers. Based on the user feedback, the system determines whether to modify existing answers to user-generated questions or to seek answers from alternative experts. In one embodiment, the system generates an answer by combining content from multiple different answers.
  • the system may determine that two separate user answers stored in a data repository have content necessary to generate the answer to the question in the discussion environment.
  • the system may combine content from a previously-stored user answer and one or both of external content - such as web content - or expert-generated content.
  • the system may be implemented with a conversational bot.
  • the conversational bot has a user account that monitors a user discussion environment.
  • the system may be implemented as a component of the platform itself that presents the user discussion environment. Examples of discussion environments include group discussion forums, group chat forums, group messaging applications, and a search field in which users can enter a question or information.
  • the bot interacts with users in the discussion environment to ask clarifying questions and answer questions.
  • the system utilizes multiple machine learning models to perform operations.
  • the output of one machine learning model is used by another model to generate additional information.
  • a question/answer encoding model is trained by a data set comprising: (a) user attribute information, and (b) question and answer information.
  • the question and answer information includes the human-readable text associated with the questions and answers, as well as feedback provided by users associated with the questions and answers.
  • the question/answer encoding model generates embedded questions and answers.
  • the monitoring bot may provide new comments in the discussion environment to the question/answer encoding model to generate embeddings of the comments.
  • a question/answer detection model analyzes the embedding output from the question/answer encoding model to identify questions and answers among the user comments in the discussion environment.
  • the question/answer detection model may also identify query terms such as key words, important topics, and other attributes that provide information for determining how to answer the question.
  • the system determines how to answer the question.
  • the system may identify the same, or a similar, question stored with a previously-generated answer. Accordingly, the system posts the existing answer in the discussion environment. When an answer does not already exist, the system may generate an answer using available information, or the system may seek help from an expert.
  • a similarity matching model searches data sources to identify information that is relevant to the user-generated question. Examples of data sources include: previously-generated user answers in the discussion environment, previously-generated user answers in other environments or applications, and articles and publications stored by the system or located in remote storage devices, such as over the Internet.
  • An example of a similarity matching model is a dot-product similarity matching model.
  • a text-summarizing model summarizes relevant text from one or more data sources.
  • An answer-generating model converts text from data sources into a human-readable answer.
  • an expert-identification model identifies one or more experts having attributes that match the user-generated question. The system contacts the expert(s) to provide an answer to the user-generated question.
  • a confidence score model generates a confidence score for the system-generated answers and expert-generated answers.
  • the confidence scores are updated based on user feedback in the discussion environment.
  • the system may generate additional answers for user-generated questions based on a change in a confidence score of an answer.
  • the system may contact a new expert to answer a question based on a change in a confidence score of an expert-generated answer.
  • One or more embodiments generate and post system-composed answers to user generated questions in user discussion environments.
  • the system monitors a set of user discussion environments to detect user-generated questions and user-generated answers.
  • the system composes a response by combining two or more user-generated answers that were posted on the set of user discussion environments.
  • the two or more user-generated answers may be selected for generating the system-composed answer based on user community feedback regarding relevancy or accuracy of the two or more user-generated answers.
  • the user-generated answers may be selected based on characteristics or known expertise of the users that posted the user-generated answers.
  • FIG. 1 illustrates a system 100 in accordance with one or more embodiments.
  • system 100 includes a user communication application 110 and a user communication management engine 120.
  • user communication applications 110 include user discussion environments, group messaging applications, chat applications, and a search field in which a user may enter questions and information and interact with results.
  • the user communication management engine 120 includes a user communication interface application 121.
  • the user communication interface application 121 is a bot service that utilizes a user account in the user communication application 110 to monitor comments posted in a conversation thread or environment of the user communication application 110.
  • the bot may monitor chat channels, forums, tickets, or any user communication application 110 which is presented as a flow of messages with replies.
  • the bot monitors and responds to messages in live conversations.
  • the bot may monitor and respond to indexed webpages and static documents.
  • the bot may be a proactive agent programmed to act similar to a human user.
  • the bot identifies questions in the user communication application 110 including the flow of messages with replies.
  • the bot posts answers to the questions using a user account.
  • the bot monitors the user communication application 110 and generates replies and answers without receiving additional human input.
  • the user communication management engine 120 may identify questions that require expert-generated answers. The bot may respond to questions in the user communication application 110 using the expert-generated answers. Alternatively, the user communication management engine 120 may prompt experts to interact directly with the user communication application 110 without providing answers via the bot.
  • the user communication interface application 121 extracts content from the user communication application 110. For example, in an embodiment in which the user communication application 110 is a text chat forum, the user communication interface application extracts the posts and replies in the forum, as well as associated user feedback and available metadata.
  • a text encoder 122 applies a machine learning model to encode text content extracted by the user communication interface application 121 from the user communication application 110.
  • the text encoder 122 converts human-readable questions and answers into embedded values that are readable by machine learning engines.
  • the embedded values may further be searchable by one or more search programs.
  • text words may be converted into numerical values and vector values.
  • the text encoder employs a bidirectional encoder representations from transformers (BERT) model.
  • the BERT model receives as an input a set of text, identifies relationships among the words in the set of text, and outputs an embedded vector that includes values representing the text and the relationships among the words in the text.
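  • As a minimal illustration of such an encoder (the library, checkpoint, and mean-pooling strategy below are assumptions; the disclosure only specifies a BERT-style model), a sketch using the Hugging Face transformers library might look like this:

```python
# Minimal sketch of a BERT-based text encoder. Library choice, checkpoint, and
# mean pooling are assumptions; the patent only names a BERT-style model.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Convert human-readable text into an embedded vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool the token embeddings into a single fixed-length vector.
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)

question_vec = embed("How do I restore the default settings?")
print(question_vec.shape)  # e.g. torch.Size([768])
```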
  • the text encoder 122 analyzes content extracted by the user communication interface application 121 from the user communication application 110 to identify content that corresponds to a question and content that corresponds to an answer.
  • the content includes human-readable text.
  • the text encoder 122 may employ a machine learning model trained to recognize groupings of text that either expressly or implicitly include a question.
  • the model may take as inputs punctuation, such as a question mark, and sentence structure to determine whether particular text corresponds to a question.
  • the model may take as inputs content from an identified question, content of the candidate text, and vicinity of the candidate text to a question in a conversation.
  • a question/answer identification model is an attention-based recurrent neural network machine learning model.
  • the machine learning model is trained using the Stanford Question Answering Dataset (SQuAD) as well as a body of question/answer pairs, feedback, and metadata extracted from the user communication application 110.
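  • The disclosure names an attention-based recurrent neural network for detecting questions and answers; the sketch below shows one plausible shape of such a classifier in PyTorch (the layer sizes, attention form, and three-way label set are assumptions):

```python
# Sketch of an attention-based recurrent classifier that labels a comment as
# question, answer, or other. Layer sizes and the attention form are assumptions.
import torch
import torch.nn as nn

class QuestionAnswerDetector(nn.Module):
    def __init__(self, embed_dim=768, hidden_dim=256, num_classes=3):
        super().__init__()
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_embeddings):           # (batch, seq_len, embed_dim)
        states, _ = self.rnn(token_embeddings)     # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(states), dim=1)
        context = (weights * states).sum(dim=1)    # attention-weighted summary
        return self.classifier(context)            # logits: question/answer/other

detector = QuestionAnswerDetector()
logits = detector(torch.randn(1, 40, 768))         # one 40-token comment
print(logits.softmax(dim=-1))
```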
  • a feedback extraction engine 123 extracts the feedback information from the content extracted by the user communication interface application 121 from the user communication application 110.
  • Examples of feedback include: user-interactive indications of the helpfulness of a question or reply (e.g., user interface icons indicating like/dislike, helpful/not helpful, recommend, star), a number of replies to a post, and citations to a particular post or reply in other posts and replies.
  • a question and answer analysis engine 124 analyzes questions and answers to initiate actions associated with the questions and answers.
  • the question and answer analysis engine 124 pairs a single question with a single answer and stores the pairing in the data repository 126 together with any associated feedback.
  • a single answer is made up of multiple separate comments made by a same user.
  • any comments made by different users are stored in separate question and answer pairs.
  • the question/answer pairs may include human-readable questions and answers or machine-readable embeddings representing question/answer pairs. Each question/answer pair includes a question and an answer corresponding to the question.
  • the question may be a question obtained from the user communication application 110 or from other data sources.
  • the system 100 may access Internet-based or cloud-based question/answer applications and sites to obtain questions and answers.
  • the answers in the question/answer pairs may be user-generated, system-generated, or obtained from Internet-based or cloud-based sources external to the system 100.
  • feedback including user reactions, a number of replies, etc., and other metadata including a reputation score of a user, is stored in the data repository 126 in a separate table from the question and answer pairs.
  • the tables may be linked to associate a particular set of feedback and metadata with a particular question and answer pair.
  • user attributes from user profiles stored in the data repository 151 and the question/answer pairs comprise a training dataset to train a machine learning model employed by the text encoder 122. Answers and comments given by the machine or humans can be up-voted or down-voted. This information, plus implicit feedback (clicks, number of comments, pages visited, how long people stayed on the page, etc.), is used as features in the training dataset, which helps continuously re-train and improve the question/answer models.
  • the question and answer analysis engine 124 initiates the actions using one or more machine learning models trained to perform a particular task. For example, the question and answer analysis engine 124 may analyze content in a question and determine that the question contains insufficient information to generate an answer.
  • the question and answer analysis engine 124 may send the encoded question and the determination of the need for additional information to the question and answer generator engine 127 to generate a follow-up question.
  • the question and answer analysis engine 124 may analyze a question and determine that: (a) the question corresponds to an existing question/answer pair stored in the data repository 126, (b) the question may be answered based on indexed text data stored in the data repository 140, or (c) the question requires additional expert input to generate an answer.
  • When the question and answer analysis engine 124 determines that a question may be answered based on indexed text data, the question and answer generator engine 127 employs one or more computer-generated text models 130 to generate an answer to the question.
  • a text-summarization model 132 applies a machine learning model to text to generate a summary of the text.
  • the machine learning model employs a text highlighting model 131 that identifies important sentences from a body of text and generates the summary by using only the important sentences, while omitting the remaining sentences of the body of text.
  • the text-summarization model 132 may employ an abstractive algorithm that generates an embedded vector associated with the meaning of the text and applies natural language processing (NLP) to generate new sentences that capture the meaning of the text.
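  • As a rough illustration of the extractive "text highlighting" variant, the sketch below keeps the sentences whose embeddings score highest against the question embedding and omits the rest (the dot-product scoring rule and the three-sentence cutoff are assumptions):

```python
# Sketch of extractive summarization: keep the sentences whose embeddings are
# most similar to the question embedding, omit the remaining sentences.
# The dot-product scoring and keep=3 cutoff are assumptions.
import torch

def summarize(sentences, sentence_vecs, question_vec, keep=3):
    """sentences: list of strings; sentence_vecs: matching list of tensors."""
    scores = [torch.dot(vec, question_vec).item() for vec in sentence_vecs]
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    kept = sorted(ranked[:keep])                   # restore original sentence order
    return " ".join(sentences[i] for i in kept)
```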
  • the text-summarization model 132 identifies query data, such as a subject or key topic, in an embedded question.
  • the question and answer generator engine 127 identifies relevant text in one or more data sources, such as in the indexed texts stored in the data repository 140, remote text content stored in a data repository 142, or other data sources external to the user communication management engine 120.
  • the question and answer generator engine 127 applies the text-summarization model 132 to the identified text to generate a summary of the text.
  • the question and answer generator engine 127 may provide the summary of the text, with a link to the entire text, to the human-readable question and answer generator model 128 to generate a human-readable answer including the summary of the text.
  • the user communication interface application 121 generates a comment or reply that includes the human-readable answer in the user communication application 110.
  • the human-readable question and answer generator model 128 is applied to question and/or answer content from the question and answer generator engine 127 to generate an answer in a human-readable form.
  • the question and answer generator engine 127 may generate answer content as a vector including one or more alphanumerical and vector values.
  • the human-readable question and answer generator model 128 converts the content embedded in the vector into a human-readable form, such as a paragraph of text data, images, and links to images or articles.
  • the human-readable question and answer generator model 128 is a GPT-3 type model.
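  • As an illustration only, a human-readable answer could be composed by prompting a generative language model with retrieved content; in the sketch below, the "gpt2" checkpoint and the prompt wording are assumptions, since the disclosure only states that a GPT-3 type model is used:

```python
# Illustrative sketch: prompt a generative language model with retrieved
# snippets to compose a human-readable answer. Model choice ("gpt2") and
# prompt format are assumptions, not the patent's implementation.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def compose_answer(question, snippets):
    prompt = (
        "Question: " + question + "\n"
        "Relevant information:\n- " + "\n- ".join(snippets) + "\n"
        "Answer:"
    )
    output = generator(prompt, max_new_tokens=120, do_sample=False)[0]["generated_text"]
    return output[len(prompt):].strip()
```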
  • a similarity matching model 133 generates similarity scores of an embedded question with embedded answers obtained from the user communication application 110 to identify answers having the highest similarity to a particular question.
  • the similarity matching model 133 is a dot-product similarity matching model.
  • the user communication management engine 120 may store similarity rankings for each question/answer pair.
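  • A minimal sketch of the dot-product similarity matching step, assuming question and answer embeddings are available as tensors (the in-memory storage format is an assumption):

```python
# Sketch of dot-product similarity matching: rank stored answer embeddings
# against a question embedding, highest similarity first.
import torch

def rank_answers(question_vec, answer_vecs):
    """answer_vecs: list of tensors with the same dimensionality as question_vec."""
    scores = torch.stack(answer_vecs) @ question_vec     # one dot product per answer
    order = scores.argsort(descending=True).tolist()
    return order, scores                                 # best-match indices plus raw scores
```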
  • the question and answer generator engine 127 transmits to the human-understandable question and answer generator model 128 the answer having the highest similarity to a particular question.
  • the human-understandable question and answer generator model 128 generates a human-understandable answer.
  • the user communication interface application 121 posts the human-understandable answer in the user communication application 110.
  • the question and answer generator engine 127 employs a text combination engine 134 to generate an answer using text from multiple different sources. For example, if the question and answer generator engine 127 determines that an answer stored in the data repository 126 is incomplete and indexed text 140 would complete the answer, the text combination engine 134 may combine answer content from the data repository 126 and the indexed texts 140. In one embodiment, the text combination engine 134 combines embedded vectors representing the text segments into a coherent answer.
  • a remote text extraction engine 141 extracts remote text content from sources external to the user communication management engine 120.
  • remote sources include data repositories 142, such as servers and databases, connected to the user communication management engine 120 via a local area network or wide area network, such as the Internet.
  • the remote text extraction engine 141 may index the extracted data and store the indexed content in the data repository 140 to be accessed by the user communication management engine 120.
  • the question and answer analysis engine 124 may employ the expert association model 150.
  • the expert association model 150 identifies a relevant expert or set of experts for a particular question.
  • expert attributes may be embedded from user profiles stored in the data repository 151. Examples of expert attributes include areas of expertise, experience, reputation among other users, and reputation among other experts.
  • a data set of expert attributes may be used to train the model 150.
  • When an embedded question is provided to the model 150 by the question and answer analysis engine 124, the model 150 generates a set of one or more experts that are most likely to be able to answer the question.
  • the expert association model 150 generates a communication to an expert, via the expert communication interface 152, requesting an answer.
  • the question and answer generator engine 127 may generate embedded content for a request for expert input based on the expert identified by the expert association model 150.
  • the human-understandable question and answer generator model 128 may generate a message to the expert.
  • the model 128 may generate a message that says: “A user recently posted in X application asking about Y. We noticed that Y is one of your specialties. Would you be willing to draft a response to the user?”
  • the expert communication interface 152 may send the message to the expert via email, text message, instant message, or any other communications means.
  • the expert may post an answer directly in the user communication application 110.
  • the expert may provide an answer to the user communication management engine 120 via the expert communication interface 152.
  • the user communication interface application 121 may post the expert answer.
  • a confidence score model 160 generates confidence scores for system-generated answers and expert-generated answers.
  • the confidence score represents a likelihood that the answer adequately addresses issues, topics, and concerns raised in a question.
  • the confidence score model 160 may be updated based on user feedback. For example, the question and answer generator engine 127 may generate an initial answer to a question in the user communication application 110.
  • the confidence score model 160 may determine, based on the content of the answer, that it has a confidence score that exceeds the threshold. Based on receiving negative user feedback, the confidence score model 160 may reduce the confidence score of the answer.
  • Negative feedback may include user interaction with an icon representing negative feedback (e.g., “dislike” icon), low traffic to the answer, lack of references to the answer in other comments, and textual user responses to the answer that contain negative comments.
  • the user communication management engine 120 re-trains the confidence score model 160 to account for the updated confidence score.
  • reducing the confidence score of the answer results in generating a new answer or seeking an answer from an expert. For example, if the confidence score for an answer is reduced below a threshold, the question and answer generator engine 127 may provide question content to the expert association model 150 to identify a relevant expert for the associated question.
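  • A simplified sketch of this feedback-driven confidence adjustment; the step sizes are assumptions, and the threshold echoes the ninety-percent example given later in the disclosure:

```python
# Sketch of feedback-driven confidence adjustment. Step sizes are assumptions;
# the threshold mirrors the ninety-percent adequacy example in the disclosure.
CONFIDENCE_THRESHOLD = 0.9

def update_confidence(score, helpful_votes, unhelpful_votes):
    """Raise the score for positive feedback, lower it for negative feedback."""
    adjusted = score + 0.01 * helpful_votes - 0.02 * unhelpful_votes
    return max(0.0, min(1.0, adjusted))

def next_action(score):
    """Below the threshold, the system seeks a new answer or a relevant expert."""
    return "keep answer" if score >= CONFIDENCE_THRESHOLD else "generate new answer or contact expert"
```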
  • the system 100 may include more or fewer components than the components illustrated in Figure 1.
  • the components illustrated in Figure 1 may be local to or remote from each other.
  • the components illustrated in Figure 1 may be implemented in software and/or hardware. Each component may be distributed over multiple applications and/or machines. Multiple components may be combined into one application and/or machine. Operations described with respect to one component may instead be performed by another component.
  • a data repository 126, 140, 142, and 151 is any type of storage unit and/or device (e.g., a file system, database, collection of tables, or any other storage mechanism) for storing data.
  • a data repository may include multiple different storage units and/or devices. The multiple different storage units and/or devices may or may not be of the same type or located at the same physical site.
  • a data repository may be implemented or may execute on the same computing system as the user communication management engine 120. Alternatively, or additionally, a data repository may be implemented or executed on a computing system separate from the user communication management engine 120.
  • a data repository may be communicatively coupled to the user communication management engine 120 via a direct connection or via a network.
  • the user communication management engine 120 refers to hardware and/or software configured to perform operations described herein for analyzing user communications, analyzing stored answers, determining whether stored answers correspond to user-generated questions, and initiating actions based on the determination. Examples of operations for generating system-generated answers to user-generated questions are described below with reference to Figures 2A, 2B, and 2C.
  • the user communication management engine 120 is implemented on one or more digital devices.
  • digital device generally refers to any hardware device that includes a processor.
  • a digital device may refer to a physical device executing an application or a virtual machine. Examples of digital devices include a computer, a tablet, a laptop, a desktop, a netbook, a server, a web server, a network policy server, a proxy server, a generic machine, a function-specific hardware device, a hardware router, a hardware switch, a hardware firewall, a hardware network address translator (NAT), a hardware load balancer, a mainframe, a television, a content receiver, a set-top box, a printer, a mobile handset, a smartphone, a personal digital assistant (“PDA”), a wireless receiver and/or transmitter, a base station, a communication management device, a router, a switch, a controller, an access point, and/or a client device.
  • the user communication management engine 120 includes an interface for receiving user input.
  • the interface renders user interface elements and receives input via user interface elements.
  • interfaces include a graphical user interface (GUI), a command line interface (CLI), a haptic interface, and a voice command interface.
  • user interface elements include checkboxes, radio buttons, dropdown lists, list boxes, buttons, toggles, text fields, date and time selectors, command lines, sliders, pages, and forms.
  • different components of the interface are specified in different languages.
  • the behavior of user interface elements is specified in a dynamic programming language, such as JavaScript.
  • the content of user interface elements is specified in a markup language, such as hypertext markup language (HTML) or XML User Interface Language (XUL).
  • the layout of user interface elements is specified in a style sheet language, such as Cascading Style Sheets (CSS).
  • the interface is specified in one or more other languages, such as Java, C, or C++.
  • Figures 2A to 2C illustrate an example set of operations for automatically generating answers to user-generated questions in accordance with one or more embodiments.
  • One or more operations illustrated in Figures 2A-2C may be modified, rearranged, or omitted altogether. Accordingly, the particular sequence of operations illustrated in Figures 2A-2C should not be construed as limiting the scope of one or more embodiments.
  • a system monitors a user discussion environment, such as a group text thread, a group communications application, or an on-line forum (Operation 202).
  • An organization that hosts the environment may include conversation-monitoring applications that monitor the conversations.
  • the system may generate one or more user accounts that have access to the environment. The user accounts may be used by the system to respond to questions and comments in the environment.
  • an organization that does not host the environment may monitor the environment using a bot.
  • a software company may generate an account in an independent forum that posts questions associated with the software company. The software company may assign the account to a bot to automatically monitor and determine whether to respond to questions on the forum.
  • the system identifies a user-generated question (operation 204).
  • a machine learning model may be applied to each comment in the environment to determine whether the comment includes a question.
  • the machine learning model may be trained to identify as questions sentences having particular sentence structures or punctuation, or combinations of sentence structure and punctuation.
  • the machine learning model may be trained to identify sentences that are not strictly questions as including a question element.
  • a comment may include the sentence “I wish there was a way to restore the settings...”
  • the machine learning model may be trained to identify in the sentence a question: “how to restore settings...?”
  • the system generates an embedding of the user-generated question (operation 206).
  • the embedding may include the relevant portions of the text, as well as any relevant context from the conversation. For example, a question: “How is it possible to do that?” may obtain context from a previous comment to generate the embedding. Additional information, such as a time of the comment, the type of forum, or the identity of the user asking the question may be included in the embedding.
  • the system analyzes the embedded question to determine if additional information is needed prior to formulating a response. For example, a user may ask how to perform a particular operation, and the system may determine that the CPU information would be required to answer the question. The system may generate a response to ask for additional information prior to answering the original question.
  • the system may analyze a question to determine if it is a type of question that should be answered. For example, if the system determines that a user is asking a rhetorical question, the system may stop processing the embedded question. In another example, the system may determine that the user asking the question is not authorized to receive an answer to the question. The system may ignore the question, or may generate a response indicating that the user is not authorized to obtain an answer.
  • the system determines whether the question contains sufficient information to generate an answer (Operation 208). For example, a question may state: “Why is my application so slow?” The system may determine that additional information, such as computer specifications, wireless bandwidth, geography, and Internet provider, may be required prior to generating an answer. Alternatively, the system may analyze previous conversations and analyze context to extract necessary information. The latter may require an additional confirmation from the user.
  • If the system determines that the question contains insufficient information to generate an answer, the system generates a clarifying question (Operation 210).
  • the system obtains embedded data representing the content of the question and the type of data that is required to be able to answer the question.
  • the system provides the embedded data to a natural language processing (NLP) machine learning model to generate a human-readable follow-up question requesting additional information.
  • the NLP machine learning model may generate a question such as: “You’re asking about the speed of your application. What type of computer are you using? What is your Internet speed?”
  • the system obtains an updated user-generated question (Operation 212). For example, the system may combine the original question with any follow-up answers to generate the updated user-generated question.
  • The system generates an embedding of the updated user-generated question (Operation 214). The embedding converts the text of the original question and any follow-up answers into corresponding alpha-numeric and vector values.
  • the system searches one or more data sources to determine whether an answer exists to the question (Operation 216).
  • Data sources include: a repository of question/answer pairs, stored answers from users, stored answers generated by the system, stored answers generated by experts, informative articles and documents located in local storage or remotely, such as on a server accessed over the Internet.
  • the system may first search a repository of previously-stored question/answer pairs. If no match exists in the previously-stored question/answer pairs, the system may search additional sources to determine whether the same question was previously answered.
  • Searching the one or more data sources may include identifying query content in the user-generated question, and identifying similar query content in the answers in the data sources.
  • the query content may include key words, topics, phrases, identification information of a user generating the question, a time at which the question was generated, and other information associated with the user-generated question.
  • the system may encode the query content in the embedding of the user-generated question.
  • a similarity matching engine may determine that a similarity of an embedding of a question and an embedding of an answer exceeds a threshold similarity.
  • the system determines whether a previously-stored question/answer pair has a question that matches the user-generated question (Operation 218). If the system identifies a previously-stored question/answer pair having a question that matches the user-generated question, the system maps the answer to the user-generated question (Operation 220). In addition, if the system determines that multiple separate answers exist for questions that match the user-generated question, the system may map the user-generated question to the multiple different answers.
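  • A sketch of the matching decision in Operations 216-220 might look like the following, assuming stored question embeddings are held in memory as L2-normalized tensors; the 0.8 threshold is an assumption:

```python
# Sketch of Operations 216-220: reuse stored answers whose questions are
# sufficiently similar to the new question. The in-memory pair list and the
# 0.8 threshold are assumptions; vectors are assumed L2-normalized so the
# dot product behaves like cosine similarity.
import torch

SIMILARITY_THRESHOLD = 0.8

def find_stored_answers(question_vec, stored_pairs):
    """stored_pairs: list of (stored_question_vec, answer_text) tuples."""
    matches = []
    for stored_vec, answer_text in stored_pairs:
        similarity = torch.dot(question_vec, stored_vec).item()
        if similarity >= SIMILARITY_THRESHOLD:
            matches.append((similarity, answer_text))
    matches.sort(key=lambda m: m[0], reverse=True)
    # An empty list means no stored answer matched; the system then generates
    # an answer from other data sources or contacts an expert.
    return [answer for _, answer in matches]
```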
  • the system displays a matching answer in the discussion environment (Operation 222).
  • the system may include in the answer a link to reference material, an expert, or another user that was the source of the previously-generated answer.
  • the system summarizes the answer and provides a link to the full answer, such as an article or another post in a discussion environment.
  • the system may display the two or more answers separately or may combine the two or more answers into one answer in the discussion environment.
  • the system may display one answer using one user account and another answer using a different user account.
  • the system may analyze the multiple answers to identify overlapping content. When merging the previously-generated answers into one answer, the system may modify the previously-generated answers to omit overlapping content.
  • the system may provide differentiating information together with the respective answer. For example, the system may indicate which answer is more highly rated in the discussion environments. In addition, or in the alternative, the system may indicate conditions associated with the respective answers. For example, the system may indicate that for a system having one set of attributes, one of the previously-generated answers applies; for a system having another set of attributes, the other previously-generated answer applies.
  • the system utilizes a machine learning model to generate a human- readable answer.
  • question/answer pairs may be stored as embedded data comprising alpha-numeric and vector values.
  • a natural language processing model generates a human-readable answer based on the alpha-numeric and vector values.
  • the system determines whether the one or more data sources include information relevant to the question (Operation 224). For example, the system may analyze text data in one or more data repositories or in online publications. In one embodiment, the system identifies query content in a user-generated question, such as key words, topics, and subjects. The system may search text in stored answers, articles, and other publications to identify relevant information. In addition, or in the alternative, the system may generate embeddings of text content in previously-stored answers and other published content, such as online publications. A similarity machine learning model may be applied to the embeddings to identify relevant content.
  • Upon identifying relevant information, the system generates a system-generated answer to respond to the user-generated question (Operation 226).
  • the system-generated answer utilizes relevant information from one or more data sources combined with human-readable text to structure the relevant information in an answer-type format.
  • the system summarizes information from an information source (Operation 228). For example, if the relevant content is longer than a pre-defined length, such as three sentences, the system summarizes the relevant content.
  • a text-summarizing machine learning model identifies important sentences in a body of relevant content.
  • the system generates a summary of the relevant content using only the important sentences.
  • a machine learning model identifies the subject of text and generates one or more new sentences, not previously found in the text, to summarize the text.
  • the system may provide a link to the relevant content to allow a user to access the additional content that is omitted from the answer by the system.
  • the system combines information from two or more data sources to generate an answer (Operation 230).
  • the system may identify two online articles having relevant content. The system may determine that neither article completely answers the user-generated question, and that the information in the combination of articles completely answers the user-generated question. Accordingly, the system may combine the content from the two articles into one answer.
  • the system uses an answer-generating machine learning engine to generate a human-readable answer to the user-generated question using the combined content from the two or more data sources.
  • the system identifies two or more user-generated answers as having content relevant to the user-generated question.
  • the system combines the content from the multiple user-generated answers to generate a single answer (Operation 232). Combining the content includes omitting portions of the previously-generated answers that are not relevant to the user-generated question and adding text to present the answer in a human-readable format.
  • the system generates a confidence score for the system-generated answer to the user generated question (Operation 234).
  • a machine learning model is applied to an embedding of the user-generated question and an embedding of the system generated answer to generate the confidence score.
  • the system may take a dot product of the embeddings and the machine learning model may generate the confidence score based on the dot product.
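  • One way to realize Operation 234 is sketched below, under the assumption that the confidence score is a logistic function of the question/answer dot product (the mapping and its scale and bias parameters are assumptions):

```python
# Sketch of Operation 234: map the dot product of the question and answer
# embeddings to a confidence score in (0, 1). The logistic mapping and its
# scale/bias parameters are assumptions.
import torch

def confidence_score(question_vec, answer_vec, scale=1.0, bias=0.0):
    dot = torch.dot(question_vec, answer_vec)
    return torch.sigmoid(scale * dot + bias).item()
```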
  • the confidence score is based on feedback.
  • the system-generated answer may be displayed in the discussion environment. Users may rate the answer based on its usefulness to answer the question.
  • the system may generate and update the confidence score based on the user input over time. For example, an answer may have a high confidence score initially, but over time the answer may become less relevant, and the confidence score, as indicated by the user input, may decrease.
  • Other factors that may influence a confidence score include a reliability of a data source, a staleness of the information in the data source, an identity of a data source, and an understandability of the data source.
  • the system determines whether the confidence score associated with the system generated answer meets a threshold (Operation 236).
  • a threshold may correspond to a ninety-percent likelihood that the system-generated answer is adequate to answer the user generated question.
  • the system displays the system generated answer in the discussion environment (Operation 238).
  • the system may post a comment in a discussion environment using an account name associated with a bot to answer user questions.
  • the system obtains user feedback in the discussion environment (Operation 240).
  • the system may ask the user asking the user-generated question whether the answer was adequate to answer the user-generated question.
  • the system may receive additional feedback including user comments, and selections of a “like,” “approve,” or “helpful” icon.
  • the system may utilize a machine learning model to analyze text of comments that reply to, or follow, the system generated answer.
  • the system may generate one or more additional questions to obtain additional information from the user. For example, the system may ask the user whether the system-generated answer was understandable, whether a solution provided in the system-generated answer worked, whether the information provided in the system-generated answer was complete, or any other question to determine what the user considered to be lacking in the system-generated answer. If the user provides additional information, the system may include the user response in the query data used to identify answer content. For example, in the embodiment in which the system generates an embedding of the question, the system may include the additional user response in the embedding of the question. The system may then return to Operation 218 to determine whether any data sources match the updated question.
  • the system adjusts the confidence score based on the user feedback (Operation 242). For example, positive feedback may result in an increase to the confidence score. Negative feedback may result in a decrease to the confidence score.
  • adjusting the confidence score includes adjusting parameters of a machine learning model. For example, an embedding of the answer may initially result in a first confidence score. Based on user feedback, the system may adjust the parameters of the model, so that applying the model to the embedding results in a different confidence score. The system again determines if the confidence score meets the threshold value (Operation 236).
  • If the system determines in Operation 236 that the confidence score associated with the system-generated answer does not meet the threshold value, the system identifies an expert and requests an expert-generated answer to the user-generated question (Operation 244). In addition, if the system determines in Operation 224 that information relevant to the user-generated question is not found in one or more data sources, the system identifies an expert and requests an expert-generated answer to the user-generated question (Operation 244).
  • a machine learning model matches an expert to a question. For example, the model may be applied to the embedded question, and the result of the model may be the identity of one or more experts. Attributes of the expert that may contribute to selection of the expert by the model include: area of expertise, availability, usefulness of previous answers, level of experience, reputation among users, and reputation among experts.
  • the system displays the expert-generated answer in the discussion environment (Operation 246).
  • the system may also assign a confidence score to the expert-generated answer.
  • the confidence score may be based on how closely the expert’s area of expertise is related to the subject matter of the user-generated question.
  • the system obtains feedback for the expert-generated answer (Operation 248) and adjusts the confidence score based on the user feedback (Operation 250). As discussed previously in connection with the system-generated answer, the system may adjust the confidence score for the expert-generated answer by adjusting parameters of a machine learning model. The system may also adjust the expert qualification attributes of the selected expert. For example, multiple negative responses to the expert-generated answer from users may result in the system downgrading the expert’s ranking in a particular field. On the other hand, multiple positive responses may result in the system upgrading the expert’s ranking in the field.
  • The system determines if the adjusted confidence score meets a threshold (Operation 252). If so, the answer is maintained in the discussion environment until additional feedback is received.
  • If the adjusted confidence score does not meet the threshold, the system requests another answer from another human expert (Operation 254). For example, the system may rank human experts according to their expertise and experience. If the highest-ranked expert provides an answer that is determined to be inadequate, as reflected by a low confidence score, the system may request an answer from a next-ranked human expert. In one embodiment, if a predetermined number of human experts is unable to answer a question, the system may notify a system administrator and may stop contacting human experts.
  • Figure 3 illustrates an example set of operations for training a machine learning model to associate an expert with a question in accordance with one or more embodiments.
  • a system obtains historical data of expert profile information, question/answer pair information, and confidence scores of expert-generated answers (Operation 302).
  • Examples of expert profile information include: areas of expertise, subject matter of questions previously answered by the expert, confidence scores of answers previously provided by the expert, reputation of the expert among users, reputation of the expert among other experts, quality of answers generated by the expert, and level of experience of the expert.
  • Question/answer pairs include embeddings of content of historically-asked questions with historically-provided answers, as well as confidence scores and feedback associated with the historically-provided answers.
  • the system generates a training data set from the historical data (Operation 304).
  • the training data set includes, for each set of question/answer pairs for which an expert generated an answer, the historical confidence score associated with the answer.
  • the system applies a machine learning algorithm to the training data set to train the machine learning model to identify experts associated with particular questions (Operation 306).
  • the system trains the machine learning model to identify the relationships between expert attributes, question attributes, and high confidence scores. For example, the machine learning model may learn that a particular combination of expertise and user reputation corresponds to a high confidence score in one subject. The machine learning model may learn that a different combination of expertise and user reputation corresponds to a high confidence score in another subject.
  • training the machine learning model includes applying an input vector from the training data set to a neural network, obtaining an output value, comparing the output value to a value in the training set, and adjusting parameters of the neural network based on a difference between the value output from the machine learning model and the value identified in the training set.
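  • A minimal PyTorch sketch of the training step just described; the network shape, loss, and optimizer are assumptions, and the target is the historical confidence score drawn from the training set:

```python
# Sketch of Operations 306-308: apply an input vector to a neural network,
# compare its output to the historical confidence score, and adjust the
# parameters from the difference. Layer sizes, loss, and optimizer are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def training_step(input_vec, target_confidence):
    """input_vec: concatenated expert-attribute and question embeddings (size 1024);
    target_confidence: tensor of shape (1,) holding the historical score."""
    prediction = model(input_vec)
    loss = loss_fn(prediction, target_confidence)   # compare output to training value
    optimizer.zero_grad()
    loss.backward()                                  # gradients from the difference
    optimizer.step()                                 # adjust network parameters
    return loss.item()
```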
  • training the machine learning model includes receiving feedback for a value score generated by the machine learning model (Operation 308).
  • the system may display a particular expert associated with a set of subjects.
  • the system may receive one or more inputs to alter which expert should be associated with particular topics.
  • the system updates the machine learning model based on the feedback (Operation 310).
  • FIG. 4 is a graphical user interface (GUI) according to an example embodiment.
  • the GUI 400 displays content for a search engine optimization forum 401.
  • a first user, UserA, posts a question 402 associated with improving their website ranking.
  • the system answers via an account (UserBotl) associated with a bot.
  • the bot monitors the forum and extracts data from entries in the forum. For example, the bot extracts the text of the question 402.
  • the system analyzes the question and generates a response 403.
  • the GUI includes a feedback element 404. A user may interact with the feedback element 404 to either indicate that an answer is helpful or not.
  • Additional users may also answer the question 402.
  • the system may track the feedback 404, 408a, and 409a for the answers 403, 408, and 409. Over time, if the system determines that a particular user generates enough highly-rated answers, the system may identify the user as an expert in a topic.
  • the system generates a confidence score for the answer 403 based on feedback 404 and user responses 405. For example, UserC comments in their reply 405 that the solution provided in the answer 403 no longer works.
  • the system identifies the comment as negative feedback and reduces the confidence score.
  • the system identifies that more users have found the comment 403 unhelpful than helpful, further reducing the confidence score.
  • the system may adjust the confidence score based on a reputation of users providing feedback. For example, if UserC has a high reputation score, the system may reduce the confidence score associated with the answer 403 by a relatively high amount. If UserC has a low reputation score, the system may reduce the confidence score associated with the answer 403 by a relatively low amount.
  • the system may determine when feedback was received to determine how to adjust a confidence score. For example, an answer 403 may receive 1,000 “helpful” feedback responses over a six-month period and no “un-helpful” responses. However, the system may receive 100 “unhelpful” feedback responses and only 10 “helpful” responses in the seventh month. The system may determine that circumstances have changed that have made the answer 403 less helpful within the last month. The system may accordingly reduce the confidence score for the answer by a relatively high amount.
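  • The time-sensitivity described in this example could be captured with recency-weighted feedback, sketched below (the exponential decay and the 30-day half-life are assumptions):

```python
# Sketch of recency-weighted feedback: recent votes count more than old ones,
# so a burst of "unhelpful" responses in the latest month can outweigh a long
# history of "helpful" ones. The decay form and half-life are assumptions.
import math

def recency_weighted_score(votes, half_life_days=30.0):
    """votes: iterable of (age_in_days, +1 for helpful / -1 for unhelpful)."""
    decay = math.log(2) / half_life_days
    return sum(value * math.exp(-decay * age_days) for age_days, value in votes)
```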
  • the system may determine that a new answer is required. The system determines whether the answer may be found in indexed text data or whether an expert is required. The system generates a new answer 406 based on the indexed text data or the expert contribution.
  • the system receives additional feedback 407 and updates the confidence score for the answer 406 accordingly. While the embodiment of Figure 4 shows the answer 406 provided by UserBot2, in an alternative embodiment, an expert may enter an answer directly into the forum 401 without providing an answer to the system. The system may extract the expert answer 406 from the forum 401 using the bot. The system may index the expert answer, generate a confidence score, and update the expert profile based on the confidence score.
  • One or more embodiments integrate automated conversational search with sourcing answers from community experts.
  • a user has a single point of entry to pose a question in a user communication application.
  • the system takes care of answering the question. If the system is unable to answer the question, it obtains help from community experts. New answers are indexed and used to answer future questions.
  • One or more embodiments rely on machine learning models to: identify whether a question can be answered using the existing knowledge base; automatically generate answers from the existing knowledge base; decide between generating a clarifying question and reaching out to experts for help; identify experts on the question topic; distribute requests for help among experts; and enhance an existing knowledge base of indexed question and answer content based on user feedback and expert answers.
  • a user communication management engine is embodied in a computer network providing connectivity among a set of nodes.
  • the nodes may be local to and/or remote from each other.
  • the nodes are connected by a set of links. Examples of links include a coaxial cable, an unshielded twisted cable, a copper cable, an optical fiber, and a virtual link.
  • a subset of nodes implements the computer network. Examples of such nodes include a switch, a router, a firewall, and a network address translator (NAT). Another subset of nodes uses the computer network.
  • Such nodes, also referred to as “hosts,” may execute a client process and/or a server process.
  • a client process makes a request for a computing service (such as, execution of a particular application, and/or storage of a particular amount of data).
  • a server process responds by executing the requested service and/or returning corresponding data.
  • a computer network may be a physical network, including physical nodes connected by physical links.
  • a physical node is any digital device.
  • a physical node may be a function-specific hardware device, such as a hardware switch, a hardware router, a hardware firewall, and a hardware NAT. Additionally or alternatively, a physical node may be a generic machine that is configured to execute various virtual machines and/or applications performing respective functions.
  • a physical link is a physical medium connecting two or more physical nodes. Examples of links include a coaxial cable, an unshielded twisted cable, a copper cable, and an optical fiber.
  • a computer network may be an overlay network.
  • An overlay network is a logical network implemented on top of another network (such as, a physical network).
  • Each node in an overlay network corresponds to a respective node in the underlying network.
  • each node in an overlay network is associated with both an overlay address (to address the overlay node) and an underlay address (to address the underlay node that implements the overlay node).
  • An overlay node may be a digital device and/or a software process (such as a virtual machine, an application instance, or a thread).
  • a link that connects overlay nodes is implemented as a tunnel through the underlying network.
  • the overlay nodes at either end of the tunnel treat the underlying multi-hop path between them as a single logical link. Tunneling is performed through encapsulation and decapsulation.
  • a client may be local to and/or remote from a computer network.
  • the client may access the computer network over other computer networks, such as a private network or the Internet.
  • the client may communicate requests to the computer network using a communications protocol, such as Hypertext Transfer Protocol (HTTP).
  • the requests are communicated through an interface, such as a client interface (such as a web browser), a program interface, or an application programming interface (API).
  • a computer network provides connectivity between clients and network resources.
  • Network resources include hardware and/or software configured to execute server processes. Examples of network resources include a processor, a data storage, a virtual machine, a container, and/or a software application.
  • Network resources are shared amongst multiple clients. Clients request computing services from a computer network independently of each other.
  • Network resources are dynamically assigned to the requests and/or clients on an on-demand basis.
  • Network resources assigned to each request and/or client may be scaled up or down based on, for example, (a) the computing services requested by a particular client, (b) the aggregated computing services requested by a particular tenant, and/or (c) the aggregated computing services requested of the computer network.
  • Such a computer network may be referred to as a “cloud network.”
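  • As a toy illustration of such on-demand scaling (a sketch only, not the claimed mechanism), the function below sizes the number of assigned instances from the aggregate computing services requested; the sizing rule and names are hypothetical.

```python
def target_instance_count(requested_units: int, units_per_instance: int = 100) -> int:
    """Toy sizing rule: scale the assigned network resources up or down with demand."""
    if requested_units <= 0:
        return 0
    return -(-requested_units // units_per_instance)  # ceiling division

# Example: 250 requested units -> 3 instances; 80 units -> 1 instance.
assert target_instance_count(250) == 3
assert target_instance_count(80) == 1
```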
  • a service provider provides a cloud network to one or more end users.
  • Various service models may be implemented by the cloud network, including but not limited to Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS).
  • In SaaS, a service provider provides end users the capability to use the service provider’s applications, which are executing on the network resources.
  • In PaaS, the service provider provides end users the capability to deploy custom applications onto the network resources.
  • the custom applications may be created using programming languages, libraries, services, and tools supported by the service provider.
  • In IaaS, the service provider provides end users the capability to provision processing, storage, networks, and other fundamental computing resources provided by the network resources. Any arbitrary applications, including an operating system, may be deployed on the network resources.
  • various deployment models may be implemented by a computer network, including but not limited to a private cloud, a public cloud, and a hybrid cloud.
  • In a private cloud, network resources are provisioned for exclusive use by a particular group of one or more entities (the term “entity” as used herein refers to a corporation, organization, person, or other entity).
  • the network resources may be local to and/or remote from the premises of the particular group of entities.
  • In a public cloud, cloud resources are provisioned for multiple entities that are independent from each other (also referred to as “tenants” or “customers”).
  • the computer network and the network resources thereof are accessed by clients corresponding to different tenants.
  • Such a computer network may be referred to as a “multi-tenant computer network.”
  • Several tenants may use a same particular network resource at different times and/or at the same time.
  • the network resources may be local to and/or remote from the premises of the tenants.
  • a computer network comprises a private cloud and a public cloud.
  • An interface between the private cloud and the public cloud allows for data and application portability. Data stored at the private cloud and data stored at the public cloud may be exchanged through the interface.
  • Applications implemented at the private cloud and applications implemented at the public cloud may have dependencies on each other. A call from an application at the private cloud to an application at the public cloud (and vice versa) may be executed through the interface.
  • tenants of a multi-tenant computer network are independent of each other.
  • a business or operation of one tenant may be separate from a business or operation of another tenant.
  • Different tenants may demand different network requirements for the computer network. Examples of network requirements include processing speed, amount of data storage, security requirements, performance requirements, throughput requirements, latency requirements, resiliency requirements, Quality of Service (QoS) requirements, tenant isolation, and/or consistency.
  • the same computer network may need to implement different network requirements demanded by different tenants.
  • tenant isolation is implemented to ensure that the applications and/or data of different tenants are not shared with each other.
  • Various tenant isolation approaches may be used.
  • each tenant is associated with a tenant ID.
  • Each network resource of the multi-tenant computer network is tagged with a tenant ID.
  • a tenant is permitted access to a particular network resource only if the tenant and the particular network resource are associated with a same tenant ID.
  • each tenant is associated with a tenant ID.
  • Each application implemented by the computer network is tagged with a tenant ID.
  • each data structure and/or dataset stored by the computer network is tagged with a tenant ID.
  • a tenant is permitted access to a particular application, data structure, and/or dataset only if the tenant and the particular application, data structure, and/or dataset are associated with a same tenant ID.
  • each database implemented by a multi-tenant computer network may be tagged with a tenant ID. Only a tenant associated with the corresponding tenant ID may access data of a particular database.
  • each entry in a database implemented by a multi-tenant computer network may be tagged with a tenant ID. Only a tenant associated with the corresponding tenant ID may access data of a particular entry.
  • the database may be shared by multiple tenants.
  • a subscription list indicates which tenants have authorization to access which applications. For each application, a list of tenant IDs of tenants authorized to access the application is stored. A tenant is permitted access to a particular application only if the tenant ID of the tenant is included in the subscription list corresponding to the particular application.
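  • One way to picture the tenant-ID tagging and subscription-list checks described above is the following sketch; the data structures, names, and values are hypothetical and deliberately simplified.

```python
# Illustrative sketch of tenant isolation checks (hypothetical data and names).
resource_tenant = {"db-orders": "tenant-A", "vm-17": "tenant-B"}   # resource -> tenant ID
subscriptions = {"analytics-app": {"tenant-A", "tenant-C"}}        # app -> authorized tenant IDs

def may_access_resource(tenant_id: str, resource: str) -> bool:
    """A tenant may access a resource only if both are associated with the same tenant ID."""
    return resource_tenant.get(resource) == tenant_id

def may_access_application(tenant_id: str, app: str) -> bool:
    """A tenant may access an application only if its tenant ID appears in the app's subscription list."""
    return tenant_id in subscriptions.get(app, set())

assert may_access_resource("tenant-A", "db-orders") is True
assert may_access_application("tenant-B", "analytics-app") is False
```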
  • network resources (such as digital devices, virtual machines, application instances, and threads) corresponding to different tenants may be isolated to tenant-specific overlay networks.
  • packets from any source device in a tenant overlay network may only be transmitted to other devices within the same tenant overlay network.
  • Encapsulation tunnels are used to prohibit any transmissions from a source device on a tenant overlay network to devices in other tenant overlay networks.
  • the packets received from the source device are encapsulated within an outer packet.
  • the outer packet is transmitted from a first encapsulation tunnel endpoint (in communication with the source device in the tenant overlay network) to a second encapsulation tunnel endpoint (in communication with the destination device in the tenant overlay network).
  • the second encapsulation tunnel endpoint decapsulates the outer packet to obtain the original packet transmitted by the source device.
  • the original packet is transmitted from the second encapsulation tunnel endpoint to the destination device in the same particular overlay network.
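  • The encapsulation and decapsulation performed at the tunnel endpoints can be sketched as below; the packet framing and helper names are hypothetical (a real deployment would use a concrete encapsulation protocol such as VXLAN or GRE rather than this toy framing).

```python
# Toy encapsulation/decapsulation at tunnel endpoints (hypothetical framing).

def encapsulate(inner_packet: bytes, tenant_id: int) -> bytes:
    """First tunnel endpoint: wrap the original packet in an outer packet
    carrying the tenant overlay network identifier."""
    header = tenant_id.to_bytes(4, "big") + len(inner_packet).to_bytes(4, "big")
    return header + inner_packet

def decapsulate(outer_packet: bytes, expected_tenant_id: int) -> bytes:
    """Second tunnel endpoint: recover the original packet, rejecting anything
    that does not belong to the expected tenant overlay network."""
    tenant_id = int.from_bytes(outer_packet[:4], "big")
    length = int.from_bytes(outer_packet[4:8], "big")
    if tenant_id != expected_tenant_id:
        raise ValueError("packet does not belong to this tenant overlay network")
    return outer_packet[8:8 + length]

original = b"hello from the source device"
assert decapsulate(encapsulate(original, tenant_id=42), expected_tenant_id=42) == original
```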
  • Embodiments are directed to a system with one or more devices that include a hardware processor and that are configured to perform any of the operations described herein and/or recited in any of the claims below.
  • a non-transitory computer readable storage medium comprises instructions which, when executed by one or more hardware processors, cause performance of any of the operations described herein and/or recited in any of the claims.
  • the techniques described herein are implemented by one or more special-purpose computing devices.
  • the special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or network processing units (NPUs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination.
  • Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, FPGAs, or NPUs with custom programming to accomplish the techniques.
  • the special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques.
  • Figure 5 is a block diagram that illustrates a computer system 500 upon which an embodiment of the invention may be implemented.
  • Computer system 500 includes a bus 502 or other communication mechanism for communicating information, and a hardware processor 504 coupled with bus 502 for processing information.
  • Hardware processor 504 may be, for example, a general purpose microprocessor.
  • Computer system 500 also includes a main memory 506, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 502 for storing information and instructions to be executed by processor 504.
  • Main memory 506 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504.
  • Such instructions when stored in non-transitory storage media accessible to processor 504, render computer system 500 into a special-purpose machine that is customized to perform the operations specified in the instructions.
  • Computer system 500 further includes a read only memory (ROM) 508 or other static storage device coupled to bus 502 for storing static information and instructions for processor 504.
  • a storage device 510 such as a magnetic disk or optical disk, is provided and coupled to bus 502 for storing information and instructions.
  • Computer system 500 may be coupled via bus 502 to a display 512, such as a cathode ray tube (CRT), for displaying information to a computer user.
  • An input device 514 is coupled to bus 502 for communicating information and command selections to processor 504.
  • Another type of user input device is cursor control 516, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 504 and for controlling cursor movement on display 512.
  • This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
  • Computer system 500 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 500 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 500 in response to processor 504 executing one or more sequences of one or more instructions contained in main memory 506. Such instructions may be read into main memory 506 from another storage medium, such as storage device 510. Execution of the sequences of instructions contained in main memory 506 causes processor 504 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
  • Non-volatile media includes, for example, optical or magnetic disks, such as storage device 510.
  • Volatile media includes dynamic memory, such as main memory 506.
  • Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, content-addressable memory (CAM), and ternary content-addressable memory (TCAM).
  • Storage media is distinct from but may be used in conjunction with transmission media.
  • Transmission media participates in transferring information between storage media.
  • transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 502.
  • transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
  • Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 504 for execution.
  • the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer.
  • the remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem.
  • a modem local to computer system 500 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal.
  • An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 502.
  • Bus 502 carries the data to main memory 506, from which processor 504 retrieves and executes the instructions.
  • Computer system 500 also includes a communication interface 518 coupled to bus 502.
  • Communication interface 518 provides a two-way data communication coupling to a network link 520 that is connected to a local network 522.
  • communication interface 518 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
  • communication interface 518 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • Wireless links may also be implemented.
  • communication interface 518 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
  • Network link 520 typically provides data communication through one or more networks to other data devices.
  • network link 520 may provide a connection through local network 522 to a host computer 524 or to data equipment operated by an Internet Service Provider (ISP) 526.
  • ISP 526 in turn provides data communication services through the worldwide packet data communication network now commonly referred to as the “Internet” 528.
  • Internet 528 uses electrical, electromagnetic, or optical signals that carry digital data streams.
  • the signals through the various networks and the signals on network link 520 and through communication interface 518, which carry the digital data to and from computer system 500, are example forms of transmission media.
  • Computer system 500 can send messages and receive data, including program code, through the network(s), network link 520 and communication interface 518.
  • a server 530 might transmit a requested code for an application program through Internet 528, ISP 526, local network 522 and communication interface 518.
  • the received code may be executed by processor 504 as it is received, and/or stored in storage device 510, or other non-volatile storage for later execution.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Techniques for interacting with users in a discussion environment are disclosed. Upon identifying a question in the discussion environment, a system determines: (a) whether a stored answer has already been associated with the question, (b) whether an answer can be generated by the system using existing information accessible to the system, or (c) whether an expert should be contacted to answer the question. The system updates the knowledge base by storing the questions and answers, together with user feedback regarding the questions and answers. Based on the user feedback, the system determines whether to modify existing answers to user-generated questions or to seek answers from additional human experts.
EP22743953.6A 2021-06-02 2022-05-27 Interaction et édition d'environnement de discussion d'utilisateur par l'intermédiaire de réponses générées par système Pending EP4348450A1 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US202163202240P 2021-06-02 2021-06-02
US17/470,179 US20220391595A1 (en) 2021-06-02 2021-09-09 User discussion environment interaction and curation via system-generated responses
PCT/US2022/031415 WO2022256262A1 (fr) 2021-06-02 2022-05-27 Interaction et édition d'environnement de discussion d'utilisateur par l'intermédiaire de réponses générées par système

Publications (1)

Publication Number Publication Date
EP4348450A1 true EP4348450A1 (fr) 2024-04-10

Family

ID=82608260

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22743953.6A Pending EP4348450A1 (fr) 2021-06-02 2022-05-27 Interaction et édition d'environnement de discussion d'utilisateur par l'intermédiaire de réponses générées par système

Country Status (2)

Country Link
EP (1) EP4348450A1 (fr)
WO (1) WO2022256262A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117370521A (zh) * 2023-10-13 2024-01-09 北京百度网讯科技有限公司 医疗问答方法、系统、装置、设备以及存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7809664B2 (en) * 2007-12-21 2010-10-05 Yahoo! Inc. Automated learning from a question and answering network of humans

Also Published As

Publication number Publication date
WO2022256262A1 (fr) 2022-12-08

Similar Documents

Publication Publication Date Title
US20220391595A1 (en) User discussion environment interaction and curation via system-generated responses
US11734503B2 (en) Generating conversation models from documents
US10963525B2 (en) Artificial intelligence system for providing relevant content queries across unconnected websites via a conversational environment
US10832008B2 (en) Computerized system and method for automatically transforming and providing domain specific chatbot responses
US10878009B2 (en) Translating natural language utterances to keyword search queries
US10394853B2 (en) Providing a self-maintaining automated chat response generator
US10771424B2 (en) Usability and resource efficiency using comment relevance
US20230047212A1 (en) Assistive browsing using context
CN110869925B (zh) 搜索中的多个实体感知的预输入
US20200104427A1 (en) Personalized neural query auto-completion pipeline
US20200004827A1 (en) Generalized linear mixed models for generating recommendations
JP2023017921A (ja) コンテンツ推薦とソートモデルトレーニング方法、装置、機器、記憶媒体及びコンピュータプログラム
US8706909B1 (en) Systems and methods for semantic URL handling
US10853430B1 (en) Automated agent search engine
CN108306813A (zh) 会话消息的处理方法、服务器及客户端
EP4348450A1 (fr) Interaction et édition d'environnement de discussion d'utilisateur par l'intermédiaire de réponses générées par système
US11902223B2 (en) Intelligent assistant content generation
CN114547260A (zh) 在聊天室内提供搜索功能的方法、计算机装置及记录介质
US20200401279A1 (en) User interface for providing machine-learned reviewer recommendations
CN117693744A (zh) 经由系统生成的响应的用户讨论环境交互和策展
US20200174633A1 (en) User interface for optimizing digital page
US20190163798A1 (en) Parser for dynamically updating data for storage
US20240012837A1 (en) Text-triggered database and api actions
US11983209B1 (en) Partitioning documents for contextual search
US10831749B2 (en) Expert discovery using user query navigation paths

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20231229

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)