CN110955766A - Method and system for automatically expanding intelligent customer service standard problem pairs - Google Patents


Info

Publication number: CN110955766A
Application number: CN201911210422.0A
Authority: CN (China)
Prior art keywords: standard, question, customer service, answer, log data
Legal status: Pending (assumed, not a legal conclusion)
Inventors: 蒋亮, 温祖杰, 梁忠平, 张家兴
Current and original assignee: Alipay Hangzhou Information Technology Co Ltd
Other languages: Chinese (zh)
Application filed by Alipay Hangzhou Information Technology Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval of unstructured textual data
    • G06F16/33: Querying
    • G06F16/332: Query formulation
    • G06F16/3329: Natural language query formulation or dialogue systems
    • G06F16/35: Clustering; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of this specification disclose a method and a system for automatically expanding the standard question pairs of an intelligent customer service system. The method comprises: determining at least one candidate question based on human customer service log data, which records user questions and the answers given by human agents to those questions; judging whether a standard question bank, which comprises at least one standard question pair, contains the at least one candidate question, and if not, determining the candidate question as a standard question; determining a standard answer to the standard question based on the human customer service log data; and taking the standard question and the standard answer together as a standard question pair for use by the intelligent customer service system.

Description

Method and system for automatically expanding intelligent customer service standard question pairs
Technical Field
The application relates to the field of data processing, and in particular to a method and a system for automatically expanding intelligent customer service standard question pairs.
Background
With the development of computer technology, automatic question answering systems have become increasingly common and bring many conveniences to daily life. For example, by recognizing a text and/or voice question entered by the user and automatically replying with the corresponding answer, such a system resolves the user's questions and needs.
In general, because a question can be asked in many ways and with many different word choices, a user's question may differ from the standard questions in the system. The automatic question answering system then cannot identify the user's question accurately and quickly, may fail to answer or even return a wrong answer, and degrades the user's experience with the system. How to effectively and automatically expand the intelligent customer service standard question bank has therefore become an urgent technical problem.
Disclosure of Invention
One aspect of the present description provides a method for automatically expanding intelligent customer service standard question pairs. The method comprises: determining at least one candidate question based on human customer service log data, which records user questions and the answers given by human agents to those questions; judging whether a standard question bank contains the at least one candidate question and, if not, determining the candidate question as a standard question, the standard question bank comprising at least one standard question pair; determining a standard answer to the standard question based on the human customer service log data; and taking the standard question and the standard answer as a standard question pair for use by the intelligent customer service.
Another aspect of the specification provides a system for automatically expanding intelligent customer service standard question pairs. The system comprises: a candidate question determination module configured to determine at least one candidate question based on human customer service log data, which records user questions and the answers given by human agents to those questions; a standard question determination module configured to judge whether the standard question bank contains the at least one candidate question and, if not, to determine the candidate question as a standard question, the standard question bank comprising at least one standard question pair; a standard answer determination module configured to determine a standard answer to the standard question based on the human customer service log data; and a standard question pair determination module configured to take the standard question and the standard answer as a standard question pair for use by the intelligent customer service.
Another aspect of the present specification provides an apparatus for automatically expanding intelligent customer service standard question pairs, comprising at least one storage medium and at least one processor, the storage medium storing computer instructions and the processor being configured to execute them so as to perform the method for automatically expanding intelligent customer service standard question pairs.
Another aspect of the present specification provides a computer-readable storage medium storing computer instructions which, when read by a computer, cause the computer to perform the method for automatically expanding intelligent customer service standard question pairs.
Drawings
The present description is further illustrated by exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in them, like numerals indicate like structures, wherein:
FIG. 1 is a schematic diagram of an exemplary standard question pair, according to some embodiments of the present description;
FIG. 2 is a block diagram of an exemplary system for automatically expanding intelligent customer service standard question pairs, according to some embodiments of the present description;
FIG. 3 is an exemplary flow diagram of a method for automatically expanding intelligent customer service standard question pairs, according to some embodiments of the present description;
FIG. 4 is an exemplary flow diagram of a method for determining candidate questions based on clustering, according to some embodiments of the present description;
FIG. 5 is an exemplary flow diagram of a method for determining candidate questions based on frequency, according to some embodiments of the present description;
FIG. 6 is an exemplary flow diagram of a method for determining candidate questions based on the domain, according to some embodiments of the present description;
FIG. 7 is an exemplary flow diagram of a method for judging whether the standard question bank contains a candidate question, according to some embodiments of the present description; and
FIG. 8 is an exemplary flow diagram of a method for obtaining the answers to a standard question, according to some embodiments of the present description.
Detailed Description
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments are briefly introduced below. The drawings in the following description are only examples or embodiments of the present description; a person skilled in the art can apply the present description to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that the terms "system", "device", "unit", and/or "module" as used in this specification are one way of distinguishing different components, elements, parts, or assemblies at different levels; other words may be substituted if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a", "an", and "the" include plural referents unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are present; these steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by systems according to embodiments of the present description. It should be understood that the operations need not be performed exactly in the order shown; instead, steps may be processed in reverse order or simultaneously. Other operations may also be added to the flows, or one or more steps removed from them.
Fig. 1 is a schematic diagram of an exemplary standard question pair according to some embodiments of the present description.
A standard question is a question in the intelligent customer service system that expresses a user intention. The user intention is the meaning that the question entered by the user is meant to convey; in other words, it is the doubt the user hopes to have resolved by entering the question.
A standard answer is the answer in the intelligent customer service system that corresponds to a standard question. For example, for the standard question "How do I repay Huabei?", the standard answer may be "Step 1: open Alipay; Step 2: go to the lower-right tab and tap 'Huabei'; Step 3: tap 'Repay now'."
In some embodiments, the wording of the standard questions and standard answers may be normalized into a fixed format.
A standard question pair is a text pair consisting of a standard question and its corresponding standard answer. For example, "How do I repay Huabei?" and "Step 1: open Alipay; Step 2: go to the lower-right tab and tap 'Huabei'; Step 3: tap 'Repay now'" form a standard question pair. As another example, "How long is the cooling-off period of Haoyibao medical insurance?" and "15 days" form a standard question pair.
In some embodiments, the standard questions, standard answers, and/or standard question pairs may belong to a particular industry, including but not limited to the financial industry, insurance industry, internet industry, automotive industry, catering industry, telecommunications industry, energy industry, entertainment industry, sports industry, logistics industry, medical industry, security industry, and the like.
In some embodiments, the standard questions are stored in a standard question bank together with their corresponding standard answers, where the standard question bank is a knowledge base consisting of a large number of standard question pairs. When a standard question A is retrieved from the bank, its corresponding standard answer A can be retrieved at the same time. In some embodiments, after a user enters a question, the standard question corresponding to the entered question can be determined from the standard question bank and the corresponding answer output to the user. For example, if the user enters "I want to ask how I should repay Huabei?", the standard answer of the matching standard question, "Step 1: open Alipay; Step 2: go to the lower-right tab and tap 'Huabei'; Step 3: tap 'Repay now'", is returned as the answer to the user's question.
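The lookup described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name, the similarity measure, and the threshold value are all assumptions:

```python
from difflib import SequenceMatcher

def answer_user_question(user_question, standard_pairs, threshold=0.5):
    """Return the stored standard answer of the most similar standard
    question, or None when nothing in the bank is close enough.
    standard_pairs is a list of (standard_question, standard_answer)."""
    best_answer, best_score = None, 0.0
    for question, answer in standard_pairs:
        score = SequenceMatcher(None, user_question.lower(), question.lower()).ratio()
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer if best_score >= threshold else None
```

A real system would use a learned semantic matcher rather than plain string similarity, but the control flow (retrieve the closest standard question, return its paired answer) is the same.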
In some embodiments, the standard questions in the standard question bank follow a long-tail distribution: the majority (e.g., 80%) of the standard questions cover only a small portion (e.g., 20%) of the questions users enter, while a small portion (e.g., 20%) of the standard questions cover the large majority (e.g., 80%) of them. Therefore, in order to answer as many user questions as possible from the standard question bank, and in particular to cover the long tail of rare user questions, which requires a large number of standard questions, the bank must collect a large number of standard questions.
Human customer service log data records user questions and the answers given by human agents to those questions. A user question is a question entered by the user in the customer service system; in some embodiments, user questions may relate to various industries. The user questions in the log data may include standard questions, variant questions that express the same content as a standard question in a different way, and other questions. For example, besides the standard question "What is the repayment period of Ant Huabei?", the logs may contain its variant "I want to ask how long I have to repay Huabei", as well as unrelated questions such as "How is the weather today?". The answers given by the human agents correspond to the user questions, immediately follow them, and may include standard answers, variants of standard answers, and other answers.
In some embodiments, the human customer service log data may be text data. For example, when a user interacts with a human agent by text, the log data may be the text generated by the interaction; when the user interacts by voice, the log data may be text converted from the recorded speech. In some embodiments, the voice data may be converted into text by speech recognition. Specifically, the conversion may combine an acoustic model with a language model: the acoustic model maps acoustic features of the speech to phonemes or word units, and the language model decodes the words into complete sentences.
In some embodiments, a question (a standard question, a variant of a standard question, or another question) may occur multiple times in the human customer service log data, and the answers to the same question need not be identical: they may be the standard answer or a variant of it. For example, the question "What is the repayment period of Ant Huabei?" may be answered with the standard answer "60 days", or with variants such as "It must be cleared within 60 days" or "It cannot exceed 60 days".
FIG. 2 is a block diagram of an exemplary automatically augmented intelligent customer service standard problem pair system, according to some embodiments of the present description.
The system can be used in an online service platform for internet services. In some embodiments, the system 200 may be used in an online service platform that includes an intelligent response system, such as an e-commerce platform, an online consulting platform, or a public service platform.
As shown in fig. 2, the system may include a candidate question determination module 210, a standard question determination module 220, a standard answer determination module 230, and a standard question pair determination module 240.
The candidate question determination module 210 may be configured to determine at least one candidate question based on human customer service log data, which records user questions and the answers given by human agents to those questions.
In some embodiments, the candidate question determination module 210 may be configured to obtain the user questions in the human customer service log data, cluster the user questions to determine at least one cluster, and determine the center point of the at least one cluster as the at least one candidate question. In some embodiments, the module may use any clustering algorithm or model, such as K-Means, Expectation Maximization (EM), or a Gaussian Mixture Model (GMM). In some embodiments, the clustering may be based on text similarity.
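As a rough sketch of text-similarity clustering with a center point, the snippet below uses a greedy single-pass scheme over character bigrams as a simple stand-in for the K-Means or EM clustering the module may use; the threshold and representation are assumptions:

```python
def bigrams(text):
    """Character bigrams of a question, a cheap text representation."""
    return {text[i:i + 2] for i in range(len(text) - 1)}

def jaccard(a, b):
    """Jaccard text similarity over character bigrams."""
    sa, sb = bigrams(a.lower()), bigrams(b.lower())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def cluster_questions(questions, threshold=0.3):
    """Greedy single-pass clustering: a question joins the first cluster
    whose first member is similar enough, otherwise it opens a new cluster."""
    clusters = []
    for q in questions:
        for cluster in clusters:
            if jaccard(q, cluster[0]) >= threshold:
                cluster.append(q)
                break
        else:
            clusters.append([q])
    return clusters

def center_point(cluster):
    """The member with the highest total similarity to the other members,
    matching the notion of a cluster center used above."""
    if len(cluster) == 1:
        return cluster[0]
    return max(cluster,
               key=lambda q: sum(jaccard(q, o) for o in cluster if o is not q))
```

The center point of each cluster would then be taken as one candidate question.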
In some embodiments, the candidate question determination module 210 may be further configured to obtain the user questions in the human customer service log data, count the frequency with which each user question occurs in the log data, and determine the user questions whose frequency is higher than a first threshold as the at least one candidate question. The frequency may be counted manually or algorithmically.
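The frequency-based selection can be sketched in a few lines; the normalisation and the first-threshold value here are illustrative assumptions:

```python
from collections import Counter

def frequent_candidates(user_questions, first_threshold=3):
    """Count the occurrences of each (normalised) user question in the log
    data and keep those whose frequency is higher than the first threshold."""
    counts = Counter(q.strip().lower() for q in user_questions)
    return sorted(q for q, n in counts.items() if n > first_threshold)
```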
In some embodiments, the candidate question determination module 210 may be further configured to obtain the user questions in the human customer service log data, judge whether each user question belongs to a target domain, and, if so, determine the user question as the at least one candidate question. In some embodiments, the module may determine the domain of a user question based on keywords.
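A keyword-based domain test can be as simple as the following; the keyword list is purely hypothetical and stands in for whatever domain vocabulary the system maintains:

```python
# Hypothetical keyword list for a financial target domain.
FINANCE_KEYWORDS = {"repay", "loan", "interest", "huabei", "insurance"}

def in_target_domain(question, keywords=FINANCE_KEYWORDS):
    """The question is kept as a candidate if it mentions any keyword
    of the target domain."""
    text = question.lower()
    return any(kw in text for kw in keywords)
```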
The standard question determination module 220 may be configured to judge whether the standard question bank contains the at least one candidate question and, if not, to determine the candidate question as a standard question; the standard question bank comprises at least one standard question pair. Specifically, a first similarity between the at least one candidate question and the questions in the standard question bank is calculated and compared with a second threshold; if it is not higher than the second threshold, the candidate question is determined to be a standard question. In some embodiments, if the first similarity is higher than the second threshold, the module judges whether the standard answer of the matched standard question differs from the answer recorded in the human customer service log data, and updates the standard answer based on the result. Specifically, when they differ, a human reviewer can further determine which answer is more accurate; if the answer in the log data is more accurate, it replaces the standard answer.
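The containment check can be sketched as below; plain string similarity is only a stand-in for the patent's first similarity, and the second-threshold value is an assumption:

```python
from difflib import SequenceMatcher

def first_similarity(a, b):
    """A plain string similarity standing in for the 'first similarity'."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_new_standard_question(candidate, standard_questions, second_threshold=0.8):
    """The candidate becomes a new standard question only when no existing
    standard question exceeds the second threshold in similarity to it."""
    return all(first_similarity(candidate, s) <= second_threshold
               for s in standard_questions)
```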
The standard answer determination module 230 may be configured to determine a standard answer to the standard question based on the human customer service log data. Specifically, at least one answer to the standard question is obtained from the log data, a summary of the at least one answer is extracted, and the summary is determined as the standard answer. In some embodiments, the summary may be extracted by an algorithm, such as the extractive TextRank algorithm or an abstractive summarization model (implemented, for example, in TensorFlow). In some embodiments, the standard answer determination module 230 may instead combine the answers to the standard question and use the combined text as the standard answer.
The standard answer determination module 230 may be configured to obtain at least one answer to a standard question. In some embodiments, it may determine the relevance between the standard question and each answer given by a human agent in the log data, and take the answers whose relevance satisfies a preset requirement, for example a relevance above a set threshold, as the at least one answer. In some embodiments, the relevance may be the text similarity or semantic similarity between the standard question and the answer. In some embodiments, the module may instead take the answer given by the human agent immediately after the standard question in the log data as the at least one answer.
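The relevance filter can be sketched as follows; token overlap is just one simple stand-in for the text or semantic similarity mentioned above, and the threshold is an assumption:

```python
def relevance(question, answer):
    """Fraction of the question's tokens that also appear in the answer."""
    q_tokens = set(question.lower().split())
    a_tokens = set(answer.lower().split())
    return len(q_tokens & a_tokens) / len(q_tokens) if q_tokens else 0.0

def relevant_answers(standard_question, logged_answers, min_relevance=0.3):
    """Keep the agents' answers whose relevance to the standard question
    meets the preset requirement."""
    return [a for a in logged_answers
            if relevance(standard_question, a) >= min_relevance]
```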
The standard question pair determination module 240 may be configured to treat the standard question and the standard answer as a standard question pair for intelligent customer service.
It should be understood that the system and its modules shown in FIG. 2 may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of the two. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer-executable instructions and/or processor control code, provided, for example, on a carrier medium such as a diskette, CD- or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules may be implemented not only by hardware circuits such as very-large-scale integrated circuits, gate arrays, logic chips, and transistors, or by programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of such hardware circuits and software (e.g., firmware).
It should be noted that the above description of the system 200 for automatically expanding intelligent customer service standard question pairs and its modules is for convenience of description only and is not intended to limit the present disclosure to the illustrated embodiments. It will be appreciated by those skilled in the art that, given the principles of the system, modules may be combined or connected to other modules in various configurations without departing from those principles. For example, the candidate question determination module 210, the standard question determination module 220, the standard answer determination module 230, and the standard question pair determination module 240 disclosed in FIG. 2 may be separate modules in one system, or a single module may implement the functions of two or more of them. As another example, the modules in system 200 may share one storage module, or each module may have its own storage module. Such variations are within the scope of the present disclosure.
FIG. 3 is an exemplary flow diagram of a method for automatically expanding intelligent customer service standard question pairs, according to some embodiments of the present description. As shown in FIG. 3, the method 300 for automatically expanding intelligent customer service standard question pairs comprises:
Step 302: determine at least one candidate question based on human customer service log data, which records user questions and the answers given by human agents to those questions. In particular, step 302 may be performed by the candidate question determination module 210.
The human customer service log data records user questions and the answers given by human agents to those questions. In some embodiments, the log data may be obtained from an online platform (e.g., a website or an application); for example, the intelligent response system of a website or application may be accessed to obtain the log data within it. In some embodiments, the log data may be read directly from a storage device. The log data may also be obtained in any other manner; this embodiment is not limited in this respect.
A candidate question is a question that may become a standard question. In some embodiments, candidate questions originate from the user questions in the human customer service log data.
In some embodiments, at least one candidate question may be determined from the human customer service log data by clustering. Specifically, the user questions in the log data are obtained first, the user questions are then clustered to determine at least one cluster, and the center point of the at least one cluster is determined as the at least one candidate question. For example, the clustering may use an algorithm such as K-Means or Expectation Maximization, and the point in a cluster whose average similarity with the other points is largest may be taken as the center point. For more details on determining candidate questions by clustering, refer to FIG. 4 and its related description, which are not repeated here.
In some embodiments, at least one candidate question may be determined based on the frequency with which user questions appear in the human customer service log data. Specifically, the user questions in the log data are obtained first, the frequency of occurrence of each user question is then counted, and the user questions whose frequency is higher than a first threshold are determined as the at least one candidate question. For example, the number of occurrences of a user question may be counted manually or algorithmically (e.g., with a hash table). For more details on determining candidate questions based on frequency, refer to FIG. 5 and its related description, which are not repeated here.
In some embodiments, at least one candidate question may be determined based on the target domain of the user questions in the human customer service log data. Specifically, the user questions in the log data are obtained first, it is then judged whether each user question belongs to a target domain, and, if so, the user question is determined as the at least one candidate question. For example, the domain of a question may be determined based on its keywords. For more details on determining candidate questions based on the domain, refer to FIG. 6 and its related description, which are not repeated here.
In some embodiments, the candidate questions may be determined by other methods, which this embodiment does not limit. For example, at least one candidate question may be determined based on the human customer service log data of a set period: the user questions in the log data of the last 5 months, the last year, and so on may be determined as the at least one candidate question.
Step 304: judge whether the standard question bank contains the at least one candidate question and, if not, determine the candidate question as a standard question; the standard question bank comprises at least one standard question pair. In particular, step 304 may be performed by the standard question determination module 220.
In some embodiments, whether the standard question bank contains the at least one candidate question may be judged by the similarity of the at least one candidate question to the standard questions in the bank. For more details on judging whether the standard question bank contains the at least one candidate question, refer to FIG. 7 and its description, which are not repeated here.
Step 306: determine a standard answer to the standard question based on the human customer service log data. In particular, step 306 may be performed by the standard answer determination module 230.
As illustrated in FIG. 1, the human customer service log data records user questions and the answers given by human agents to those questions. In some embodiments, after one or more user questions in the log data are determined to be standard questions, the agents' answers to those user questions may be used as answers to the standard questions. For example, the answer given by the agent immediately after a user question in the log data may be taken as the answer to that question and thus to the standard question. As another example, the answer to a user question may be selected based on the relevance between the question and the answers given by the agents, and that answer used as the answer to the standard question. For more details on determining the answer to a standard question based on relevance, refer to FIG. 8 and its description, which are not repeated here.
As described in fig. 1, a question may appear multiple times in the manual log data, and the answer given each time may differ. For example, the user question "how many days is the hesitation period of this medical insurance" may appear twice in the manual log data, with the manual customer service answering "15 days" and "the insurance hesitation period is 15 days", respectively; the two answers have the same meaning but differ in wording. Thus, one or more answers to a standard question may be obtained from the manual log data. In some embodiments, a summary may be extracted from the one or more answers to a standard question and used as the standard answer to that question.
Specifically, the summary can be extracted by an algorithm. For example, the multiple answers to the same standard question can be segmented into their constituent sentences by the extractive text summarization algorithm TextRank: a node-connection graph is constructed with the similarity between sentences as the edge weights, the TextRank value of each sentence is computed by iterating until convergence, and the highest-ranked sentences are finally extracted and combined into the summary. For another example, key sentences or phrases can be extracted from the text of the multiple answers to the same standard question by an abstractive text summarization algorithm (e.g., one implemented with TensorFlow) and spliced into a summary. In some embodiments, the summaries of the multiple answers may also be extracted by other algorithms, and this embodiment is not limited thereto.
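A minimal, self-contained sketch of the extractive TextRank idea just described — sentences as graph nodes, similarity as edge weights, iterative scoring, top sentences kept — assuming plain word-overlap similarity as a stand-in for whatever sentence-similarity measure a production system would use:

```python
import re
from itertools import combinations

def textrank_summary(answers, top_k=2, d=0.85, iters=50):
    """Toy TextRank: split answers into sentences, score them on a similarity graph."""
    sents = [s.strip() for a in answers for s in re.split(r"[.!?]", a) if s.strip()]
    words = [set(s.lower().split()) for s in sents]
    n = len(sents)
    # edge weights: word-overlap (Jaccard) similarity between sentence pairs
    w = [[0.0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        union = len(words[i] | words[j]) or 1
        w[i][j] = w[j][i] = len(words[i] & words[j]) / union
    # iterate the TextRank update a fixed number of times
    score = [1.0] * n
    for _ in range(iters):
        score = [
            (1 - d) + d * sum(
                w[j][i] / (sum(w[j]) or 1.0) * score[j]
                for j in range(n) if j != i and w[j][i] > 0
            )
            for i in range(n)
        ]
    # return the top-ranked sentences in their original order
    top = sorted(sorted(range(n), key=lambda i: -score[i])[:top_k])
    return ". ".join(sents[i] for i in top)

answers = ["The hesitation period is 15 days.",
           "It is 15 days.",
           "Refunds take 60 days."]
print(textrank_summary(answers, top_k=1))
```

The example answers are fabricated; with `top_k=1` the function returns the single sentence most similar, on average, to the rest of the answer pool.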
In some embodiments, multiple answers to the same standard question may be combined directly, and the combined text used as the answer to the standard question. For example, the answers "must be cleared within 60 days" and "cannot exceed 60 days" may be combined into "must be cleared within 60 days, cannot exceed 60 days". Specifically, the combination can be realized by sentence fusion (i.e., combining two or more related sentences containing overlapping content into one sentence): first, the parts to be fused are selected based on word importance and the splitting of the sentences of the multiple answers; then, the information shared by the sentences is merged based on word alignment; finally, the fused sentence is generated using integer linear programming over the dependency relations, a bigram language model, and word importance. In some embodiments, the answers may be combined in other manners, and this embodiment is not limited thereto.
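Full sentence fusion (word alignment, ILP-based generation) is well beyond a short example, but the direct-combination fallback can be sketched in a few lines — simple concatenation with exact-duplicate removal, nothing more:

```python
def combine_answers(answers):
    """Naive combination: concatenate distinct answers, dropping exact duplicates.
    A real fusion pipeline (alignment + ILP) would also merge overlapping clauses."""
    seen, parts = set(), []
    for a in answers:
        key = a.strip().rstrip(".")
        if key and key not in seen:
            seen.add(key)
            parts.append(key)
    return ", ".join(parts)

print(combine_answers(["must be cleared within 60 days", "cannot exceed 60 days"]))
# -> "must be cleared within 60 days, cannot exceed 60 days"
```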
In some embodiments, the standard answer may be manually reviewed to confirm whether the standard answer is correct, and if not, the standard answer may be manually updated.
And 308, taking the standard question and the standard answer as a standard question pair, wherein the standard question pair is used for intelligent customer service. Specifically, step 308 may be performed by the standard question pair determination module 240.
In some embodiments, after the standard questions and the standard answers are used as standard question pairs, the standard question pairs may be added to a standard question bank as an update to the standard question bank.
As shown in fig. 1, the standard questions in the standard question bank may exhibit a long-tail distribution, that is, a small portion of the standard questions may correspond to a large portion of the user questions, while a large portion of the standard questions correspond to only a small portion of the user questions. The above embodiments can alleviate this long-tail problem. Specifically, questions raised frequently by users in the manual log data, or the questions corresponding to cluster center points, are taken as standard questions; such standard questions can cover more user questions, increasing the proportion of user questions covered by the standard questions.
FIG. 4 is an exemplary flow diagram of a method for determining candidate problems based on clustering, shown in accordance with some embodiments of the present description. As shown in FIG. 4, the cluster-based candidate problem determination method 400 includes:
step 402, obtaining the user question in the manual customer service log data. In particular, step 402 may be performed by the candidate problem determination module 210.
In some embodiments, manual customer service log data may be collected by accessing the client, proxy server, and Web server to extract questions (i.e., user questions) entered by the user therein.
Step 404, clustering the user questions, and determining at least one cluster. In particular, step 404 may be performed by the candidate problem determination module 210.
Clustering is the process of grouping similar objects (e.g., user questions) into classes, with one cluster for each class. In some embodiments, the clustering may be performed by any algorithm or model capable of grouping objects. Clustering algorithms include, but are not limited to, K-Means, Expectation Maximization (EM), Gaussian Mixture Models (GMM), and the like.
In some embodiments, sentences are clustered based on sentence similarity. For example, similarity may be computed in a vector space: a sentence is regarded as a linear sequence of words, feature words are selected, and the correlation between the feature words and each word is calculated, so that each sentence obtains a feature-word vector; the similarity between these feature vectors is taken as the similarity between sentences. For another example, semantic similarity between sentences may be obtained by establishing sentence dependency structures using a semantic knowledge base and a dependency grammar, and calculating the semantic similarity of sememes and words based on semantic computation.
For example, given a value of K and K initial cluster center points (i.e., K user questions), each remaining point (i.e., each user question other than the initial center points) is first assigned to the cluster represented by the initial center point closest to it. After all points have been assigned, the center point of each cluster is recalculated from all the points in that cluster, and the steps of assigning points and updating cluster centers are iterated until the cluster centers change little or a specified number of iterations is reached, yielding the final K clusters.
Step 406, determining a center point of the at least one cluster as the at least one candidate question. In particular, step 406 may be performed by the candidate problem determination module 210.
In some embodiments, the center points of the plurality of clusters obtained by clustering may be used as candidate questions. In some embodiments, the point in a cluster whose average similarity to the other points in that cluster is largest may be taken as the center point.
Continuing with the K-means clustering example, the center points of the finally obtained K clusters are taken as candidate questions; these are the center points determined after the iterations are completed.
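The center-point selection described above — the member with the highest average similarity to the rest of its cluster — can be sketched as follows, using word-overlap (Jaccard) similarity as a stand-in for whatever sentence-similarity measure the clustering itself uses:

```python
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / (len(wa | wb) or 1)

def cluster_center(questions, sim):
    """Pick as 'center' the member whose average similarity to the others is highest."""
    best, best_avg = None, -1.0
    for q in questions:
        others = [p for p in questions if p is not q]
        avg = sum(sim(q, p) for p in others) / max(len(others), 1)
        if avg > best_avg:
            best, best_avg = q, avg
    return best

# fabricated cluster of near-duplicate user questions
cluster = ["how do I reset my password",
           "how can I reset my password",
           "password reset not working"]
print(cluster_center(cluster, jaccard))
```

The first two members tie on average similarity, so the first encountered is returned; the outlier phrasing scores lowest and is never chosen.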
Fig. 5 is an exemplary flow diagram of a method of determining candidate problems based on frequency, shown in accordance with some embodiments of the present description. As shown in fig. 5, the frequency-based candidate problem determination method 500 includes:
step 502, obtaining the user question in the manual customer service log data. In particular, step 502 may be performed by the candidate problem determination module 210.
The method for obtaining the user question can refer to the description of step 402 in fig. 4, and is not described herein again.
Step 504, counting the frequency of the user problems in the manual customer service log data. In particular, step 504 may be performed by the candidate problem determination module 210.
In some embodiments, the number of occurrences of each question in the manual log data may be counted manually. In some embodiments, the counting may also be performed by algorithms, which may include, but are not limited to, hash tables, prefix trees (Tries), and the like.
Step 506, determining the user question with the frequency higher than a first threshold value as the at least one candidate question. In particular, step 506 may be performed by candidate problem determination module 210.
The first threshold is used to define the minimum number of occurrences of user problems. E.g., 10 times, 20 times, etc. In some embodiments, user questions that occur in the manual customer service log with a frequency above a first threshold may be determined as candidate questions.
In some embodiments, the frequency and the first threshold may refer to a number of occurrences within a preset time period. For example, within 1 month, half year, etc. Accordingly, the user questions with the frequency higher than the first threshold value in the manual customer service log data within the preset time are determined as candidate questions. For example, a user question that appears more frequently than 50 times within 1 month is considered as a candidate question.
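The frequency rule above can be sketched with a hash-table count (Python's `Counter`) over a sliding time window; the `(question, timestamp)` log shape and the example questions are assumptions made for illustration:

```python
from collections import Counter
from datetime import datetime, timedelta
from typing import Iterable, List, Optional, Tuple

def frequent_questions(log: Iterable[Tuple[str, datetime]],
                       threshold: int, window_days: int = 30,
                       now: Optional[datetime] = None) -> List[str]:
    """Return user questions asked more than `threshold` times within the window."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=window_days)
    counts = Counter(q for q, ts in log if ts >= cutoff)
    return [q for q, c in counts.items() if c > threshold]

now = datetime(2020, 1, 31)
log = [("how do I cancel my policy?", datetime(2020, 1, 30))] * 3 \
    + [("old question", datetime(2019, 6, 1))] * 5
print(frequent_questions(log, threshold=2, window_days=30, now=now))
```

The stale question is counted five times overall but falls outside the 30-day window, so only the recent question passes the first threshold.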
FIG. 6 is an exemplary flow diagram of a domain-based candidate problem determination method shown in accordance with some embodiments of the present description. As shown in FIG. 6, the domain-based candidate problem determination method 600 includes:
step 602, obtaining the user question in the manual customer service log data. In particular, step 602 may be performed by the candidate problem determination module 210.
The method for obtaining the user question can refer to the description of step 402 in fig. 4, and is not described herein again.
Step 604, judging whether the user question belongs to the target field. In particular, step 604 may be performed by the candidate problem determination module 210.
The target field may be one or a combination of the financial field, the insurance field, the internet field, the automobile field, the catering field, the telecommunications field, the energy field, the entertainment field, the sports field, the logistics field, the medical field, the security field, and the like. The target field can be set according to different requirements or scenarios. For example, if the target field is set to the insurance field, the user question "how long is this medical insurance hesitation period" may be determined to belong to the insurance field, and the user question may further be determined as a candidate question.
In some embodiments, whether the user question belongs to the target field may be determined based on keywords of the user question. In some embodiments, the determination may be accomplished by a classification model, which may refer to a model, algorithm, neural network, etc., capable of mapping samples of unknown classes to one or more given classes based on characteristics of the data, where the given classes may be set in advance. The classification model may include, but is not limited to, a Multi-Layer Perceptron (MLP), a Decision Tree (DT), a Deep Neural Network (DNN), a Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and any other algorithm or model that can classify text.
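A keyword-based domain check — the simplest of the options listed — might look like the following; the keyword list is purely illustrative, and a trained classifier (SVM, MLP, etc.) would replace it in practice:

```python
# illustrative keyword list for an assumed "insurance" target field
INSURANCE_KEYWORDS = {"insurance", "premium", "policy", "claim", "hesitation"}

def in_target_domain(question: str, keywords=INSURANCE_KEYWORDS) -> bool:
    """Crude membership test: does the question share any word with the keyword set?"""
    return bool(keywords & set(question.lower().split()))

print(in_target_domain("how long is this medical insurance hesitation period"))  # True
print(in_target_domain("what is the weather like today"))                        # False
```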
And step 606, if yes, determining the user question as the at least one candidate question. In particular, step 606 may be performed by the candidate problem determination module 210.
In some embodiments, the candidate question is determined based on a determination of whether the user question belongs to the target domain. Specifically, a user question belonging to the target field is determined as a candidate question.
FIG. 7 is an exemplary flow chart of a method for determining whether a library of standard questions contains the at least one candidate question, according to some embodiments of the present description. As shown in fig. 7, the method 700 for determining whether the question bank of criteria contains the at least one candidate question includes:
step 702, calculating a first similarity between the at least one candidate question and the questions in the standard question bank. In particular, step 702 may be performed by the standard issue determination module 220.
The first similarity may refer to a text similarity of the at least one candidate question to questions in the standard question bank.
In some embodiments, calculating a first similarity of the at least one candidate question to questions in the standard question bank may be accomplished by a text similarity algorithm, which may refer to a model, algorithm, neural network, etc., capable of measuring a similarity of two or more texts.
In some embodiments, the first similarity may be calculated based on keyword matching. For example, the Jaccard algorithm calculates the ratio of the intersection to the union of the word sets of two sentences; the larger the value, the higher the similarity between the sentences. For another example, the N-gram algorithm divides each sentence into all of its substrings of length N; the more similar the substring sets of the two sentences, the higher the similarity between the sentences. In some embodiments, the first similarity may be calculated based on a vector space. For example, with distributed representations (Distributed Representation), each word is mapped to a fixed-length vector and all the words form a word-vector space; each vector is regarded as a point in the space, and the similarity between sentences is judged by the distance between the points: the closer the distance, the higher the similarity. In some embodiments, the first similarity may also be obtained in other manners, and this embodiment is not limited thereto.
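The Jaccard and character N-gram measures mentioned above can each be written in a few lines:

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Ratio of intersection to union of the two sentences' word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / (len(wa | wb) or 1)

def ngram_similarity(a: str, b: str, n: int = 2) -> float:
    """Overlap of the sentences' character substrings of length n."""
    ga = {a[i:i + n] for i in range(len(a) - n + 1)}
    gb = {b[i:i + n] for i in range(len(b) - n + 1)}
    return len(ga & gb) / (len(ga | gb) or 1)

print(jaccard_similarity("how long is the hesitation period",
                         "how many days is the hesitation period"))
```

Either score can be compared against the second threshold of step 704 to decide whether a candidate question already exists in the standard question bank.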
Step 704, determining whether the first similarity is higher than a second threshold. In particular, step 704 may be performed by standard problem determination module 220.
The second threshold is used to define the lowest criterion for judging that the candidate question has the same meaning as a question in the standard question bank, e.g., 0.9 or 0.95. If the first similarity of a candidate question is higher than the second threshold, a question with the same or a similar meaning to the candidate question already exists in the standard question bank.
Step 706, otherwise, determining the candidate question as the standard question. In particular, step 706 may be performed by the standard issue determination module 220.
In some embodiments, the standard question is determined according to a determination of whether the first similarity is higher than a second threshold. Specifically, the candidate problem of which the first similarity is not higher than a second threshold is determined as the standard problem.
In some embodiments, if the first similarity of the candidate question is above the second threshold, the standard answer to the matching standard question in the standard question bank may be compared with the answer given by the manual customer service to that question in the manual log data, to further determine whether the standard answer in the standard question bank should be updated. Specifically, if the two differ, it is judged which of the standard answer and the answer in the manual log data is more accurate; in response to the answer in the manual log data being more accurate, it may be used to replace the standard answer, thereby improving the accuracy of the standard answers. In some embodiments, the difference between the standard answer and the manual customer service answer may be determined manually or algorithmically. For example, the difference may be determined by algorithmically calculating the text similarity: if the similarity is higher than a set threshold, no difference exists; otherwise, a difference exists. In some embodiments, if there is a difference, which answer is more accurate may be determined manually.
Fig. 8 is an exemplary flow diagram illustrating a method for obtaining answers to standard questions, in accordance with some embodiments of the present description. As shown in fig. 8, the method 800 for obtaining an answer to a standard question includes:
step 802, determining whether the standard question is related to the answer given by the artificial customer service in the artificial customer service log data. In particular, step 802 may be performed by the standard answer determination module 230.
In some embodiments, the relevance may be determined based on the semantic relevance of the standard question to an answer given by the manual customer service in the manual customer service log data. Specifically, the judgment of relevance may be implemented by a semantic matching model. For example, a Skip Convolution Neural Network based on Lexical Semantic Features (LSF-SCNN) may be used. Specifically, the model introduces three optimization strategies: lexical semantic features, skip convolution, and K-Max average pooling, extracting richer semantic feature similarity scores at word granularity, phrase granularity, and sentence granularity respectively, to obtain the relevance between the standard question and the answer given by the manual customer service in the manual customer service log data. For another example, embeddings of the standard question and of the answer given by the manual customer service may be built with a Bi-directional Long Short-Term Memory model (BiLSTM), and the relevance between the standard question and the answer in the manual customer service log data measured by cosine similarity. In some embodiments, the determination of relevance may also be based on the text similarity of the standard question to answers in the manual customer service log data.
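The embedding-plus-cosine-similarity route can be sketched with a toy bag-of-words encoder standing in for the BiLSTM; `embed` and `vocab` here are illustrative placeholders, not part of the described system:

```python
import math
from typing import List

def cosine(u: List[float], v: List[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is all-zero)."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv) if nu and nv else 0.0

def embed(text: str, vocab: List[str]) -> List[float]:
    """Toy encoder: word counts over a fixed vocabulary (stand-in for a BiLSTM)."""
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

vocab = ["hesitation", "period", "days", "refund"]
q = embed("how long is the hesitation period", vocab)
a = embed("the hesitation period is 15 days", vocab)
print(round(cosine(q, a), 2))  # high score -> answer is kept; low score -> discarded
```

A relevance threshold (e.g., the "greater than 0.9" preset requirement in step 804) would then decide whether the answer is retained for the standard question.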
And step 804, taking an answer given by the artificial customer service with the correlation meeting a preset requirement as the at least one answer. In particular, step 804 may be performed by the standard answer determination module 230.
The preset requirements may be used to define a minimum criterion for which the standard question is relevant to an answer given by the artificial customer service in the artificial customer service log data. For example, the correlation is greater than 0.9, etc.
In some embodiments, the at least one answer may be determined based on a determination of a relevance of the standard question to an answer given by the artificial customer service in the artificial customer service log data. Specifically, if the correlation between the standard question and the answer given by the artificial customer service in the artificial customer service log data satisfies the preset requirement, which indicates that the standard question is correlated with the answer given by the artificial customer service in the artificial customer service log data, the answer given by the artificial customer service may be used as the at least one answer.
Step 806, discarding answers given by the artificial customer service whose relevance does not meet preset requirements and supplementing answers. In particular, step 806 may be performed by the standard answer determination module 230.
In some embodiments, if the correlation between the standard question and an answer given by the manual customer service in the manual customer service log data does not satisfy the preset requirement, indicating that the answer is not relevant to the standard question, that answer may be discarded. In some embodiments, for a standard question whose corresponding manual customer service answers fail to meet the preset relevance requirement, an answer may be supplemented manually and taken as the standard answer.
The embodiment of the present specification further provides an apparatus, which at least includes a processor and a memory. The memory is to store instructions. The instructions, when executed by the processor, cause the apparatus to implement the aforementioned method of automatically augmenting pairs of intelligent customer service criteria questions. The method may include: determining at least one candidate question based on artificial customer service log data, the artificial customer service log data recording user questions and answers given by artificial customer service to the user questions; judging whether a standard question bank contains at least one candidate question or not, and determining the candidate question as a standard question if not; the standard question bank comprises at least one standard question pair; determining a standard answer to the standard question based on the artificial customer service log data; and taking the standard question and the standard answer as a standard question pair, wherein the standard question pair is used for intelligent customer service.
The embodiment of the specification also provides a computer readable storage medium. The storage medium stores computer instructions, and after the computer reads the computer instructions in the storage medium, the computer realizes the method for automatically expanding the intelligent customer service standard problem pairs. The method may include: determining at least one candidate question based on artificial customer service log data, the artificial customer service log data recording user questions and answers given by artificial customer service to the user questions; judging whether a standard question bank contains at least one candidate question or not, and determining the candidate question as a standard question if not; the standard question bank comprises at least one standard question pair; determining a standard answer to the standard question based on the artificial customer service log data; and taking the standard question and the standard answer as a standard question pair, wherein the standard question pair is used for intelligent customer service.
The beneficial effects that may be brought by the embodiments of the present description include, but are not limited to: (1) the embodiments in the specification expand the intelligent customer service standard question pairs in an automated manner, avoiding manual determination of standard questions and reducing labor cost; (2) by adding standard question pairs for the intelligent customer service, the coverage of user questions by standard questions can be improved and the long-tail distribution of standard questions in manual customer service can be alleviated, thereby improving the service capacity and efficiency of the intelligent customer service; (3) the standard answers are updated by comparing the standard answers in the standard question bank with the answers given by the customer service in the manual log data, thereby improving the accuracy of the standard answers. It is to be noted that different embodiments may produce different advantages; in different embodiments, any one or a combination of the above advantages, or any other advantages, may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the description uses specific words to describe embodiments of the description. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present description may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereof. Accordingly, aspects of this description may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as "data block," module, "" engine, "" unit, "" component, "or" system. Furthermore, aspects of the present description may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of this specification may be written in any one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, and the like, a conventional programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, a dynamic programming language such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any network format, such as a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet), in a cloud computing environment, or as a service, such as software as a service (SaaS).
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, claimed embodiments may have fewer than all of the features of a single embodiment disclosed above.
Numerals describing the number of components, attributes, etc. are used in some embodiments, it being understood that such numerals used in the description of the embodiments are modified in some instances by the use of the modifier "about", "approximately" or "substantially". Unless otherwise indicated, "about", "approximately" or "substantially" indicates that the number allows a variation of ± 20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameter should take into account the specified significant digits and employ a general digit preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the range are approximations, in the specific examples, such numerical values are set forth as precisely as possible within the scope of the application.
For each patent, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited in this specification, the entire contents are hereby incorporated by reference, except for any application history document that is inconsistent with or conflicts with the contents of this specification, and except for any document that would limit the broadest scope of the claims of this specification (whether presently appended or added later). It is to be understood that if the descriptions, definitions, and/or uses of terms in the materials accompanying this specification are inconsistent with or contrary to those set forth herein, the descriptions, definitions, and/or uses of terms in this specification shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (16)

1. A method of automatically augmenting intelligent customer service standard question pairs, comprising:
determining at least one candidate question based on human customer service log data, the human customer service log data recording user questions and the answers given by human customer service agents to the user questions;
judging whether a standard question bank contains the at least one candidate question, and if not, determining the candidate question as a standard question, the standard question bank comprising at least one standard question pair;
determining a standard answer to the standard question based on the human customer service log data; and
taking the standard question and the standard answer as a standard question pair, the standard question pair being used for intelligent customer service.
2. The method of claim 1, wherein determining the at least one candidate question based on the human customer service log data comprises:
acquiring the user questions in the human customer service log data;
clustering the user questions to determine at least one cluster; and
determining a center point of the at least one cluster as the at least one candidate question.
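For illustration only, the clustering-and-center-point step of claim 2 could be sketched as follows. The claim does not specify a clustering algorithm or a similarity measure, so the greedy token-overlap grouping, the Jaccard measure, and the 0.5 threshold below are all hypothetical stand-ins; the "center point" is taken as the member most similar on average to the rest of its cluster.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two token sets (hypothetical measure)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def cluster_questions(questions, threshold=0.5):
    """Greedy single-pass clustering: each question joins the first cluster
    whose representative it resembles enough, else starts a new cluster."""
    clusters = []  # each cluster is a list of (question, token_set) pairs
    for q in questions:
        toks = set(q.lower().split())
        for c in clusters:
            if jaccard(toks, c[0][1]) >= threshold:
                c.append((q, toks))
                break
        else:
            clusters.append([(q, toks)])
    return clusters

def cluster_centers(clusters):
    """Per cluster, pick the member with the highest total similarity to the
    other members as the cluster's 'center point' (the candidate question)."""
    centers = []
    for c in clusters:
        best = max(c, key=lambda m: sum(jaccard(m[1], o[1]) for o in c if o is not m))
        centers.append(best[0])
    return centers

logs = [
    "how do i reset my password",
    "how can i reset my password",
    "where is my refund",
]
candidates = cluster_centers(cluster_questions(logs))
```

A production system could equally satisfy the claim language with, e.g., k-means over sentence embeddings; the claim covers any clustering whose cluster centers become candidate questions.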
3. The method of claim 1, wherein determining the at least one candidate question based on the human customer service log data comprises:
acquiring the user questions in the human customer service log data;
counting the frequency of occurrence of each user question in the human customer service log data; and
determining a user question whose frequency is above a first threshold as the at least one candidate question.
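The frequency-threshold step of claim 3 reduces to counting normalized questions. A minimal sketch, assuming "frequency above a first threshold" means a strict count comparison and that lowercasing is enough normalization (real logs would need paraphrase grouping, e.g. via claim 2's clustering):

```python
from collections import Counter

def frequent_candidates(user_questions, first_threshold=1):
    """Keep the user questions whose occurrence count in the log data
    exceeds the (hypothetical) first threshold."""
    counts = Counter(q.strip().lower() for q in user_questions)
    return [q for q, n in counts.items() if n > first_threshold]

logs = [
    "How do I reset my password?",
    "how do i reset my password?",
    "Where is my refund?",
]
candidates = frequent_candidates(logs)
```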
4. The method of claim 1, wherein determining the at least one candidate question based on the human customer service log data comprises:
acquiring the user questions in the human customer service log data;
judging whether a user question belongs to a target domain; and
if so, determining the user question as the at least one candidate question.
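Claim 4 leaves the domain test open. As a purely illustrative stand-in for a trained domain classifier, a keyword lexicon can decide membership; the `PAYMENT_TERMS` set below is hypothetical, not part of the patent:

```python
# Hypothetical lexicon for a payments target domain (illustration only).
PAYMENT_TERMS = {"refund", "payment", "invoice", "charge", "billing"}

def in_target_domain(question: str, lexicon=PAYMENT_TERMS) -> bool:
    """A user question 'belongs to the target domain' here if it mentions
    any domain keyword; a deployed system would use a trained classifier."""
    tokens = set(question.lower().replace("?", " ").split())
    return bool(tokens & lexicon)
```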
5. The method of claim 1, wherein judging whether the standard question bank contains the at least one candidate question, and if not, determining the candidate question as the standard question, comprises:
calculating a first similarity between the at least one candidate question and the questions in the standard question bank;
judging whether the first similarity is higher than a second threshold; and
if not, determining the candidate question as the standard question.
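Claim 5's containment test could be sketched as below, with a character-level string similarity standing in for the unspecified "first similarity" and 0.8 as a hypothetical "second threshold"; the patent itself fixes neither the measure nor the value.

```python
import difflib

def is_new_standard_question(candidate, question_bank, second_threshold=0.8):
    """Promote the candidate only if no existing standard question is more
    similar to it than the second threshold (claim 5's 'if not' branch)."""
    for q in question_bank:
        sim = difflib.SequenceMatcher(None, candidate.lower(), q.lower()).ratio()
        if sim > second_threshold:
            return False  # already covered by an existing standard question
    return True

bank = ["how do i reset my password"]
```

In practice the "first similarity" would more likely be computed over sentence embeddings than raw characters; the threshold comparison is the same either way.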
6. The method of claim 1, wherein determining the standard answer to the standard question based on the human customer service log data comprises:
obtaining at least one answer to the standard question from the human customer service log data;
extracting a summary of the at least one answer; and
determining the summary as the standard answer to the standard question.
7. The method of claim 6, wherein obtaining the at least one answer to the standard question from the human customer service log data comprises:
judging the relevance between the standard question and each answer given by the human customer service in the human customer service log data; and
taking an answer given by the human customer service whose relevance meets a preset requirement as the at least one answer.
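Claims 6 and 7 together could be sketched as a relevance filter followed by summary extraction. The token-overlap relevance measure, the 0.3 "preset requirement", and the leading-sentence "summary" below are hypothetical simplifications; a deployed system would more plausibly use an extractive or abstractive summarization model.

```python
def relevant_answers(standard_question, qa_log, min_overlap=0.3):
    """Claim 7's relevance check, sketched as token overlap between the
    standard question and the logged question each answer responded to."""
    q_toks = set(standard_question.lower().split())
    picked = []
    for logged_q, answer in qa_log:
        overlap = len(q_toks & set(logged_q.lower().split())) / len(q_toks)
        if overlap >= min_overlap:
            picked.append(answer)
    return picked

def summarize(answers, max_sentences=1):
    """Claim 6's 'summary', crudely approximated as the leading sentence(s)
    of the most frequent relevant answer."""
    if not answers:
        return ""
    most_common = max(set(answers), key=answers.count)
    sentences = [s.strip() for s in most_common.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

log = [
    ("how do i reset my password", "Open settings. Tap security. Choose reset password."),
    ("where is my refund", "Refunds take 3-5 business days."),
]
standard_answer = summarize(relevant_answers("how to reset password", log))
```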
8. A system for automatically augmenting intelligent customer service standard question pairs, comprising:
a candidate question determination module configured to determine at least one candidate question based on human customer service log data, the human customer service log data recording user questions and the answers given by human customer service agents to the user questions;
a standard question determination module configured to judge whether a standard question bank contains the at least one candidate question and, if not, to determine the candidate question as a standard question, the standard question bank comprising at least one standard question pair;
a standard answer determination module configured to determine a standard answer to the standard question based on the human customer service log data; and
a standard question pair determination module configured to take the standard question and the standard answer as a standard question pair, the standard question pair being used for intelligent customer service.
9. The system of claim 8, wherein the candidate question determination module is further configured to:
acquire the user questions in the human customer service log data;
cluster the user questions to determine at least one cluster; and
determine a center point of the at least one cluster as the at least one candidate question.
10. The system of claim 8, wherein the candidate question determination module is further configured to:
acquire the user questions in the human customer service log data;
count the frequency of occurrence of each user question in the human customer service log data; and
determine a user question whose frequency is above a first threshold as the at least one candidate question.
11. The system of claim 8, wherein the candidate question determination module is further configured to:
acquire the user questions in the human customer service log data;
judge whether a user question belongs to a target domain; and
if so, determine the user question as the at least one candidate question.
12. The system of claim 8, wherein the standard question determination module is further configured to:
calculate a first similarity between the at least one candidate question and the questions in the standard question bank;
judge whether the first similarity is higher than a second threshold; and
if not, determine the candidate question as the standard question.
13. The system of claim 8, wherein the standard answer determination module is further configured to:
obtain at least one answer to the standard question from the human customer service log data;
extract a summary of the at least one answer; and
determine the summary as the standard answer to the standard question.
14. The system of claim 13, wherein the standard answer determination module is further configured to:
judge the relevance between the standard question and each answer given by the human customer service in the human customer service log data; and
take an answer given by the human customer service whose relevance meets a preset requirement as the at least one answer.
15. An apparatus for automatically augmenting intelligent customer service standard question pairs, comprising at least one storage medium and at least one processor, the at least one storage medium storing computer instructions, and the at least one processor being configured to execute the computer instructions to implement the method of any one of claims 1-7.
16. A computer-readable storage medium storing computer instructions which, when read by a computer, cause the computer to perform the method of any one of claims 1 to 7.
CN201911210422.0A 2019-11-29 2019-11-29 Method and system for automatically expanding intelligent customer service standard problem pairs Pending CN110955766A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911210422.0A CN110955766A (en) 2019-11-29 2019-11-29 Method and system for automatically expanding intelligent customer service standard problem pairs

Publications (1)

Publication Number Publication Date
CN110955766A true CN110955766A (en) 2020-04-03

Family

ID=69979171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911210422.0A Pending CN110955766A (en) 2019-11-29 2019-11-29 Method and system for automatically expanding intelligent customer service standard problem pairs

Country Status (1)

Country Link
CN (1) CN110955766A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106155522A (en) * 2016-06-29 2016-11-23 上海智臻智能网络科技股份有限公司 Session data process, knowledge base foundation, optimization, exchange method and device
US20170193086A1 (en) * 2015-12-31 2017-07-06 Shanghai Xiaoi Robot Technology Co., Ltd. Methods, devices, and systems for constructing intelligent knowledge base
CN107329967A (en) * 2017-05-12 2017-11-07 北京邮电大学 Question answering system and method based on deep learning
CN108804567A (en) * 2018-05-22 2018-11-13 平安科技(深圳)有限公司 Improve method, equipment, storage medium and the device of intelligent customer service response rate
CN109033270A (en) * 2018-07-09 2018-12-18 深圳追科技有限公司 A method of service knowledge base is constructed based on artificial customer service log automatically
CN109858626A (en) * 2019-01-23 2019-06-07 三角兽(北京)科技有限公司 A kind of construction of knowledge base method and device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111221945A (en) * 2020-04-24 2020-06-02 支付宝(杭州)信息技术有限公司 Method and device for generating standard question based on user question
CN111221945B (en) * 2020-04-24 2020-08-04 支付宝(杭州)信息技术有限公司 Method and device for generating standard question based on user question
WO2021114834A1 (en) * 2020-06-24 2021-06-17 平安科技(深圳)有限公司 Customer service question update method and system, terminal device, and computer storage medium
CN113505293A (en) * 2021-06-15 2021-10-15 深圳追一科技有限公司 Information pushing method and device, electronic equipment and storage medium
CN113505293B (en) * 2021-06-15 2024-03-19 深圳追一科技有限公司 Information pushing method and device, electronic equipment and storage medium
CN113761178A (en) * 2021-08-11 2021-12-07 北京三快在线科技有限公司 Data display method and device
CN117113092A (en) * 2023-10-24 2023-11-24 北京睿企信息科技有限公司 Question expansion method based on question-answering task model and storage medium
CN117113092B (en) * 2023-10-24 2024-01-23 北京睿企信息科技有限公司 Question expansion method based on question-answering task model and storage medium
CN117556006A (en) * 2023-11-10 2024-02-13 摩尔线程智能科技(上海)有限责任公司 Standard problem determining method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200403)