WO2020019686A1 - Method and apparatus for session interaction - Google Patents

Method and apparatus for session interaction

Info

Publication number
WO2020019686A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
user
intent
sentence
intention
Prior art date
Application number
PCT/CN2019/071301
Other languages
English (en)
Chinese (zh)
Inventor
周建华
武文杰
陈少昂
孙谷飞
丁薛
邓永庆
王德锋
桑聪聪
杨少文
Original Assignee
众安信息技术服务有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 众安信息技术服务有限公司
Publication of WO2020019686A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08 Insurance

Definitions

  • the present invention relates to the field of computer technology, and in particular, to a method and a device for session interaction.
  • existing human-machine multi-round conversation solutions map the user question against the standard requirements contained in a requirement structure tree, and output the standard requirement content of the matched leaf node.
  • This solution has shortcomings in flexibility and accuracy: it cannot support flexible jumps and calls between multiple conversation processes, nor corpus templates that are dynamically updated in real time, which makes interaction in some scenarios difficult to achieve, and the accuracy of the intent matching model is low.
  • Moreover, the conversation involves not only the user's own portrait but also the portraits of users related to the user, such as customers, referrers, beneficiaries, etc.
  • the technical problem solved by the present invention is to provide a conversation interaction method and device that can more accurately guide customer consultation.
  • one aspect of the present invention provides a method for session interaction, which includes the following steps: obtaining a user sentence; determining whether the user sentence contains a conventional question; if so, retrieving from a database the conventional answer corresponding to the conventional question and outputting it; if not, determining whether the user sentence contains an intent, and if so, retrieving and outputting the conversation flow corresponding to the intent in the database.
  • in this technical solution, two rounds of intent judgment are used to identify the intents in the user sentence and produce corresponding outputs, so that the user's intent can be accurately identified and the customer can be guided more accurately to complete the consultation and follow-up services.
  • the two intent judgments respectively determine whether the user statement directly contains an existing question in the database, and whether the user statement implies a specific intent, so as to prevent a specific intent type from being missed, reduce the probability of recognition errors, and improve the comprehensiveness and accuracy of identification.
  • the method further includes: if not, inferring the intent from the user sentence; judging whether the value obtained by the intent inference is greater than a preset threshold; and if so, retrieving and outputting the conversation flow corresponding to the intent in the database.
  • the manner of the second intent judgment includes two steps, namely determining whether the user sentence contains an intent, and intent guessing. Determining whether the user sentence contains an intent means judging whether the user statement directly includes an intent type, such as "car insurance" or "insurance", and then conducting multiple rounds of conversation according to that intent type.
  • if the user statement does not directly include these intent types, the inventor further provides an intent-guessing step in this technical solution to determine whether the user sentence contains an intent type implicitly. For example, "how long can the car be guaranteed" may point to the "auto insurance" intent type.
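The two-round judgment flow described above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the names `respond`, `faq`, `intent_flows`, and `guess_intent`, the substring matching, and the 0.5 threshold are all assumptions for demonstration.

```python
# Hypothetical sketch of the two-round intent judgment described above.
def respond(sentence, faq, intent_flows, guess_intent, threshold=0.5):
    """Return a reply via: (1) FAQ lookup, (2) direct intent match,
    (3) intent guessing checked against a preset threshold."""
    # Round 1: does the sentence directly contain a conventional question?
    if sentence in faq:
        return faq[sentence]                      # conventional answer
    # Round 2a: does the sentence directly contain an intent type?
    for intent, flow in intent_flows.items():
        if intent in sentence:
            return flow                           # conversation flow
    # Round 2b: intent guessing; accept only above the preset threshold
    intent, score = guess_intent(sentence)
    if score > threshold and intent in intent_flows:
        return intent_flows[intent]
    return None                                   # no intent recognized

faq = {"How much is car insurance a year?": "About 4,000 yuan"}
flows = {"auto insurance": "start auto-insurance conversation flow"}
guess = lambda s: ("auto insurance", 0.8 if "car" in s else 0.1)

print(respond("How much is car insurance a year?", faq, flows, guess))
print(respond("how long can the car be guaranteed", faq, flows, guess))
```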
  • determining whether the user sentence contains a conventional question includes: performing text processing on the obtained user sentence, and determining whether the user sentence contains a conventional question according to a result of the text processing.
  • the manner of text processing includes text segmentation.
  • the preprocessing step for judging a user sentence includes performing text processing on the user sentence, which makes subsequent processing and recognition judgment more convenient and improves the efficiency and accuracy of recognition.
  • the user sentence includes entity information; the entity information includes one or more of the following: sentence vector information, used for training and compiling a sequence of word vectors; general entity information, used to represent general information; and industry entity information, used to represent industry-related information.
  • the user sentence includes entity information to distinguish and judge, and the entity information includes sentence vector information, general entity information, and industry entity information.
  • entity information includes sentence vector information, general entity information, and industry entity information.
  • sentence vector information could be "I had a car accident in Shanghai today. May I ask if the off-site auto insurance claim process is the same as the local one?";
  • general entity information can be time and place information such as "today" and "Shanghai";
  • industry entity information can be industry information such as "auto" and "auto insurance".
  • the user sentence further includes user portrait information, which is used to represent user personal and social relationship information.
  • the system can obtain the user's social relationship by obtaining the user portrait information appearing in the user sentence multiple times.
  • This method can follow the prior-art way of constructing a character relationship map, or the inventive acquisition method designed by the inventor and described below.
  • the system can further improve the session interaction construction of the database based on the user portrait information, so as to further accurately determine the user's intention and implement subsequent information push.
  • the user portrait information includes one or more of personal identification information, personal attribute information, and social relationship information.
  • the method for obtaining user portrait information specifically includes: performing association calculation on the user sentence to obtain association relationships, and obtaining the syntactic dependency relationships and dependency structures in the user sentence.
  • personal identification information, personal attribute information, and social relationship information are then extracted based on the association relationships for iterative triple learning, obtaining a user portrait knowledge map.
  • the obtained data structure has strong network relationships, and forming it requires obtaining or retrieving the attributes of other related nodes, so that the user portrait knowledge map, i.e., the character relationship map mentioned above, can be obtained more accurately.
  • the specific way of performing association calculation on the user sentence is through the POS-CBOW method and improved Word2vec.
  • because entity attributes and entity distribution are integrated in this association calculation, the technical effect of extracting entity association relationships can be achieved.
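As a hedged illustration of what this association calculation feeds into, the sketch below extracts (subject, relation, object) triples with simple regular-expression patterns. The patent's actual POS-CBOW and improved Word2vec methods are not reproduced here; the patterns and relation names are invented for demonstration.

```python
# Illustrative stand-in for association extraction: produce the
# (subject, relation, object) triples that iterative triple learning
# would consume. Patterns and relation names are assumptions.
import re

PATTERNS = [
    (re.compile(r"[Mm]y (\w+) is (\d+) years old"), "age"),
    (re.compile(r"[Mm]y (\w+) .*recovered from (\w+ \w+)"), "disease"),
]

def extract_triples(sentence):
    """Return a list of (subject, relation, object) triples found in the sentence."""
    triples = []
    for pattern, relation in PATTERNS:
        m = pattern.search(sentence)
        if m:
            triples.append((m.group(1), relation, m.group(2)))
    return triples

print(extract_triples("My dad is 66 years old and recovered from bladder cancer last year."))
```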
  • in the first intent judgment, the stitching matrix of the user sentence is matched against a FAQ data set provided in the database; if a corresponding conventional question exists in the FAQ data set, the conventional answer corresponding to that question is output.
  • the FAQ data set refers to a database including conventional questions and the conventional answers corresponding to those questions.
  • a conventional question is, for example, "How much is car insurance a year?", and the corresponding conventional answer may be "4,000 yuan".
  • Matching the sentence vector information, general entity information, and industry entity information obtained from user sentences in the FAQ data set can accurately match the database and improve the matching efficiency.
  • matching the stitching matrix of sentence vector information, general entity information, and industry entity information against the FAQ data set includes: replacing the general entity information and industry entity information in the stitching matrix with the encoding of the top-level entity, and then matching against the FAQ data set.
  • replacing the general entity information and industry entity information with the encoding of the top-level entity allows the content of the data set to be matched more fully; for example, "private car" is replaced with "car", the encoding of its upper-level entity, which further helps accurately identify the matching question and the answer corresponding to that question.
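A minimal sketch of this top-level entity replacement and FAQ matching might look as follows. The entity hierarchy, the Jaccard token-overlap similarity (a stand-in for the similarity comparison model described later in the document), and the 0.5 threshold are all assumptions.

```python
# Illustrative sketch: normalize entities to their top-level encoding,
# then match against FAQ questions. Hierarchy and threshold are assumed.
ENTITY_PARENT = {"private car": "car", "BMW 320Li": "car", "diabetes": "disease"}

def normalize(sentence):
    """Replace known entities with their top-level (upper-layer) entity."""
    for entity, top in ENTITY_PARENT.items():
        sentence = sentence.replace(entity, top)
    return sentence

def match_faq(sentence, faq, min_similarity=0.5):
    """Match the normalized sentence against normalized FAQ questions
    using simple token-overlap (Jaccard) similarity."""
    query = set(normalize(sentence).lower().split())
    best, best_sim = None, 0.0
    for question, answer in faq.items():
        keys = set(normalize(question).lower().split())
        sim = len(query & keys) / len(query | keys)
        if sim > best_sim:
            best, best_sim = answer, sim
    return best if best_sim >= min_similarity else None

faq = {"Can the car be insured?": "Zhongan Auto Insurance covers vehicles under 2 million"}
print(match_faq("Can the BMW 320Li be insured?", faq))
```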
  • determining whether the user sentence includes an intent includes: performing text classification through a CNN model on the stitching matrix of entity information and user portrait information to obtain the intent.
  • the specific process is as follows: an independent sentence S ∈ R^(n×k) of the user session is represented by n words, each as a k-dimensional vector; the corresponding entity and user portrait information is encoded into the dictionary in Word2vec, and the word vectors x_i ∈ R^k are obtained after word segmentation and stop-word removal. The independent sentence can then be expressed as x_(1:n) = x_1 ⊕ x_2 ⊕ … ⊕ x_n, where ⊕ represents the concatenation of the word vectors x_i.
  • the feature information is mined using the N-Gram form of independent sentences.
  • the CNN model is used to define the text convolution kernel W ∈ R^(l×k), where l is the convolution window length.
  • the largest one-dimensional feature vector is stored as feature information, and the n-dimensional vector is obtained from n convolutions and mapped into a global feature vector of a fixed length.
  • a fully connected layer is established and mapped to the h-dimensional intent space.
  • the binary cross-entropy loss function is optimized through supervised learning, and the probabilities output by softmax are mapped into the intent-confidence matrix of the h-dimensional intent space.
  • the output is intent and confidence, and a list of entity sets is stored in memory.
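The CNN steps above (k-dimensional word vectors, a convolution kernel of window length l, max-over-time pooling, a fully connected layer, and softmax over the h-dimensional intent space) can be sketched with numpy as below. Weights are random and only the shapes and mechanics are shown, not the trained model.

```python
# Minimal numpy sketch of the CNN text-classification step described above.
import numpy as np

rng = np.random.default_rng(0)
n, k, l, h = 8, 16, 3, 5           # words, vector dim, window length, intents

X = rng.normal(size=(n, k))        # sentence: n word vectors x_i in R^k
W = rng.normal(size=(l, k))        # text convolution kernel W in R^(l x k)
b = 0.1

# Convolution over each window of l consecutive word vectors (N-gram style)
features = np.array([np.tanh(np.sum(W * X[i:i + l]) + b)
                     for i in range(n - l + 1)])
pooled = features.max()            # keep the largest feature (max pooling)

# In practice many kernels are used; stack their pooled outputs into a
# fixed-length global feature vector, then map to the h-dim intent space.
num_kernels = 4
global_feat = rng.normal(size=num_kernels)   # stand-in for pooled features
Wfc = rng.normal(size=(h, num_kernels))
z = Wfc @ global_feat                        # fully connected layer output

p = np.exp(z - z.max()); p /= p.sum()        # softmax -> intent confidences
print(p.shape, round(float(p.sum()), 6))
```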
  • the type of intention includes one or more of an insurance intention, an underwriting intention, a claims intention, a renewal intention, and a surrender intention.
  • retrieving and outputting a conversation flow corresponding to the intent in the database includes: judging the type of the intent; fetching the required information and obtaining the information according to the type of the intent; and outputting the corresponding scheme according to the information.
  • the specific way of judging the type of intent is to perform a confidence calculation.
  • if the confidence is greater than the set value of a certain intent type, it is determined that the sentence belongs to that intent type.
  • the confidence calculation can more accurately identify the user's intent type.
  • the confidence calculation method is the softmax output of the fully connected layer vector z, i.e., p_i = exp(z_i) / Σ_j exp(z_j). Through this confidence calculation, the judgment result of the intent type can be obtained more accurately.
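A minimal sketch of this confidence judgment follows, with illustrative per-intent set values (the document does not specify the actual thresholds):

```python
# Softmax over the fully connected layer output z, then accept an intent
# type only if its confidence exceeds that type's set value (assumed 0.6).
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

INTENTS = ["insurance", "underwriting", "claims", "renewal", "surrender"]
THRESHOLDS = dict.fromkeys(INTENTS, 0.6)   # illustrative set values

def decide_intent(z):
    p = softmax(z)
    best = max(range(len(p)), key=p.__getitem__)
    intent = INTENTS[best]
    return intent if p[best] > THRESHOLDS[intent] else None

print(decide_intent([4.0, 0.1, 0.2, 0.0, 0.1]))  # confident
print(decide_intent([1.0, 1.0, 1.0, 1.0, 1.0]))  # uncertain
```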
  • the information is obtained from one or both of the entity information and the user portrait information, and/or by inquiring the user and obtaining the information from the answer.
  • in the second intent judgment step, after the type of intent is determined, the solution needs to be output according to that type.
  • Some of the required information can be obtained from one or both of the entity information and user portrait information, which improves the efficiency of information acquisition; in addition, the user can be asked for information, which improves the accuracy of information acquisition and thereby further improves the overall accuracy of the matched output.
  • the information includes one or more of gender, age, license plate number, region, and number of households.
  • the scheme is a recommended insurance scheme.
  • the inferred value is calculated as follows: the question text is first segmented and its words embedded with trained word vectors, then converted into a sentence vector; the stitching matrix of the sentence vector information, entity information, and user portrait information is trained through an LSTM model to extract features.
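The LSTM feature-extraction step can be sketched as a single numpy LSTM layer over the stitched feature matrix. The weights, dimensions, and the final squashing of the features into an inferred value in (0, 1) are assumptions for demonstration, not the trained model.

```python
# Hedged sketch: an LSTM pass over the stitched matrix of sentence-vector,
# entity, and user-portrait features, producing a guessed-intent value.
import numpy as np

def lstm_features(X, d=8, seed=0):
    """Run a single-layer LSTM over the rows of X; return the last hidden state."""
    rng = np.random.default_rng(seed)
    k = X.shape[1]
    Wi, Wf, Wo, Wc = (rng.normal(scale=0.1, size=(d, d + k)) for _ in range(4))
    h = np.zeros(d); c = np.zeros(d)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    for x in X:
        z = np.concatenate([h, x])
        i, f, o = sig(Wi @ z), sig(Wf @ z), sig(Wo @ z)  # input/forget/output gates
        c = f * c + i * np.tanh(Wc @ z)                   # cell state update
        h = o * np.tanh(c)                                # hidden state
    return h

X = np.random.default_rng(1).normal(size=(6, 12))  # stitched feature matrix
feat = lstm_features(X)
guess_value = float(1 / (1 + np.exp(-feat.sum())))  # inferred value in (0, 1)
print(feat.shape, 0.0 < guess_value < 1.0)
```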
  • a conversation interaction device includes: an acquisition module for acquiring a user sentence; a first judgment module for judging whether the user sentence contains a conventional question; a first output module for retrieving from the database and outputting the conventional answer corresponding to the conventional question when the judgment result of the first judgment module is yes; a second judgment module for judging whether the user sentence contains an intent when the judgment result of the first judgment module is no; and a second output module for retrieving and outputting the conversation flow corresponding to the intent in the database when the judgment result of the second judgment module is yes.
  • a guessing module configured to make an intention inference according to a user sentence when the judgment result of the second judgment module is negative
  • a third judgment module which is used to judge whether the value obtained in the intention inference is greater than a preset threshold
  • the third output module is configured to retrieve and output the conversation flow corresponding to the intention in the database when the judgment result of the third judgment module is yes.
  • the conversation interaction method of the present invention recognizes the intentions in user sentences through two rounds of intention judgment to make corresponding outputs respectively, so that the user's intentions can be accurately identified, and the customer can be more accurately guided to complete consultation and subsequent services.
  • the second intent judgment method includes two steps, namely judging whether the user sentence contains an intent, and intent inference. Through this two-step judgment, the user's intent can be identified more accurately, improving the accuracy of intent recognition and avoiding errors and incompleteness in user sentence recognition.
  • the preprocessing step for judging a user sentence includes performing text processing on the user sentence, which makes subsequent processing and recognition judgment more convenient and improves the efficiency and accuracy of recognition.
  • the general entity information and industry entity information are replaced with the encoding of the top-level entity, so that the matching question and the answer corresponding to the question can be further accurately identified.
  • the conversation interaction method of the present invention can obtain related information from one or two of entity information and user portrait information after determining the type of intent, which can improve the efficiency of information acquisition; in addition, it can also be performed to the user. Query and obtain the relevant information to improve the accuracy of the information acquisition, thereby further improving the overall accuracy of the matching output.
  • the conversation interaction method of the present invention realizes the complete extraction of all the information in the context of the user's conversation.
  • through the entity extraction model and relationship extraction, general entities, industry entities, and user portraits are extracted from sentences, and the user's intents and possible intents are learned through deep learning models with higher accuracy.
  • FIG. 1 is a schematic flowchart of an implementation manner of a session interaction method according to the present invention.
  • FIG. 2 is a schematic diagram of a preferred embodiment of a user knowledge map of the conversation interaction method of the present invention.
  • FIG. 3 is a schematic flowchart of a preferred implementation of the second intention judgment of the conversation interaction method of the present invention.
  • FIG. 4 is a schematic flowchart of defining a session flow rule in the present invention.
  • FIG. 5 is a schematic diagram of a preferred process in the step of FIG. 4.
  • FIG. 6 is a schematic structural diagram of a session interaction apparatus according to an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of an implementation manner of a session interaction method according to the present invention.
  • the method 100 includes the following steps, beginning with step 110: obtaining a user sentence.
  • the intentions in the user sentence are identified through two rounds of intention judgments to make corresponding outputs respectively, so that the user's intentions can be accurately identified, and the customer can be more accurately guided to complete the consultation and subsequent services.
  • the two intent judgments respectively determine whether the user statement directly contains an existing question in the database, and whether the user statement implies a specific intent, so as to prevent an intent type implicit in the user statement from being missed, reduce the recognition error rate, and improve the comprehensiveness and accuracy of identification.
  • the specific process of the second intent judgment 130 is: determine whether the user sentence contains an intent; if so, retrieve and output the conversation flow corresponding to the intent in the database; if not, perform intent inference on the user sentence and determine whether the value obtained by the inference is greater than a preset threshold; if so, it is confirmed that an intent exists, and the conversation flow corresponding to the intent is retrieved from the database and output.
  • the method of the second intent judgment includes two steps, namely determining whether the user sentence contains an intent, and intent guessing. Determining whether the user sentence contains an intent refers to determining whether the user sentence directly includes an intent type, such as "car insurance" or "insurance", and then conducting multiple rounds of conversation according to that intent type. If the user statement does not directly include these intent types, the inventor further provides an intent inference step in this technical solution to determine whether the user sentence contains an intent type implicitly; for example, "how long can the car be guaranteed" may point to the "car insurance" intent type.
  • the user's intention can be identified more accurately, the accuracy of identifying the user's intention can be improved, and the errors and incompleteness of user sentence recognition can be reduced.
  • the obtaining of the sentence in step 110 specifically includes: performing text processing on the obtained user sentence.
  • the text processing manner includes text segmentation.
  • the preprocessing step for judging a user sentence includes performing text processing on the user sentence, which makes subsequent processing and recognition judgment more convenient and improves the efficiency and accuracy of recognition.
  • the user sentence includes entity information
  • step 110 includes extracting entity information.
  • entity information includes one or more of the following: sentence vector information used to train and compile word vector sequences; general entity information used to represent general information; industry entity information used to represent industry-related information.
  • the user sentence includes entity information for distinguishing and judging, and the entity information includes sentence vector information, general entity information, and industry entity information.
  • entity information includes sentence vector information, general entity information, and industry entity information.
  • the acquisition of the word vector is completed in the text word segmentation step.
  • An example of sentence vector information could be "I had a car accident in Shanghai today. May I ask if the off-site auto insurance claim process is the same as the local one?"
  • the general entity information can be time and place information such as "today" and "Shanghai"; and the industry entity information can be industry information such as "auto" and "auto insurance".
  • the user sentence further includes user portrait information, which is used to represent the personal and social relationship of the user.
  • step 110 also extracts user portrait information.
  • the system can obtain the user's social relationship by obtaining the user portrait information appearing in the user sentence multiple times.
  • This method can follow the prior-art way of constructing the character relationship map, or the inventive acquisition method designed by the inventor as described below.
  • the system can further improve the session interaction construction of the database based on the user portrait information, so as to further accurately determine the user's intention and subsequent information push.
  • the user portrait information includes one or more of personal identification information, personal attribute information, and social relationship information.
  • the method for obtaining user portrait information specifically includes: performing association calculations on user sentences to obtain association relationships, and obtaining the syntactic dependencies and dependency structures in the user sentences.
  • personal identification information, personal attribute information, and social relationship information are extracted based on the association relationships and subjected to iterative triple learning to obtain a user portrait knowledge map.
  • the obtained data structure has strong network relationships, and its formation requires obtaining or retrieving the attributes of other related nodes, so that the user portrait knowledge map, i.e., the character relationship map described above, can be obtained more accurately.
  • the specific way of performing association calculation on the user sentence is through the POS-CBOW method and improved Word2vec. Because entity attributes and entity distribution are integrated, the technical effect of extracting entity association relationships can be achieved.
  • Figure 2 is a schematic diagram of a preferred embodiment of a user portrait knowledge map composed of user portraits.
  • For example, the user inputs: "My dad is 66 years old this year, and recovered from bladder cancer last year. May I ask whether Xiaoxin's anti-cancer insurance for the elderly can cover him?"
  • entity extraction is performed first, including extraction of general entity information, industry entity information, and user portrait information, together with matrix coding. Specifically, text segmentation can be performed first on the sentence.
  • the POS-CBOW method and improved Word2vec then perform association calculation to obtain the syntactic dependencies and dependency structures; entities, attributes, and relationships are extracted through these relationships, and more templates are learned through triple iteration. For example, from the sentence above, "me" and "dad" are obtained, with the age 66 and the disease bladder cancer. By training the ontology association relationships on the insurance-field corpus, the user portrait knowledge map 200 shown in FIG. 2 is obtained.
  • in practice there are scenarios such as an agent insuring his customers, a user recommending good insurance products to friends, or a customer insuring himself, his parents, and his children, all of which involve related ontologies; it is therefore necessary to establish a user portrait knowledge map through context.
  • this map solves the problem of the complex relationships between the many subjects and entities in a conversation. Building the knowledge map from user portraits handles scenarios such as a user querying "my client's policy", "what did I recommend to my friends", or "what coverage does my family's insurance include?"
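A minimal sketch of such a user portrait knowledge map follows, seeded with the triples from the "my dad" example above; the graph structure and function names are assumptions for illustration.

```python
# Minimal sketch of the user portrait knowledge map built from context:
# entities become nodes, attributes and social relations become edges, so a
# query like "my client's policy" can be resolved through the graph.
from collections import defaultdict

graph = defaultdict(dict)

def add_triple(subject, relation, obj):
    graph[subject][relation] = obj

# Triples learned from "My dad is 66 years old, recovered from bladder cancer"
add_triple("user", "father", "dad")
add_triple("dad", "age", 66)
add_triple("dad", "disease", "bladder cancer")

def resolve(start, *relations):
    """Follow relations through the map, e.g. the user's father's age."""
    node = start
    for rel in relations:
        node = graph[node][rel]
    return node

print(resolve("user", "father", "age"))  # -> 66
```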
  • the specific method of the first intent judgment includes: the database is provided with a FAQ data set; the stitching matrix of sentence vector information, general entity information, and industry entity information is matched against the FAQ data set, and if a corresponding conventional question exists in the FAQ data set, the conventional answer corresponding to that question is output.
  • the FAQ data set refers to a database including general questions and general answers corresponding to the general questions.
  • a conventional question is, for example, "How much is car insurance a year?", and the corresponding conventional answer may be "4,000 yuan".
  • Matching the stitching matrix of sentence vector information, general entity information, and industry entity information from the user sentence against the FAQ data set can accurately match the database and improve matching efficiency.
  • matching the stitching matrix of sentence vector information, general entity information, and industry entity information with the FAQ data set specifically includes: replacing the general entity information and industry entity information in the stitching matrix with the encoding of the top-level entity, and then matching against the FAQ data set. Replacing the general entity information and industry entity information with the encoding of the top-level entity allows the content of the data set to be matched more fully; for example, replacing "private car" with "car", the encoding of the upper-level entity, further helps accurately identify the matching question and the answer corresponding to that question.
  • a stitching matrix composed of sentence vectors, general entities, and industry entities is input, and the entities in the user's question are replaced with the encoding of the top-level entity, such as replacing "diabetes" with "disease" and "BMW 320Li" with "car"; the codes of these top-level entities are then matched against the questions in the QA set, and finally a similarity comparison model finds the QA questions whose similarity exceeds a certain threshold.
  • For example, the user asks "Can the BMW 320Li be insured?", which has the highest similarity with the "Can the car be insured?" template in the QA set; different answers can then be set through QA conditions, such as "Zhongan Auto Insurance can insure vehicles under 2 million".
  • the specific method of the second intent judgment includes: obtaining the intent through text classification with a CNN model according to the stitching matrix of entity information and user portrait information; FIG. 3 illustrates this process.
  • An independent sentence S ∈ R^(n×k) of a user conversation is represented by n words, each as a k-dimensional vector, and the corresponding entity and user portrait information is encoded into the dictionary.
  • Word2vec is used to obtain the word vectors x_i ∈ R^k after segmentation and stop-word removal.
  • The independent sentence can then be expressed as x_(1:n) = x_1 ⊕ x_2 ⊕ … ⊕ x_n, where ⊕ represents the concatenation of the word vectors x_i.
  • the largest one-dimensional feature vector is stored as feature information, and the n-dimensional vector is obtained from n convolutions and mapped into a global feature vector of a fixed length.
  • at the output layer, a fully connected layer is established and mapped to the h-dimensional intent space; the binary cross-entropy loss function is optimized through supervised learning, and the probabilities output by softmax are mapped into the intent-confidence matrix of the h-dimensional intent space.
  • the output is intent and confidence, and a list of entity sets is stored in memory.
  • the type of intention includes one or more of an insurance intention, an underwriting intention, a claims intention, a renewal intention, and a surrender intention.
  • the specific manner of fetching and outputting the conversation process corresponding to the intent in the database includes: judging the type of intent; obtaining the required information and obtaining the information according to the type of intent; according to the Information output corresponding scheme.
  • the specific way of judging the type of intent is to perform a confidence calculation.
  • if the confidence is greater than the set value of a certain intent type, it is determined that the sentence belongs to that intent type. The confidence calculation can more accurately identify the user's intent type.
  • the confidence calculation method is the softmax output of the fully connected layer vector z, i.e., p_i = exp(z_i) / Σ_j exp(z_j).
  • the way to obtain information is to obtain it from one or both of the entity information and user portrait information, and/or to inquire the user about the information and obtain it.
  • in the second intent judgment step, after the type of intent is determined, the solution needs to be output according to that type.
  • the information includes one or more of gender, age, license plate number, region, and number of households.
  • the scheme is an insurance recommendation scheme.
  • FIG. 4 is a schematic flowchart of defining a session flow rule in the present invention.
  • the conversation flow rule 400 contains (intent) triggers, nodes, conditions, and actions.
  • node rules include:
  • Node names 420 are defined, and each node includes corresponding conditions and actions.
  • condition 430: logical expressions such as IF...ELSE are used to implement the mapping and alignment of entities.
  • the mapping and alignment of the entity includes the mapping of entities and user portraits and the alignment between entities.
  • For example, the condition is defined as age < 55; from "my dad was born in '52" in the user portrait, the attribute "born in '52" of "my dad" is obtained, the age in the condition definition is mapped to the "age" of "my dad", and "born in '52" is aligned to 66 years old. From the user portrait, it is then judged that the condition age < 55 is not met.
  • action 440: three types are defined here, namely cards, jumps, and application programming interface (API) return values. Cards, including selection cards, text cards, graphic cards, graphic lists, pictures, and other cards, interact with the user to obtain and map information; the purpose is to collect structured and unstructured external data and to respond with feedback results. Jumps can go to other nodes, to a Uniform Resource Locator (URL), to a human agent, and so on. The API return value returns the collected user portrait information and acquisition requests, such as insurance recommendations, to the server through the API.
  • each node further includes a memory
  • the definition node rule further includes defining a memory
  • the user's question is processed by the intent recognition model to obtain the highest-confidence intent and the corresponding entities. For example, the user enters the question "Can a 50-year-old man be insured?"; the highest-confidence intent obtained is "insure", and the corresponding entities are "50 years old" and "male".
  • the content in the user input related to the entities corresponding to the intent is mapped to those entities as part of the context information of the conversation flow, and stored in a storage medium suitable for high-frequency access as one of the data sources for the first intent judgment in subsequent steps.
  • a trigger triggers multiple rounds of sessions.
  • Node 1 determines whether there is an ID card, obtains it through the API, and reads the user portrait. Node 2 judges the condition: if an ID is obtained, a selection card is given for gender and the flow jumps to node 3, where a selection card is given for age; if no ID is obtained, the flow jumps directly to node 4.
  • Node 4 requires the license plate to be entered. Node 5 judges whether the region is a specified region; if so, it reads the user portrait and jumps to node 6; otherwise, it gives a selection card for choosing the region. Node 6 selects the number of households, and node 7 recommends auto insurance. If the recommendation API fails, an error message is returned.
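The seven-node auto-insurance flow described above could be sketched as below. The region set, portrait field names, and card prompts are illustrative assumptions; only the branching order (ID card → gender/age cards → license plate → region → households → recommendation, with an error message on API failure) follows the text.

```python
SPECIFIED_REGIONS = {"Shanghai", "Beijing"}   # illustrative; the text only says "specified region"

def run_flow(portrait, recommend_api):
    """Walk nodes 1-7: fill slots from the user portrait or ask via cards, then recommend."""
    answers = {}
    if portrait.get("id_card"):                                           # nodes 1-2: ID via API
        answers["gender"] = portrait.get("gender", "ask: gender card")    # node 2 selection card
        answers["age"] = portrait.get("age", "ask: age card")             # node 3 selection card
    answers["license_plate"] = portrait.get("license_plate", "ask: license plate")  # node 4
    if portrait.get("region") in SPECIFIED_REGIONS:                       # node 5: region check
        answers["region"] = portrait["region"]                            # read from user portrait
    else:
        answers["region"] = "ask: region card"                            # selection card for region
    answers["households"] = portrait.get("households", "ask: households card")      # node 6
    try:
        return recommend_api(answers)                                     # node 7: recommendation
    except Exception:
        return "error: recommendation failed"                             # recommendation API failed
```

The try/except around node 7 mirrors the error-message branch when the recommendation API fails.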
  • multiple rounds of conversation can be judged by the logic of the nodes, completing the jumps between nodes. This supports the mapping of entities to user portraits and the alignment of entities, and it also supports rich interactive cards such as selection cards, text cards, graphic cards, graphic lists, and pictures.
  • a session flow configuration consists of multiple interactive step nodes, which include at least a start node and an end node.
  • Each node consists of a node body, a trigger, multiple sets of conditional behaviors, and a memory network.
  • the node body is the key value of the content that a node needs to collect.
  • the node body and the user's input for that body will be added to the conversation flow context in the form of key-value pairs and stored in the storage medium.
  • the structure of the context is shown in Figure 5.
  • a trigger determines whether the node will be executed. When the condition of the trigger is met, the machine program will push the preset content of the node to the user, and the user will continue to input and complete the user interaction at this step.
  • a trigger consists of a trigger body and a trigger condition. There are three types of trigger bodies, which are intent type (identified with @ symbol), entity type (identified with # symbol), and data type (identified with _ symbol).
  • the data type is defined by the user in advance and stored in the memory medium, and a specific namespace in memory is allocated for it in advance.
  • the user-defined data x will be stored in memory with memory.x as the key value.
  • the life cycle of the memory.x key-value pair is equivalent to the entire conversation flow, and its scope of application is from the machine to the user.
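A minimal sketch of this context store — slot key-value pairs plus memory.x entries whose life cycle equals the conversation flow. The class and method names are assumptions for illustration:

```python
class ConversationContext:
    """Key-value context for one conversation flow; memory.* keys live as long as the flow."""

    def __init__(self):
        self._store = {}

    def put_slot(self, key, value):
        self._store[key] = value                 # node body + user input as a key-value pair

    def put_memory(self, name, value):
        self._store["memory." + name] = value    # user-defined data x stored under memory.x

    def get(self, key, default=None):
        return self._store.get(key, default)

    def end_flow(self):
        self._store.clear()                      # life cycle equals the conversation flow
```

Ending the flow clears everything, reflecting that memory.x is scoped to the conversation rather than the user account.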
  • the calculation method of the intent inference value is specifically as follows:
  • the features are extracted through training with an LSTM model.
  • the specific process is that the current input X_t enters a new memory block.
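One LSTM step of the kind referenced — the current input X_t entering a new memory block through the forget, input, and output gates — can be written out with scalar weights. The weight values used in the test are illustrative, not trained parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x_t, h_prev, c_prev, w):
    """One LSTM step: forget/input/output gates plus the candidate memory for x_t."""
    f = sigmoid(w["wf"] * x_t + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x_t + w["ui"] * h_prev + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x_t + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x_t + w["ug"] * h_prev + w["bg"])  # candidate memory
    c_t = f * c_prev + i * g                                   # new memory block
    h_t = o * math.tanh(c_t)                                   # extracted feature
    return h_t, c_t
```

In a real model these weights are vectors/matrices learned during training, and h_t feeds the confidence layer described earlier.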
  • FIG. 6 is a schematic structural diagram of a session interaction apparatus according to an embodiment of the present invention.
  • the conversation interaction device 600 includes an acquisition module 610, a first determination module 620, a first output module 630, a second determination module 640, and a second output module 650.
  • the obtaining module 610 is used to obtain a user sentence; the first judgment module 620 is used to judge whether the user sentence contains a conventional question; the first output module 630 is used to retrieve a conventional answer corresponding to the conventional question from the database and output it when the judgment result of the first judgment module 620 is yes; the second judgment module 640 is used to judge whether the user sentence contains an intention when the judgment result of the first judgment module 620 is no; and the second output module 650 is used to retrieve a conversation flow corresponding to the intention from the database and output it when the judgment result of the second judgment module 640 is yes.
  • the session interaction device 600 further includes a speculation module 660, a third determination module 670, and a third output module 680.
  • the guessing module 660 is used to make an intention inference according to the user sentence when the judgment result of the second judgment module 640 is no;
  • the third judgment module 670 is used to judge whether the value obtained from the intention inference is greater than a preset threshold;
  • the third output module 680 is configured to retrieve and output a conversation flow corresponding to the intention from the database when the judgment result of the third judgment module 670 is yes.
  • the session interaction device 600 further includes a processing module, configured to perform text processing on the user sentence acquired by the obtaining module 610.
  • the first determining module 620 is specifically configured to determine whether the user sentence contains a conventional question according to the processing result of the processing module.
  • the text processing here includes text segmentation.
  • the user sentence includes entity information
  • the entity information includes one or more of the following: sentence vector information, obtained by encoding a sequence of word vectors through training; general entity information, used to represent general information; and industry entity information, used to represent industry-related information.
  • the user sentence further includes user portrait information, which is used to represent personal and social relationships of the user.
  • the user portrait information includes one or more of personal identification information, personal attribute information, and social relationship information.
  • the method for obtaining user portrait information includes: performing association calculations on the user sentences to obtain association relationships; obtaining the syntactic dependencies and dependency structures in the user sentences; extracting personal identification information, personal attribute information, and social relationship information based on the association relationships; and iteratively learning the resulting triples to obtain a user portrait knowledge graph.
  • the specific methods used to perform association calculation on user sentences include the POS-CBOW method and association calculation through an improved Word2vec.
  • the first judgment module 620 is specifically configured to match the stitching matrix of sentence vector information, general entity information, and industry entity information with the FAQ data set in the database.
  • the general entity information in the stitching matrix is matched directly, while the industry entity information is replaced with the encoding of its top-level entity and then matched against the FAQ data set;
  • the first output module 630 is specifically configured to output a conventional answer corresponding to the conventional question when there is a conventional question in the FAQ data set.
  • the second judgment module 640 is specifically configured to perform text classification through a CNN model on the stitching matrix of entity information and user portrait information to obtain the intent.
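A forward pass of such a CNN text classifier over the stitched (concatenated) matrix — 1-D convolution over the token dimension, ReLU, max-over-time pooling, and a softmax output — can be sketched in a few lines. The dimensions and weights are illustrative assumptions; a real model learns them during training.

```python
import math

def conv_maxpool_softmax(matrix, filters, out_w):
    """matrix: list of token vectors (the stitched entity/portrait embedding matrix).
    filters: list of (width, weight-matrix) convolution filters.
    out_w: output-layer weights mapping pooled features to intent logits."""
    pooled = []
    for width, w in filters:
        acts = []
        for t in range(len(matrix) - width + 1):        # slide the filter over tokens
            s = sum(w[i][d] * matrix[t + i][d]
                    for i in range(width) for d in range(len(matrix[0])))
            acts.append(max(0.0, s))                    # ReLU
        pooled.append(max(acts) if acts else 0.0)       # max-over-time pooling
    logits = [sum(row[k] * pooled[k] for k in range(len(pooled))) for row in out_w]
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    return [e / sum(exps) for e in exps]                # intent probabilities
```

The returned probabilities are exactly the confidence values judged against the set threshold earlier in the document.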
  • the second output module 650 is specifically configured to: determine the type of the intent, where the type of the intent includes one or more of an insurance intention, an underwriting intention, a claim intention, a renewal intention, and a surrender intention; retrieve the required information according to the type of the intent and obtain it, where the information includes one or more of gender, age, license plate number, region, and number of households; and output the corresponding scheme according to the information, where the scheme includes an insurance recommendation scheme.
  • the specific way of judging the type of intent here is to calculate the confidence level.
  • the methods for obtaining the information include obtaining it from one or both of the entity information and the user portrait information; the user may also be asked for the information.
  • the session interaction device shown in FIG. 6 may correspond to the session interaction method provided by any of the foregoing embodiments.
  • the specific descriptions and limitations of the session interaction method described above may be applied to the session interaction device, and details are not described herein again.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to a session interaction method. The method comprises: obtaining a user sentence; determining whether the sentence contains a conventional question; if so, retrieving from a database a conventional answer corresponding to the conventional question and outputting it; if not, determining whether the user sentence contains an intention and, if so, retrieving from the database a conversation flow corresponding to the intention and outputting it. The invention makes it possible to effectively identify the user's intention and to perform information guidance and scheme pushing more precisely.
PCT/CN2019/071301 2018-07-27 2019-01-11 Session interaction method and apparatus WO2020019686A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810841590.9 2018-07-27
CN201810841590.9A CN109241251B (zh) 2018-07-27 2018-07-27 一种会话交互方法

Publications (1)

Publication Number Publication Date
WO2020019686A1 true WO2020019686A1 (fr) 2020-01-30

Family

ID=65073119

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/071301 WO2020019686A1 (fr) Session interaction method and apparatus

Country Status (2)

Country Link
CN (1) CN109241251B (fr)
WO (1) WO2020019686A1 (fr)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111324814A (zh) * 2020-03-05 2020-06-23 中国建设银行股份有限公司 智能设备的控制方法、装置及系统
CN111339745A (zh) * 2020-03-06 2020-06-26 京东方科技集团股份有限公司 一种随访报告生成方法、设备、电子设备和存储介质
CN111611362A (zh) * 2020-04-07 2020-09-01 安徽慧医信息科技有限公司 基于树形数据实现系统与用户主动交互的技能开发方法
CN111813896A (zh) * 2020-07-13 2020-10-23 重庆紫光华山智安科技有限公司 文本三元组关系识别方法、装置、训练方法及电子设备
CN111814484A (zh) * 2020-07-03 2020-10-23 海信视像科技股份有限公司 语义识别方法、装置、电子设备及可读存储介质
CN112463959A (zh) * 2020-10-29 2021-03-09 中国人寿保险股份有限公司 一种基于上行短信的业务处理方法及相关设备
CN112669011A (zh) * 2020-12-30 2021-04-16 招联消费金融有限公司 智能对话方法、装置、计算机设备和存储介质
CN112860850A (zh) * 2021-01-21 2021-05-28 平安科技(深圳)有限公司 人机交互方法、装置、设备及存储介质
CN113064986A (zh) * 2021-04-30 2021-07-02 中国平安人寿保险股份有限公司 模型的生成方法、系统、计算机设备和存储介质
CN113127731A (zh) * 2021-03-16 2021-07-16 西安理工大学 一种基于知识图谱的个性化试题推荐方法
CN113434633A (zh) * 2021-06-28 2021-09-24 平安科技(深圳)有限公司 基于头像的社交话题推荐方法、装置、设备及存储介质
CN113743124A (zh) * 2021-08-25 2021-12-03 南京星云数字技术有限公司 一种智能问答异常的处理方法、装置及电子设备
CN116521822A (zh) * 2023-03-15 2023-08-01 上海帜讯信息技术股份有限公司 基于5g消息多轮会话机制的用户意图识别方法和装置

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110020913A (zh) * 2019-02-20 2019-07-16 中国人民财产保险股份有限公司 产品推荐方法、设备及存储介质
CN110096570B (zh) * 2019-04-09 2021-03-30 苏宁易购集团股份有限公司 一种应用于智能客服机器人的意图识别方法及装置
CN110134765B (zh) * 2019-05-05 2021-06-29 杭州师范大学 一种基于情感分析的餐厅用户评论分析系统及方法
CN110287285B (zh) * 2019-05-31 2023-06-16 平安科技(深圳)有限公司 一种问题意图识别方法、装置、计算机设备及存储介质
CN110399472B (zh) * 2019-06-17 2022-07-15 平安科技(深圳)有限公司 面试提问提示方法、装置、计算机设备及存储介质
CN110377715A (zh) * 2019-07-23 2019-10-25 天津汇智星源信息技术有限公司 基于法律知识图谱的推理式精准智能问答方法
CN110580284B (zh) * 2019-07-31 2023-08-18 平安科技(深圳)有限公司 一种实体消歧方法、装置、计算机设备及存储介质
CN110750628A (zh) * 2019-09-09 2020-02-04 深圳壹账通智能科技有限公司 会话信息交互处理方法、装置、计算机设备和存储介质
CN110781280A (zh) * 2019-10-21 2020-02-11 深圳众赢维融科技有限公司 基于知识图谱的语音辅助方法及装置
CN110968674B (zh) * 2019-12-04 2023-04-18 电子科技大学 基于词向量表征的问题评论对的构建方法
CN111143545A (zh) * 2019-12-31 2020-05-12 北京明略软件系统有限公司 保险数据获取方法及装置、电子设备、计算机存储介质
CN111368043A (zh) * 2020-02-19 2020-07-03 中国平安人寿保险股份有限公司 基于人工智能的事件问答方法、装置、设备及存储介质
CN113468297B (zh) * 2020-03-30 2024-02-27 阿里巴巴集团控股有限公司 一种对话数据处理方法、装置、电子设备及存储设备
CN111475631B (zh) * 2020-04-05 2022-12-06 北京亿阳信通科技有限公司 一种基于知识图谱与深度学习的疾病问答方法及装置
CN111694939B (zh) * 2020-04-28 2023-09-19 平安科技(深圳)有限公司 智能调用机器人的方法、装置、设备及存储介质
CN111666400B (zh) * 2020-07-10 2023-10-13 腾讯科技(深圳)有限公司 消息获取方法、装置、计算机设备及存储介质
CN112183098B (zh) * 2020-09-30 2022-05-06 完美世界(北京)软件科技发展有限公司 会话的处理方法和装置、存储介质、电子装置
CN111930854B (zh) * 2020-10-10 2021-01-08 北京福佑多多信息技术有限公司 意图预测的方法及装置
CN113240438A (zh) * 2021-05-11 2021-08-10 京东数字科技控股股份有限公司 意图识别方法、设备、存储介质及程序产品
CN113420136A (zh) * 2021-06-22 2021-09-21 中国工商银行股份有限公司 一种对话方法、系统、电子设备、存储介质和程序产品
CN113918679A (zh) * 2021-09-22 2022-01-11 三一汽车制造有限公司 一种知识问答方法、装置及工程机械
CN117078270B (zh) * 2023-10-17 2024-02-02 彩讯科技股份有限公司 用于网络产品营销的智能交互方法和装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110095083A (ko) * 2010-02-16 2011-08-24 윤재민 애완견 연속 소리 분석 및 감정 표현, 대화 생성 시스템 및 방법
CN106294341A (zh) * 2015-05-12 2017-01-04 阿里巴巴集团控股有限公司 一种智能问答系统及其主题判别方法和装置
CN106407333A (zh) * 2016-09-05 2017-02-15 北京百度网讯科技有限公司 基于人工智能的口语查询识别方法及装置
CN107193853A (zh) * 2016-12-08 2017-09-22 孙瑞峰 一种基于语境的社交场景构建方法和系统
CN108090174A (zh) * 2017-12-14 2018-05-29 北京邮电大学 一种基于系统功能语法的机器人应答方法及装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108885625A (zh) * 2016-04-07 2018-11-23 日商先进媒体公司 信息处理系统、受理服务器、信息处理方法和程序
CN106649704B (zh) * 2016-12-20 2020-04-07 竹间智能科技(上海)有限公司 一种智能对话控制方法和系统
CN107025278A (zh) * 2017-03-27 2017-08-08 竹间智能科技(上海)有限公司 基于人机对话的用户画像自动提取方法及装置
CN107562856A (zh) * 2017-08-28 2018-01-09 深圳追科技有限公司 一种自助式客户服务系统及方法
CN107729549B (zh) * 2017-10-31 2021-05-11 深圳追一科技有限公司 一种包含要素提取的机器人客服方法及系统
CN108038234B (zh) * 2017-12-26 2021-06-15 众安信息技术服务有限公司 一种问句模板自动生成方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110095083A (ko) * 2010-02-16 2011-08-24 윤재민 애완견 연속 소리 분석 및 감정 표현, 대화 생성 시스템 및 방법
CN106294341A (zh) * 2015-05-12 2017-01-04 阿里巴巴集团控股有限公司 一种智能问答系统及其主题判别方法和装置
CN106407333A (zh) * 2016-09-05 2017-02-15 北京百度网讯科技有限公司 基于人工智能的口语查询识别方法及装置
CN107193853A (zh) * 2016-12-08 2017-09-22 孙瑞峰 一种基于语境的社交场景构建方法和系统
CN108090174A (zh) * 2017-12-14 2018-05-29 北京邮电大学 一种基于系统功能语法的机器人应答方法及装置

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111324814A (zh) * 2020-03-05 2020-06-23 中国建设银行股份有限公司 智能设备的控制方法、装置及系统
CN111339745A (zh) * 2020-03-06 2020-06-26 京东方科技集团股份有限公司 一种随访报告生成方法、设备、电子设备和存储介质
CN111611362A (zh) * 2020-04-07 2020-09-01 安徽慧医信息科技有限公司 基于树形数据实现系统与用户主动交互的技能开发方法
CN111611362B (zh) * 2020-04-07 2023-03-31 安徽慧医信息科技有限公司 基于树形数据实现系统与用户主动交互的技能开发方法
CN111814484B (zh) * 2020-07-03 2024-01-26 海信视像科技股份有限公司 语义识别方法、装置、电子设备及可读存储介质
CN111814484A (zh) * 2020-07-03 2020-10-23 海信视像科技股份有限公司 语义识别方法、装置、电子设备及可读存储介质
CN111813896A (zh) * 2020-07-13 2020-10-23 重庆紫光华山智安科技有限公司 文本三元组关系识别方法、装置、训练方法及电子设备
CN111813896B (zh) * 2020-07-13 2022-12-02 重庆紫光华山智安科技有限公司 文本三元组关系识别方法、装置、训练方法及电子设备
CN112463959A (zh) * 2020-10-29 2021-03-09 中国人寿保险股份有限公司 一种基于上行短信的业务处理方法及相关设备
CN112669011A (zh) * 2020-12-30 2021-04-16 招联消费金融有限公司 智能对话方法、装置、计算机设备和存储介质
CN112669011B (zh) * 2020-12-30 2024-03-22 招联消费金融股份有限公司 智能对话方法、装置、计算机设备和存储介质
CN112860850B (zh) * 2021-01-21 2022-08-30 平安科技(深圳)有限公司 人机交互方法、装置、设备及存储介质
CN112860850A (zh) * 2021-01-21 2021-05-28 平安科技(深圳)有限公司 人机交互方法、装置、设备及存储介质
CN113127731B (zh) * 2021-03-16 2024-01-30 北京第一因科技有限公司 一种基于知识图谱的个性化试题推荐方法
CN113127731A (zh) * 2021-03-16 2021-07-16 西安理工大学 一种基于知识图谱的个性化试题推荐方法
CN113064986B (zh) * 2021-04-30 2023-07-25 中国平安人寿保险股份有限公司 模型的生成方法、系统、计算机设备和存储介质
CN113064986A (zh) * 2021-04-30 2021-07-02 中国平安人寿保险股份有限公司 模型的生成方法、系统、计算机设备和存储介质
CN113434633A (zh) * 2021-06-28 2021-09-24 平安科技(深圳)有限公司 基于头像的社交话题推荐方法、装置、设备及存储介质
CN113743124A (zh) * 2021-08-25 2021-12-03 南京星云数字技术有限公司 一种智能问答异常的处理方法、装置及电子设备
CN113743124B (zh) * 2021-08-25 2024-03-29 南京星云数字技术有限公司 一种智能问答异常的处理方法、装置及电子设备
CN116521822A (zh) * 2023-03-15 2023-08-01 上海帜讯信息技术股份有限公司 基于5g消息多轮会话机制的用户意图识别方法和装置
CN116521822B (zh) * 2023-03-15 2024-02-13 上海帜讯信息技术股份有限公司 基于5g消息多轮会话机制的用户意图识别方法和装置

Also Published As

Publication number Publication date
CN109241251A (zh) 2019-01-18
CN109241251B (zh) 2022-05-27

Similar Documents

Publication Publication Date Title
WO2020019686A1 (fr) Session interaction method and apparatus
US11651163B2 (en) Multi-turn dialogue response generation with persona modeling
CN111709233B (zh) 基于多注意力卷积神经网络的智能导诊方法及系统
US11893345B2 (en) Inducing rich interaction structures between words for document-level event argument extraction
CN111274365B (zh) 基于语义理解的智能问诊方法、装置、存储介质及服务器
CN111506714A (zh) 基于知识图嵌入的问题回答
CN110737758A (zh) 用于生成模型的方法和装置
WO2023029502A1 (fr) Procédé et appareil pour construire un portrait d'utilisateur sur la base d'une session de requête, dispositif et support
CN117076653B (zh) 基于思维链及可视化提升上下文学习知识库问答方法
WO2022227203A1 (fr) Procédé, appareil et dispositif de triage basés sur une représentation de dialogue, et support de stockage
CN112509690B (zh) 用于控制质量的方法、装置、设备和存储介质
CN110399473B (zh) 为用户问题确定答案的方法和装置
Patra A survey of community question answering
CN116992007B (zh) 基于问题意图理解的限定问答系统
CN115455169A (zh) 一种基于词汇知识和语义依存的知识图谱问答方法和系统
CN115274086A (zh) 一种智能导诊方法及系统
CN115714030A (zh) 一种基于疼痛感知和主动交互的医疗问答系统及方法
CN117591663B (zh) 一种基于知识图谱的大模型prompt生成方法
CN113705207A (zh) 语法错误识别方法及装置
CN114372454A (zh) 文本信息抽取方法、模型训练方法、装置及存储介质
Liu et al. Attention based r&cnn medical question answering system in chinese
CN114911940A (zh) 文本情感识别方法及装置、电子设备、存储介质
CN113869058A (zh) 基于lc-gcn方面级情感分析方法、系统、存储介质和电子设备
CN117972434B (zh) 文本处理模型的训练方法、装置、设备、介质和程序产品
CN117009532B (zh) 语义类型识别方法、装置、计算机可读介质及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19840464

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 06/05/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19840464

Country of ref document: EP

Kind code of ref document: A1