CN114168721A - Method for constructing knowledge enhancement model for multi-sub-target dialogue recommendation system - Google Patents
- Publication number
- CN114168721A (application CN202111369183.0A)
- Authority
- CN
- China
- Prior art keywords
- knowledge
- sub
- target
- decoder
- reply
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3346—Query execution using probabilistic model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
Abstract
The invention discloses a method for constructing a knowledge enhancement model for a multi-sub-target dialogue recommendation system. A dialog guidance module controls the flow of the dialog: it predicts a sequence of sub-goals and selects useful external knowledge for each sub-goal to improve generation quality. A dialogue generation module consists of an encoder and a decoder: the encoder encodes the dialogue history, external knowledge, and sub-goals into feature matrices, and the decoder extracts effective information from these matrices to generate informative and accurate replies. A sequential attention mechanism in the decoder strengthens sub-goal guidance, while a noise filter and a knowledge enhancement module eliminate irrelevant and unnecessary knowledge and increase the importance of the selected knowledge in reply generation, making the generated replies more informative. The invention effectively improves user experience and the recommendation success rate in multi-sub-target recommendation scenarios.
Description
Technical Field
The invention relates to the technical field of dialogue recommendation systems, and in particular to a method that improves the informativeness and accuracy of replies generated by a dialogue system, and the success rate of multi-sub-target dialogue recommendation, through knowledge and goal selection, noise-knowledge filtering, and knowledge-enhanced generation.
Background
Recommendation dialog systems have recently received much attention due to their significant commercial potential. Such systems first elicit user preferences through a dialog and then provide high quality recommendations based on the elicited preferences.
Many real-world recommendation applications involve a combination of chitchat, question answering, and recommendation dialogs. Such social interactions build a steady relationship with the user and gain trust. To provide more social recommendations, recent work proposed the conversational recommendation dataset DuRecDial, annotated with 21 sub-goal types. The sub-goals can be viewed as different dialog phases: in this task, the dialog system starts with non-recommendation sub-goals such as chitchat and question answering to collect user information and establish a social relationship, and finally moves to the recommendation sub-goal. That work also provides an RNN-based multi-goal driven conversation generation framework, MGCG, for the multi-sub-goal dialogue recommendation task. MGCG first models the sub-goals individually to plan an appropriate sub-goal sequence for topic transition and the final recommendation; it then extracts knowledge features from the entire knowledge graph and generates a reply to complete each sub-goal. However, MGCG does not study how to use knowledge efficiently across different sub-goals. Over the course of a multi-sub-goal recommendation dialog, a conversation often involves a relatively large knowledge graph and multiple sub-goals, and both question answering and recommendation require accurate knowledge. Rich and accurate knowledge is therefore crucial for generating engaging conversations. Moreover, selecting useful knowledge for each sub-goal matters, because taking all possible knowledge as input introduces noise and a high computational cost.
Disclosure of Invention
To address the shortcomings of the prior art, the invention aims to provide a method for constructing a knowledge enhancement model for a multi-sub-target dialogue recommendation system that enhances the informativeness and accuracy of generated replies and improves the dialogue recommendation success rate.
The specific technical scheme for realizing the purpose of the invention is as follows:
a method for constructing a knowledge enhancement model for a multi-sub-target dialogue recommendation system comprises the following steps:
1) Establish a dialog guidance module that performs sub-goal prediction and knowledge screening. Using a Transformer model, conditioned on the dialog history X, the external knowledge K, and the recommendation sub-goal G_T, predict the next-turn sub-goal G_next by optimizing the cross-entropy loss -logP:

-logP(G_next | X, K, G_T) = -Σ_t logP(g_t | g_{<t}, X, K, G_T)
where g_{<t} are the sub-target characters generated so far. The predicted sub-goal G_next, the dialog history X, the external knowledge K, and the recommendation sub-goal G_T are then input into another Transformer model to predict the candidate knowledge K_c. The knowledge generator L_K is trained by optimizing the cross-entropy loss logP':

-logP' = -Σ_t logP'(k_t | k_{<t}, G_next, X, K, G_T)
where k_t is a head or a relation belonging to a knowledge triple and k_{<t} are the knowledge characters generated so far. A knowledge item matching the generated tuple (head, relation) is then selected as the candidate knowledge K_c. Finally, the dialog guidance module outputs G'_next = [G_next; G_T] and K_c, where G'_next is the concatenation of the predicted sub-goal G_next and the recommendation sub-goal G_T;
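The per-character cross-entropy objective used by both generators can be illustrated with a minimal numpy sketch; the toy probability rows stand in for Transformer outputs, and all names and dimensions are illustrative assumptions:

```python
import numpy as np

def sequence_cross_entropy(step_probs, target_ids):
    """Autoregressive cross-entropy: -sum_t log P(g_t | g_{<t}, ...).

    step_probs: (T, V) array; row t is the model's predicted distribution
    over the vocabulary when generating the t-th target character.
    target_ids: length-T sequence of gold character ids.
    """
    return -sum(np.log(step_probs[t, target_ids[t]]) for t in range(len(target_ids)))

# Toy example: vocabulary of 3 characters, target sequence [0, 2].
probs = np.array([[0.5, 0.25, 0.25],
                  [0.1, 0.1, 0.8]])
loss = sequence_cross_entropy(probs, [0, 2])  # -(log 0.5 + log 0.8)
```

The same function covers both -logP for the sub-goal generator and -logP' for the knowledge generator; only the conditioning inputs (and hence the produced distributions) differ.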
2) establishing a dialogue generating module, wherein the dialogue generating module comprises an encoder and a decoder, a noise filter and a knowledge enhancement module are arranged in the decoder, and the sequence of the dialogue historical characteristics, the knowledge characteristics and the sub-target characteristics is arranged by using a sequence attention mechanism; wherein:
2.1 The encoder converts the dialog history X, the candidate knowledge K_c, and the sub-goal G'_next into feature matrices, encoding the textual information with standard Transformer encoders. The encoder processes as follows:
E_C = Transformer(X)
E_K = Transformer(K_c)
E_G = Transformer(G'_next)
where E_C denotes the history features output by the encoder, E_K the knowledge features, and E_G the sub-goal features;
2.2 The decoder takes the history features, knowledge features, and sub-goal features as input and generates a reply using the sequential attention mechanism, the noise filter, and the knowledge enhancement module. The reply Y is generated as:

Y = argmax_{Y' ∈ A} P(Y' | E_C, E_K, E_G)
where A is the set of all possible replies, Y' is any reply in the set, and P(Y' | E_C, E_K, E_G) is the conditional probability of generating the reply Y'. A sequential attention mechanism arranges the order of feature processing in the decoder: sub-goal features are processed first, followed by the knowledge features and the dialog history features. The decoder processes the features as follows:
O_P = MultiHead(I(Y_p), I(Y_p), I(Y_p))
O_G = MultiHead(O_P, E_G, E_G)
O_KG = NF(O_G, E_C, E_K)
O_dec = FFN(O_KG)
where MultiHead is the multi-head attention operation, Y_p are the words decoded so far, I is the input embedding function, O_P are the features of the decoded words, O_G the sub-goal features extracted by the decoder, NF the noise filter, O_KG the denoised fusion of knowledge and history features, O_dec the hidden-layer representation output by the decoder, and FFN a feed-forward neural network. In the noise filter, a knowledge gating unit filters the knowledge features. Specifically, the filter takes the upper-layer output O_G as the query and extracts features from the encoded history features E_C and the encoded knowledge features E_K by multi-head attention:
O_C = MultiHead(O_G, E_C, E_C)
O_K = MultiHead(O_G, E_K, E_K)
where O_C are the history features and O_K the knowledge features extracted by the decoder. The knowledge gate unit then computes a weight α_k according to the degree of matching between the knowledge and the dialog history; finally, the filter uses α_k ∈ [0,1] to combine the dialog history features and the knowledge features into the output O_KG:
α_k = Sigmoid(W_k [O_C; O_K])
O_KG = O_C + (1 - α_k) O_C + α_k O_K
where W_k is a trainable parameter; the noise filter thus controls the flow of knowledge. After the decoder hidden-layer representation O_dec is obtained, a knowledge enhancement module converts the features into a vocabulary probability distribution, emphasizing the retrieved knowledge through a set of learned weights. Specifically, the words in the external knowledge K serve as a knowledge dictionary, and a weight α_g ∈ [0,1] is used to compute a weighted probability distribution over words:
α_g = Sigmoid(W_g O_dec)
H = W_v O_dec
P_o(y_j) = Softmax(H)_j
where W_g and W_v are trainable parameters, H is the hidden state after the W_v transformation, y_j is the j-th word in the vocabulary, Softmax is the normalized exponential function, and P_o(y_j) — the Softmax of H — is the generation probability of y_j. α_g controls the weight of generating general words; a low α_g indicates that words in the knowledge dictionary are highlighted. Finally, the word with the maximum probability under the distribution P_o(y_j) is generated, and the generated words are combined into the reply.
In step 1), in a multi-sub-target dialogue recommendation task, a dialogue is divided into several segments, each containing one sub-goal, and the final goal of the dialogue is recommendation; the system predicts the dialogue sub-goal sequence with a Transformer, and uses another Transformer to predict candidate knowledge for completing each sub-goal;
in the step 2.1, an encoder based on a Transformer is established to convert the input sub-targets, the candidate knowledge and the conversation history into a feature matrix;
In step 2.2, a Transformer-based decoder extracts features from the encoder output for decoding. Because several kinds of features must be extracted, a sequential attention mechanism is established: sub-goal features are extracted first, followed by the dialogue history and knowledge features. To eliminate noise in the input knowledge, during knowledge feature extraction a noise filter computes the correlation between the knowledge and the dialogue history, weights the knowledge features through a gating mechanism, and combines the dialogue history features with the filtered knowledge features as the input of the subsequent Transformer layer. At the top of the decoder, a knowledge enhancement module computes a gating unit that balances the weights between the general vocabulary and the knowledge vocabulary, enhancing the informativeness of the generated reply.
Compared with the prior art, the invention has the following advantages:
1. ease of use: compared with the past method, the method can accelerate the model training and reasoning speed and reduce the application cost and the time cost.
2. Correctness: a framework with enhanced knowledge and target selection, knowledge filtering and knowledge generation is designed, the richness and accuracy of reply generation information can be effectively improved, and the recommendation success rate is improved.
3. The practicability is as follows: the method has wide practical significance, can be applied to real scenes such as intelligent sound boxes, mobile phone intelligent assistants and the like, effectively improves the user experience, and recommends proper content for the user.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the specific embodiments and the accompanying drawings. Unless otherwise specified, the procedures, conditions, and experimental methods for implementing the invention are general and common knowledge in the art.
Referring to fig. 1, the present invention mainly includes the following steps:
the method comprises the following steps: predict the sub-goals that the next reply needs to complete, and the required candidate knowledge.
In a multi-sub-target dialogue recommendation task, the system needs to design a reasonable sub-goal sequence to guide the dialog actively and naturally from a non-recommendation scenario to a recommendation scenario. To generate proactive and natural dialog recommendations, a dialog guidance module is proposed to plan reasonable sub-goal sequences and provide appropriate candidate knowledge. The module performs two subtasks: sub-goal generation and knowledge generation. Using a Transformer-based model, conditioned on the dialog history X, the external knowledge K, and the final recommendation sub-goal G_T, the next-turn sub-goal G_next is predicted by optimizing the loss:

-logP(G_next | X, K, G_T) = -Σ_t logP(g_t | g_{<t}, X, K, G_T)

where g_{<t} are the sub-goal characters generated so far.
then inputting the predicted sub-target, the conversation history, the external knowledge and the final sub-target into another Transformer model to predict candidate knowledge Kc. Because there is no knowledge of the tag in the true reply, the pseudo tag is obtained in an unsupervised manner. First connect the knowledge items (head, relation, tail) in the tuple. A character-based F1 score between each knowledge and the true reply is then calculated. Finally, the knowledge item with F1 score greater than threshold 0.35 is taken as pseudo label Kw. The following loss functions are optimized to train the knowledge generator:
where k_t is a head or a relation belonging to a knowledge triple and k_{<t} are the knowledge characters generated so far. A knowledge item matching the generated tuple (head, relation) is then selected as the candidate knowledge K_c. Finally, the dialog guidance module outputs G'_next = [G_next; G_T] and K_c, where G'_next is the concatenation of the predicted sub-goal G_next and the recommendation sub-goal G_T.
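The character-based F1 pseudo-labeling described above can be sketched in plain Python; the joining of tuple elements and the use of raw characters (including spaces) are illustrative assumptions:

```python
from collections import Counter

def char_f1(knowledge, reply):
    """Character-based F1 between a flattened knowledge item and the true reply."""
    common = Counter(knowledge) & Counter(reply)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(knowledge)
    recall = overlap / len(reply)
    return 2 * precision * recall / (precision + recall)

def pseudo_labels(triples, reply, threshold=0.35):
    """Keep knowledge items whose char-F1 with the reply exceeds the threshold."""
    flat = ["".join(t) for t in triples]  # concatenate (head, relation, tail)
    return [t for t, f in zip(triples, flat) if char_f1(f, reply) > threshold]

selected = pseudo_labels([("sun", "is", "hot"), ("xyz", "qq", "zz")],
                         "the sun is hot")
```

Here the first triple overlaps heavily with the reply and survives the 0.35 threshold, while the second has no character overlap and is discarded.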
step two: establishing a dialogue generating module, wherein the dialogue generating module comprises an encoder and a decoder, a noise filter and a knowledge enhancement module are arranged in the decoder, and the sequence of the dialogue historical characteristics, the knowledge characteristics and the sub-target characteristics is arranged by using a sequence attention mechanism; wherein:
the encoder converts the input sub-goals, candidate knowledge, and dialog history into a feature matrix. A Transformer-based encoder is used to encode a variety of textual information, including predicted sub-goals, candidate knowledge, and dialog history. To combine different types of information, a Transformer model is used as the encoder. Since different types of information have different structures, the sub-goals of the dialog history, candidate knowledge, and dialog guidance module predictions are encoded independently. Further, the input embedding includes word embedding, type embedding, and position embedding. Multi-type embedding helps the encoder to better distinguish between different parts of the dialog history. Specifically, the encoder processes as follows:
E_C = Transformer(X)
E_K = Transformer(K_c)
E_G = Transformer(G'_next)
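The multi-type input embedding (word + type + position) can be illustrated with a small numpy sketch; the vocabulary size, number of input types, and random parameter tables are illustrative assumptions, not the trained model's values:

```python
import numpy as np

rng = np.random.default_rng(0)
V, TYPES, MAXLEN, D = 100, 3, 64, 16  # vocab size, input types, max length, model dim

word_emb = rng.normal(size=(V, D))
type_emb = rng.normal(size=(TYPES, D))   # e.g. 0=history, 1=knowledge, 2=sub-goal
pos_emb = rng.normal(size=(MAXLEN, D))

def embed(token_ids, type_id):
    """Input embedding = word embedding + type embedding + position embedding."""
    n = len(token_ids)
    return word_emb[token_ids] + type_emb[type_id] + pos_emb[:n]

E = embed([3, 7, 7], type_id=1)  # a short candidate-knowledge segment
```

Because the second and third tokens share the same word and type, their embeddings differ only by the positional term, which is what lets the encoder distinguish repeated tokens.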
a decoder in the dialog generation module generates replies, which are guided by an enhancer of the sequential attention mechanism, a noise filter eliminates irrelevant and unnecessary knowledge, and a knowledge enhancement module increases the importance of the selected knowledge in reply generation.
Three new mechanisms — a sequential attention mechanism, a noise filter, and a knowledge enhancement module — are incorporated into a Transformer-based decoder to generate an informative reply consistent with the predicted sub-goals; each is described in detail below. The decoder generates the reply Y as:

Y = argmax_{Y' ∈ A} P(Y' | E_C, E_K, E_G)

where A is the set of all possible replies.
1) arranging the order of feature processing in the decoder by adopting an order attention mechanism; the feature processing process after the sequential attention mechanism arrangement comprises the steps of processing sub-target features, and then processing knowledge features and conversation history features; the decoder processes the feature procedure as follows:
O_P = MultiHead(I(Y_p), I(Y_p), I(Y_p))
O_G = MultiHead(O_P, E_G, E_G)
O_KG = NF(O_G, E_C, E_K)
O_dec = FFN(O_KG)
in this structure, the model captures valid information in the conversation history and knowledge based on sub-goals, and then generates a more consistent reply consistent with the sub-goals.
2) Although high-quality candidate knowledge can be generated, false candidate knowledge may remain and lead to unexpected replies. In addition, excessive knowledge input introduces noise, since the recommender does not always produce knowledge-related replies. To address these problems, a noise filter is proposed to select better knowledge items, filtering the knowledge features through a knowledge gating unit. Specifically, the filter takes the previous-layer output O_G as the query and extracts features from the encoded history E_C and the encoded knowledge E_K by multi-head attention:
O_C = MultiHead(O_G, E_C, E_C)
O_K = MultiHead(O_G, E_K, E_K)
then, the knowledge gate unit calculates a weight α according to the degree of matching between the knowledge and the conversation historyk. Finally, the filter uses αk∈[0,1]Averaging conversational history features and knowledgeIdentity output OKG:
α_k = Sigmoid(W_k [O_C; O_K])
O_KG = O_C + (1 - α_k) O_C + α_k O_K
where W_k is a trainable parameter. The noise filter controls the flow of knowledge.
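The knowledge gate can be sketched as follows, using the gating formula exactly as given in the text (including the residual O_C term); the parameter shapes and random inputs are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def noise_filter(O_C, O_K, W_k):
    """Knowledge gate: alpha_k = Sigmoid(W_k [O_C; O_K]);
    O_KG = O_C + (1 - alpha_k) O_C + alpha_k O_K."""
    alpha_k = sigmoid(np.concatenate([O_C, O_K], axis=-1) @ W_k)  # (n, 1) gate
    return O_C + (1 - alpha_k) * O_C + alpha_k * O_K, alpha_k

rng = np.random.default_rng(2)
d = 8
O_C = rng.normal(size=(4, d))      # history features from the decoder
O_K = rng.normal(size=(4, d))      # knowledge features from the decoder
W_k = rng.normal(size=(2 * d, 1))  # trainable gate weights
O_KG, alpha_k = noise_filter(O_C, O_K, W_k)
```

A small α_k suppresses the knowledge contribution for positions where the knowledge does not match the history, which is how irrelevant knowledge is filtered out.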
3) To generate richer replies, a knowledge enhancement module is proposed that emphasizes the retrieved knowledge through a set of learned weights. Specifically, the words in the external knowledge K serve as a knowledge dictionary; a weight α_g ∈ [0,1] is then used to compute a weighted probability distribution over words:
α_g = Sigmoid(W_g O_dec)
H = W_v O_dec
where W_g and W_v are trainable parameters. α_g controls the weight of generating general words; a low α_g indicates that words in the knowledge dictionary are highlighted. During training, the model automatically learns to raise the generation probability of knowledge words at appropriate steps. The knowledge enhancement module not only helps the model generate more informative replies but also increases the presence of the selected knowledge in the replies.
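A sketch of the knowledge-enhanced output distribution follows. Note that the patent text leaves the exact mixing rule implicit, so the combination below (α_g times the general distribution plus (1 − α_g) spread uniformly over knowledge-dictionary words) is an assumption chosen to be consistent with "a low α_g highlights knowledge words"; all parameter shapes are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def knowledge_enhanced_dist(O_dec, W_g, W_v, knowledge_ids):
    """alpha_g = Sigmoid(W_g O_dec); H = W_v O_dec; base distribution
    P_o = Softmax(H). The mixture with the knowledge dictionary below is an
    assumed rule, not the patent's exact equation."""
    alpha_g = 1.0 / (1.0 + np.exp(-(W_g @ O_dec)))  # scalar gate
    p = softmax(W_v @ O_dec)                         # general-word distribution P_o
    boost = np.zeros_like(p)
    boost[knowledge_ids] = 1.0 / len(knowledge_ids)  # mass over knowledge words
    return alpha_g * p + (1.0 - alpha_g) * boost     # low alpha_g favors knowledge

rng = np.random.default_rng(3)
d, V = 8, 20
O_dec = rng.normal(size=d)        # decoder hidden state
W_g = rng.normal(size=d)          # gate weights
W_v = rng.normal(size=(V, d))     # vocabulary projection
dist = knowledge_enhanced_dist(O_dec, W_g, W_v, knowledge_ids=[2, 5])
```

The result remains a valid probability distribution regardless of the gate value, and the final word is the argmax over it.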
Claims (1)
1. A method for constructing a knowledge enhancement model for a multi-sub-target dialogue recommendation system is characterized by comprising the following steps:
1) Establish a dialog guidance module that performs sub-goal prediction and knowledge screening. Using a Transformer model, conditioned on the dialog history X, the external knowledge K, and the recommendation sub-goal G_T, predict the next-turn sub-goal G_next by optimizing the cross-entropy loss -logP:

-logP(G_next | X, K, G_T) = -Σ_t logP(g_t | g_{<t}, X, K, G_T)
where g_{<t} are the sub-target characters generated so far. The predicted sub-goal G_next, the dialog history X, the external knowledge K, and the recommendation sub-goal G_T are then input into another Transformer model to predict the candidate knowledge K_c. The knowledge generator L_K is trained by optimizing the cross-entropy loss logP':

-logP' = -Σ_t logP'(k_t | k_{<t}, G_next, X, K, G_T)
where k_t is a head or a relation belonging to a knowledge triple and k_{<t} are the knowledge characters generated so far. A knowledge item matching the generated tuple (head, relation) is then selected as the candidate knowledge K_c. Finally, the dialog guidance module outputs G'_next = [G_next; G_T] and K_c, where G'_next is the concatenation of the predicted sub-goal G_next and the recommendation sub-goal G_T;
2) establishing a dialogue generating module, wherein the dialogue generating module comprises an encoder and a decoder, a noise filter and a knowledge enhancement module are arranged in the decoder, and the sequence of the dialogue historical characteristics, the knowledge characteristics and the sub-target characteristics is arranged by using a sequence attention mechanism; wherein:
2.1 The encoder converts the dialog history X, the candidate knowledge K_c, and the sub-goal G'_next into feature matrices, encoding the textual information with standard Transformer encoders. The encoder processes as follows:
E_C = Transformer(X)
E_K = Transformer(K_c)
E_G = Transformer(G'_next)
where E_C denotes the history features output by the encoder, E_K the knowledge features, and E_G the sub-goal features;
2.2 The decoder takes the history features, knowledge features, and sub-goal features as input and generates a reply using the sequential attention mechanism, the noise filter, and the knowledge enhancement module. The reply Y is generated as:

Y = argmax_{Y' ∈ A} P(Y' | E_C, E_K, E_G)
where A is the set of all possible replies, Y' is any reply in the set, and P(Y' | E_C, E_K, E_G) is the conditional probability of generating the reply Y'. A sequential attention mechanism arranges the order of feature processing in the decoder: sub-goal features are processed first, followed by the knowledge features and the dialog history features. The decoder processes the features as follows:
O_P = MultiHead(I(Y_p), I(Y_p), I(Y_p))
O_G = MultiHead(O_P, E_G, E_G)
O_KG = NF(O_G, E_C, E_K)
O_dec = FFN(O_KG)
where MultiHead is the multi-head attention operation, Y_p are the words decoded so far, I is the input embedding function, O_P are the features of the decoded words, O_G the sub-goal features extracted by the decoder, NF the noise filter, O_KG the denoised fusion of knowledge and history features, O_dec the hidden-layer representation output by the decoder, and FFN a feed-forward neural network. In the noise filter, a knowledge gating unit filters the knowledge features. Specifically, the filter takes the upper-layer output O_G as the query and extracts features from the encoded history features E_C and the encoded knowledge features E_K by multi-head attention:
O_C = MultiHead(O_G, E_C, E_C)
O_K = MultiHead(O_G, E_K, E_K)
where O_C are the history features and O_K the knowledge features extracted by the decoder. The knowledge gate unit then computes a weight α_k according to the degree of matching between the knowledge and the dialog history; finally, the filter uses α_k ∈ [0,1] to combine the dialog history features and the knowledge features into the output O_KG:
α_k = Sigmoid(W_k [O_C; O_K])
O_KG = O_C + (1 - α_k) O_C + α_k O_K
where W_k is a trainable parameter; the noise filter thus controls the flow of knowledge. After the decoder hidden-layer representation O_dec is obtained, a knowledge enhancement module converts the features into a vocabulary probability distribution, emphasizing the retrieved knowledge through a set of learned weights. Specifically, the words in the external knowledge K serve as a knowledge dictionary, and a weight α_g ∈ [0,1] is used to compute a weighted probability distribution over words:
α_g = Sigmoid(W_g O_dec)
H = W_v O_dec
P_o(y_j) = Softmax(H)_j
where W_g and W_v are trainable parameters, H is the hidden state after the W_v transformation, y_j is the j-th word in the vocabulary, Softmax is the normalized exponential function, and P_o(y_j) — the Softmax of H — is the generation probability of y_j. α_g controls the weight of generating general words; a low α_g indicates that words in the knowledge dictionary are highlighted. Finally, the word with the maximum probability under the distribution P_o(y_j) is generated, and the generated words are combined into the reply.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111369183.0A CN114168721A (en) | 2021-11-18 | 2021-11-18 | Method for constructing knowledge enhancement model for multi-sub-target dialogue recommendation system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114168721A true CN114168721A (en) | 2022-03-11 |
Family
ID=80479591
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111369183.0A Pending CN114168721A (en) | 2021-11-18 | 2021-11-18 | Method for constructing knowledge enhancement model for multi-sub-target dialogue recommendation system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114168721A (en) |
Non-Patent Citations (2)
- Jun Zhang et al., "KERS: A Knowledge-Enhanced Framework for Recommendation Dialog Systems with Multiple Subgoals", Findings of the Association for Computational Linguistics: EMNLP 2021, 11 November 2021, pages 1092-1101.
- Zhang Jun, "Research and Model Implementation of Key Technologies for Knowledge-Based Dialogue Systems", China Master's Theses Full-text Database, Information Science and Technology, 15 December 2022, pages 138-437.
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114912020A (en) * | 2022-04-21 | 2022-08-16 | 华东师范大学 | Multi-sub-target dialogue recommendation method based on user preference graph |
CN114912020B (en) * | 2022-04-21 | 2023-06-23 | 华东师范大学 | Multi-sub-target dialogue recommendation method based on user preference graph |
CN114610861A (en) * | 2022-05-11 | 2022-06-10 | 之江实验室 | End-to-end dialogue method for integrating knowledge and emotion based on variational self-encoder |
CN114610861B (en) * | 2022-05-11 | 2022-08-26 | 之江实验室 | End-to-end dialogue method integrating knowledge and emotion based on variational self-encoder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |