CN114912020A - Multi-sub-target dialogue recommendation method based on user preference graph - Google Patents


Info

Publication number
CN114912020A
Authority
CN
China
Prior art keywords
entity
user
sub
user preference
key
Prior art date
Legal status
Granted
Application number
CN202210437356.6A
Other languages
Chinese (zh)
Other versions
CN114912020B (en)
Inventor
杨静
凌静
Current Assignee
East China Normal University
Original Assignee
East China Normal University
Priority date
Filing date
Publication date
Application filed by East China Normal University filed Critical East China Normal University
Priority to CN202210437356.6A priority Critical patent/CN114912020B/en
Publication of CN114912020A publication Critical patent/CN114912020A/en
Application granted granted Critical
Publication of CN114912020B publication Critical patent/CN114912020B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/953 - Querying, e.g. by the use of web search engines
    • G06F 16/9535 - Search customisation based on user profiles and personalisation
    • G06F 16/903 - Querying
    • G06F 16/9032 - Query formulation
    • G06F 16/90332 - Natural language query formulation or dialogue systems
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a multi-sub-target dialogue recommendation method based on a user preference graph, characterized by the following steps: 1) constructing a user preference graph that represents the entity-level preferences the user has expressed in recent topics; 2) predicting key entities from the user preference graph to provide explicit content guidance for subsequent reply generation; 3) generating reply content from the current conversation state, the user preference graph and the triples related to the key entities, and deciding, from the user's online feedback, whether the conversation ends or enters the next round. Compared with the prior art, the method achieves higher dialogue recommendation accuracy and more consistent reply generation, further improves model interpretability, and can be widely applied to social dialogue recommendation systems.

Description

Multi-sub-target dialogue recommendation method based on user preference graph
Technical Field
The invention relates to the technical field of conversational recommendation, and in particular to a multi-sub-target dialogue recommendation method based on a user preference graph.
Background
As the amount of information on the internet grows, it becomes increasingly difficult for people to pick out the content that interests them. Recommendation systems emerged to address this problem of "information overload": they predict users' needs and recommend the content they are most likely to like. Traditional recommendation estimates user preferences from historical interaction data and copes poorly with sparse or noisy histories and with changing user preferences. A conversational recommendation system (CRS) interacts with the user dynamically: it can ask about the user's preferences directly during the conversation, interact in natural language, and obtain explicit feedback.
Recent work on conversational recommendation proposed the DuRecDial dataset for building multi-sub-target recommendation dialogue systems. It annotates 21 fine-grained sub-goals across three topic types (chit-chat, question answering and recommendation) and divides each conversation into stages according to these sub-goals. The task requires the system to plan over sub-goals so as to actively steer the conversation, leading it from non-recommendation scenes such as question answering and chit-chat about celebrities towards recommendation scenes such as movie or music recommendation. A multi-sub-target recommendation dialogue system uses information from a knowledge base to complete question answering, recommendation and other processes, so accurate selection of knowledge is essential for generating high-quality dialogue.
Prior-art multi-sub-target recommendation dialogue methods mostly use the knowledge in a given knowledge base directly, without filtering it; this increases the amount of computation and can introduce a large amount of noisy knowledge.
Disclosure of Invention
To address the shortcomings of the prior art, the invention aims to design a multi-sub-target dialogue recommendation method based on a user preference graph. The method uses the constructed user preference graph, predicts the key entity of the next round of conversation with a key entity prediction module, generates each round's system reply to the dialogue history with a dialogue generation module, and decides from the user's online feedback whether the conversation ends or enters the next round. It realizes multi-sub-target dialogue recommendation, improves recommendation accuracy and the consistency of generated replies, and enhances model interpretability; it is simple and convenient, requires little computation, greatly reduces noisy knowledge, achieves high recommendation accuracy, can be widely applied to social dialogue recommendation systems, and has good application prospects.
The purpose of the invention is realized as follows: a multi-sub-target dialogue recommendation method based on a user preference graph, characterized in that it adopts a constructed user preference graph K_p, predicts the key entity of the next round of conversation with a key entity prediction module, and generates each round's system reply to the dialogue history with a dialogue generation module. The method specifically comprises the following steps:
a, step a: building module for building user preference graph
a-1: initializing a user preference graph K based on a user profile before a session begins p
a-2: while the conversation is in progress, matching entities mentioned in the conversation according to the entity files, and predicting entity-level preferences of the user through a pre-trained language model (BERT) -based language model;
a-3: updating user preference graph K according to user entity-level preferences p
b, step (b): establishing key entity modules
Based on current user preference graph K p And sub-targets, determining candidate key entities and predicting the key entities through a BERT model.
c, step (c): establishing dialogue generating module
Based on historical dialogue, user preference graph K p And the key entity related information is interacted with the user by generating a reply through the deep neural network model, and the interactive content comprises chatting, question answering or recommendation.
Initializing the user preference graph K_p in step a-1 specifically comprises the following steps (an illustrative sketch follows the list):
1.1 take the intersection of the entities the user has accepted in the user profile of the multi-sub-target dialogue data and the entities present in the knowledge graph as the initial core nodes of the user preference graph K_p.
1.2 take, from the knowledge graph, the subgraph formed by the initial core nodes and the points and edges within their hop range as the initial user preference graph K_p, and give each initial core node a weight of 1.
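The following is a minimal illustrative sketch of steps 1.1 and 1.2, not the patent's reference implementation; it assumes a networkx representation of the knowledge graph, and the hop radius is left as a parameter because the hop range is not fixed numerically in the text above.

```python
# Minimal sketch of initializing the user preference graph K_p (steps 1.1-1.2).
# Assumptions: the knowledge graph and K_p are networkx graphs; the hop radius
# around each core node is a parameter (the text only says "hop range").
import networkx as nx

def init_preference_graph(kg: nx.Graph, accepted_entities, radius: int = 1) -> nx.Graph:
    # 1.1 core nodes = entities accepted in the user profile that also exist in the KG
    core = set(accepted_entities) & set(kg.nodes)
    kp = nx.Graph()
    for entity in core:
        # 1.2 add the points and edges within `radius` hops of each core node
        kp = nx.compose(kp, nx.ego_graph(kg, entity, radius=radius))
    for entity in core:
        kp.nodes[entity]["core"] = True
        kp.nodes[entity]["weight"] = 1.0   # each initial core node gets weight 1
    return kp
```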
Predicting the entity-level preferences in step a-2 specifically comprises the following steps:
1.3 build the entity files from the entity information in the knowledge graph and, according to the entity files, match the entities E mentioned in the dialogue; the mentioned entities are divided into entities mentioned by the user, E_user, and entities mentioned by the dialogue recommendation system, E_system.
1.4 based on a user-mentioned entity e_u ∈ E_user and the user sentence s_u that mentions the entity, obtain the user's attitude towards the entity e_u through a pre-trained BERT model; the process is represented by the following formulae (a) to (c):
F_u = [[CLS]; e_u; [SEP]; s_u]    (a);
H_u = BERT(F_u)    (b);
A(e_u) = Linear(H_u)    (c).
Alternatively, based on an entity e_s ∈ E_system mentioned by the dialogue recommendation system and the user's feedback sentence s_resp to the sentence that mentions e_s, obtain the user's attitude towards the entity e_s through a pre-trained BERT model; the process is represented by the following formulae (d) to (f) (a sketch of this attitude predictor follows the formulas):
F_s = [[CLS]; e_s; [SEP]; s_resp]    (d);
H_s = BERT(F_s)    (e);
A(e_s) = Linear(H_s)    (f).
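A minimal sketch of the attitude predictor of formulas (a) to (f), assuming the Hugging Face transformers BERT API; the model name, the number of attitude classes and the use of the [CLS] vector as the pooled representation are illustrative assumptions, not the patent's exact configuration.

```python
# Minimal sketch of formulas (a)-(f): encode [CLS] entity [SEP] sentence with a
# pre-trained BERT and score the user's attitude with a linear layer.
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

class AttitudePredictor(nn.Module):
    def __init__(self, model_name: str = "bert-base-chinese", num_attitudes: int = 2):
        super().__init__()
        self.tokenizer = BertTokenizer.from_pretrained(model_name)
        self.bert = BertModel.from_pretrained(model_name)
        self.linear = nn.Linear(self.bert.config.hidden_size, num_attitudes)

    def forward(self, entity: str, sentence: str) -> torch.Tensor:
        # F = [[CLS]; e; [SEP]; s] -- the tokenizer inserts [CLS]/[SEP] for a sentence pair
        inputs = self.tokenizer(entity, sentence, return_tensors="pt", truncation=True)
        hidden = self.bert(**inputs).last_hidden_state      # H = BERT(F)
        return self.linear(hidden[:, 0])                    # A(e) = Linear(H), using the [CLS] vector
```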
Updating the user preference graph K_p in step a-3 specifically comprises the following steps:
1.5 before each session begins, decay the weight w_key of the core entities in the user preference graph K_p; the decay is represented by the following formula (g):
w_key = w_key × 0.75    (g).
1.6 according to the user's attitude towards the mentioned entities, add the entities accepted by the user, together with the one-hop points and edges around them in the knowledge base, to the user preference graph K_p; each accepted entity becomes a core entity and is given a weight of 1, while the surrounding one-hop nodes are non-core entities and are given no weight. If an entity rejected by the user exists in the user preference graph K_p, that entity, its surrounding one-hop non-core entities and the associated edges are removed from K_p.
1.7 when the number of core entities in the user preference graph K_p exceeds the limit α = 2, keep only the α core entities with the largest weights and the points and edges one hop around them (a sketch of these update rules is given below).
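A minimal sketch of update rules 1.5 to 1.7, under the same networkx-based representation assumed in the initialization sketch above; the helper names are illustrative.

```python
# Minimal sketch of update rules 1.5-1.7 for K_p.
# Assumption: core nodes carry "core"/"weight" attributes as in the earlier sketch.
import networkx as nx

DECAY = 0.75   # weight decay factor (formula (g))
ALPHA = 2      # maximum number of core entities kept (rule 1.7)

def decay_core_weights(kp: nx.Graph) -> None:
    # 1.5 w_key = w_key * 0.75 before each session
    for _, data in kp.nodes(data=True):
        if data.get("core"):
            data["weight"] *= DECAY

def accept_entity(kp: nx.Graph, kg: nx.Graph, entity: str) -> None:
    # 1.6 add the accepted entity and its one-hop surroundings; it becomes a core node
    kp.update(nx.ego_graph(kg, entity, radius=1))
    kp.nodes[entity].update(core=True, weight=1.0)

def reject_entity(kp: nx.Graph, entity: str) -> None:
    # 1.6 remove a rejected entity, its one-hop non-core neighbours and the associated edges
    if entity not in kp:
        return
    non_core = [n for n in kp.neighbors(entity) if not kp.nodes[n].get("core")]
    kp.remove_nodes_from(non_core + [entity])

def prune_to_top_cores(kp: nx.Graph) -> nx.Graph:
    # 1.7 keep only the ALPHA highest-weighted core entities and their one-hop surroundings
    cores = [n for n, d in kp.nodes(data=True) if d.get("core")]
    if len(cores) <= ALPHA:
        return kp
    keep = sorted(cores, key=lambda n: kp.nodes[n]["weight"], reverse=True)[:ALPHA]
    kept = set(keep)
    for c in keep:
        kept |= set(kp.neighbors(c))
    return kp.subgraph(kept).copy()
```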
Determining the candidate key entities and predicting the key entity with a BERT model in step b specifically comprises the following steps:
2.1 predict the next sub-goal g_next with a Transformer-based method: conditioned on the dialogue history X, the external knowledge and the recommendation sub-goal G_T, the model predicts the next-round sub-goal g_next and is trained by optimizing a cross-entropy loss log P over the sub-goal tokens generated so far (formula (h); the formula and the knowledge symbol are given only as images in the original). A simplified sketch follows.
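The patent generates the next sub-goal with a Transformer and a cross-entropy loss over generated sub-goal tokens (formula (h), available only as an image in the original); the sketch below instead classifies over a fixed sub-goal inventory conditioned on the dialogue history, the external knowledge and the recommendation target, which is an illustrative simplification rather than the patent's exact model. The model name and the sub-goal list are assumptions.

```python
# Simplified sketch of sub-goal planning (step 2.1) as classification over a
# fixed sub-goal inventory (illustrative simplification of the generative model).
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

SUB_GOALS = ["chit-chat", "question answering", "movie recommendation",
             "music recommendation", "food recommendation", "POI recommendation"]  # illustrative subset

class NextSubGoalClassifier(nn.Module):
    def __init__(self, model_name: str = "bert-base-chinese"):
        super().__init__()
        self.tokenizer = BertTokenizer.from_pretrained(model_name)
        self.encoder = BertModel.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, len(SUB_GOALS))

    def forward(self, history: str, knowledge: str, target_goal: str) -> torch.Tensor:
        # condition on dialogue history X, external knowledge and recommendation target G_T
        text = " [SEP] ".join([history, knowledge, target_goal])
        inputs = self.tokenizer(text, return_tensors="pt", truncation=True)
        h_cls = self.encoder(**inputs).last_hidden_state[:, 0]
        return self.head(h_cls)   # logits over candidate next sub-goals

# training would use nn.CrossEntropyLoss()(logits, gold_sub_goal_id)
```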
2.2 the recommendation types determined by the sub-goal g_next include movies, music, food and points of interest; based on the determined recommendation type, find all entities of that type in the knowledge base as the candidate key entity set E_c.
2.3 score the candidate key entities with the pre-trained BERT model and take the highest-scoring candidate as the key entity.
2.4 based on the user preference graph K_p and the subgraph K_c formed by the triples in the knowledge graph related to the candidate key entity e_c ∈ E_c, score each candidate key entity e_c with a pre-trained BERT model and take the highest-scoring candidate as the key entity e_key; the process is represented by the following formulae (i) to (l):
F_p = [s_1; r_1; o_1; …; s_n; r_n; o_n], (s_i, r_i, o_i) ∈ K_p    (i);
F_c = [s_1; r_1; o_1; …; s_m; r_m; o_m], (s_j, r_j, o_j) ∈ K_c    (j);
H = BERT([[CLS]; g_next; F_p; [SEP]; F_c])    (k);
S(E_c) = Sigmoid(W h_0)    (l);
wherein h_0 is the representation of [CLS] and W is a trainable parameter (a minimal sketch of this scorer follows).
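A minimal sketch of the key-entity scorer of formulas (i) to (l): flatten the triples of K_p and of the candidate subgraph K_c, encode [CLS] g_next F_p [SEP] F_c with BERT, and score the [CLS] vector with a sigmoid-activated linear layer. The triple serialization with ";" separators and the model name are assumptions.

```python
# Minimal sketch of candidate key-entity scoring (formulas (i)-(l)).
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

def flatten_triples(triples) -> str:
    # F = [s1; r1; o1; ...; sn; rn; on]
    return " ; ".join(" ; ".join(t) for t in triples)

class KeyEntityScorer(nn.Module):
    def __init__(self, model_name: str = "bert-base-chinese"):
        super().__init__()
        self.tokenizer = BertTokenizer.from_pretrained(model_name)
        self.bert = BertModel.from_pretrained(model_name)
        self.w = nn.Linear(self.bert.config.hidden_size, 1)   # trainable parameter W

    def score(self, g_next: str, kp_triples, kc_triples) -> float:
        first = g_next + " ; " + flatten_triples(kp_triples)          # g_next ; F_p
        second = flatten_triples(kc_triples)                          # F_c
        inputs = self.tokenizer(first, second, return_tensors="pt", truncation=True)
        h0 = self.bert(**inputs).last_hidden_state[:, 0]              # h_0: [CLS] representation
        return torch.sigmoid(self.w(h0)).item()                       # S = Sigmoid(W h_0)

# the candidate e_c with the highest score over E_c is chosen as the key entity e_key
```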
Generating the reply with the deep neural network model in step c specifically comprises the following steps:
3.1 the sub-goal g_next described in step 2.1 is further divided into recommendation sub-goals and non-recommendation sub-goals; the recommendation sub-goals comprise movie, music, food and point-of-interest recommendation, and all other sub-goals are non-recommendation sub-goals. News recommendation is not an entity recommendation and is therefore also treated as a non-recommendation sub-goal.
3.2 under a non-recommendation sub-goal, input all knowledge triples of the user preference graph K_p into the dialogue generation model as reference knowledge K_input; under a recommendation sub-goal, input the subgraph K_key formed by the triples in the knowledge base related to the key entity e_key of step b as reference knowledge K_input. The dialogue generation module comprises an encoder and a decoder; the reply generation process is represented by the following formulae (m) to (o):
F = [s_1; r_1; o_1; …; s_n; r_n; o_n], (s_i, r_i, o_i) ∈ K_input    (m);
E_K = Encoder(F)    (n);
(formula (o) is given only as an image in the original);
wherein E_K is the knowledge feature obtained by concatenating the triples in the reference knowledge K_input and passing them through the encoder.
3.3 based on the knowledge features E_K and the representations of the dialogue history X and the already decoded part, obtain the output feature of step t, where I(·) denotes the embedding lookup function, y_<t are the already decoded words, and the decoder's hidden state at step t is denoted by a symbol given only as an image in the original.
3.4 to integrate the knowledge information into the reply, a copy mechanism is employed at the decoder; specifically, the word probability P(w) is calculated by the copy mechanism represented by formulae (p) to (s), which are given only as images in the original, wherein W_v is the embedding matrix, W_k is a trainable parameter, and p(s) determines whether the model generates a word from the vocabulary or from the reference knowledge K_input.
3.5 train the dialogue generation module by optimizing the cross-entropy loss of formula (t), which is given only as an image in the original (a sketch of a copy-style decoding step follows).
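Because formulae (p) to (s) appear only as images in the original, the following is a generic pointer-generator-style decoding step given under assumptions, not the patent's exact copy mechanism: a gate p_s mixes a vocabulary distribution (via W_v) with a copy distribution over the tokens of the flattened reference knowledge K_input.

```python
# Assumed pointer-generator-style sketch of the copy mechanism at the decoder.
import torch
from torch import nn
import torch.nn.functional as F

class CopyDecoderStep(nn.Module):
    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.w_v = nn.Linear(hidden_size, vocab_size)   # projection onto the vocabulary (W_v)
        self.w_k = nn.Linear(hidden_size, hidden_size)  # trainable parameter for copy attention (W_k)
        self.gate = nn.Linear(hidden_size, 1)           # produces the copy/generate gate p_s

    def forward(self, h_t, knowledge_states, knowledge_token_ids):
        # h_t: (batch, hidden) decoder hidden state at step t
        # knowledge_states: (batch, n_k, hidden) encoder features E_K of K_input
        # knowledge_token_ids: (batch, n_k) vocabulary ids of the knowledge tokens
        p_vocab = F.softmax(self.w_v(h_t), dim=-1)                        # generate from the vocabulary
        attn = F.softmax(torch.einsum("bh,bkh->bk", self.w_k(h_t), knowledge_states), dim=-1)
        p_copy = torch.zeros_like(p_vocab).scatter_add_(1, knowledge_token_ids, attn)  # copy from K_input
        p_s = torch.sigmoid(self.gate(h_t))                               # gate between copying and generating
        return p_s * p_copy + (1.0 - p_s) * p_vocab                       # final word distribution P(w)
```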
Compared with the prior art, the invention offers the following notable technical advantages:
1) High recommendation accuracy: the user preference graph focuses on the contribution that the entity-level preferences a user expresses in recent topics make to subsequent topics, which greatly improves recommendation accuracy.
2) Highly consistent replies: the user preference graph models the stage-wise entity-level preferences the user has expressed in the dialogue history, and the triples related to the key entities give the dialogue module more precise guidance on subsequent content; both provide the dialogue module with more accurate knowledge, avoiding the influence of noisy knowledge in the knowledge graph and improving the consistency of the generated replies.
3) Better interpretability: the method provides more accurate, human-interpretable related knowledge and makes predictions grounded in that knowledge, which improves the interpretability of the model.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the detailed embodiments and the accompanying drawings. Except where specifically noted, the procedures, conditions and experimental methods for carrying out the invention are common general knowledge in the art, and the invention is not particularly limited in these respects.
Example 1
Referring to fig. 1, the present invention mainly includes the following steps:
the method comprises the following steps: initializing user preference graphs
Take the intersection of the entities the user has accepted in the user profile of the multi-sub-target dialogue data and the entities present in the knowledge graph as the initial core nodes of the user preference graph K_p; take, from the knowledge graph, the subgraph formed by the initial core nodes and the points and edges within their hop range as the initial user preference graph K_p, and give each initial core node a weight of 1.
Step two: predicting key entities
Predict the next sub-goal g_next with a Transformer-based method: conditioned on the dialogue history X, the external knowledge and the recommendation sub-goal G_T, the model predicts the next-round sub-goal g_next and is trained by optimizing a cross-entropy loss log P over the sub-goal tokens generated so far (formula (h); the formula and the knowledge symbol are given only as images in the original).
Then determine the recommendation type from the sub-goal g_next; the recommendation types include movies, music, food and points of interest. Based on the determined recommendation type, find all entities of that type in the knowledge base as the candidate key entity set E_c. Finally, based on the user preference graph K_p and the subgraph K_c formed by the triples in the knowledge graph related to a candidate key entity e_c ∈ E_c, score each candidate key entity e_c with a pre-trained BERT model and take the highest-scoring candidate as the key entity e_key; the process is represented by the following formulae (i) to (l):
F_p = [s_1; r_1; o_1; …; s_n; r_n; o_n], (s_i, r_i, o_i) ∈ K_p    (i);
F_c = [s_1; r_1; o_1; …; s_m; r_m; o_m], (s_j, r_j, o_j) ∈ K_c    (j);
H = BERT([[CLS]; g_next; F_p; [SEP]; F_c])    (k);
S(E_c) = Sigmoid(W h_0)    (l);
wherein h_0 is the representation of [CLS] and W is a trainable parameter.
Step three: conducting conversation generation
The dialogue generation module generates a reply from the dialogue history, the key entity and the user preference graph K_p to interact with the user through the dialogue generation model; the interaction includes chit-chat, question answering or recommendation, and the conversation either ends or enters the next round depending on the user's online feedback. The sub-goal g_next of step two can be further divided into recommendation and non-recommendation sub-goals; the recommendation sub-goals comprise movie, music, food and point-of-interest recommendation, and all other sub-goals are non-recommendation sub-goals. News recommendation is not an entity recommendation and is therefore also treated as a non-recommendation sub-goal.
Under a non-recommendation sub-goal, input all knowledge triples of the user preference graph K_p into the dialogue generation model as reference knowledge K_input; under a recommendation sub-goal, input the subgraph K_key formed by the triples in the knowledge base related to the key entity e_key of step two as reference knowledge K_input. The dialogue generation module comprises an encoder and a decoder; the reply generation process is represented by the following formulae (m) to (o):
F = [s_1; r_1; o_1; …; s_n; r_n; o_n], (s_i, r_i, o_i) ∈ K_input    (m);
E_K = Encoder(F)    (n);
(formula (o) is given only as an image in the original);
wherein E_K is the knowledge feature obtained by concatenating the triples in the reference knowledge K_input and passing them through the encoder.
Then, based on the knowledge features E_K and the representations of the dialogue history X and the already decoded part, obtain the output feature of step t, where I(·) denotes the embedding lookup function, y_<t are the already decoded words, and the decoder's hidden state at step t is denoted by a symbol given only as an image in the original. To integrate the knowledge information into the reply, the invention employs a copy mechanism at the decoder; specifically, the word probability P(w) is calculated by the copy mechanism represented by formulae (p) to (s), which are given only as images in the original, wherein W_v is the embedding matrix, W_k is a trainable parameter, and p(s) determines whether the model generates a word from the vocabulary or from the reference knowledge K_input.
The dialogue generation module is trained by optimizing the cross-entropy loss of formula (t), which is given only as an image in the original. If the dialogue is not finished, the process continues with step four.
Step four: updating a user preference graph
Build the entity files from the entity information in the knowledge graph and, according to the entity files, match the entities E mentioned in the conversation; the mentioned entities are divided into entities mentioned by the user, E_user, and entities mentioned by the dialogue recommendation system, E_system. Based on a user-mentioned entity e_u ∈ E_user and the user sentence s_u that mentions the entity, obtain the user's attitude towards the entity e_u through a pre-trained BERT model; the process is represented by the following formulae (a) to (c):
F_u = [[CLS]; e_u; [SEP]; s_u]    (a);
H_u = BERT(F_u)    (b);
A(e_u) = Linear(H_u)    (c).
Alternatively, based on an entity e_s ∈ E_system mentioned by the dialogue recommendation system and the user's feedback sentence s_resp to the sentence that mentions e_s, obtain the user's attitude towards the entity e_s through a pre-trained BERT model; the process is represented by the following formulae (d) to (f):
F_s = [[CLS]; e_s; [SEP]; s_resp]    (d);
H_s = BERT(F_s)    (e);
A(e_s) = Linear(H_s)    (f).
Then, before each session begins, decay the weight w_key of the core entities in the user preference graph K_p as w_key = w_key × 0.75. According to the user's attitude towards the mentioned entities, add the entities accepted by the user, together with the one-hop points and edges around them in the knowledge base, to the user preference graph K_p; each accepted entity becomes a core entity and is given a weight of 1, while the surrounding one-hop nodes are non-core entities and are given no weight. If an entity rejected by the user exists in the user preference graph K_p, that entity, its surrounding one-hop non-core entities and the associated edges are removed from K_p. When the number of core entities in K_p exceeds the limit α = 2, keep only the core entities with the largest weights and the points and edges one hop around them. After the update is completed, the next round of conversation continues from step two (a sketch of the overall session loop follows).
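A minimal sketch of one session's control flow (steps two to four of this example), under assumptions about the module interfaces; it illustrates the loop only and is not the patent's reference implementation.

```python
# Assumed orchestration of one dialogue session: plan a sub-goal, pick a key
# entity, generate a reply, and update K_p from online feedback until the user
# ends the conversation. All callables are illustrative interfaces.
def run_session(kp, kg, target_goal,
                predict_sub_goal, predict_key_entity, generate_reply,
                get_user_feedback, update_preference_graph, max_rounds=10):
    history = []
    for _ in range(max_rounds):
        g_next = predict_sub_goal(history, kp, target_goal)       # step two: next sub-goal
        e_key = predict_key_entity(kp, kg, g_next)                # step two: key entity
        reply = generate_reply(history, kp, kg, g_next, e_key)    # step three: chit-chat / QA / recommendation
        history.append(("system", reply))
        utterance, finished = get_user_feedback(reply)            # online user feedback
        if finished:
            break
        history.append(("user", utterance))
        update_preference_graph(kp, kg, utterance)                # step four: decay, add/remove entities, prune
    return history
```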
The foregoing describes the invention in further detail but is not intended to limit its scope; all equivalent embodiments are intended to fall within the scope of the following claims.

Claims (6)

1. A multi-sub-target dialogue recommendation method based on a user preference graph, characterized in that it adopts a constructed user preference graph K_p, predicts the key entity of the next round of conversation with a key entity prediction module, and generates each round's system reply to the dialogue history with a dialogue generation module, the method specifically comprising the following steps:
a, step a: building module for user preference graph
a-1: initializing a user preference graph K based on a user profile before a session begins p
a-2: when the conversation is carried out, matching an entity E mentioned in the conversation according to the entity file, and predicting the entity level preference of the user through a language model based on pre-training;
a-3: updating user preference graph K according to user entity-level preferences p
b, step (b): establishing key entity modules
Based on current user preference graph K p And sub-targets, determining candidate key entities and predicting the key entities through a language model;
c, step (c): establishing dialogue generating module
Based on historical dialogue, user preference graph K p And the key entity related information is interacted with the user by generating a reply through the deep neural network model, and the interactive content comprises chatting, question answering or recommendation.
2. The multi-sub-target dialogue recommendation method based on a user preference graph according to claim 1, characterized in that initializing the user preference graph K_p in step a specifically comprises the following steps:
a-1-1: take the intersection of the entities the user has accepted in the user profile of the multi-sub-target dialogue data and the entities present in the knowledge graph as the initial core nodes of the user preference graph K_p;
a-1-2: take, from the knowledge graph, the subgraph formed by the initial core nodes and the points and edges within their hop range as the initial user preference graph K_p, and give each initial core node a weight of 1.
3. The multi-sub-target dialogue recommendation method based on a user preference graph according to claim 1, characterized in that predicting the entity-level preferences in step a specifically comprises the following steps:
a-2-1: build the entity files from the entity information in the knowledge graph and, according to the entity files, match the entities E mentioned in the conversation, the mentioned entities being divided into entities mentioned by the user, E_user, and entities mentioned by the dialogue recommendation system, E_system;
a-2-2: based on a user-mentioned entity e_u ∈ E_user and the user sentence s_u that mentions the entity, obtain the user's attitude towards the entity e_u through a pre-trained language model, the process being represented by the following formulae (a) to (c):
F_u = [[CLS]; e_u; [SEP]; s_u]    (a);
H_u = BERT(F_u)    (b);
A(e_u) = Linear(H_u)    (c);
or, based on an entity e_s ∈ E_system mentioned by the dialogue recommendation system and the user's feedback sentence s_resp to the sentence that mentions e_s, obtain the user's attitude towards the entity e_s through a pre-trained language model, the process being represented by the following formulae (d) to (f):
F_s = [[CLS]; e_s; [SEP]; s_resp]    (d);
H_s = BERT(F_s)    (e);
A(e_s) = Linear(H_s)    (f).
4. The multi-sub-target dialogue recommendation method based on a user preference graph according to claim 1, characterized in that updating the user preference graph K_p in step a specifically comprises the following steps:
a-3-1: before each session begins, decay the weight w_key of the core entities in the user preference graph K_p, as represented by the following formula (g):
w_key = w_key × 0.75    (g);
a-3-2: take each entity accepted by the user as a core entity with a weight of 1, and take the nodes one hop around it as non-core entities with no weight;
a-3-3: according to the user's attitude towards the mentioned entities, if an entity rejected by the user exists in the user preference graph K_p, remove that entity, its surrounding one-hop non-core entities and the associated edges from K_p;
a-3-4: when the number of core entities in the user preference graph K_p exceeds the limit α = 2, keep only the α core entities with the largest weights and the points and edges one hop around them.
5. The multi-sub-target dialogue recommendation method based on a user preference graph according to claim 1, characterized in that determining the candidate key entities and predicting the key entity with the language model in step b specifically comprises the following steps:
b-1: predicting the next sub-target g by means of a Transformer-based method next Given a dialog history X, external knowledge
Figure FDA0003607613200000033
And recommendation sub-target G T Under the condition of (1), predicting the sub-goal g of the next round nex t, and optimizing a cross-entropy loss function logP by the following formula (h):
Figure FDA0003607613200000031
wherein the content of the first and second substances,
Figure FDA0003607613200000032
the characters of the sub-targets which are generated currently;
b-2: based on the recommendation type determined by the sub-goal, find all entities of that type in the knowledge base as the candidate key entity set E_c, the recommendation types including movies, music, food and points of interest;
b-3: based on the user preference graph K_p and the subgraph K_c formed by the triples in the knowledge graph related to a candidate key entity e_c ∈ E_c, score each candidate key entity e_c with a pre-trained language model and take the highest-scoring candidate as the key entity e_key, the process being represented by the following formulae (i) to (l):
F_p = [s_1; r_1; o_1; …; s_n; r_n; o_n], (s_i, r_i, o_i) ∈ K_p    (i);
F_c = [s_1; r_1; o_1; …; s_m; r_m; o_m], (s_j, r_j, o_j) ∈ K_c    (j);
H = BERT([[CLS]; g_next; F_p; [SEP]; F_c])    (k);
S(E_c) = Sigmoid(W h_0)    (l);
wherein h_0 is the representation of [CLS] and W is a trainable parameter.
6. The multi-sub-target dialogue recommendation method based on a user preference graph according to claim 1, characterized in that generating the reply with the deep neural network model in step c specifically comprises the following steps:
c-1: the sub-goal g_next of step b is further divided into recommendation sub-goals and non-recommendation sub-goals, the recommendation sub-goals comprising movie, music, food and point-of-interest recommendation, and all other sub-goals being non-recommendation sub-goals;
c-2: under a non-recommendation sub-goal, input all knowledge triples of the user preference graph K_p into the dialogue generation model as reference knowledge K_input; under a recommendation sub-goal, input the subgraph K_key formed by the triples in the knowledge base related to the key entity e_key of step b as reference knowledge K_input; the dialogue generation module comprises an encoder and a decoder, and the reply generation process is represented by the following formulae (m) to (o):
F = [s_1; r_1; o_1; …; s_n; r_n; o_n], (s_i, r_i, o_i) ∈ K_input    (m);
E_K = Encoder(F)    (n);
(formula (o) is given only as an image in the original);
wherein E_K is the knowledge feature obtained by concatenating the triples in the reference knowledge K_input and passing them through the encoder, I(·) denotes the embedding lookup function, y_<t are the already decoded words, and the decoder's hidden state at step t is denoted by a symbol given only as an image in the original;
c-3: based on the knowledge features E_K and the representations of the dialogue history X and the already decoded part, obtain the output feature of step t;
c-4: integrate the knowledge information into the reply with a copy mechanism at the decoder, the word probability P(w) being calculated by the copy mechanism represented by formulae (p) to (s), which are given only as images in the original, wherein W_v is the embedding matrix, W_k is a trainable parameter, and p(s) determines whether the model generates a word from the vocabulary or from the reference knowledge K_input;
c-5: train the dialogue generation module by optimizing the cross-entropy loss of formula (t), which is given only as an image in the original.
CN202210437356.6A 2022-04-21 2022-04-21 Multi-sub-target dialogue recommendation method based on user preference graph Active CN114912020B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210437356.6A CN114912020B (en) 2022-04-21 2022-04-21 Multi-sub-target dialogue recommendation method based on user preference graph

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210437356.6A CN114912020B (en) 2022-04-21 2022-04-21 Multi-sub-target dialogue recommendation method based on user preference graph

Publications (2)

Publication Number Publication Date
CN114912020A true CN114912020A (en) 2022-08-16
CN114912020B CN114912020B (en) 2023-06-23

Family

ID=82764314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210437356.6A Active CN114912020B (en) 2022-04-21 2022-04-21 Multi-sub-target dialogue recommendation method based on user preference graph

Country Status (1)

Country Link
CN (1) CN114912020B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101482884A (en) * 2009-01-21 2009-07-15 华东师范大学 Cooperation recommending system based on user predilection grade distribution
CN108874998A (en) * 2018-06-14 2018-11-23 华东师范大学 A kind of dialog mode music recommended method indicated based on composite character vector
CN112836120A (en) * 2021-01-27 2021-05-25 深圳大学 Multi-mode knowledge graph-based movie recommendation method, system and terminal
CN113268609A (en) * 2021-06-22 2021-08-17 中国平安人寿保险股份有限公司 Dialog content recommendation method, device, equipment and medium based on knowledge graph
CN114065047A (en) * 2021-11-25 2022-02-18 上海理工大学 Knowledge enhancement dialogue recommendation method based on multi-level attention mechanism
CN114168721A (en) * 2021-11-18 2022-03-11 华东师范大学 Method for constructing knowledge enhancement model for multi-sub-target dialogue recommendation system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张骏 et al., "Knowledge-aware Multi-type Dialogue Recommendation" (知识感知的多类型对话推荐), Journal of Chinese Information Processing (中文信息学报), vol. 35, no. 5, pp. 110-117 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116384412A (en) * 2023-02-24 2023-07-04 华院计算技术(上海)股份有限公司 Dialogue content generation method and device, computer readable storage medium and terminal
CN116384412B (en) * 2023-02-24 2024-03-29 华院计算技术(上海)股份有限公司 Dialogue content generation method and device, computer readable storage medium and terminal

Also Published As

Publication number Publication date
CN114912020B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
Csaky Deep learning based chatbot models
CN110427617B (en) Push information generation method and device
Monroe et al. Colors in context: A pragmatic neural model for grounded language understanding
CN109844743B (en) Generating responses in automated chat
US20180329884A1 (en) Neural contextual conversation learning
CN111966800B (en) Emotion dialogue generation method and device and emotion dialogue model training method and device
CN108549658B (en) Deep learning video question-answering method and system based on attention mechanism on syntax analysis tree
CN110297887B (en) Service robot personalized dialogue system and method based on cloud platform
CN106875940B (en) Machine self-learning construction knowledge graph training method based on neural network
CN111460132B (en) Generation type conference abstract method based on graph convolution neural network
CN108228576B (en) Text translation method and device
CN111159368A (en) Reply generation method for personalized dialogue
CN109271496B (en) Natural question-answering method based on text, knowledge base and sequence-to-sequence
CN114153955B (en) Construction method of multi-skill task type dialogue system fusing chatting and common knowledge
CN110069611B (en) Topic-enhanced chat robot reply generation method and device
CN110597968A (en) Reply selection method and device
CN112364148B (en) Deep learning method-based generative chat robot
CN113392640B (en) Title determination method, device, equipment and storage medium
CN113761156A (en) Data processing method, device and medium for man-machine interaction conversation and electronic equipment
CN115630145A (en) Multi-granularity emotion-based conversation recommendation method and system
CN110046239B (en) Dialogue method based on emotion editing
CN115470991A (en) Network rumor propagation prediction method based on user short-time emotion and evolutionary game
CN114912020A (en) Multi-sub-target dialogue recommendation method based on user preference graph
CN113656542A (en) Dialect recommendation method based on information retrieval and sorting
CN114328866A (en) Strong anthropomorphic intelligent dialogue robot with smooth and accurate response

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant