CN114387007A - Acceptance enhancement-based dynamic recommendation method for graph neural network - Google Patents

Acceptance enhancement-based dynamic recommendation method for graph neural network

Info

Publication number
CN114387007A
CN114387007A (application CN202111471759.4A)
Authority
CN
China
Prior art keywords
user
article
graph
item
dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111471759.4A
Other languages
Chinese (zh)
Inventor
程明杰
徐小龙
邬晶
李少远
周松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Electronic Commerce Co Ltd
Original Assignee
Tianyi Electronic Commerce Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Electronic Commerce Co Ltd filed Critical Tianyi Electronic Commerce Co Ltd
Priority to CN202111471759.4A priority Critical patent/CN114387007A/en
Publication of CN114387007A publication Critical patent/CN114387007A/en
Pending legal-status Critical Current

Classifications

    • G06Q 30/0255 Targeted advertisements based on user history
    • G06F 16/9536 Search customisation based on social or collaborative filtering
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 Combinations of networks
    • G06N 3/047 Probabilistic or stochastic networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods
    • G06F 2216/03 Data mining

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Probability & Statistics with Applications (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a dynamic recommendation method based on an acceptance-enhanced graph neural network, comprising the following steps: constructing an original item dynamic graph; obtaining the representation vector of each item; obtaining the representation vector of each user; and model prediction and optimization. The invention dynamically models the interaction between users and items in the form of an item dynamic graph; the resulting directed graph exhibits the dynamic transfer relationships among the items users select, and by mining the item dynamic graph the users' interest-transfer trend is analyzed, effectively addressing the problem that user interest drifts over time. The method divides the items a user has interacted with into a long-term sequence and a short-term sequence, mines the user's inherent interest and short-term interest from them respectively, and considers both simultaneously, so that the recommended items match not only the user's inherent needs but also the user's current needs. The method further improves the recommendation effect through collective intelligence, improving both the accuracy and the diversity of recommendation.

Description

Acceptance enhancement-based dynamic recommendation method for graph neural network
Technical Field
The invention relates to the technical field of recommendation systems based on big data and deep learning, in particular to a dynamic recommendation method based on an acceptance enhancement graph neural network.
Background
With the popularization of information technology, people encounter vast amounts of data in daily life, such as news, video websites, and advertisements. This inevitably causes an information overload problem: the excess of information in the network prevents users from quickly and accurately finding useful information. An efficient recommendation method is one of the effective ways to solve the information overload problem; it can retrieve and recommend items that meet users' personalized requirements, and is now widely deployed on various online information platforms.
One traditional recommendation method is collaborative filtering, which is based on the assumption that users who behaved similarly in the past will also have similar preferences in their future item selections. Collaborative filtering assigns a representation vector to each user and each item, and then matches users and items through a specific matching function such as an inner product or a neural network. However, collaborative-filtering-based recommendation generally models user-item interactions statically: it can only capture a user's long-term inherent interest, cannot dynamically track the user's short-term preference, and cannot solve the problem of user interest drift. In order to quickly track users' interests and preferences, the invention performs dynamic serialized modeling of user-item interactions and designs a dynamic recommendation method.
Quickly grasping a user's short-term interest through dynamic serialized modeling places high demands on the accuracy of the user-item interaction data. However, a user's item selections carry a certain randomness and inaccuracy: a user may select an item from a candidate set fairly arbitrarily, and the selected item may not match the user's interest preferences. Inaccurate items in the interaction data introduce large noise or disturbance and degrade the quality of the final recommendation. To reduce the influence of noise disturbance, the invention introduces the concept of acceptance, calculates the acceptance of the items selected by the user, and reconstructs the dynamic sequence to filter out inaccurate items and reduce the influence of data disturbance.
The graph neural network is a recent advance in deep learning that can effectively process graph data, and it is widely applied in fields such as social networks and recommendation systems. Its core idea is a multi-layer iterative framework in which, at each layer, every node fuses the information of its neighbor nodes to update its own information; after multiple layers of iteration, a low-dimensional dense vector representation of each node is obtained. For sequence problems, the Gated Graph Neural Network (GGNN), which uses gated information updates, has outstanding expressive power: it combines the Gated Recurrent Unit (GRU) with the graph neural network, so it can effectively process graph data while also accounting for the serialized information in the graph. The invention therefore introduces the GGNN to complete the dynamic recommendation process.
The invention provides a dynamic recommendation method based on an acceptance-enhanced graph neural network. First, an original item dynamic graph is built in sequence form from the user-item interaction data to express the dynamic change process of user interest. The item dynamic graph is then reconstructed using the acceptance and analyzed with a Gated Graph Neural Network (GGNN) to obtain the representation vector of each item. Next, for each user, the long-term preference is acquired from all items the user has interacted with in the past, and a short-term interaction sequence is reconstructed using the acceptance from the recently interacted items to acquire the user's short-term interest; finally, the long-term preference and the short-term interest are fused to obtain the user's representation vector. On the one hand, the method models the items dynamically, mining the dynamic change process of user interest, and introduces the concept of acceptance to filter noisy data, improving the robustness of short-term interaction modeling; on the other hand, it considers the user's long-term preference and short-term interest simultaneously, quickly and accurately tracking the user's needs.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a dynamic recommendation method based on a graph neural network with enhanced acceptance, which constructs an article dynamic graph in a sequence form by interactive data of a user and an article and then utilizes a gated graph neural network to mine a dynamic change process of user interest. And then, the long-term preference and the short-term interest of the user are mined simultaneously, and the actual demand of the user is quickly grasped. According to the invention, the recommendation effect of the recommendation system is improved by mining the dynamic information of the interaction between the user and the article.
The invention provides the following technical scheme:
The invention provides a dynamic recommendation method based on an acceptance-enhanced graph neural network, comprising the following steps:
First, constructing the original item dynamic graph
When each user interacts on the online recommendation platform, an item sequence is formed from the time-ordered information of the user's dwell, attention, and purchase events; the item sequences of all users are then connected together to obtain the original item dynamic graph. Each user's item sequence is a subgraph of the original item dynamic graph; the graph contains all of the dynamic transfer information of the users' item selections and thus embodies the users' interest-transfer process;
The construction process of the original item dynamic graph is as follows: the users' item sequences are connected into a directed graph, namely the original item dynamic graph; as the number of users grows and the item sequences lengthen, the original item dynamic graph grows accordingly. After the original item dynamic graph is constructed, each edge in the graph is assigned a weight, equal to the number of observed edges from its head node to its tail node divided by the out-degree of the head node;
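As a minimal sketch of this construction step (assuming each user's history is simply a time-ordered list of item IDs; the function name and data layout are illustrative, not taken from the patent):

```python
from collections import Counter

def build_item_dynamic_graph(user_sequences):
    """Connect all users' item sequences into one weighted directed graph.

    Edge weight = (#times the head->tail edge was observed)
                  / (out-degree count of the head node),
    as described in the construction step above.
    """
    edge_counts = Counter()
    out_counts = Counter()
    for seq in user_sequences:
        for head, tail in zip(seq, seq[1:]):
            edge_counts[(head, tail)] += 1
            out_counts[head] += 1
    return {edge: cnt / out_counts[edge[0]] for edge, cnt in edge_counts.items()}

# Two toy users: both move a -> b, then diverge to c and d respectively.
graph = build_item_dynamic_graph([["a", "b", "c"], ["a", "b", "d"]])
```

Here the edge (a, b) gets weight 2/2 = 1.0, while (b, c) and (b, d) each get 1/2 = 0.5, reflecting the split in the users' interest transfer.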
Second, obtaining the representation vector of each item
After constructing the original item dynamic graph, we need to obtain the representation vector of each item. Quickly grasping the user's short-term interest through the item dynamic graph places high demands on the graph's accuracy; however, a user selects items from the candidate set with a certain randomness, and a selected item may not match the user's interest preferences. Inaccurate items in the item dynamic graph introduce large noise and affect the quality of representation learning. Here we introduce the concept of acceptance and combine it with the graph neural network to filter out inaccurate items;
Facing a directed graph formed from sequences, the gated graph neural network (GGNN) has outstanding expressive power, so the GGNN is selected to analyze the graph structure and obtain the representation vector of each item. The GGNN is a multi-layer iterative structure; in the invention, each iteration layer is divided into two parts: the first part reconstructs the item dynamic graph using the acceptance, discarding (or down-weighting) inaccurate items with a certain probability, and the second part performs representation learning and updating directly on the reconstructed dynamic graph through the GGNN;
The item dynamic graph is reconstructed as follows:
First, in the original item dynamic graph, the acceptance of each item relative to its preceding item (i.e., of the tail node of each edge relative to its head node) is calculated. The acceptance represents the probability that a user who has selected the preceding item goes on to select the next one:

M_{ij} = \sigma\big(\alpha\,[\omega_1 v_i \,\|\, \omega_2 v_j]\big) \qquad (1)

where M_{ij} is the acceptance of item j after the user has selected item i; \omega_1, \omega_2 \in R^{d \times d} and \alpha \in R^{1 \times 2d} are the corresponding weight parameters; \| denotes vector concatenation; and \sigma(\cdot) is the sigmoid activation function, which maps the score to a probability value between 0 and 1;
After the acceptance between items is obtained, it is multiplied by the edge's own directed weight to obtain the reconstructed weight, and a threshold \delta_1 is introduced: a reconstructed weight below \delta_1 indicates that the selection of the next item is a noise disturbance, and the edge is removed directly;
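The acceptance score and the edge pruning can be sketched as follows; the parameters \omega_1, \omega_2, \alpha are random stand-ins for what would be learned, and the threshold value is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                         # embedding size (assumed)
omega1 = rng.normal(size=(d, d))
omega2 = rng.normal(size=(d, d))
alpha = rng.normal(size=2 * d)                # maps R^{2d} to a scalar score

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def acceptance(v_i, v_j):
    """M_ij = sigma(alpha [omega1 v_i || omega2 v_j]), eq. (1)."""
    return sigmoid(alpha @ np.concatenate([omega1 @ v_i, omega2 @ v_j]))

def reconstruct_edge(weight, v_i, v_j, delta1=0.1):
    """Multiply the edge weight by the acceptance; edges whose
    reconstructed weight falls below delta1 are removed as noise."""
    w = weight * acceptance(v_i, v_j)
    return w if w >= delta1 else None         # None marks a removed edge

m = acceptance(rng.normal(size=d), rng.normal(size=d))
```

Because the sigmoid output lies strictly between 0 and 1, the reconstructed weight can only shrink an edge, never amplify it, which matches the noise-filtering intent of the step above.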
The representation learning update process is as follows:

a_i^t = A_i\,[v_1^{t-1}, \ldots, v_n^{t-1}]^\top H + b \qquad (2)
z_i^t = \sigma(W_z a_i^t + U_z v_i^{t-1}) \qquad (3)
r_i^t = \sigma(W_r a_i^t + U_r v_i^{t-1}) \qquad (4)
\tilde{v}_i^t = \tanh\big(W_o a_i^t + U_o (r_i^t \odot v_i^{t-1})\big) \qquad (5)
v_i^t = (1 - z_i^t) \odot v_i^{t-1} + z_i^t \odot \tilde{v}_i^t \qquad (6)

where v_i is a node in the graph and n is the total number of nodes; H \in R^{d \times 2d} is a weight matrix; z_i and r_i denote the update and reset gates, respectively; A \in R^{n \times 2n} is the pair of adjacency matrices of the reconstructed graph, i.e., the adjacency matrix of outgoing edges concatenated with that of incoming edges, and A_i \in R^{1 \times 2n} is the two column blocks of A corresponding to v_i; \sigma(\cdot) is the sigmoid activation function, and the updated representation is v_i \in R^d;
Through multiple iteration layers, each performing dynamic graph reconstruction followed by a representation learning update, the final representation vector of each item node is obtained;
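A single gated-graph layer consistent with the update equations above can be sketched as follows; all parameter matrices are random placeholders for what would be learned, and the shapes follow the definitions given in the text (H in R^{d×2d}, A in R^{n×2n}):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 8                        # number of nodes, embedding size (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random placeholders for the learned parameters (illustrative only).
H = rng.normal(size=(d, 2 * d)) * 0.1
b = np.zeros(2 * d)
Wz, Wr, Wo = (rng.normal(size=(d, 2 * d)) * 0.1 for _ in range(3))
Uz, Ur, Uo = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))

def ggnn_layer(V, A):
    """One gated graph network update, eqs. (2)-(6).

    V : (n, d) node embeddings; A : (n, 2n) concatenated outgoing/incoming
    adjacency of the reconstructed item dynamic graph.
    """
    V2 = np.vstack([V, V])                           # out-block and in-block
    V_new = np.empty_like(V)
    for i in range(n):
        a = A[i] @ V2 @ H + b                        # neighbour message, eq. (2)
        z = sigmoid(Wz @ a + Uz @ V[i])              # update gate, eq. (3)
        r = sigmoid(Wr @ a + Ur @ V[i])              # reset gate, eq. (4)
        v_tilde = np.tanh(Wo @ a + Uo @ (r * V[i]))  # candidate state, eq. (5)
        V_new[i] = (1 - z) * V[i] + z * v_tilde      # gated fusion, eq. (6)
    return V_new

V = ggnn_layer(rng.normal(size=(n, d)), rng.random(size=(n, 2 * n)))
```

Stacking the node matrix twice lets the single row A_i address both the outgoing and incoming adjacency blocks in one product, matching the A in R^{n×2n} layout described above.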
Third, obtaining the representation vector of each user
After obtaining the representation vectors of the items, we need to obtain the representation vector of each user from the items the user has interacted with. For each user, these items can be divided into two categories: the first contains all items the user has ever interacted with and represents the user's long-term preference, from which the user's inherent interest can be mined; the second contains the items the user interacted with during the current time period and represents the user's short-term preference, from which the user's short-term interest can be mined;
for user ujArticle sequence [ v ]u,1,vu,2,...,vu,k]Firstly, the user's natural interests are mined by using the long-term interactive items of the user to obtain the overall embedding of the user, and the process is as follows:
u_l = \sum_{i=1}^{k} \alpha_i v_{u,i} \qquad (7)
\alpha_i = q^\top \sigma(W_1 v_{u,i} + c) \qquad (8)

where u_l is the user's global embedding, which aggregates the representation vectors of all the items and assigns different weights to different items through the attention mechanism; \alpha_i is the attention coefficient, and q, W_1, and c are the corresponding weight parameters;
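The attention aggregation of eqs. (7)-(8) can be sketched as below; q, W_1, and c are random stand-ins for the learned attention parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 8, 5                        # embedding size, sequence length (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

q = rng.normal(size=d)
W1 = rng.normal(size=(d, d))
c = rng.normal(size=d)

def long_term_embedding(items):
    """Attention-weighted sum of all items the user interacted with:
    alpha_i = q^T sigma(W1 v_i + c)  (eq. 8), u_l = sum_i alpha_i v_i  (eq. 7)."""
    alphas = np.array([q @ sigmoid(W1 @ v + c) for v in items])
    return alphas @ items

u_l = long_term_embedding(rng.normal(size=(k, d)))
```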
for the short-term preference of the user, the short-term preference of the user is more easily influenced by noise disturbance when being acquired, so that here, the short-term sequence reconstruction is carried out by utilizing the acceptance, and the acceptance MijThe calculation method is the same as above; in the reconstruction process, the acceptance degree of the next node is calculated from the beginning of the sequence, and a threshold value delta is introduced2Below which is an indication that the selection of the next item is a noise disturbance, directly removed; then calculating the acceptance of the node after the disturbance relative to the node before the disturbance, and continuously repeating the above process until the last node; finally obtaining reconstructed short-term sequence [ v ]u,1,vu,2,...,vu,m];
The user's local embedding is then obtained from the reconstructed sequence:

u_s = \sum_{i=1}^{m} e^{-(m-i)a} v'_{u,i} \qquad (9)

where u_s is the user's local embedding, which aggregates the items of the user's short-term interactions, and e^{-(m-i)a} is the decay coefficient of each item: because a user's short-term interest is more likely to be associated with recently interacted items, the decay coefficient makes the influence of an item smaller the longer ago the interaction occurred;
After the user's global embedding and local embedding are obtained, they are fused to obtain the user's final representation vector:

u_j = W_2 (u_{j,l} \,\|\, u_{j,s}) \qquad (10)

where u_j \in R^d is the user's final representation vector, W_2 \in R^{d \times 2d} is a weight matrix, and \| is the concatenation symbol, denoting the splicing of the two vectors;
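Eqs. (9)-(10) can be sketched as below, with an assumed decay rate a and a random placeholder for the fusion matrix W_2:

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 8, 4                            # embedding size, short-sequence length
a = 0.5                                # decay rate in e^{-(m-i)a} (assumed)
W2 = rng.normal(size=(d, 2 * d)) * 0.1

def local_embedding(short_items):
    """Decay-weighted sum of the reconstructed short-term sequence, eq. (9):
    older interactions (small i) receive exponentially smaller weight."""
    m = len(short_items)
    decays = np.exp(-(m - np.arange(1, m + 1)) * a)   # weight 1.0 for newest
    return decays @ short_items

def fuse(u_l, u_s):
    """Final user vector, eq. (10): project the concatenation back to R^d."""
    return W2 @ np.concatenate([u_l, u_s])

u_s = local_embedding(rng.normal(size=(m, d)))
u = fuse(rng.normal(size=d), u_s)
```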
fourth, model prediction and optimization
For the user set U and the item set V, the matching degree between any user u_i \in U and any item v_j \in V is:

\hat{z}_j = u_i^\top v_j \qquad (11)

After the matching degrees of all items are obtained, the vector \hat{y} is obtained by softmax normalization:

\hat{y} = \mathrm{softmax}(\hat{z}) \qquad (12)

where \hat{y} indicates the likelihood of each item being selected next by the user;
The loss function of the model is the cross-entropy loss:

L = -\sum_{i=1}^{|V|} y_i \log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i) \qquad (13)

where y_i is the one-hot encoding of the item actually selected by the user;
by minimizing the above loss function, training of the model is completed, and the item with the highest selection probability is recommended for each user.
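The prediction and loss computation of eqs. (11)-(13) can be sketched as follows; the small epsilon guard against log(0) is an implementation assumption, not part of the patent text:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())            # shift for numerical stability
    return e / e.sum()

def predict_and_loss(u, item_matrix, y_true):
    """Score every candidate item (eq. 11), normalise with softmax (eq. 12),
    and compute the cross-entropy loss against the one-hot target (eq. 13)."""
    z_hat = item_matrix @ u                          # matching degrees
    y_hat = softmax(z_hat)
    eps = 1e-12                                      # guard against log(0)
    loss = -np.sum(y_true * np.log(y_hat + eps)
                   + (1 - y_true) * np.log(1 - y_hat + eps))
    return y_hat, loss

rng = np.random.default_rng(4)
y_hat, loss = predict_and_loss(rng.normal(size=8),        # user vector
                               rng.normal(size=(10, 8)),  # 10 candidate items
                               np.eye(10)[3])             # item 3 was chosen
```

At serving time the item recommended to the user would simply be `y_hat.argmax()`, the item with the highest selection probability.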
Compared with the prior art, the invention has the following beneficial effects:
1) The method dynamically models the interaction between users and items in the form of an item dynamic graph. The constructed directed graph exhibits the dynamic transfer relationships among the items users select, and by mining the item dynamic graph the users' interest-transfer trend is analyzed, effectively addressing the problem that user interest drifts over time;
2) The method divides the items a user has interacted with into a long-term sequence and a short-term sequence, mines the user's inherent interest and short-term interest from them respectively, and considers both simultaneously, so that the recommended items match not only the user's inherent needs but also the user's current needs;
3) The method splices together the interaction sequences of all users in the system, improving the recommendation effect through collective intelligence and increasing both recommendation accuracy and diversity.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of a process for building a dynamic graph of an item;
fig. 2 is a schematic diagram of the division of long-term and short-term preference of an article.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation. Wherein like reference numerals refer to like parts throughout.
Example 1
Referring to fig. 1-2, the present invention provides a dynamic recommendation method based on an acceptance-enhanced graph neural network. The embodiment carries out the four steps set out in the disclosure above: constructing the original item dynamic graph, obtaining the representation vector of each item, obtaining the representation vector of each user, and model prediction and optimization.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (1)

1. A dynamic recommendation method based on an acceptance-enhanced graph neural network, characterized by comprising the following steps:
Firstly, constructing the original item dynamic graph
As each user interacts on the online recommendation platform, an item sequence is formed from the chronological information of the user's dwelling, attention, and purchases; the item sequences formed by all users are then connected together to obtain the required original item dynamic graph; the item sequence generated by each user is a subgraph of the original item dynamic graph, and the original item dynamic graph contains all of the users' dynamic item-selection transitions, thereby reflecting the process by which user interest shifts;
The construction process of the original item dynamic graph is as follows: the users' item sequences are connected into a directed graph, namely the original item dynamic graph; as the number of users increases and the item sequences lengthen, the original item dynamic graph grows; after the original item dynamic graph is constructed, each edge in the graph is assigned a weight, equal to the number of occurrences of the edge from its head node to its tail node divided by the out-degree of the head node;
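The construction and edge-weighting rule above can be sketched as follows (function and variable names are illustrative):

```python
from collections import Counter, defaultdict

def build_item_graph(user_sequences):
    """Connect all users' item sequences into one directed graph.

    Each consecutive pair (head, tail) in a sequence contributes a
    directed edge; the edge weight is the count of that edge divided
    by the out-degree (total outgoing count) of its head node.
    """
    edge_counts = Counter()
    out_degree = defaultdict(int)
    for seq in user_sequences:
        for head, tail in zip(seq, seq[1:]):
            edge_counts[(head, tail)] += 1
            out_degree[head] += 1
    return {edge: c / out_degree[edge[0]] for edge, c in edge_counts.items()}
```

For the sequences `[["a", "b"], ["a", "b"], ["a", "c"]]`, the edge (a, b) gets weight 2/3 and (a, c) gets 1/3.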
Second, obtaining the representation vectors of the items
After constructing the user's original item dynamic graph, we need to obtain the representation vectors of the items; to quickly capture the user's short-term interest from the item dynamic graph, high accuracy is required of the graph; however, a user's selection among the candidate items involves randomness, and a selected item may not match the user's interest preferences; inaccurate items in the item dynamic graph introduce considerable noise and degrade the quality of representation learning; here, we introduce the concept of acceptance and combine it with a graph neural network to filter out inaccurate items;
Facing the directed graph formed by the sequences, the gated graph neural network (GGNN) has outstanding expressive power, so GGNN is chosen to parse the graph structure and obtain the item representation vectors; GGNN is a multi-layer iterative structure, and in the invention each layer's iteration is divided into two parts: the first part reconstructs the item dynamic graph using the acceptance, discarding (or down-weighting) inaccurate items with a certain probability, and the second part performs representation learning and updating directly on the reconstructed dynamic graph through the GGNN;
The item dynamic graph is reconstructed as follows:
Firstly, in the original item dynamic graph, the acceptance of each item relative to the preceding item (i.e., of each edge's tail node relative to its head node) is computed; the acceptance represents the probability that, after selecting the previous item, the user continues by selecting the next item, and its calculation formula is:
M_ij = σ(α(ω_1 v_i || ω_2 v_j))    (1)
where M_ij is the acceptance of item j after the user selects item i, ω_1, ω_2 ∈ R^{a×d} and α ∈ R^{2×d} are the corresponding weight matrices, and σ(·) is the sigmoid activation function, which maps the data to probability values between 0 and 1;
After the acceptance between items is obtained, it is multiplied by the edge's own directed weight to obtain the reconstructed weight, and a threshold δ_1 is introduced: a reconstructed weight below it indicates that the selection of the next item was a noise disturbance, and the edge is removed directly;
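A sketch of the acceptance computation and edge pruning; the concatenation form of M_ij and the parameter shapes are interpretations of the formula, not a definitive reading:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def acceptance(v_i, v_j, w1, w2, alpha):
    """One reading of M_ij: project both item vectors (w1, w2: a x d),
    concatenate, and squash with a learned vector alpha and a sigmoid."""
    return sigmoid(alpha @ np.concatenate([w1 @ v_i, w2 @ v_j]))

def prune_edges(edge_weights, embeddings, w1, w2, alpha, delta1):
    """Multiply each edge weight by the acceptance of its tail node
    w.r.t. its head node; drop edges whose reconstructed weight falls
    below the threshold delta1."""
    kept = {}
    for (i, j), w in edge_weights.items():
        rw = w * acceptance(embeddings[i], embeddings[j], w1, w2, alpha)
        if rw >= delta1:
            kept[(i, j)] = rw
    return kept
```

With zero parameters the acceptance is sigmoid(0) = 0.5 for every edge, so pruning then depends only on the original edge weights.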
The representation-learning update process is as follows:
a_i^t = A_i [v_1^{t-1}, ..., v_n^{t-1}]^T H + b    (2)
z_i^t = σ(W_z a_i^t + U_z v_i^{t-1})    (3)
r_i^t = σ(W_r a_i^t + U_r v_i^{t-1})    (4)
ṽ_i^t = tanh(W_o a_i^t + U_o (r_i^t ⊙ v_i^{t-1}))    (5)
v_i^t = (1 - z_i^t) ⊙ v_i^{t-1} + z_i^t ⊙ ṽ_i^t    (6)
where v_i is a node in the graph and n is the total number of nodes; H ∈ R^{d×2d} is a weight matrix; z_i and r_i denote the reset and update gates, respectively; A ∈ R^{n×2n} is formed by splicing the two adjacency matrices of the reconstructed graph, i.e., the out-edge adjacency matrix concatenated with the in-edge adjacency matrix, and A_i ∈ R^{1×2n} consists of the two columns of blocks of A corresponding to v_i; σ(·) is the sigmoid activation function, and the updated v_i ∈ R^d is obtained;
Through multi-layer iteration of dynamic-graph reconstruction followed by representation-learning updates, the final representation vectors of the item nodes are obtained;
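One propagation layer of the gated update above can be sketched as follows. It follows the SR-GNN-style shapes stated in the claim (H ∈ R^{d×2d}, A ∈ R^{n×2n}); the remaining parameter shapes are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(V, A, H, b, Wz, Uz, Wr, Ur, Wo, Uo):
    """One gated propagation layer over the reconstructed item graph.

    V : (n, d)  current node representations
    A : (n, 2n) out-edge adjacency spliced with in-edge adjacency
    H : (d, 2d), b : (2d,), W* : (d, 2d), U* : (d, d)
    """
    # a_i = A_i [v_1, ..., v_n]^T H + b, computed for all nodes at once;
    # V is stacked twice so the out- and in-halves of A both act on it
    a = A @ np.vstack([V, V]) @ H + b             # (n, 2d)
    z = sigmoid(a @ Wz.T + V @ Uz.T)              # gate, (n, d)
    r = sigmoid(a @ Wr.T + V @ Ur.T)              # gate, (n, d)
    v_tilde = np.tanh(a @ Wo.T + (r * V) @ Uo.T)  # candidate states
    return (1.0 - z) * V + z * v_tilde            # gated update of v_i
```

Stacking several such layers, with graph reconstruction between them, yields the multi-layer iteration described above.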
Thirdly, obtaining the representation vector of the user
After obtaining the item representation vectors, we need to obtain the user representation vectors from the items each user has interacted with; for each user, the interacted items can be divided into two categories: one category contains all items the user has interacted with, representing the user's long-term preferences, from which the user's inherent interests can be mined; the other category contains the items the user has interacted with in the current time period, representing the user's short-term preferences, from which the user's short-term interests can be mined;
For user u_j with item sequence [v_{u,1}, v_{u,2}, ..., v_{u,k}], the user's inherent interests are first mined from the user's long-term interacted items to obtain the user's global embedding, as follows:
u_l = Σ_{i=1}^{k} α_i v_{u,i}    (7)
α_i = q^T σ(W_1 v_{u,i} + c)    (8)
where u_l is the global embedding of the user; it aggregates the representation vectors of all the items and assigns different weights to different items through an attention mechanism; α_i is the attention coefficient, and q, W_1, and c are the corresponding weight parameters;
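The attention aggregation of equations (7)-(8) can be sketched as follows (parameter shapes assumed, with the attention dimension taken equal to d):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def global_embedding(item_vectors, q, W1, c):
    """u_l = sum_i alpha_i v_{u,i} with alpha_i = q^T sigma(W1 v_{u,i} + c):
    every interacted item is weighted by a learned attention coefficient."""
    V = np.asarray(item_vectors, dtype=float)  # (k, d) long-term items
    alphas = sigmoid(V @ W1.T + c) @ q         # (k,) attention coefficients
    return alphas @ V                          # weighted sum, (d,)
```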
For the user's short-term preference, its acquisition is more easily affected by noise disturbances, so here the short-term sequence is reconstructed using the acceptance; the acceptance M_ij is computed in the same way as above; in the reconstruction process, starting from the beginning of the sequence, the acceptance of each next node is computed, and a threshold δ_2 is introduced: a value below it indicates that the selection of the next item was a noise disturbance, and that item is removed directly; the acceptance of the node after the disturbance is then computed relative to the node before the disturbance, and this process is repeated until the last node; the reconstructed short-term sequence [v'_{u,1}, v'_{u,2}, ..., v'_{u,m}] is finally obtained;
The local embedding of the user is then obtained from the reconstructed sequence:

u_s = Σ_{i=1}^{m} e^{-(m-i)a} v'_{u,i}    (9)
where u_s is the local embedding of the user, which aggregates the items from the user's short-term interactions; e^{-(m-i)a} is an attenuation factor applied to each item, so that items interacted with longer ago have a smaller influence, because the user's short-term interest is more likely to be associated with recently interacted items;
After the global embedding and the local embedding of the user are obtained, the two are fused to obtain the final representation vector of the user:
u_j = W_2(u_l || u_s)    (10)
where u_j ∈ R^d is the final representation vector of the user, W_2 ∈ R^{d×2d} is a weight matrix, and || is the concatenation symbol, denoting the splicing of two vectors;
Fourth, model prediction and optimization
For the user set U and the item set V, the matching degree of any user u_i ∈ U with any item v_j ∈ V is:
z_{ij} = u_i^T v_j    (11)
After the matching degrees with all items are obtained, the vector ŷ is obtained by softmax normalization:

ŷ = softmax(z_i)    (12)

where z_i is the vector of user u_i's matching degrees with all items, and ŷ ∈ R^{|V|} indicates the likelihood of each item being selected next by the user;
The loss function of our model is the cross-entropy loss:

L = -Σ_i y_i^T log(ŷ_i)    (13)

where y_i represents the one-hot encoding of the item actually selected by user u_i;
by minimizing the above loss function, training of the model is completed, and the item with the highest selection probability is recommended for each user.
CN202111471759.4A 2021-12-06 2021-12-06 Acceptance enhancement-based dynamic recommendation method for graph neural network Pending CN114387007A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111471759.4A CN114387007A (en) 2021-12-06 2021-12-06 Acceptance enhancement-based dynamic recommendation method for graph neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111471759.4A CN114387007A (en) 2021-12-06 2021-12-06 Acceptance enhancement-based dynamic recommendation method for graph neural network

Publications (1)

Publication Number Publication Date
CN114387007A true CN114387007A (en) 2022-04-22

Family

ID=81196534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111471759.4A Pending CN114387007A (en) 2021-12-06 2021-12-06 Acceptance enhancement-based dynamic recommendation method for graph neural network

Country Status (1)

Country Link
CN (1) CN114387007A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116028727A (en) * 2023-03-30 2023-04-28 南京邮电大学 Video recommendation method based on image data processing
CN116028727B (en) * 2023-03-30 2023-08-18 南京邮电大学 Video recommendation method based on image data processing

Similar Documents

Publication Publication Date Title
CN112214685B (en) Knowledge graph-based personalized recommendation method
CN111611472B (en) Binding recommendation method and system based on graph convolution neural network
CN111310063B (en) Neural network-based article recommendation method for memory perception gated factorization machine
CN113254803A (en) Social recommendation method based on multi-feature heterogeneous graph neural network
CN112115377B (en) Graph neural network link prediction recommendation method based on social relationship
CN112364976B (en) User preference prediction method based on session recommendation system
CN112256980A (en) Dynamic graph attention network-based multi-relation collaborative filtering recommendation
CN112950324B (en) Knowledge graph assisted pairwise sorting personalized merchant recommendation method and system
CN111127146A (en) Information recommendation method and system based on convolutional neural network and noise reduction self-encoder
CN113918832B (en) Graph convolution collaborative filtering recommendation system based on social relationship
CN109783539A (en) Usage mining and its model building method, device and computer equipment
CN112651940B (en) Collaborative visual saliency detection method based on dual-encoder generation type countermeasure network
CN113592609B (en) Personalized clothing collocation recommendation method and system utilizing time factors
CN112016002A (en) Mixed recommendation method integrating comment text level attention and time factors
CN110738314B (en) Click rate prediction method and device based on deep migration network
CN113158071A (en) Knowledge social contact recommendation method, system and equipment based on graph neural network
CN113868537B (en) Recommendation method based on multi-behavior session graph fusion
CN113918834A (en) Graph convolution collaborative filtering recommendation method fusing social relations
CN115687760A (en) User learning interest label prediction method based on graph neural network
CN114387007A (en) Acceptance enhancement-based dynamic recommendation method for graph neural network
CN113610610B (en) Session recommendation method and system based on graph neural network and comment similarity
Liao et al. Time-sync comments denoising via graph convolutional and contextual encoding
CN111062738A (en) Big data and artificial intelligence based audio platform popularization advertisement subject generation method
CN110347916A (en) Cross-scenario item recommendation method, device, electronic equipment and storage medium
CN116304289A (en) Information chain recommendation method and device for supply chain based on graphic neural network

Legal Events

Date Code Title Description
PB01 Publication