CN113918833A - Product recommendation method realized through graph convolution collaborative filtering of social network relationship - Google Patents


Info

Publication number
CN113918833A
Authority
CN
China
Prior art keywords
embedding
user
layer
social
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111235556.5A
Other languages
Chinese (zh)
Other versions
CN113918833B (en)
Inventor
刘小洋
赵正阳
马敏
王浩田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Technology
Original Assignee
Chongqing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Technology filed Critical Chongqing University of Technology
Priority to CN202111235556.5A priority Critical patent/CN113918833B/en
Publication of CN113918833A publication Critical patent/CN113918833A/en
Application granted granted Critical
Publication of CN113918833B publication Critical patent/CN113918833B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G06F16/9536 Search customisation based on social or collaborative filtering
    • G06F40/30 Semantic analysis
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06Q30/0631 Item recommendations
    • G06Q50/01 Social networking


Abstract

The invention provides a product recommendation method realized through graph convolution collaborative filtering of social network relationships, comprising the following steps: S1, randomly initialize the node embedding matrix and query it to obtain the initial embeddings of user u and item i respectively; S2, after obtaining the initial node embeddings, aggregate and update them with a semantic aggregation layer: first-order semantic aggregation is introduced in the semantic aggregation layer and then extended to every layer to realize high-order semantic aggregation; S3, after respectively obtaining the semantic aggregation embedding vectors of the social embedding propagation layer and of the interaction embedding propagation layer, fuse the user embedding vectors of the two layers; then combine, by weighted summation, the embeddings of all orders produced by the embedding propagation layers to obtain the final user and item embeddings; S4, recommend products to the user according to the item embeddings. The method can extract users' social information, is highly extensible, mines rich semantic information, and achieves a good recommendation effect.

Description

Product recommendation method realized through graph convolution collaborative filtering of social network relationship
Technical Field
The invention relates to a recommendation method, in particular to a product recommendation method realized through graph convolution collaborative filtering of social network relationships.
Background
In the age of information explosion, recommendation systems have become one of the most effective ways to help users find what interests them in massive data. The core of a recommendation system is to estimate the likelihood that a user will accept an item based on historical interactions such as purchases and clicks. Generally, recommendation systems follow two steps: learn vectorized representations (embeddings) of users and items, then model the interaction between them (e.g., whether the user purchased the item). Collaborative Filtering (CF) learns node embeddings from historical interactions on the user-item bipartite graph and recommends items by predicting user preferences from the learned parameters.
In general, a learnable CF model has two key components: 1) embedding, which converts users and items into vectorized representations; 2) interaction modeling, which reconstructs historical interactions based on the embeddings. For example, Matrix Factorization (MF) embeds user and item IDs directly as vectors and models user-item interaction with the inner product; collaborative deep learning extends the MF embedding function by integrating deep representations of rich item side information; neural collaborative filtering replaces the MF inner-product interaction function with a nonlinear neural network; translation-based CF models use a Euclidean distance metric as the interaction function; and so on.
While these methods are effective, they construct the embedding function from descriptive features only (such as IDs and attributes) and do not exploit user-item interaction information, which is used only to define the training objective. Their embedding functions therefore lack explicit encoding of the key collaborative signal hidden in the user-item interaction data, and the resulting embeddings are often not satisfactory for CF.
With the recent development of graph neural networks, the proposal of LightGCN shifted CF models from conventional methods to graph convolutional networks. LightGCN is a lightweight GCN construction: it abandons the feature transformation and nonlinear activation of traditional GCNs, and experiments verify that these two operations are ineffective for collaborative filtering. LightGCN learns user and item embeddings by linear propagation over the user-item interaction matrix and takes a weighted sum of the embeddings learned at all layers as the final embedding. Although LightGCN solves the problems of earlier methods, it is limited to historical user-item interaction data and cannot model users' social interactions to extract their social feature information; its extensibility is therefore low, the semantic information it mines is single-sourced, and the recommendation effect suffers.
Disclosure of Invention
The invention aims to at least solve the technical problems in the prior art, and in particular creatively provides a product recommendation method realized through graph convolution collaborative filtering of social network relationships.
In order to achieve the above object, the present invention provides a product recommendation method implemented by graph convolution collaborative filtering of social network relationships, comprising the following steps:
S1, randomly initialize the node embedding matrix and query it to obtain the initial embeddings of user u and item i respectively;
S2, after obtaining the initial node embeddings, use a semantic aggregation layer to aggregate and update them: first introduce first-order semantic aggregation in the semantic aggregation layer, then extend it to each layer to realize high-order semantic aggregation;
S3, after respectively obtaining the semantic aggregation embedding vectors of the social embedding propagation layer and of the interaction embedding propagation layer, fuse the user embedding vectors of the two layers; then combine, by weighted summation, the embeddings of all orders obtained from the embedding propagation layers to obtain the final user and item embeddings;
the fusion adopts either an aggregation mode that first takes the Hadamard product and then performs row regularization, or one that first concatenates and then reduces the dimensionality back to the original dimension through a fully connected layer;
S4, recommend products to users according to the item embeddings;
S5, optimize the product recommendation of step S4;
S6, send the optimized product recommendation to the corresponding user's mobile phone.
Further, the first-order semantic aggregation in S2 includes:
the interaction embedding propagation layer refines the embedding of the user by aggregating the embedding of the interaction articles, and refines the embedding of the articles by aggregating the embedding of the interaction users; the first-order semantic aggregation is respectively expressed by the formula (1) and the formula (2):
Figure BDA0003317555550000021
Figure BDA0003317555550000022
wherein e isuRepresenting the embedding of user u obtained by semantic aggregation of interactive embedding propagation layers;
AGG (. cndot.) is the aggregation function;
Hua first-order neighbor set representing the user u, namely an item set interacted with the user u;
eirepresents the embedding of item i;
Hirepresenting a first-order neighbor set of an item i, namely a user set interacted with the item i;
the social embedding propagation layer refines the embedding of the user by aggregating friends, and records the embedding of the user performing semantic aggregation in the social embedding propagation layer as c, so that the first-order semantic aggregation process of the social embedding propagation layer is shown as formula (3):
Figure BDA0003317555550000023
wherein, cuRepresenting the embedding of user u by semantic aggregation of the social embedding propagation layer;
cvrepresenting the embedding of user v by semantic aggregation of social embedding propagation layers;
a user v is a first-order friend of a user u, and v is not equal to u;
AGG (. cndot.) is the aggregation function;
Furepresenting a set of friends of user u.
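The patent leaves the aggregation function AGG(·) abstract; as a minimal sketch, assuming a simple mean aggregator over first-order neighbors, formula (1) can be illustrated as follows (all embedding values are toy data):

```python
import numpy as np

def aggregate_mean(neighbor_embeddings):
    """One simple choice of AGG(.): average the neighbors' embeddings."""
    return np.mean(neighbor_embeddings, axis=0)

# toy embeddings of the two items user u has interacted with (H_u = {i1, i2})
e_i1 = np.array([1.0, 0.0])
e_i2 = np.array([0.0, 1.0])

# formula (1): refine user u's embedding from its interacted items
e_u = aggregate_mean(np.stack([e_i1, e_i2]))
print(e_u)  # [0.5 0.5]
```

Formula (3) has the same shape, only aggregating over the friend set F_u instead of the item set H_u.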
Further, the high-order semantic aggregation in S2 is implemented by stacking multiple first-order semantic aggregation layers; it comprises semantic aggregation of the social embedding propagation layer and semantic aggregation of the interaction embedding propagation layer.
Semantic aggregation of the social embedding propagation layer captures higher-order friend signals by stacking multiple social embedding propagation layers, thereby enhancing the user embedding. The process is expressed mathematically by formula (4) and formula (5):

$$c_u^{(k+1)} = \sum_{v \in F_u} \frac{1}{\sqrt{|F_u|}\sqrt{|F_v|}} \, c_v^{(k)} \quad (4)$$

$$c_v^{(k+1)} = \sum_{u \in F_v} \frac{1}{\sqrt{|F_v|}\sqrt{|F_u|}} \, c_u^{(k)} \quad (5)$$

where $c_u^{(k+1)}$ and $c_u^{(k)}$ denote the embedding vectors of user u at layers k+1 and k obtained by semantic aggregation of the social embedding propagation layer; $c_v^{(k+1)}$ and $c_v^{(k)}$ denote those of user v; $F_u$ and $F_v$ denote the friend sets of users u and v; and $|\cdot|$ denotes the number of elements in a set.
Semantic aggregation of the interaction embedding propagation layer enhances the user and item embeddings by stacking multiple interaction embedding propagation layers to capture the collaborative signal of high-order interaction connectivity. The process is expressed mathematically by formula (6) and formula (7):

$$e_u^{(k+1)} = \sum_{i \in H_u} \frac{1}{\sqrt{|H_u|}\sqrt{|H_i|}} \, e_i^{(k)} \quad (6)$$

$$e_i^{(k+1)} = \sum_{u \in H_i} \frac{1}{\sqrt{|H_i|}\sqrt{|H_u|}} \, e_u^{(k)} \quad (7)$$

where $e_u^{(k+1)}$ and $e_u^{(k)}$ denote the embeddings of user u at layers k+1 and k; $e_i^{(k+1)}$ and $e_i^{(k)}$ denote those of item i; $H_u$ and $H_i$ denote the first-order neighbor sets of user u and item i; and $|\cdot|$ denotes the number of elements in a set.
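Formulas (4) through (7) are symmetrically normalized neighborhood sums in the style of LightGCN. A minimal dense-matrix sketch (variable names are illustrative, not from the patent; degrees are assumed nonzero in the toy data):

```python
import numpy as np

def propagate(adj, emb_src):
    """One propagation layer, formulas (4)-(7):
    out[u] = sum over neighbors v of emb_src[v] / (sqrt(|N_u|) * sqrt(|N_v|)),
    where adj is the (possibly rectangular) 0/1 neighbor matrix."""
    deg_dst = adj.sum(axis=1)   # |F_u| or |H_u| for each destination node
    deg_src = adj.sum(axis=0)   # |F_v| or |H_i| for each source node
    norm = adj / (np.sqrt(deg_dst)[:, None] * np.sqrt(deg_src)[None, :])
    return norm @ emb_src

# toy interaction matrix R: 2 users x 2 items
R = np.array([[1.0, 1.0],
              [1.0, 0.0]])
E_item = np.eye(2)                       # toy item embeddings e_i^{(k)}
E_user = np.eye(2)                       # toy user embeddings e_u^{(k)}

E_user_next = propagate(R, E_item)       # formula (6)
E_item_next = propagate(R.T, E_user)     # formula (7)
```

The social propagation of formulas (4) and (5) is the same computation with the (square) friendship matrix in place of R.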
Further, the fusion process in S3 comprises:

$$\hat e_u^{(k)} = g\big(e_u^{(k)}, c_u^{(k)}\big) \quad (8)$$

where $\hat e_u^{(k)}$ denotes the fusion of the layer-k user embedding vectors of the social embedding propagation layer and the interaction embedding propagation layer; $e_u^{(k)}$ denotes the layer-k embedding of user u obtained by semantic aggregation of the interaction embedding propagation layer; $c_u^{(k)}$ denotes the layer-k embedding vector of user u obtained by semantic aggregation of the social embedding propagation layer; and $g(\cdot)$ is the aggregation mode.
Further, the user embedding and item embedding in S3 comprise:

$$e_u^{*} = \sum_{k=0}^{K} \alpha_k \, \hat e_u^{(k)}, \qquad e_i^{*} = \sum_{k=0}^{K} \beta_k \, e_i^{(k)} \quad (9)$$

where $e_u^{*}$ is the embedding of user u fusing the social embedding propagation layer and the interaction embedding propagation layer; K is the total number of layers; $\alpha_k$ is the weight with which layer k's user embedding is aggregated; $\hat e_u^{(k)}$ denotes the fusion of the layer-k user embedding vectors of the two propagation layers; $e_i^{*}$ is the embedding of item i; $\beta_k$ is the weight with which layer k's item embedding is aggregated; and $e_i^{(k)}$ denotes the layer-k embedding of item i.
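Formula (9) is a plain weighted sum over layers. The patent does not fix the weights; uniform weights $\alpha_k = 1/(K+1)$, as used by LightGCN, are one common choice assumed in this sketch (embedding values are toy data):

```python
import numpy as np

K = 3                                    # number of propagation layers
alpha = np.full(K + 1, 1.0 / (K + 1))    # uniform layer weights (one choice)

# fused user embeddings of user u at layers 0..K (toy values)
layer_embs = [np.array([1.0, 0.0]), np.array([0.5, 0.5]),
              np.array([0.25, 0.75]), np.array([0.0, 1.0])]

# formula (9): e_u* = sum_k alpha_k * e_hat_u^{(k)}
e_u_final = sum(a * e for a, e in zip(alpha, layer_embs))
print(e_u_final)  # [0.4375 0.5625]
```

The item embedding $e_i^{*}$ is combined the same way with weights $\beta_k$.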
Further, the aggregation mode of first taking the Hadamard product and then performing row regularization comprises:

$$\hat e_u^{(k+1)} = \mathrm{norm}\big(e_u^{(k+1)} \odot c_u^{(k+1)}\big) \quad (10)$$

where $\mathrm{norm}(\cdot)$ denotes row regularization and $\odot$ denotes the Hadamard product.
The aggregation mode of first concatenating and then reducing the dimensionality back to the original dimension through a fully connected layer comprises:

$$\hat e_u^{(k+1)} = f\big(w \cdot \mathrm{concat}(e_u^{(k+1)}, c_u^{(k+1)}) + b\big) \quad (11)$$

where $f(\cdot)$ is a fully connected layer; w is the weight vector applied to the concatenated vector; $\mathrm{concat}(e_u^{(k+1)}, c_u^{(k+1)})$ denotes concatenating $e_u^{(k+1)}$ and $c_u^{(k+1)}$; $e_u^{(k+1)}$ denotes the layer-(k+1) embedding of user u obtained by semantic aggregation of the interaction embedding propagation layer; $c_u^{(k+1)}$ denotes the layer-(k+1) embedding vector of user u obtained by semantic aggregation of the social embedding propagation layer; and b is a bias.
An aggregation mode of first adding element by element and then regularizing may also be adopted:

$$\hat e_u^{(k+1)} = \mathrm{norm}\big(e_u^{(k+1)} + c_u^{(k+1)}\big)$$

where $\mathrm{norm}(\cdot)$ denotes row regularization and the addition is element-wise.
An aggregation mode of first adding element by element, then applying an activation function, and finally regularizing may also be adopted:

$$\hat e_u^{(k+1)} = \mathrm{norm}\big(h(e_u^{(k+1)} + c_u^{(k+1)})\big)$$

where $h(\cdot)$ is the activation function.
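The four aggregation modes above can be sketched as follows. The patent does not name the activation function, so tanh stands in for h(·), and the fully connected layer's weight shape is an assumption:

```python
import numpy as np

def row_norm(x):
    """norm(.): L2 row regularization."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def fuse_hadamard(e, c):
    """Formula (10): Hadamard product, then row regularization."""
    return row_norm(e * c)

def fuse_concat(e, c, W, b):
    """Formula (11): concatenate, then a fully connected layer back to dim d."""
    return np.concatenate([e, c], axis=-1) @ W + b

def fuse_sum(e, c):
    """Element-wise sum followed by row regularization."""
    return row_norm(e + c)

def fuse_sum_act(e, c):
    """Element-wise sum, activation (tanh assumed), then row regularization."""
    return row_norm(np.tanh(e + c))

e = np.array([[3.0, 0.0]])
c = np.array([[1.0, 0.0]])
print(fuse_sum(e, c))  # [[1. 0.]]
```

All four map the pair $(e_u^{(k+1)}, c_u^{(k+1)})$ back to a single d-dimensional user vector, so they are interchangeable choices for $g(\cdot)$ in formula (8).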
Further, S4 comprises:
using the inner product of the user and the recommended item as the prediction score, as shown in formula (12):

$$\hat y_{ui} = (e_u^{*})^{\mathsf T} e_i \quad (12)$$

where $\hat y_{ui}$ denotes the prediction score, $e_u^{*}$ denotes the final embedding of user u, $(\cdot)^{\mathsf T}$ denotes the transpose, and $e_i$ denotes the embedding of item i.
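Formula (12) is a plain inner product; ranking candidate items by this score yields the recommendation list. For concreteness (toy values):

```python
import numpy as np

e_u_star = np.array([1.0, 2.0])   # final embedding e_u* of user u
e_i = np.array([3.0, 0.5])        # embedding of item i

# formula (12): prediction score is the inner product (e_u*)^T e_i
y_ui = float(e_u_star @ e_i)
print(y_ui)  # 4.0
```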
Further, the product recommendation method implemented through graph convolution collaborative filtering of social network relationships may be specifically implemented by an SRRA algorithm, which comprises the following steps:
S-A, denote the user-item interaction matrix as $R \in \mathbb{R}^{M \times N}$, where M and N are the numbers of users and items respectively, and $R_{ui}$ is the value in row u, column i of R: $R_{ui} = 1$ if user u and item i have an interaction, otherwise $R_{ui} = 0$. The adjacency matrix of the user-item interaction graph can then be obtained, as shown in formula (14):

$$A = \begin{pmatrix} 0 & R \\ R^{\mathsf T} & 0 \end{pmatrix} \quad (14)$$

where A is the adjacency matrix of the user-item interaction graph, R is the user-item interaction matrix, and $R^{\mathsf T}$ denotes its transpose.
S-B, let the layer-0 embedding matrix be $E^{(0)}$. The user/item embedding matrix of layer k+1 is obtained as shown in formula (15):

$$E^{(k+1)} = D^{-\frac{1}{2}} A D^{-\frac{1}{2}} E^{(k)} \quad (15)$$

where D is the degree matrix, A is the adjacency matrix, and $E^{(k)}$ is the layer-k user/item embedding matrix.
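Formulas (14) and (15) can be sketched with dense NumPy arrays as below; a real implementation would use sparse matrices for A, and the toy data are assumptions:

```python
import numpy as np

def build_adjacency(R):
    """Formula (14): A = [[0, R], [R^T, 0]] for the user-item graph."""
    M, N = R.shape
    return np.block([[np.zeros((M, M)), R],
                     [R.T, np.zeros((N, N))]])

def propagate_layer(A, E):
    """Formula (15): E^{(k+1)} = D^{-1/2} A D^{-1/2} E^{(k)}."""
    d = A.sum(axis=1)                                     # degrees (diag of D)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)   # guard isolated nodes
    return (d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]) @ E

R = np.array([[1.0, 1.0],
              [1.0, 0.0]])     # 2 users, 2 items
A = build_adjacency(R)
E0 = np.eye(4)                 # toy layer-0 embeddings E^{(0)}
E1 = propagate_layer(A, E0)    # layer-1 embeddings E^{(1)}
```

The social propagation of formula (17) below is the same computation with B and P in place of A and D.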
S-C, denote the user social matrix as $S \in \mathbb{R}^{M \times M}$, where $S_{uv}$ is the value in row u, column v of S: $S_{uv} = 1$ if user u and user v are friends, otherwise $S_{uv} = 0$. The adjacency matrix of the user social graph can then be obtained, as shown in formula (16):

$$B = S \quad (16)$$

where B is the adjacency matrix of the user social graph.
S-D, let the layer-0 user embedding matrix be $C^{(0)}$. The user embedding matrix of layer k+1 is obtained as shown in formula (17):

$$C^{(k+1)} = P^{-\frac{1}{2}} B P^{-\frac{1}{2}} C^{(k)} \quad (17)$$

where P is the degree matrix corresponding to matrix B, B is the adjacency matrix of the user social graph, and $C^{(k)}$ is the layer-k user embedding matrix.
S-E, extract from matrix $E^{(k)}$ and matrix $C^{(k)}$ the parts related to user embeddings, denoted $E_u^{(k)}$ and $C_u^{(k)}$ respectively. Both represent layer-k user embedding matrices, where $E_u^{(k)}$ is derived from user-item interactions and $C_u^{(k)}$ from social relations.
The part of $E^{(k)}$ related to item embeddings is denoted $E_i^{(k)}$, so that $E^{(k)} = \mathrm{concat}(E_u^{(k)}, E_i^{(k)})$, where $\mathrm{concat}(E_u^{(k)}, E_i^{(k)})$ denotes concatenating $E_u^{(k)}$ and $E_i^{(k)}$.
S-F, compute the representation of the users according to formula (18):

$$E_u^{(k)} \leftarrow \mathrm{norm}\big(\mathrm{sum}(E_u^{(k)}, C_u^{(k)})\big) \quad (18)$$

where $\mathrm{sum}(E_u^{(k)}, C_u^{(k)})$ denotes summing $E_u^{(k)}$ and $C_u^{(k)}$; $\mathrm{norm}(\cdot)$ denotes the row regularization operation; $E_u^{(k)}$ is the layer-k user embedding matrix obtained from the user-item interaction relation; and $C_u^{(k)}$ is the layer-k user embedding matrix obtained from the social relations.
S-G, obtain the final representations of users and items respectively by fusing the representations of all layers according to formula (19):

$$E_u^{*} = \sum_{k=0}^{K} \alpha_k \, E_u^{(k)}, \qquad E_i^{*} = \sum_{k=0}^{K} \beta_k \, E_i^{(k)} \quad (19)$$

where $E_u^{*}$ is the final user embedding matrix; k indexes the layer and K is the total number of layers; $\alpha_k$ is the weight with which layer k's user embeddings are aggregated; $E_u^{(k)}$ is the layer-k user embedding matrix after the fusion of formula (18); $E_i^{*}$ is the final item embedding matrix; $\beta_k$ is the weight with which layer k's item embeddings are aggregated; and $E_i^{(k)}$ is the layer-k item embedding matrix.
S-H, compute the prediction scores according to formula (20):

$$\hat Y = E_u^{*} \, (E_i^{*})^{\mathsf T} \quad (20)$$

where $\hat Y$ denotes the matrix of prediction scores, $(E_i^{*})^{\mathsf T}$ denotes the transpose of $E_i^{*}$, and $E_i^{*}$ is the final item embedding matrix.
S-I, compute the loss function using BPR, as shown in formula (21):

$$L_{BPR} = -\sum_{u=1}^{M} \sum_{i \in H_u} \sum_{j \notin H_u} \ln \sigma(\hat y_{ui} - \hat y_{uj}) + \lambda \, \|E^{(0)}\|^{2} \quad (21)$$

where $L_{BPR}$ denotes the BPR loss in matrix form; M is the number of users; u is a user; i and j are both items; $H_u$ denotes the first-order neighbor set of user u, i.e. the set of items user u has interacted with; $\ln \sigma(\cdot)$ denotes the natural logarithm of $\sigma(\cdot)$; $\sigma(\cdot)$ is the sigmoid function; $\hat y_{ui}$ is the prediction score of user u on item i; $\hat y_{uj}$ is the prediction score of user u on item j; $\lambda$ controls the strength of the $L_2$ regularization used to prevent overfitting; $E^{(0)}$ denotes the layer-0 embedding matrix; and $\|\cdot\|$ denotes a norm.
Further, in step S5, the optimization method is:

$$\hat y_{ui} = (e_u^{*})^{\mathsf T} e_i$$

where $\hat y_{ui}$ denotes the prediction score, $e_u^{*}$ denotes the final embedding of user u, $(\cdot)^{\mathsf T}$ denotes the transpose, and $e_i$ denotes the embedding of item i.
Then the BPR loss is computed and the model parameters are optimized according to it, as shown in formula (13):

$$L = -\sum_{(u,i,j) \in O} \ln \sigma(\hat y_{ui} - \hat y_{uj}) + \lambda \, \|\Theta\|_2^2 \quad (13)$$

where L denotes the BPR loss; O denotes the set of paired training data, i.e. triples (u, i, j) in which u is a user and i, j are both items; $\ln \sigma(\cdot)$ denotes the natural logarithm of $\sigma(\cdot)$; $\sigma(\cdot)$ is the sigmoid function; $\hat y_{ui}$ is the prediction score of user u on item i; $\hat y_{uj}$ is the prediction score of user u on item j; $\lambda$ controls the strength of the $L_2$ regularization used to prevent overfitting; $\Theta$ denotes all trainable model parameters; and $\|\cdot\|_2^2$ is the square of the two-norm.
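The BPR objective of formula (13) can be sketched as follows; the (u, i, j) triples and the value of λ are illustrative, not taken from the patent:

```python
import numpy as np

def bpr_loss(y_pos, y_neg, params, lam=1e-4):
    """Formula (13): L = -sum ln sigma(y_ui - y_uj) + lambda * ||Theta||_2^2."""
    sigma = 1.0 / (1.0 + np.exp(-(y_pos - y_neg)))        # sigmoid of score gap
    reg = lam * sum(np.sum(p ** 2) for p in params)       # L2 penalty on Theta
    return -np.sum(np.log(sigma)) + reg

# one (u, i, j) triple: the observed item i should outscore the unobserved j
y_ui = np.array([2.0])    # prediction score for the positive item i
y_uj = np.array([0.0])    # prediction score for the sampled negative item j
loss = bpr_loss(y_ui, y_uj, params=[np.zeros(2)], lam=1e-4)
```

Minimizing this loss pushes $\hat y_{ui}$ above $\hat y_{uj}$ for every sampled triple, which is exactly the pairwise ranking criterion BPR optimizes.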
In summary, with the adopted technical scheme, the invention has the following beneficial effects:
(1) The social relationship is innovatively integrated into the training of a graph-convolution-based collaborative filtering recommendation method; a graph convolution collaborative filtering recommendation model integrating social relations (SGCF) is proposed, and node embeddings are learned by fusing the high-order semantic information of social behaviors and interaction behaviors.
(2) An implementable recommendation algorithm (SRRA) is provided under the constructed SGCF model framework: the high-order relations in user-item interaction data and in social data are modeled separately, the two are fused in the semantic information of each layer to form the final user and item representations, and these are finally used for the recommendation task.
(3) Comparison experiments against baseline models on multiple real data sets with social information verify the rationality, effectiveness, and computational superiority of the SGCF model and the SRRA algorithm.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic diagram of a CF-based social recommendation.
Fig. 2 is a schematic diagram of the graph embedding principle.
Fig. 3 is a schematic diagram of a HIN recommendation system.
FIG. 4 is a schematic diagram of user social relationships.
FIG. 5 is a schematic view of a user-item interaction relationship.
Fig. 6 is a schematic diagram of a framework structure of the SGCF model proposed by the present invention.
FIG. 7 is a diagram showing the relationship between the performance improvement value of each evaluation index and S-sensitivity according to the present invention.
FIG. 8 is a schematic diagram of the SRRA and baseline model evaluation index training curves of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
1 research motivation
Based on the above analysis, a graph convolutional neural network collaborative filtering recommendation method fusing social relations is proposed to solve the following basic problems.
Heterogeneous data are difficult to exploit: a network containing both user interaction information and user social information is a complex heterogeneous graph. How to handle such complex structural information for recommendation is a problem in urgent need of a solution.
High-order semantic information is difficult to extract: capturing high-order semantic information preserves different long-range dependencies among nodes, which is key to improving node embeddings and alleviating the cold-start problem of recommendation systems. How to inject high-order semantics into node embeddings is a fundamental issue for recommendation systems.
Various kinds of semantic information are difficult to fuse: the data sets to be processed contain two types of semantic information, social information and interaction preference information, and how to fuse the two and inject them into the user embedding is a basic problem to be solved.
2 related work
2.1 traditional collaborative filtering recommendation Algorithm
Collaborative filtering algorithms have been widely used in the e-commerce industry, and many have emerged in academia and industry over the past two decades. Roughly speaking, such algorithms can be divided into two categories: neighborhood-based collaborative filtering algorithms and model-based recommendation algorithms.
1) Neighborhood-based recommendation algorithm
The principle of neighborhood-based algorithms is to rank neighbors by their similarity to the target user or target item and to predict from the scores of the most similar top-k neighbors. They can find latent information in users' past behavior to directly predict user interests without any domain knowledge. Neighborhood-based collaborative filtering mainly uses user-item interaction data or sample data to complete the prediction, and can be further divided into user-based and item-based collaborative filtering.
The user-based collaborative filtering algorithm predicts a user's unknown score for an item from the weighted average of similar users' scores for that item; the item-based collaborative filtering algorithm predicts a user's score for an item from the user's average score on similar items. The key problems of neighborhood-based CF are how to compute similarity and how to weight and aggregate the scores.
2) Model-based recommendation algorithm
The main idea of the model-based recommendation algorithm is to embed both the user and the item into a common potential subspace and then predict through the inner product between the implicit factors of the user and the item.
Model-based methods apply data mining and machine learning techniques to find matching models from training data to predict unknown scores. Model-based CF is more comprehensive than neighborhood-based CF, and it can mine the underlying information of explicit scoring levels. Common model-based methods include random walk-based methods and factorization-based CF models. The CF method based on factorization is one of the most popular methods at present and is widely used to construct recommendation systems.
However, conventional collaborative filtering recommendation methods are limited in accuracy because they use only user-item interaction data.
2.2 Social recommendation algorithms
Most existing social recommendation systems today are based on CF technology. A CF-based recommendation system that incorporates social information, known as a social recommendation system, is shown in fig. 1.
As can be seen in FIG. 1, a social recommender has two inputs: user-item interaction information and social information. A generic CF-based social recommendation framework contains two parts: a basic CF model and a social information model.
According to how user-item interaction data and social data are fused, social recommendation systems can be divided into two main categories: regularization-based recommendation systems and feature-sharing-based social recommendation systems.
1) Socialized recommendation algorithm based on regularization
Regularization-based social recommendation rests on the assumption that users trust the friends in their social circle more than strangers and tend to share their preferences. It is implemented by mapping social data and scoring data into the same target space and constraining them against each other, so that the user's social influence is taken into account before the user makes a decision. SocialMF and CUNE are two representative algorithms of this group.
SocialMF constrains a user's preferences to approximate the average preferences of the user's social network. SocialMF also addresses the transitivity of trust in a trust network: because a user's latent feature vector depends on the latent feature vectors of direct neighbors, neighbor feature vectors can propagate through the network, making a user's latent feature vector depend, potentially, on all users in the network.
Because explicit user-user relationships extracted directly from social information have many limitations, CUNE proposes to extract implicit yet reliable social information from user feedback, determine the top-k semantic friends of each user, and then add this top-k semantic friend information to the MF and BPR frameworks to solve score prediction and item ranking, respectively.
Regularization-based social recommendation algorithms model the social network only indirectly, which helps reduce the cold-start problem and increases the coverage of recommended items. However, because the social information is modeled indirectly, it is only loosely coupled with the user-item interaction information, so such algorithms cannot effectively integrate the social information and the scoring information.
2) Socialized recommendation algorithm based on feature sharing
Feature-sharing recommendation algorithms rest on the assumption that user feature vectors are shared between the user-item interaction space and the user-user social space. Because the user-item interaction information and the social information share the user feature vector, both can be mapped into the same space for joint learning to obtain the user's feature representation. TrustSVD and SoRec are two representative recommendation systems of this approach.
TrustSVD models not only scoring data and user trust-relationship data but also users' implicit behavior data and implicit social-relationship data. It adds implicit social information on top of the SVD++ model to improve recommendation precision.
SoRec is based on the assumption that the social preferences a user trusts are diverse. By decomposing the scoring matrix and the social-relationship matrix simultaneously, it learns low-dimensional user feature vectors that account for both the user's scoring habits and social characteristics.
Feature-sharing recommendation algorithms can achieve accurate social recommendation prediction. However, the mainstream algorithms proposed so far use only raw social information and therefore cannot exploit social data fully. Against this background, graph embedding algorithms have gradually come into view.
2.3 recommendation Algorithm based on graph embedding
Network embedding, also known as network representation learning or graph embedding, is one of the popular research directions in graph data mining in recent years. It maps graph data (generally a high-dimensional matrix) into low-dimensional dense vectors, so that the resulting vectors support representation and reasoning in a vector space, can serve as input to machine learning models, and can therefore be applied to recommendation tasks.
Network embedding represents graph data in vector form. The vector form preserves a node's structural information in the graph: the more similar two nodes are structurally, the closer their positions in the vector space. The graph-embedding principle is shown in fig. 2.
As can be seen from fig. 2, nodes 1 and 3 are structurally similar, so they remain symmetrically positioned in the vector space; nodes 4, 5, 6 and 7 are structurally equivalent, so their positions in the vector space are the same.
According to the network type, such systems can be divided into social recommendation systems based on homogeneous information networks and those based on heterogeneous information networks. The principles and classification of these two types of algorithms are described in detail below.
1) Recommendation algorithm based on isomorphic graph embedding
A homogeneous graph contains only one type of node and edge, so updating a node representation only requires aggregating a single type of neighbor. Perozzi et al. proposed DeepWalk, a random-walk algorithm suitable for homogeneous graphs, which uses truncated random-walk sequences to represent a node's neighbors and then treats the resulting sequences as sentences in natural language processing to obtain vector representations of the nodes.
However, the random-walk strategy in DeepWalk is completely random, which motivated node2vec. node2vec extends DeepWalk by changing how the random-walk sequences are generated: where DeepWalk chooses the next node in a walk uniformly at random, node2vec introduces two parameters p and q that bring breadth-first and depth-first search into the walk-generation process.
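The biased next-node choice of node2vec described above can be sketched as follows. This is a hypothetical minimal implementation of the p/q transition weights, not the original node2vec code; the toy graph and parameter values are illustrative.

```python
import random

def node2vec_next(graph, prev, cur, p, q):
    """Sample the next node of a node2vec walk standing at `cur`, having come
    from `prev`. Unnormalized weights: 1/p to return to `prev`, 1 to move to a
    common neighbor of `prev` and `cur` (BFS-like), 1/q to move further away
    (DFS-like). `graph` maps node -> set of neighbor nodes."""
    candidates = sorted(graph[cur])
    weights = []
    for nxt in candidates:
        if nxt == prev:            # distance 0 from prev: return step
            weights.append(1.0 / p)
        elif nxt in graph[prev]:   # distance 1 from prev: stay close (BFS)
            weights.append(1.0)
        else:                      # distance 2 from prev: explore (DFS)
            weights.append(1.0 / q)
    return random.choices(candidates, weights=weights, k=1)[0]

# Toy graph: a small p favors backtracking, a large q discourages jumping
# outside the previous node's neighborhood.
G = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
random.seed(0)
walk = [0, 1]
for _ in range(5):
    walk.append(node2vec_next(G, walk[-2], walk[-1], p=0.25, q=4.0))
print(walk)
```

Setting p = q = 1 recovers DeepWalk's uniform choice up to the neighbor weighting; extreme values of p and q push the walk toward pure BFS-like or DFS-like behavior.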
Homogeneous-network algorithms alleviate the data-sparsity and cold-start problems of recommendation systems well. But most real-world graphs are naturally modeled as heterogeneous graphs, so recommendation algorithms based on heterogeneous networks are receiving increasing attention.
2) Recommendation algorithm based on heterogeneous graph embedding
A Heterogeneous Information Network (HIN) is composed of various types of nodes and edges, and fig. 3 is an exemplary diagram of a HIN-based recommendation system.
As can be seen in fig. 3, a HIN includes two or more types of entities linked by a plurality of (two or more) relationships.
Under the heterogeneous-network representation, the recommendation problem can be regarded as a similarity-search task on the HIN. The basic idea of most existing HIN-based recommendation methods is to make recommendations on the HIN using path-based semantic relatedness between users and items, e.g. meta-path-based similarities, and several path-based similarity measures have been proposed to evaluate the similarity of objects in heterogeneous information networks. Wang et al. proposed integrating social tag information into the HIN as additional information to overcome the data-sparsity problem. Most HIN-based approaches, however, rely on explicit meta-paths, and thus may not fully mine the latent features of users and items on the HIN for recommendation.
The advent of network embedding has demonstrated its ability to fully mine the latent information in data, and researchers are increasingly focusing their attention on it. DeepWalk generates node sequences by random walks and then learns node embeddings with the Skip-Gram model. In addition, LINE and SDNE characterize second-order proximity and neighbor relationships.
Most graph-embedding methods, however, focus on homogeneous networks and therefore cannot be migrated and applied directly to heterogeneous networks. Although some works have attempted to analyze heterogeneous networks with embedding methods and achieved good improvements, few have modeled the entire system as a heterogeneous network for social recommendation so as to capture the implicit similarities between users in a social network.
3 Problem definitions
3.1 higher order connectivity
3.1.1 social high-level connectivity
Social relationships exhibit high-order connectivity.
In fig. 4, the target user is marked with a double circle. Nodes that have no direct connection to the target user but are reachable along paths of length greater than 1 reflect its potential friends; the shorter a node's paths to the target user and the more of the reachable paths it occupies, the greater its influence on the target user.
3.1.2 Interactive high-order connectivity
Interactions also exhibit high-order connectivity.
In FIG. 5, the user for whom recommendations are of interest is marked with a double circle in the left subgraph, the user-item interaction graph. The right subgraph shows the tree structure developed from that user. High-order connectivity refers to paths of length greater than 1 that reach a node. Such high-order connectivity contains rich semantic information carrying collaborative signals. For example, a path of length 2 connecting two users through a common item indicates behavioral similarity between those users, since both interacted with the same item; a longer path ending at an item indicates a high probability that the target user will adopt that item, because similar users have interacted with it before. Likewise, from the path perspective, an item connected to the target user by two paths is more likely to be of interest than an item connected by only one path.
4 recommendation method
4.1 SGCF recommendation model
The basic idea of SGCF is to learn node embeddings of users and items by fusing the high-order semantics of social and interaction behavior. SGCF models the high-order relations in the user-item interaction data and the social data separately to learn user and item embeddings, fuses the semantic information of the two kinds of high-order relations at each layer to form the final user representation, and fuses the semantic information of the high-order interaction relations at each layer to form the final item representation for the final recommendation task. The overall structure of the model is shown in fig. 6.
As shown in fig. 6, SGCF first initializes node embeddings in the initialization embedding layer; then, in the semantic aggregation layer, it performs semantic aggregation in the social embedding propagation layer and the interaction embedding propagation layer to refine the user and item embeddings; the two kinds of user embeddings are fused in the semantic fusion layer, after which the user embeddings and the item embeddings of each propagation layer are separately weighted and summed to form the final embedded representations; finally, the prediction layer scores the embeddings for recommendation.
4.1.1 initialization embedding layer
Randomly initialize the embedding matrix of the nodes and look up the initialized embeddings of user u and item i, denoted $e_u^{(0)} \in \mathbb{R}^g$ and $e_i^{(0)} \in \mathbb{R}^g$ respectively, where g is the dimension of the node embedding: $e_u^{(0)}$ is the embedding vector of user u (a node) and $e_i^{(0)}$ is the embedding vector of item i (a node); each vector is g-dimensional and each of its components belongs to the real number domain.
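A minimal sketch of such an initialization embedding layer. The experimental section mentions Xavier initialization, so a Xavier-style uniform scheme is assumed here; storing users in the first rows of a single table and items in the remaining rows is an illustrative convention, not prescribed by the original.

```python
import numpy as np

def init_embeddings(num_users, num_items, g, seed=0):
    """Randomly initialize the layer-0 embedding table. Rows 0..num_users-1
    hold e_u^(0); the remaining rows hold e_i^(0). Each row is a g-dimensional
    real vector (Xavier/Glorot uniform initialization assumed)."""
    rng = np.random.default_rng(seed)
    limit = np.sqrt(6.0 / (num_users + num_items + g))
    return rng.uniform(-limit, limit, size=(num_users + num_items, g))

E0 = init_embeddings(num_users=3, num_items=4, g=8)
e_u0 = E0[0]      # look up the initialized embedding of user u = 0
e_i0 = E0[3 + 2]  # look up the initialized embedding of item i = 2
print(E0.shape, e_u0.shape)  # -> (7, 8) (8,)
```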
4.1.2 semantic aggregation layer
After the initialized node embeddings are obtained, a semantic aggregation layer is proposed to aggregate and update them so that high-order semantic information is well preserved. First-order semantic aggregation is introduced first and then extended to each layer, realizing high-order semantic aggregation.
1) First order semantic aggregation
The basic idea of graph convolutional networks (GCN) is to learn node representations by smoothing features over the graph. To achieve this, the graph is convolved iteratively, i.e. the features of neighbors are aggregated into a new representation of the target node. In SGCF, the interaction embedding propagation layer refines a user's embedding by aggregating the embeddings of the items the user interacted with, and refines an item's embedding by aggregating the embeddings of the users who interacted with it. First-order semantic aggregation is given by equations (1) and (2).
$$e_u = \mathrm{AGG}(\{e_i \mid i \in H_u\}) \qquad (1)$$
$$e_i = \mathrm{AGG}(\{e_u \mid u \in H_i\}) \qquad (2)$$
where $e_u$ denotes the embedding of user u, $e_i$ the embedding of item i, AGG(·) is an aggregation function, $H_u$ is the first-order neighbor set of user u, i.e. the set of items user u has interacted with, and $H_i$ is the first-order neighbor set of item i, i.e. the set of users who have interacted with item i. Equations (1) and (2) state that the embedding $e_u$ of user u is obtained by aggregating the embeddings of its first-order (directly interacted) item neighbors, while the embedding $e_i$ of item i is obtained by aggregating the embeddings of its first-order (directly interacted) user neighbors.
The social embedding propagation layer refines a user's embedding by aggregating the embeddings of the user's friends. To distinguish the two meanings, user embeddings produced by semantic aggregation in the social embedding propagation layer are denoted c; the first-order semantic aggregation of the social embedding propagation layer is then given by equation (3).
$$c_u = \mathrm{AGG}(\{c_v \mid v \in F_u\}) \qquad (3)$$
where $c_u$ and $c_v$ are both user embeddings, user v is a first-order friend of user u with v ≠ u, AGG(·) is the aggregation function, and $F_u$ denotes the friend set of user u. Equation (3) states that, on the social graph, the embedding $c_u$ of user u is obtained by aggregating the embeddings $c_v$ of its first-order (social) neighbors.
2) High-order semantic aggregation
The semantic aggregation layer realizes the aggregation of high-order semantics by overlapping a plurality of first-order semantic aggregation layers. It includes semantic aggregation of social embedding propagation layers and interaction embedding propagation layers.
Semantic aggregation of social embedding propagation layers
As implied by social high-order connectivity, stacking k layers aggregates information from k-order neighbors. Semantic aggregation in the social embedding propagation layer captures high-order friend signals by stacking several social embedding propagation layers, thereby strengthening the user embedding; the process is expressed mathematically in equations (4) and (5).
$$c_u^{(k+1)} = \sum_{v \in F_u} \frac{1}{\sqrt{|F_u|}\sqrt{|F_v|}}\, c_v^{(k)} \qquad (4)$$
$$c_v^{(k+1)} = \sum_{u \in F_v} \frac{1}{\sqrt{|F_v|}\sqrt{|F_u|}}\, c_u^{(k)} \qquad (5)$$
where $c_u^{(k+1)}$ is the embedding vector of user u at layer k+1 obtained by semantic aggregation in the social embedding propagation layer, $c_u^{(k)}$ is the corresponding embedding at layer k, $F_u$ denotes the friend set of user u and $F_v$ the friend set of user v, and $c_v^{(k+1)}$ and $c_v^{(k)}$ are the embedding vectors of user v at layers k+1 and k. Note that $c_u^{(0)} = e_u^{(0)}$, the initialization embedding of user u, and |·| denotes the number of elements of a set.
Semantic aggregation of interaction embedding propagation layers
According to interaction high-order connectivity, stacking an even number of layers (even path lengths starting from a user) captures user-behavior similarity information, while stacking an odd number of layers captures a user's potential interaction information about items. Semantic aggregation in the interaction embedding propagation layer captures the collaborative signals of high-order connectivity by stacking several interaction embedding propagation layers, thereby strengthening the user and item embeddings; the process is expressed in equations (6) and (7).
$$e_u^{(k+1)} = \sum_{i \in H_u} \frac{1}{\sqrt{|H_u|}\sqrt{|H_i|}}\, e_i^{(k)} \qquad (6)$$
$$e_i^{(k+1)} = \sum_{u \in H_i} \frac{1}{\sqrt{|H_i|}\sqrt{|H_u|}}\, e_u^{(k)} \qquad (7)$$
where $e_u^{(k)}$ and $e_i^{(k)}$ denote the embeddings of user u and item i at layer k, $H_i$ is the first-order neighbor set of item i, and $H_u$ is the first-order neighbor set of user u.
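One such propagation step can be sketched in node form as follows, assuming a LightGCN-style symmetric sqrt-degree normalization 1/sqrt(|F_u| · |F_v|); the toy social graph, embeddings and function name are illustrative.

```python
import numpy as np

def propagate_user(c, friends, u):
    """One social propagation step for user u, equation (4) style:
    c_u^(k+1) = sum over v in F_u of c_v^(k) / sqrt(|F_u| * |F_v|).
    `c` maps node -> current embedding; `friends` maps node -> friend set."""
    out = np.zeros_like(c[u])
    for v in friends[u]:
        out += c[v] / np.sqrt(len(friends[u]) * len(friends[v]))
    return out

# Toy social graph: user 0 is friends with users 1 and 2.
friends = {0: {1, 2}, 1: {0}, 2: {0}}
c = {0: np.array([1.0, 0.0]),
     1: np.array([0.0, 2.0]),
     2: np.array([2.0, 0.0])}
c0_next = propagate_user(c, friends, 0)
print(c0_next)  # each friend contributes c_v / sqrt(2 * 1)
```

Stacking this step k times spreads each user's embedding to its k-order friends, which is exactly the high-order connectivity argument above.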
4.1.3 semantic fusion layer
1) Formation of the final user embedding
The user embeddings of the social part are fused with the user embeddings of the interaction part layer by layer: if the social part has 3 layers, the interaction part correspondingly also has 3 layers, and fusion is one-to-one, the layer-1 social user embedding with the layer-1 interaction user embedding, and so on. Here a layer corresponds to the order of the captured information: layer 1 captures only order-1 information, layer 2 captures order-2 information, and so on. The role of this step is to let the final user embedding carry social information and interaction information at the same time. The fused per-layer embeddings are then combined by weighted summation so that the final user embedding captures information of every order, using equations (8)-(11).
2) Formation of the final item embedding
Unlike the final user embedding, which uses both social and interaction information, the final item embedding uses only interaction information; therefore only the item interaction embeddings of all layers are fused, i.e. only the second, layer-combining step of 1) is applied. Weighting is used only when the layers are combined, as the formulas make explicit.
Specifically: fusing the user embeddings of the social embedding propagation layer and the interaction embedding propagation layer lets the user embedding carry a certain amount of social information, strengthening its quality. After the semantically aggregated embedding vectors of the social embedding propagation layer and the interaction embedding propagation layer are obtained, the user embedding vectors of the two layers are fused; the fusion process is shown in equation (8).
$$\tilde{e}_u^{(k+1)} = g\big(e_u^{(k+1)},\, c_u^{(k+1)}\big) \qquad (8)$$
where $\tilde{e}_u^{(k+1)}$ is the fusion of the layer-(k+1) user embedding vectors of the social embedding propagation layer and the interaction embedding propagation layer, and g(·) admits several aggregation modes. Equation (9) adopts element-wise addition followed by normalization:
$$\tilde{e}_u^{(k+1)} = \mathrm{norm}\big(e_u^{(k+1)} \oplus c_u^{(k+1)}\big) \qquad (9)$$
where norm(·) denotes regularization, $e_u^{(k+1)} \oplus c_u^{(k+1)}$ denotes element-wise addition, $e_u^{(k+1)}$ is the embedding of user u at layer k+1 from the interaction embedding propagation layer, and $c_u^{(k+1)}$ is the embedding vector of user u at layer k+1 obtained by semantic aggregation in the social embedding propagation layer, i.e. the layer-(k+1) social user embedding.
Furthermore, g(·) can add an activation function on top of equation (9); or take the Hadamard product followed by row regularization, i.e. $\tilde{e}_u^{(k+1)} = \mathrm{norm}\big(e_u^{(k+1)} \odot c_u^{(k+1)}\big)$; or first concatenate $e_u^{(k+1)}$ and $c_u^{(k+1)}$, doubling the dimension, and then reduce the dimension back to the original through a fully connected layer f(·), i.e. $\tilde{e}_u^{(k+1)} = f\big(e_u^{(k+1)} \,\Vert\, c_u^{(k+1)}\big)$.
Then the per-order embeddings obtained from each propagation layer are fused by weighted summation to obtain the final user embedding $e_u$ and item embedding $e_i$, as shown in equation (11):
$$e_u = \sum_{k=0}^{K} \alpha_k\, \tilde{e}_u^{(k)}, \qquad e_i = \sum_{k=0}^{K} \beta_k\, e_i^{(k)} \qquad (11)$$
where $\tilde{e}_u^{(k)}$ denotes the fusion of the layer-k user embedding vectors of the social and interaction embedding propagation layers, k indexes the layer, K is the total number of layers, $\alpha_k$ is the weight with which layer k's user embedding is aggregated, and $\beta_k$ is the weight with which layer k's item embedding is aggregated. The layer weights may be equal or different: if they are equal, every layer contributes equally to the final embedding, and a larger weight means a larger contribution.
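The fusion and weighted layer combination can be sketched as follows for one user, assuming element-wise addition followed by L2 normalization for g(·) (the specific norm is an assumption) and equal layer weights; all toy vectors are illustrative.

```python
import numpy as np

def fuse(e_u_k, c_u_k):
    """Equation (9)-style fusion: element-wise addition of the interaction
    and social embeddings, followed by normalization (L2 assumed here)."""
    s = e_u_k + c_u_k
    n = np.linalg.norm(s)
    return s / n if n > 0 else s

def combine_layers(layer_embs, weights):
    """Equation (11)-style weighted sum over layers: e = sum_k alpha_k e^(k)."""
    return sum(a * e for a, e in zip(weights, layer_embs))

# Per-layer interaction and social embeddings of one user (layers 0..2).
e_u = [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])]
c_u = [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([1.0, 0.0])]
fused = [fuse(e, c) for e, c in zip(e_u, c_u)]
alpha = [1 / 3] * 3        # equal weights: every layer contributes equally
final_u = combine_layers(fused, alpha)
print(final_u.shape)  # -> (2,)
```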
4.1.4 prediction layers
The last part of the model recommends products for the user based on the embeddings; the inner product of the user embedding and the candidate item embedding is used as the prediction score, as shown in equation (12).
$$\hat{y}_{ui} = e_u^{\mathrm{T}} e_i \qquad (12)$$
where $\hat{y}_{ui}$ is the prediction score, $e_u$ is the final embedding of user u, T denotes transpose, and $e_i$ is the embedding of item i.
the BPR loss is then calculated and the model parameters are optimized according to the calculated BPR loss as shown in equation (13).
$$L = -\sum_{(u,i,j)\in O} \ln \sigma\big(\hat{y}_{ui} - \hat{y}_{uj}\big) + \lambda \lVert\Theta\rVert_2^2 \qquad (13)$$
where L denotes the BPR loss, σ(·) is the sigmoid function, $\hat{y}_{ui}$ is user u's prediction score for positive sample i, and $\hat{y}_{uj}$ is user u's prediction score for negative sample j. $O = \{(u,i,j) \mid (u,i)\in R^+, (u,j)\in R^-\}$ denotes the pairwise training data, where u is a user and i, j are items with i ≠ j; i is a positive sample, appearing in u's interaction list, and j is a negative sample, not appearing in it. $R^+$ denotes the observable interactions and $R^-$ the unobservable interactions. Θ denotes all trainable model parameters; here the model's parameters comprise only the initialized embedding vectors $e_u^{(0)}$ and $e_i^{(0)}$. λ controls the strength of the $L_2$ regularization used to prevent overfitting; ln σ(·) is the natural logarithm of σ(·), and $\lVert\cdot\rVert_2^2$ is the squared two-norm.
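A minimal sketch of the BPR loss of equation (13) for a single (u, i, j) triple, with inner-product scores as in equation (12); the toy vectors and the λ value are illustrative.

```python
import numpy as np

def bpr_loss(e_user, e_pos, e_neg, params, lam=1e-4):
    """BPR loss for one (u, i, j) triple, equation (13):
    -ln sigmoid(y_ui - y_uj) + lambda * ||Theta||_2^2,
    with inner-product scores y = e_u . e_item."""
    y_ui = e_user @ e_pos  # score of the positive (observed) item i
    y_uj = e_user @ e_neg  # score of the sampled negative item j
    reg = lam * sum(np.sum(p ** 2) for p in params)
    return -np.log(1.0 / (1.0 + np.exp(-(y_ui - y_uj)))) + reg

e_u = np.array([1.0, 0.0])
e_i = np.array([0.8, 0.1])  # positive item: larger inner product with e_u
e_j = np.array([0.1, 0.9])  # negative item
loss = bpr_loss(e_u, e_i, e_j, params=[e_u, e_i, e_j], lam=0.0)
print(loss > 0 and loss < np.log(2))  # y_ui > y_uj, so loss < ln 2
```

The loss is minimized by widening the margin between the positive and negative scores, which is exactly the pairwise ranking objective BPR encodes.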
4.2 recommendation algorithm SRRA
For ease of implementation, the SRRA Algorithm is proposed under the framework of the SGCF model (see Algorithm 1 for details).
Denote the user-item interaction matrix $R \in \mathbb{R}^{M \times N}$, where M and N are the numbers of users and items respectively; $R_{ui} = 1$ if user u has interacted with item i, and $R_{ui} = 0$ otherwise. The adjacency matrix of the user-item interaction graph can then be obtained as shown in equation (14):
$$A = \begin{pmatrix} 0 & R \\ R^{\mathrm{T}} & 0 \end{pmatrix} \qquad (14)$$
where A is the adjacency matrix of the user-item interaction graph, R is the user-item interaction matrix, and T denotes transpose.
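Building the adjacency matrix A of equation (14) from a toy interaction matrix R can be sketched as follows; dense matrices are used for clarity, whereas a practical implementation would use sparse matrices.

```python
import numpy as np

def build_adjacency(R):
    """Build the (M+N) x (M+N) adjacency matrix of the user-item interaction
    graph from the M x N interaction matrix R, as in equation (14):
    A = [[0, R], [R^T, 0]]."""
    M, N = R.shape
    A = np.zeros((M + N, M + N))
    A[:M, M:] = R
    A[M:, :M] = R.T
    return A

# 2 users x 3 items: user 0 interacted with items 0 and 2, user 1 with item 1.
R = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
A = build_adjacency(R)
print(A.shape)              # -> (5, 5)
print(np.allclose(A, A.T))  # -> True: the bipartite graph is undirected
```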
Let the layer-0 embedding matrix be $E^{(0)} \in \mathbb{R}^{(M+N)\times G}$, where G is the dimension of the embedding vectors. The user/item embedding matrix at layer k+1 can then be obtained as shown in equation (15):
$$E^{(k+1)} = D^{-\frac{1}{2}} A D^{-\frac{1}{2}} E^{(k)} \qquad (15)$$
where D is the degree matrix, a diagonal matrix of dimensions (M+N) × (M+N), with M and N the numbers of users and items respectively; the value in row i, column i of D is denoted $D_{ii}$, the degree of node i, i.e. each element $D_{ii}$ is the number of non-zero values in the i-th row vector of the adjacency matrix A.
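One propagation step of equation (15) can be sketched as follows; the toy graph and embeddings are illustrative, and zero-degree nodes are guarded against division by zero.

```python
import numpy as np

def propagate(A, E):
    """One graph-convolution step, equation (15):
    E^(k+1) = D^(-1/2) A D^(-1/2) E^(k),
    where D is the diagonal degree matrix of A."""
    deg = A.sum(axis=1)
    with np.errstate(divide='ignore'):
        d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    norm_A = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return norm_A @ E

# Tiny graph: node 0 (a user) is linked to nodes 1 and 2 (items).
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
E0 = np.array([[1.0, 0.0],
               [0.0, 2.0],
               [2.0, 0.0]])
E1 = propagate(A, E0)
print(E1[0])  # node 0 aggregates nodes 1 and 2, each scaled by 1/sqrt(2*1)
```

Note this is the matrix form of the node-wise sums in equations (6) and (7): multiplying by the normalized adjacency matrix performs all per-node neighbor aggregations at once.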
Similarly, denote the user social matrix $S \in \mathbb{R}^{M \times M}$, where $S_{uv} = 1$ if users u and v are friends and $S_{uv} = 0$ otherwise; $S_{uv}$ is the value in row u, column v of S. The adjacency matrix of the user social graph can be obtained as shown in equation (16):
$$B = S \qquad (16)$$
Let the layer-0 social embedding matrix be $C^{(0)} \in \mathbb{R}^{M\times G}$. The user embedding matrix at layer k+1 can then be obtained as shown in equation (17):
$$C^{(k+1)} = P^{-\frac{1}{2}} B P^{-\frac{1}{2}} C^{(k)} \qquad (17)$$
where P is the degree matrix corresponding to B, and B is the adjacency matrix of the user social graph.
Then intercept the user-embedding parts of the matrices $E^{(k)}$ and $C^{(k)}$, i.e. their first M rows, denoted $E_u^{(k)}$ and $C_u^{(k)}$ respectively. Both represent layer-k user embedding matrices, but they are distinct: $E_u^{(k)}$ is derived from user-item interactions, while $C_u^{(k)}$ is derived from social relationships. Denote the item-embedding part of $E^{(k)}$ as $E_i^{(k)}$, so that $E^{(k)} = \mathrm{concat}(E_u^{(k)}, E_i^{(k)})$, i.e. $E^{(k)}$ is obtained by concatenating the two matrices $E_u^{(k)}$ and $E_i^{(k)}$, with $E_i^{(k)} \in \mathbb{R}^{N\times G}$. Finally, the user representation is calculated according to equation (18):
$$\tilde{E}_u^{(k)} = \mathrm{norm}\big(\mathrm{sum}(E_u^{(k)}, C_u^{(k)})\big) \qquad (18)$$
where $\mathrm{sum}(E_u^{(k)}, C_u^{(k)})$ denotes the element-wise sum of $E_u^{(k)}$ and $C_u^{(k)}$, and norm(·) denotes the row regularization operation, which normalizes the matrix row by row: the elements of each row are divided by that row's sum, and the resulting values replace the row.
The final representations of the users and the items are obtained by fusing the representations of all layers according to equation (19):
$$E_u = \sum_{k=0}^{K} \alpha_k\, \tilde{E}_u^{(k)}, \qquad E_i = \sum_{k=0}^{K} \beta_k\, E_i^{(k)} \qquad (19)$$
where $\alpha_k$ is the weight with which layer k's user embeddings are aggregated, and $\beta_k$ is the weight with which layer k's item embeddings are aggregated.
The prediction scores are calculated according to equation (20):
$$\hat{Y} = E_u E_i^{\mathrm{T}} \qquad (20)$$
the loss function is calculated using the BPR as shown in equation (21).
Figure BDA0003317555550000193
Wherein HuA first-order neighbor set representing the user u, namely an item set interacted with the user u; e(0)Representing the embedded matrix of layer 0, M being the number of users, | | · | |, representing the norm.
Equation (21) essentially corresponds to equation (13) except that (21) is in matrix form and the model parameters Θ are only E(0)Nothing else is included.
(Algorithm 1: SRRA — given as a pseudocode figure in the original.)
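A sketch of the full SRRA forward computation: adjacency construction, K propagation layers on the interaction and social graphs, per-layer fusion, weighted layer combination, and inner-product scoring. The symmetric sqrt-degree normalization, the use of S itself as the social adjacency matrix B, the L2 row normalization, and all toy data are assumptions made for illustration, not the original code.

```python
import numpy as np

def srra_forward(R, S, E0, C0, K, alpha, beta):
    """Forward pass over K layers: build A from R, symmetrically normalize
    A and B = S, propagate E and C, fuse the user parts per layer,
    weight-sum all layers, and score every user-item pair."""
    M, N = R.shape
    A = np.zeros((M + N, M + N))
    A[:M, M:] = R
    A[M:, :M] = R.T                                       # interaction adjacency

    def sym_norm(X):
        deg = X.sum(axis=1)
        with np.errstate(divide='ignore'):
            d = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
        return X * d[:, None] * d[None, :]

    An, Bn = sym_norm(A), sym_norm(S)                     # B = S assumed
    E, C = [E0], [C0]
    for _ in range(K):
        E.append(An @ E[-1])                              # interaction propagation
        C.append(Bn @ C[-1])                              # social propagation

    def row_norm(X):
        n = np.linalg.norm(X, axis=1, keepdims=True)
        n[n == 0] = 1.0
        return X / n

    fused = [row_norm(E[k][:M] + C[k]) for k in range(K + 1)]   # per-layer fusion
    Eu = sum(a * F for a, F in zip(alpha, fused))               # final users
    Ei = sum(b * Ek[M:] for b, Ek in zip(beta, E))              # final items
    return Eu @ Ei.T                                            # all scores

M, N, G, K = 3, 4, 8, 2
rng = np.random.default_rng(0)
R = (rng.random((M, N)) > 0.5).astype(float)
S = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
E0 = rng.normal(size=(M + N, G))
C0 = E0[:M].copy()
Y = srra_forward(R, S, E0, C0, K, alpha=[1 / 3] * 3, beta=[1 / 3] * 3)
print(Y.shape)  # -> (3, 4): a predicted score for every user-item pair
```

Ranking each row of Y and taking the top-10 items yields the recommendation lists evaluated with Pre@10, Recall@10 and NDCG@10 in the experiments.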
5 results and analysis of the experiments
The experiments use 6 real data sets, all containing social data and user behavior data; statistics of the data sets are shown in table 2. The proposed SRRA algorithm is compared with two leading baseline algorithms, DSCF and LightGCN, to verify its rationality and effectiveness.
5.1 data set
1) Brightkite: this data set includes user check-in data and user social-network data and can be used for location recommendation. To ensure the quality of the data set, each user's number of interactions is bounded below by 100 and above by 500, i.e. every user has at least 100 and at most 500 check-in records.
2) Gowalla: a check-in data set obtained from Gowalla, where users share their locations by checking in. Likewise, each user's interactions are bounded below by 100 and above by 500.
3) LastFM: a data set published at the 2nd International Workshop on Information Heterogeneity and Fusion in Recommender Systems. It includes the music artists each user listened to and the users' social-network data. Each user's interactions are bounded below by 10, i.e. every user has at least 10 listened-to artists.
4) FilmTrust: a small data set crawled from the FilmTrust website in June 2011, including users' ratings of movies and the social information among users. Each user's interactions are bounded below by 10, i.e. every user has rated at least 10 movies.
5) Delicious: this data set contains the social network among users, bookmarks and tag information from the Delicious social bookmarking system. Each user's interactions are bounded below by 10 and above by 500.
6) Epinions: this data set contains the ratings of 49,290 users on 139,738 items, each item rated at least once, and contains the trust relationships between users, 487,181 user trust pairs in all. Each user's interactions are bounded below by 10, i.e. every user has at least 10 interacted items.
TABLE 2 statistics of the data set
(Table 2 is given as a figure in the original.)
Note: interaction is the number of user-item interactions, Connection is the number of user social connections, R-sensitivity is the Density of the user-item matrix, and S-sensitivity is the Density of the social matrix
5.2 Experimental setup
To evaluate the experimental results, each data set is split into a training set and a test set at a ratio of 7:3, and Pre@10, Recall@10 and NDCG@10 are taken as the evaluation metrics of the models.
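The three evaluation metrics can be sketched for a single user as follows, assuming binary relevance for the NDCG computation; the toy ranking and test set are illustrative.

```python
import numpy as np

def metrics_at_k(ranked, relevant, k):
    """Precision@k, Recall@k and NDCG@k for one user. `ranked` is the list of
    recommended item ids in descending score order; `relevant` is the set of
    that user's held-out test items (binary relevance)."""
    hits = [1.0 if item in relevant else 0.0 for item in ranked[:k]]
    precision = sum(hits) / k
    recall = sum(hits) / max(len(relevant), 1)
    dcg = sum(h / np.log2(pos + 2) for pos, h in enumerate(hits))
    # Ideal DCG: all relevant items ranked first.
    idcg = sum(1.0 / np.log2(pos + 2) for pos in range(min(len(relevant), k)))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return precision, recall, ndcg

# One user: test items are {1, 7}; the model ranks item 1 first, item 7 last.
p, r, n = metrics_at_k(ranked=[1, 3, 5, 7], relevant={1, 7}, k=4)
print(round(p, 3), round(r, 3), round(n, 3))  # -> 0.5 1.0 0.877
```

Averaging these per-user values over all test users gives the reported Pre@10, Recall@10 and NDCG@10 (with k = 10).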
Following LightGCN, the embedding dimension of all models is set to 64 and the embedding parameters are initialized with the Xavier method. SGCF is optimized with Adam; the default learning rate is 0.001 and the default mini-batch size is 1024. The $L_2$ regularization coefficient λ ($L_2$ being the two-norm regularizer) is searched over a range of candidate values, and the layer aggregation factors are set to the same value for all layers, with the optimal value selected experimentally. All models are trained for 1000 epochs, and experiments with the number of layers set to 1 through 5 show that the model performs best with 4 layers.
5.3 analysis of results
The proposed SRRA algorithm improves upon LightGCN, so the Pre@10, Recall@10 and NDCG@10 performance of the two models is specially compared under the same number of convolution layers: SRRA and LightGCN are each trained with 1 to 5 layers, and the detailed experimental results are shown in table 3.
Table 3 comparison of the Performance of different layers of LightGCN and SGCF
As can be seen from Table 3, the proposed SRRA algorithm improves on the existing algorithm by 8.14%, 10.47% and 15.79% on the three indexes Pre@10, Recall@10 and NDCG@10, respectively. Furthermore, under the same number of trained layers, SRRA improves over LightGCN to different degrees on all three criteria: the performance gain is larger on the FilmTrust, Delicious and LastFM data sets, averaging 11.00%, 10.79% and 11.14% on Pre@10, Recall@10 and NDCG@10 respectively, and smaller on the BrightKite, Gowalla and Epinions data sets, averaging 7.54%, 7.61% and 8.60%. Table 3 also shows that the SRRA algorithm achieves the best effect when Layer = 4. What factor determines the magnitude of the improvement? Its relationship with the quality of the social data in each data set, i.e. the density of the social matrix (S-Density), is explored below.
FIG. 7 analyzes the relationship between the density of the social matrix (S-Density) of each data set and the performance improvement of the algorithm on the three indexes Pre@10, Recall@10 and NDCG@10.
As can be seen from FIG. 7, the performance improvement of the SRRA algorithm is positively correlated with the S-Density of the data set; that is, the higher the density of the social matrix, the better the performance of the algorithm. This explains why adding social data improves the recommendation effect to a greater extent for the FilmTrust, Delicious and LastFM data sets and to a lesser extent for the BrightKite, Gowalla and Epinions data sets.
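For reference, the social-matrix density discussed here can be computed from a social adjacency matrix as sketched below (the convention of dividing by the M·(M−1) possible directed links, and the toy matrix, are assumptions for illustration):

```python
import numpy as np

def social_density(S):
    """Density of a user-user social adjacency matrix: the fraction of
    possible directed friend links that are present (diagonal excluded)."""
    m = S.shape[0]
    return S.sum() / (m * (m - 1))

# Toy social matrix for 4 users: S[u][v] = 1 if u and v are friends.
S = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
])
print(social_density(S))  # 6 links out of 12 possible -> 0.5
```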
For a controlled comparison, the number of training layers of the proposed SRRA algorithm and the baseline algorithms is set to 4, and they are compared on the evaluation indexes Pre@10, Recall@10 and NDCG@10; the experimental results are shown in Table 4.
TABLE 4 SRRA vs baseline algorithm Performance comparison
As can be seen from Table 4, the SGCF model generally achieves better results than the baselines on the individual indexes of the individual data sets.
To observe the differences in training and computational performance between the SRRA algorithm and the two baseline algorithms, all algorithms were trained for 1000 rounds, and during the training on each data set the Pre@10, Recall@10 and NDCG@10 values of the 3 algorithms were recorded every 20 epochs; all of this is visualized in FIG. 8. FIG. 8 shows the Pre@10, Recall@10 and NDCG@10 indexes of SGCF and the baseline algorithms as a function of the number of training rounds on the 6 data sets BrightKite, Gowalla, Epinions, FilmTrust, Delicious and LastFM.
As can be seen from fig. 8, from the performance of the three evaluation indexes, the proposed SRRA algorithm generally has the best performance compared to the baseline algorithm in each training round; in terms of convergence rate, the SRRA algorithm performs well in most data sets compared to the baseline algorithm, i.e., it can converge to a good result at a relatively fast rate, which indicates that the SRRA algorithm has relatively good computational performance.
6 summary of the invention
The invention provides a product recommendation method realized through graph convolution collaborative filtering of social network relationships. First, a general collaborative filtering recommendation model SGCF is constructed. The model comprises 4 parts: an initialization embedding layer, a semantic aggregation layer, a semantic fusion layer and a prediction layer; the semantic aggregation layer and the semantic fusion layer are the core of SGCF and respectively extract high-order semantic information and fuse multiple kinds of semantic information. Then an implementable algorithm, SRRA, is provided on the basis of the model. The algorithm improves on LightGCN by merging social data into it in addition to the user-item interaction data, so that potential relationships between users and items can be mined from richer social information, thereby improving recommendation quality. Experiments on 6 real data sets show that: 1) compared with the baseline algorithms, the proposed SRRA algorithm generally performs better; 2) the quality of the data set (S-Density) influences the magnitude of the performance improvement of the proposed SRRA algorithm: the larger the S-Density value, the better the performance of the SRRA algorithm; 3) the proposed SRRA algorithm has superior computational performance compared with the baseline algorithms.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (9)

1. A product recommendation method realized through graph convolution collaborative filtering of social network relationships is characterized by comprising the following steps:
s1, randomly initializing an embedding matrix of the nodes and inquiring to respectively obtain the initialized embedding of the user u and the item i;
s2, after obtaining the initial embedding of the node, using a semantic aggregation layer to aggregate and update the node embedding; firstly, introducing first-order semantic aggregation in a semantic aggregation layer, and then expanding the first-order semantic aggregation to each layer to realize high-order semantic aggregation;
s3, fusing the user embedded vectors of the social embedding and propagation layer and the interactive embedding and propagation layer after respectively obtaining the semantic aggregation embedded vector of the social embedding and propagation layer and the semantic aggregation embedded vector of the interactive embedding and propagation layer; then, weighting, summing and fusing each order of embedding obtained by each embedding propagation layer to obtain final user embedding and article embedding;
the fusion adopts a polymerization mode of firstly solving a Hadamard product and then performing line regularization or a polymerization mode of firstly splicing and then reducing the dimensionality to the same dimensionality as the original dimensionality through a full connection layer;
s4, recommending products for users according to the embedding of the articles;
s5, optimizing the product in the step S4;
and S6, sending the optimized recommended product to the mobile phone of the corresponding user.
2. The method of claim 1, wherein the first-order semantic aggregation in S2 comprises:
the interaction embedding propagation layer refines the embedding of the user by aggregating the embedding of the interaction articles, and refines the embedding of the articles by aggregating the embedding of the interaction users; the first-order semantic aggregation is respectively expressed by the formula (1) and the formula (2):
e_u = AGG({e_i : i ∈ H_u})    (1)
e_i = AGG({e_u : u ∈ H_i})    (2)
wherein e_u represents the embedding of user u obtained by semantic aggregation of the interaction embedding propagation layer;
AGG(·) is the aggregation function;
H_u represents the first-order neighbor set of user u, namely the item set that has interacted with user u;
e_i represents the embedding of item i;
H_i represents the first-order neighbor set of item i, namely the user set that has interacted with item i;
the social embedding propagation layer refines the embedding of the user by aggregating friends, and records the embedding of the user performing semantic aggregation in the social embedding propagation layer as c, so that the first-order semantic aggregation process of the social embedding propagation layer is shown as formula (3):
c_u = AGG({c_v : v ∈ F_u})    (3)
wherein c_u represents the embedding of user u obtained by semantic aggregation of the social embedding propagation layer;
c_v represents the embedding of user v obtained by semantic aggregation of the social embedding propagation layer;
user v is a first-order friend of user u, and v ≠ u;
AGG(·) is the aggregation function;
F_u represents the set of friends of user u.
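The first-order aggregation of formulas (1)-(3) can be sketched as follows (AGG(·) is left abstract in the claim; a mean aggregator and toy embeddings are used here purely for illustration):

```python
import numpy as np

# Toy neighborhoods and embeddings (illustrative values only).
emb_item = {1: np.array([1.0, 0.0]), 2: np.array([0.0, 1.0])}
emb_user = {"v": np.array([1.0, 1.0])}
H_u = [1, 2]    # items user u has interacted with
F_u = ["v"]     # first-order friends of user u

def agg(vectors):
    """Mean aggregation over a list of neighbor embeddings
    (one simple choice of AGG)."""
    return np.mean(vectors, axis=0)

e_u = agg([emb_item[i] for i in H_u])   # formula (1): aggregate interacted items
c_u = agg([emb_user[v] for v in F_u])   # formula (3): aggregate friends
print(e_u, c_u)  # [0.5 0.5] [1. 1.]
```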
3. The method for recommending products through collaborative filtering for graph convolution of social network relationships according to claim 1, wherein the high-order semantic aggregation in S2 is implemented by superimposing multiple first-order semantic aggregation layers; the high-order semantic aggregation comprises: semantic aggregation of social embedding propagation layer and semantic aggregation of interactive embedding propagation layer:
the semantic aggregation of the social embedding propagation layer comprises:
semantic aggregation of social embedding propagation layer capturing higher-order friend signals by overlapping a plurality of social embedding propagation layers to achieve the purpose of enhancing user embedding, the mathematical expression of the process is shown as formula (4) and formula (5):
c_u^(k+1) = Σ_{v∈F_u} (1/(√|F_u|·√|F_v|)) · c_v^(k)    (4)
c_v^(k+1) = Σ_{u∈F_v} (1/(√|F_v|·√|F_u|)) · c_u^(k)    (5)
wherein c_u^(k+1) represents the embedding vector of user u at the (k+1)-th layer obtained by semantic aggregation of the social embedding propagation layer;
F_u represents the set of friends of user u;
F_v represents the set of friends of user v;
c_v^(k) represents the embedding vector of user v at the k-th layer obtained by semantic aggregation of the social embedding propagation layer;
c_v^(k+1) represents the embedding vector of user v at the (k+1)-th layer obtained by semantic aggregation of the social embedding propagation layer;
c_u^(k) represents the embedding vector of user u at the k-th layer obtained by semantic aggregation of the social embedding propagation layer;
|·| represents the number of elements in the set;
the semantic aggregation of the interaction embedding propagation layer comprises the following steps:
semantic aggregation of interaction embedding propagation layers enhances user and article embedding by stacking multiple interaction embedding propagation layers to capture cooperative signals of interaction high-order connectivity, and the mathematical expression of the process is as shown in formula (6) and formula (7):
e_u^(k+1) = Σ_{i∈H_u} (1/(√|H_u|·√|H_i|)) · e_i^(k)    (6)
e_i^(k+1) = Σ_{u∈H_i} (1/(√|H_i|·√|H_u|)) · e_u^(k)    (7)
wherein e_i^(k+1) denotes the embedding of item i at the (k+1)-th layer;
H_i represents the first-order neighbor set of item i;
H_u represents the first-order neighbor set of user u;
e_u^(k) represents the embedding of user u at the k-th layer;
e_u^(k+1) represents the embedding of user u at the (k+1)-th layer;
e_i^(k) represents the embedding of item i at the k-th layer;
|·| represents the number of elements in the set.
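The high-order aggregation of formulas (6) and (7) can be sketched as follows (toy graph and embeddings; the dictionary-based bookkeeping is an implementation choice, not part of the claim):

```python
import numpy as np

# Toy bipartite graph: 2 users, 2 items.
# H_user[u] = items of user u; H_item[i] = users of item i.
H_user = {0: [0, 1], 1: [1]}
H_item = {0: [0], 1: [0, 1]}

def propagate(e_user, e_item, K):
    """Stack K interaction embedding propagation layers with the
    symmetric sqrt-degree normalization of formulas (6)-(7)."""
    for _ in range(K):
        new_u = {u: sum(e_item[i] / np.sqrt(len(H_user[u]) * len(H_item[i]))
                        for i in items)
                 for u, items in H_user.items()}
        new_i = {i: sum(e_user[u] / np.sqrt(len(H_item[i]) * len(H_user[u]))
                        for u in users)
                 for i, users in H_item.items()}
        e_user, e_item = new_u, new_i
    return e_user, e_item

e_user = {0: np.array([1.0, 1.0]), 1: np.array([0.0, 2.0])}
e_item = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
u1, i1 = propagate(e_user, e_item, K=1)
print(u1[1])  # item 1's embedding scaled by 1/sqrt(|H_u|*|H_i|), i.e. ~ [0, 1/sqrt(2)]
```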
4. The method for recommending products through graph convolution collaborative filtering of social network relationships according to claim 1, wherein the fusing in S3 comprises:
m_u^(k) = g(e_u^(k), c_u^(k))    (8)
wherein m_u^(k) represents the fusion of the user embedding vectors of the k-th layer of the social embedding propagation layer and the interaction embedding propagation layer;
e_u^(k) represents the embedding of user u at the k-th layer obtained by semantic aggregation of the interaction embedding propagation layer;
c_u^(k) represents the embedding vector of user u at the k-th layer obtained by semantic aggregation of the social embedding propagation layer;
g(·) is the aggregation mode.
5. The method of claim 1, wherein the user embedding and item embedding in S3 includes:
e_u* = Σ_{k=0}^{K} α_k · m_u^(k),  e_i = Σ_{k=0}^{K} β_k · e_i^(k)    (9)
wherein e_u* is the embedding of user u fusing the social embedding propagation layer and the interaction embedding propagation layer;
K represents the total number of layers;
α_k is the weight when the k-th layer aggregates the embedding of the user;
m_u^(k) represents the fusion of the user embedding vectors of the k-th layer of the social embedding propagation layer and the interaction embedding propagation layer;
e_i is the embedding of item i;
β_k is the weight when the k-th layer aggregates the embedding of the item;
e_i^(k) denotes the embedding of item i at the k-th layer.
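The weighted-sum fusion of the per-layer embeddings can be sketched as follows (the uniform weights α_k = β_k = 1/(K+1) are one common choice, assumed here for illustration along with the toy per-layer vectors):

```python
import numpy as np

# Hypothetical per-layer embeddings for one user and one item:
# layers 0..K with K = 2, combined by weighted summation as in formula (9).
m_u = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
e_i = [np.array([2.0, 0.0]), np.array([0.0, 2.0]), np.array([1.0, 1.0])]
K = 2
alpha = beta = [1.0 / (K + 1)] * (K + 1)  # uniform layer weights (assumed)

e_u_final = sum(a * m for a, m in zip(alpha, m_u))
e_i_final = sum(b * e for b, e in zip(beta, e_i))
print(e_u_final)  # [0.66666667 0.66666667]
```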
6. The method of claim 1, wherein the aggregating manner of Hadamard product first and then row regularization comprises:
m_u^(k) = norm(e_u^(k) ⊙ c_u^(k))    (10)
wherein norm(·) represents row regularization;
⊙ represents the Hadamard product;
the aggregation mode of splicing first and then reducing the dimensionality back to the original dimensionality through the fully connected layer comprises:
m_u^(k+1) = f(W · concat(e_u^(k+1), c_u^(k+1)) + b)    (11)
wherein f(·) is a fully connected layer;
W is a weight;
concat(e_u^(k+1), c_u^(k+1)) represents splicing e_u^(k+1) and c_u^(k+1);
e_u^(k+1) represents the embedding of user u at the (k+1)-th layer obtained by semantic aggregation of the interaction embedding propagation layer;
c_u^(k+1) represents the embedding vector of user u at the (k+1)-th layer obtained by semantic aggregation of the social embedding propagation layer;
b is an offset.
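The two aggregation modes can be sketched as follows (the L2 row normalization and the toy weight matrix W are assumptions for illustration; the claim does not fix the particular norm or weights):

```python
import numpy as np

def fuse_hadamard(e_u, c_u):
    """Hadamard product followed by row regularization, formula (10)."""
    h = e_u * c_u
    n = np.linalg.norm(h)
    return h / n if n > 0 else h

def fuse_concat(e_u, c_u, W, b):
    """Splice, then project back to the original dimension via a
    fully connected layer f(x) = W x + b, formula (11)."""
    return W @ np.concatenate([e_u, c_u]) + b

e_u, c_u = np.array([1.0, 0.0]), np.array([2.0, 2.0])
print(fuse_hadamard(e_u, c_u))  # [1. 0.]
W = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])  # maps dim 4 back to dim 2
print(fuse_concat(e_u, c_u, W, b=np.zeros(2)))  # [3. 2.]
```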
7. The method for recommending products through graph convolution collaborative filtering of social network relationships according to claim 1, wherein S4 comprises:
using the inner product of the user and the recommended item as the prediction score, as shown in formula (12):
ŷ_ui = e_u*ᵀ · e_i    (12)
wherein ŷ_ui represents the predicted score;
e_u* represents the final embedding of user u;
·ᵀ represents the transpose;
e_i represents the embedding of item i.
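A minimal sketch of the inner-product prediction of formula (12), with toy embedding values:

```python
import numpy as np

e_u_star = np.array([0.6, 0.8])  # final embedding of user u (toy values)
e_i = np.array([1.0, 0.5])       # embedding of item i (toy values)
y_ui = float(e_u_star @ e_i)     # formula (12): inner product as the score
print(y_ui)  # 1.0
```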
8. The method of claim 1, wherein the method for recommending products through graph convolution collaborative filtering of social network relationships is implemented by an algorithm SRRA, and the SRRA comprises the following steps:
S-A, denote the user-item interaction matrix as R ∈ ℝ^(M×N)    (13)
wherein M and N are the numbers of users and items respectively, and R_ui is the value of the u-th row and i-th column of the matrix R: if user u and item i have an interaction then R_ui = 1, otherwise R_ui = 0; the adjacency matrix of the user-item interaction graph can then be obtained, as shown in formula (14):
A = [ [0, R], [Rᵀ, 0] ]    (14)
wherein A is the adjacency matrix of the user-item interaction graph;
R is the interaction matrix of users and items;
·ᵀ represents the transpose;
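Constructing the adjacency matrix of formula (14) from a toy interaction matrix can be sketched as:

```python
import numpy as np

# Toy interaction matrix R with M = 2 users and N = 3 items.
R = np.array([[1, 0, 1],
              [0, 1, 0]], dtype=float)
M, N = R.shape
# Formula (14): block matrix A = [[0, R], [R^T, 0]].
A = np.block([[np.zeros((M, M)), R],
              [R.T, np.zeros((N, N))]])
print(A.shape)              # (5, 5)
print(np.allclose(A, A.T))  # True: the bipartite graph is undirected
```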
S-B, let the embedding matrix of layer 0 be E^(0); the user and item embedding matrix of the (k+1)-th layer is obtained as shown in formula (15):
E^(k+1) = D^(-1/2) · A · D^(-1/2) · E^(k)    (15)
wherein D is the degree matrix;
A is the adjacency matrix;
E^(k) is the user and item embedding matrix of the k-th layer;
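The layer-wise propagation of formula (15) can be sketched as follows (the guard against zero-degree nodes and the toy graph are implementation assumptions):

```python
import numpy as np

def light_propagate(A, E0, K):
    """K layers of E^(k+1) = D^(-1/2) A D^(-1/2) E^(k), formula (15)."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)  # guard zero-degree nodes
    A_hat = (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    layers = [E0]
    for _ in range(K):
        layers.append(A_hat @ layers[-1])
    return layers

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # toy 2-node graph
E0 = np.array([[1.0], [3.0]])
layers = light_propagate(A, E0, K=2)
print(layers[1].ravel())  # [3. 1.]  (all degrees are 1, so A_hat = A)
```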
S-C, denote the social matrix of the users as S ∈ ℝ^(M×M), where S_uv is the value of the u-th row and v-th column of the matrix S: if user u and user v are friends then S_uv = 1, otherwise S_uv = 0; the adjacency matrix of the user social graph can be obtained, as shown in formula (16):
B = S    (16)
S-D, let the embedding matrix of layer 0 be C^(0); the user embedding matrix of the (k+1)-th layer is obtained as shown in formula (17):
C^(k+1) = P^(-1/2) · B · P^(-1/2) · C^(k)    (17)
wherein P is the degree matrix corresponding to the matrix B;
B is the adjacency matrix of the user social graph;
C^(k) is the user embedding matrix of the k-th layer;
S-E, intercept from matrix E^(k) and matrix C^(k) the parts related to user embedding, denoted E_u^(k) and C_u^(k) respectively; E_u^(k) and C_u^(k) both represent a user embedding matrix of the k-th layer, where E_u^(k) is derived from user-item interactions and C_u^(k) is derived from social relationships;
the part of matrix E^(k) related to item embedding is then denoted E_i^(k), so that E^(k) = concat(E_u^(k), E_i^(k)), where concat(E_u^(k), E_i^(k)) denotes splicing E_u^(k) and E_i^(k);
S-F, calculate the representation of the users according to formula (18):
E_u^(k) ← norm(sum(E_u^(k), C_u^(k)))    (18)
wherein sum(E_u^(k), C_u^(k)) represents summing E_u^(k) and C_u^(k);
norm(·) represents the row regularization operation;
E_u^(k) represents the user embedding matrix of the k-th layer obtained from the user-item interaction relationship;
C_u^(k) represents the user embedding matrix of the k-th layer obtained from the social relationships;
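The fusion of formula (18) can be sketched as follows (L2 row normalization is assumed; the claim does not fix the particular row regularization):

```python
import numpy as np

def fuse_users(E_u, C_u):
    """Formula (18): sum the interaction-side and social-side user
    matrices, then row-regularize (L2 row normalization assumed)."""
    s = E_u + C_u
    norms = np.linalg.norm(s, axis=1, keepdims=True)
    return s / np.where(norms > 0, norms, 1.0)

E_u = np.array([[3.0, 0.0]])   # toy interaction-side user matrix
C_u = np.array([[0.0, 4.0]])   # toy social-side user matrix
print(fuse_users(E_u, C_u))  # [[0.6 0.8]]
```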
S-G, obtain the final representations of the users and the items respectively by fusing the representations of the layers according to formula (19):
E_u* = Σ_{k=0}^{K} α_k · E_u^(k),  E_i* = Σ_{k=0}^{K} β_k · E_i^(k)    (19)
wherein E_u* represents the final user embedding matrix;
k denotes the k-th layer;
K represents the total number of layers;
α_k is the weight when the k-th layer aggregates the embedding of the users;
E_u^(k) represents the user embedding matrix of the k-th layer obtained from the user-item interaction relationship;
E_i* represents the final item embedding matrix;
β_k is the weight when the k-th layer aggregates the embedding of the items;
E_i^(k) represents the item embedding matrix of the k-th layer;
S-H, calculate the prediction scores according to formula (20):
Ŷ = E_u* · E_i*ᵀ    (20)
wherein Ŷ represents the predicted score matrix;
E_i*ᵀ represents the transpose of E_i*;
E_i* represents the final item embedding matrix;
S-I, calculate the loss function using BPR as shown in formula (21):
L_BPR = −Σ_{u=1}^{M} Σ_{i∈H_u} Σ_{j∉H_u} ln σ(ŷ_ui − ŷ_uj) + λ‖E^(0)‖²    (21)
wherein L_BPR represents the BPR loss in matrix form;
M is the number of users;
u is a user;
i and j are both items;
H_u represents the first-order neighbor set of user u, namely the item set that has interacted with user u;
ln σ(·) denotes the natural logarithm of σ(·);
σ(·) is the sigmoid function;
ŷ_ui is the predicted score of user u for item i;
ŷ_uj is the predicted score of user u for item j;
λ represents the strength controlling the L2 regularization, used to prevent overfitting;
E^(0) represents the embedding matrix of layer 0;
‖·‖ represents the norm.
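The BPR objective of formula (21) can be sketched as follows (the triple-based sampling of one unobserved item j per observed item i, and the default λ value, are assumptions for illustration):

```python
import numpy as np

def bpr_loss(Y_hat, triples, E0, lam=1e-4):
    """BPR loss in the spirit of formula (21): each triple (u, i, j)
    pairs an observed item i with an unobserved item j for user u;
    lam plays the role of lambda (1e-4 is an assumed default)."""
    sigma = lambda x: 1.0 / (1.0 + np.exp(-x))
    loss = -sum(np.log(sigma(Y_hat[u, i] - Y_hat[u, j])) for u, i, j in triples)
    return loss + lam * np.sum(E0 ** 2)

Y_hat = np.array([[2.0, 0.0]])   # toy predicted scores: 1 user, 2 items
loss = bpr_loss(Y_hat, triples=[(0, 0, 1)], E0=np.zeros((3, 2)))
print(round(loss, 4))  # -ln(sigma(2)) ~ 0.1269
```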
9. The method for recommending products through collaborative filtering of graph convolution of social network relationships according to claim 1, wherein in step S5, the optimization method is:
L = −Σ_{(u,i,j)∈O} ln σ(ŷ_ui − ŷ_uj) + λ‖Θ‖₂²
wherein L represents the BPR loss;
O represents the paired training data;
u is a user;
i and j are both items;
ln σ(·) denotes the natural logarithm of σ(·);
σ(·) is the sigmoid function;
ŷ_ui is the predicted score of user u for item i;
ŷ_uj is the predicted score of user u for item j;
λ represents the strength controlling the L2 regularization, used to prevent overfitting;
Θ represents all trainable model parameters;
‖·‖₂² is the square of the 2-norm.
CN202111235556.5A 2021-10-22 2021-10-22 Product recommendation method realized through graph convolution collaborative filtering of social network relationship Active CN113918833B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111235556.5A CN113918833B (en) 2021-10-22 2021-10-22 Product recommendation method realized through graph convolution collaborative filtering of social network relationship


Publications (2)

Publication Number Publication Date
CN113918833A true CN113918833A (en) 2022-01-11
CN113918833B CN113918833B (en) 2022-08-16

Family

ID=79242400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111235556.5A Active CN113918833B (en) 2021-10-22 2021-10-22 Product recommendation method realized through graph convolution collaborative filtering of social network relationship

Country Status (1)

Country Link
CN (1) CN113918833B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114510642A (en) * 2022-02-17 2022-05-17 重庆大学 Book recommendation method, system and equipment based on heterogeneous information network
CN114741430A (en) * 2022-04-21 2022-07-12 武汉大学 Social relationship mining method based on interactive graph propagation
CN115438732A (en) * 2022-09-06 2022-12-06 重庆理工大学 Cross-domain recommendation method for cold start user based on classification preference migration
CN116244501A (en) * 2022-12-23 2023-06-09 重庆理工大学 Cold start recommendation method based on first-order element learning and multi-supervisor association network
CN117540111A (en) * 2024-01-09 2024-02-09 安徽农业大学 Preference perception socialization recommendation method based on graph neural network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108320187A (en) * 2018-02-02 2018-07-24 合肥工业大学 A kind of recommendation method based on depth social networks
CN111143702A (en) * 2019-12-16 2020-05-12 中国科学技术大学 Information recommendation method, system, device and medium based on graph convolution neural network
CN112800334A (en) * 2021-02-04 2021-05-14 河海大学 Collaborative filtering recommendation method and device based on knowledge graph and deep learning
CN112905900A (en) * 2021-04-02 2021-06-04 辽宁工程技术大学 Collaborative filtering recommendation algorithm based on graph convolution attention mechanism
CN113269603A (en) * 2021-04-28 2021-08-17 北京智谱华章科技有限公司 Recommendation system-oriented space-time graph convolution method and system
CN113343121A (en) * 2021-06-02 2021-09-03 合肥工业大学 Lightweight graph convolution collaborative filtering recommendation method based on multi-granularity popularity characteristics
CN113505311A (en) * 2021-07-12 2021-10-15 中国科学院地理科学与资源研究所 Scenic spot interaction recommendation method based on' potential semantic space


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114510642A (en) * 2022-02-17 2022-05-17 重庆大学 Book recommendation method, system and equipment based on heterogeneous information network
CN114741430A (en) * 2022-04-21 2022-07-12 武汉大学 Social relationship mining method based on interactive graph propagation
CN114741430B (en) * 2022-04-21 2024-04-09 武汉大学 Social relation mining method based on interaction graph propagation
CN115438732A (en) * 2022-09-06 2022-12-06 重庆理工大学 Cross-domain recommendation method for cold start user based on classification preference migration
CN116244501A (en) * 2022-12-23 2023-06-09 重庆理工大学 Cold start recommendation method based on first-order element learning and multi-supervisor association network
CN116244501B (en) * 2022-12-23 2023-08-08 重庆理工大学 Cold start recommendation method based on first-order element learning and multi-supervisor association network
CN117540111A (en) * 2024-01-09 2024-02-09 安徽农业大学 Preference perception socialization recommendation method based on graph neural network
CN117540111B (en) * 2024-01-09 2024-03-26 安徽农业大学 Preference perception socialization recommendation method based on graph neural network

Also Published As

Publication number Publication date
CN113918833B (en) 2022-08-16

Similar Documents

Publication Publication Date Title
CN113918832B (en) Graph convolution collaborative filtering recommendation system based on social relationship
CN113918833B (en) Product recommendation method realized through graph convolution collaborative filtering of social network relationship
CN112529168B (en) GCN-based attribute multilayer network representation learning method
CN111797321B (en) Personalized knowledge recommendation method and system for different scenes
Yang et al. Friend or frenemy? Predicting signed ties in social networks
CN111310063B (en) Neural network-based article recommendation method for memory perception gated factorization machine
CN112989064B (en) Recommendation method for aggregating knowledge graph neural network and self-adaptive attention
CN113918834B (en) Graph convolution collaborative filtering recommendation method fusing social relations
Lin et al. A survey on reinforcement learning for recommender systems
Hu et al. Bayesian personalized ranking based on multiple-layer neighborhoods
Wan et al. Deep matrix factorization for trust-aware recommendation in social networks
Chen et al. Learning multiple similarities of users and items in recommender systems
CN113050931A (en) Symbolic network link prediction method based on graph attention machine mechanism
Ballandies et al. Mobile link prediction: Automated creation and crowdsourced validation of knowledge graphs
CN113590976A (en) Recommendation method of space self-adaptive graph convolution network
Mu et al. Auxiliary stacked denoising autoencoder based collaborative filtering recommendation
Liu et al. Interest Evolution-driven Gated Neighborhood aggregation representation for dynamic recommendation in e-commerce
Huang et al. Multi-affect (ed): improving recommendation with similarity-enhanced user reliability and influence propagation
Li et al. Capsule neural tensor networks with multi-aspect information for Few-shot Knowledge Graph Completion
Deng et al. A Trust-aware Neural Collaborative Filtering for Elearning Recommendation.
Sangeetha et al. Predicting personalized recommendations using GNN
Bai et al. Meta-graph embedding in heterogeneous information network for top-n recommendation
Liu et al. Collaborative social deep learning for celebrity recommendation
Zhang et al. Hybrid structural graph attention network for POI recommendation
CN114461907B (en) Knowledge graph-based multi-element environment perception recommendation method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant