CN112347362A - Personalized recommendation method based on graph self-encoder - Google Patents
- Publication number
- CN112347362A (application number CN202011283015.5A)
- Authority
- CN
- China
- Prior art keywords
- user
- item
- node
- article
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/284—Lexical analysis, e.g. tokenisation or collocates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a personalized recommendation method based on a graph self-encoder. An adjacency matrix is constructed from the interaction behavior between users and items, normalized, and convolved with a graph convolutional network to obtain hidden-layer representations of the nodes. User review texts and item description texts serve as the source of node information: an initial feature vector is obtained for each node, neighbor-node features are aggregated with a graph attention network, and the node information is updated. A fully connected network computes hidden-layer features from the attribute features of users and items. The hidden-layer features are spliced into new node information and encoded through a fully connected network; a bilinear decoder then reconstructs the users' scores for the items as prediction scores, and Top-N recommendation over these prediction scores generates the recommended item list. The invention helps analyze a user's degree of preference for items more accurately and discover the user's points of interest, thereby making more effective recommendations.
Description
Technical Field
The invention relates to the technical field of text classification, deep learning and recommendation system research, in particular to a personalized recommendation method based on a graph self-encoder.
Background
With the continuous development of Internet technology, network information is growing explosively, but this rapid growth also brings the problem of information overload. Accurately identifying a user's real interests from a large amount of information is therefore of great importance; personalized recommendation algorithms were proposed to solve this problem and have gradually become a hot field in the development of the Internet.
Traditional recommendation methods such as collaborative filtering generally use only the users' scoring data on items to obtain user preferences. Although such algorithms are simple and easy to implement, the input data is limited to a single source and other useful historical behavior data of the user is not fully exploited, so user preference information cannot be obtained intuitively and comprehensively, and the recommendation effect urgently needs improvement. With the development of deep learning, more and more neural-network algorithms are being applied to recommendation systems; this patent therefore proposes a recommendation method using a graph neural network. A graph structure can carry a large amount of information: not only the information of each node, but also the interaction information between a node and its neighbor nodes. Through a graph convolutional network, a graph attention network, and the aggregation and updating of nodes in the graph, this information can be exposed more prominently. By applying graph neural networks to the recommendation algorithm, the user's degree of preference for items can be obtained accurately through deep learning, thereby improving recommendation accuracy.
In the existing recommendation technology, user preference is generally captured and recommended based on scoring information or interactive information such as user browsing and purchasing records, but the attribute characteristics of a user and an article are rarely considered and fused, and comment information of the user on the article is rarely considered, so that the recommendation effect is not ideal enough. The following problems are mainly faced in the current collaborative filtering-based recommendation system:
(1) only single-source scoring data is used; the attribute information of users and items and the users' review information on items are rarely used;
(2) identical scores given by users do not reveal the users' points of interest well. For example, when two users both give items of the same type a score of 5, one user may care about quality while the other cares about price; a general recommendation system has difficulty recognizing such differences.
Disclosure of Invention
The invention aims to remedy the deficiencies of the prior art and provides a personalized recommendation method based on a graph self-encoder. Top-N recommendation is adopted, improving the precision and recall of the recommendations.
The invention is realized by the following technical scheme:
a personalized recommendation method based on a graph self-encoder comprises the following specific steps:
step 1, constructing an encoder fusing a graph convolutional network, a graph attention network and a fully connected network, and encoding the user-related information by aggregating the information of neighbor nodes;
step 2, constructing an encoder fusing a graph convolutional network, a graph attention network and a fully connected network, and encoding the item-related information by aggregating the information of neighbor nodes;
step 3, constructing a bilinear decoder from the user- and item-related information encoded in steps 1 and 2, and reconstructing the users' scores for the items;
step 4, performing a Top-N ranking of the items using the reconstructed scores from step 3, and selecting the top N items to recommend to the user.
The encoder for constructing the fusion graph convolution network, the graph attention network and the full-connection network in the step 1 encodes the user related information by aggregating the information of the neighbor nodes, and specifically comprises the following steps:
step 1.1, constructing a GCN graph convolution network, and learning hidden layer representation of a user node by aggregating neighbor node information;
step 1.1.1, using the scores of the user-item interactions, establish a user-item sparse adjacency matrix for the interactions under each score, and normalize the adjacency matrix:

Â_i^{user} = D_{user}^{-1} · A_i^{user}    (1)

where Â_i^{user} represents the normalized user-item adjacency matrix for score i, D_{user} represents the diagonal matrix of the degrees of the user nodes, and A_i^{user} represents the unnormalized user-item adjacency matrix;

step 1.1.2, for the interactions under each score, construct a GCN layer and perform the convolution operation to obtain the hidden-layer features H^{GCN_user} of each user node:

H^{GCN_user} = σ([Â_1^{user} W_1^{GCN_user} | … | Â_k^{user} W_k^{GCN_user}])    (2)

where σ(·) denotes the ReLU activation function, | represents the splicing operation, k is the number of score levels, and W_i^{GCN_user} represents the weight parameter for the convolution operation on the user nodes under score i;
step 1.2, constructing a GAT graph attention network, and learning a hidden-layer representation from the users' reviews of the items;
step 1.2.1, using the review text of the user, express the review text as a review feature vector with word2vec and average vectorization:

U_i = (1/N) · Σ_{k=1}^{N} U(word_k)    (3)

where U_i represents the review feature vector of the i-th user node, U(word_k) is the word vector obtained by vectorizing with word2vec the word word_k used by the user, and N is the number of words in the review text;

similarly:

I_j = (1/N) · Σ_{k=1}^{N} I(word_k)    (4)

where I_j represents the description feature vector of the j-th item node, I(word_k) is the word vector obtained by vectorizing with word2vec the word word_k of the item description, and N is the number of words in the description text;
step 1.2.2, compute the degree of correlation between node U_i and its neighbor node I_j:

e_{ij} = LeakyReLU(a_{user}^T [W^{GAT_user} U_i | W^{GAT_user} I_j])    (5)

α_{ij} = exp(e_{ij}) / Σ_{I_k ∈ N(U_i)} exp(e_{ik})    (6)

where α_{ij} is the normalized degree of correlation between node U_i and node I_j; I_j ∈ N(U_i), with N(U_i) representing the set of items purchased by the i-th user; W^{GAT_user} is the weight parameter for the transformation of user-node information, a_{user} is a weight parameter, LeakyReLU(·) is the activation function, and exp(·) represents the exponential function with base e;

step 1.2.3, aggregate the neighbor-node information to update the node information of user node U_i:

H_i^{GAT_user} = σ( Σ_{I_j ∈ N(U_i)} α_{ij} · W^{GAT_user} I_j )    (7)
step 1.3, constructing a full-connection network, and learning hidden layer representation from user attribute characteristics;
step 1.3.1, using the attribute feature information of the user, normalize the continuous data and one-hot encode the discrete data; the processed attribute feature vector is aligned to the length of the item attribute feature vector by padding zeros at the tail;
step 1.3.2, pass the processed user attribute feature vector through a fully connected network to obtain the hidden-layer features H^{Dense_user} of each user's attributes:

H^{Dense_user} = σ(P_user · W^{Dense_user} + b^{Dense_user})    (8)

where σ(·) denotes the ReLU activation function; P_user represents the user attribute features processed in step 1.3.1; W^{Dense_user} represents the weight parameter of the fully connected network for processing user features; b^{Dense_user} represents the bias term of the fully connected network for processing user features;
step 1.4, splice the hidden-layer features H^{GCN_user}, H^{GAT_user} and H^{Dense_user} learned from the different information in steps 1.1, 1.2 and 1.3 together as the hidden-layer features of the user, and encode them through a fully connected network to obtain the encoder output E_user:

E_user = σ([H^{GCN_user} | H^{GAT_user} | H^{Dense_user}] · W^{E_user} + b^{E_user})    (9)

where σ(·) denotes the ReLU activation function; | represents the splicing operation; W^{E_user} represents the weight parameter of the fully connected network for encoding user information; b^{E_user} represents the bias term of the fully connected network for encoding user information;
the encoder for constructing the fusion graph convolution network, the graph attention network and the full-connection network in the step 2 encodes the article related information by aggregating the information of the neighbor nodes, and specifically comprises the following steps:
step 2.1, construct a GCN graph convolutional network and learn the hidden-layer representation of the item nodes by aggregating the information of the neighbor (user) nodes;
step 2.1.1, using the scores of the user-item interactions, construct an item-user sparse adjacency matrix for the interactions under each score, and normalize the adjacency matrix:

Â_i^{item} = D_{item}^{-1} · A_i^{item}    (10)

where Â_i^{item} represents the normalized item-user adjacency matrix for score i, D_{item} represents the diagonal matrix of the degrees of the item nodes, and A_i^{item} represents the unnormalized item-user adjacency matrix;

step 2.1.2, for the interactions under each score, construct a GCN layer and perform the convolution operation to obtain the hidden-layer features H^{GCN_item} of each item node:

H^{GCN_item} = σ([Â_1^{item} W_1^{GCN_item} | … | Â_k^{item} W_k^{GCN_item}])    (11)

where σ(·) denotes the ReLU activation function, | represents the splicing operation, and W_i^{GCN_item} represents the weight parameter for the convolution operation on the item nodes under score i;
step 2.2, constructing a GAT graph attention network, and learning hidden layer representation from comments of users on articles;
step 2.2.1, using the review text of the user, express the review text as a review feature vector with word2vec and average vectorization:

U_i = (1/N) · Σ_{k=1}^{N} U(word_k)    (12)

where U_i represents the review feature vector of the i-th user node, U(word_k) is the word vector obtained by vectorizing with word2vec the word word_k used by the user, and N is the number of words in the review text;

similarly:

I_j = (1/N) · Σ_{k=1}^{N} I(word_k)    (13)

where I_j represents the description feature vector of the j-th item node, I(word_k) is the word vector obtained by vectorizing with word2vec the word word_k of the item description, and N is the number of words in the description text;
step 2.2.2, compute the degree of correlation between node I_j and its neighbor node U_i:

e_{ji} = LeakyReLU(a_{item}^T [W^{GAT_item} I_j | W^{GAT_item} U_i])    (14)

α_{ji} = exp(e_{ji}) / Σ_{U_k ∈ N(I_j)} exp(e_{jk})    (15)

where α_{ji} is the normalized degree of correlation between node I_j and node U_i; U_i ∈ N(I_j), with N(I_j) representing the set of users who purchased the j-th item; W^{GAT_item} is the weight parameter for the transformation of item-node information, a_{item} is a weight parameter, LeakyReLU(·) is the activation function, and exp(·) represents the exponential function with base e;

step 2.2.3, aggregate the neighbor-node information to update the node information of item node I_j:

H_j^{GAT_item} = σ( Σ_{U_i ∈ N(I_j)} α_{ji} · W^{GAT_item} U_i )    (16)
step 2.3, constructing a full-connection network, and learning hidden layer representation from the article attribute characteristics;
step 2.3.1, using the attribute feature information of the item, normalize the continuous data and one-hot encode the discrete data; the processed attribute feature vector is aligned to the length of the user attribute feature vector by padding zeros at the front;
step 2.3.2, pass the processed item attribute feature vector through a fully connected network to obtain the hidden-layer features H^{Dense_item} of each item's attributes:

H^{Dense_item} = σ(P_item · W^{Dense_item} + b^{Dense_item})    (17)

where σ(·) denotes the ReLU activation function; P_item represents the item attribute features processed in step 2.3.1; W^{Dense_item} represents the weight parameter of the fully connected network for processing item features; b^{Dense_item} represents the bias term of the fully connected network for processing item features;
step 2.4, splice the hidden-layer features H^{GCN_item}, H^{GAT_item} and H^{Dense_item} learned from the different information in steps 2.1, 2.2 and 2.3 together as the hidden-layer features of the item, and encode them through a fully connected network to obtain the encoder output E_item:

E_item = σ([H^{GCN_item} | H^{GAT_item} | H^{Dense_item}] · W^{E_item} + b^{E_item})    (18)

where σ(·) denotes the ReLU activation function; | represents the splicing operation; W^{E_item} represents the weight parameter of the fully connected network for encoding item information; b^{E_item} represents the bias term of the fully connected network for encoding item information;
the bilinear decoder constructed in step 3 reconstructs the user's score for the item as:

y_hat = [embedding1 | embedding2] · W_{classifier}    (19)

where y_hat represents the reconstructed score of the user for the item; | represents the splicing operation, and:

embedding1 = sum(E_user · W_1) * E_item    (20)

embedding2 = sum(E_user · W_2) * E_item    (21)

where * denotes the Hadamard product and sum(·) denotes summing each row of the matrix; W_{classifier}, W_1 and W_2 are the weight parameters of the bilinear decoder.
The invention has the following advantages: 1. the method uses not only scores as the user-item interaction, but also the users' review texts on the items and the attribute features of users and items, integrating multiple kinds of information to discover user interests; compared with traditional recommendation methods that use single-source scoring data, it makes more reasonable and accurate recommendations;
2. the encoder constructed by the invention (graph convolutional network, graph attention network and fully connected network) effectively captures the user's degree of preference for items;
3. the method aggregates the interaction information and review-text information in the graph with a graph convolutional network and a graph attention network, processes the attribute features with a fully connected network to obtain new node information for each node, encodes it with a fully connected network, and finally reconstructs the users' scores for the items with a bilinear decoder.
Drawings
FIG. 1 is a user graph convolutional network model;
FIG. 2 is a user graph attention network model;
FIG. 3 is a fully connected network that handles user attribute features;
FIG. 4 is a user encoder that merges a graph convolution network, a graph attention network, and a fully connected network;
FIG. 5 is a diagram of a convolutional network model (article);
FIG. 6 is a diagram of an attention network model (article);
FIG. 7 is a fully connected network that handles item attribute features;
FIG. 8 is an article encoder incorporating a graph convolution network, a graph attention network, and a fully connected network;
fig. 9 is a schematic structural diagram of the present invention.
Detailed Description
A personalized recommendation method based on a graph self-encoder: an adjacency matrix is constructed from the interaction behavior between users and items, normalized, and convolved with a graph convolutional network to obtain hidden-layer representations of the nodes; the review texts and item description texts are used with the graph attention network to aggregate neighbor-node features and update the node information; a fully connected network computes hidden-layer features from the attribute features of users and items; the hidden-layer features computed by the three networks are spliced into new node information and encoded through a fully connected network. A bilinear decoder then reconstructs the users' scores for the items, and the items with the highest reconstructed prediction scores for each user are selected to generate the recommendation list.
As shown in fig. 9, in this embodiment, a method for personalized recommendation based on a graph self-encoder is performed according to the following steps:
Step 1, construct an encoder fusing a graph convolutional network, a graph attention network and a fully connected network, and encode the user-related information by aggregating the information of the neighbor nodes.
Step 1.1, construct a GCN graph convolutional network and learn the hidden-layer representation of the user nodes by aggregating the information of the neighbor (item) nodes, as shown in FIG. 1.
Step 1.1.1, using the scores of the user-item interactions, construct a user-item sparse adjacency matrix for the interactions under each score, and normalize the adjacency matrix:

Â_i^{user} = D_{user}^{-1} · A_i^{user}    (1)

where Â_i^{user} represents the normalized user-item adjacency matrix for score i, D_{user} represents the diagonal matrix of the degrees of the user nodes, and A_i^{user} represents the unnormalized user-item adjacency matrix.
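The row normalization of equation (1) can be sketched in a few lines of NumPy. The adjacency values and shapes below are illustrative stand-ins, not data from the patent:

```python
import numpy as np

def normalize_adjacency(A):
    """Equation (1): left-normalize a per-score adjacency matrix, A_hat = D^{-1} A,
    where D is the diagonal degree matrix of the row (user) nodes."""
    deg = A.sum(axis=1)
    deg[deg == 0] = 1.0  # guard isolated nodes against division by zero
    return A / deg[:, None]

# toy user-item interactions under one score level: 3 users x 4 items
A_score = np.array([[1, 0, 1, 0],
                    [0, 1, 0, 0],
                    [1, 1, 1, 1]], dtype=float)
A_hat = normalize_adjacency(A_score)
```

Each row of `A_hat` now sums to one, so a user's neighbors contribute equally regardless of how many items that user rated at this score level.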
Step 1.1.2, constructing a GCN layer for convolution operation for interaction conditions under different grades to obtain hidden layer characteristics H of each user nodeGCN_user:
wherein: wi GCN_userAnd representing the weight parameter with the score of i for performing convolution operation on the user node.
Step 1.2, construct a GAT graph attention network and learn a hidden-layer representation from the users' reviews of the items, as shown in FIG. 2.
Step 1.2.1, using the review text of the user, express the review text as a review feature vector with word2vec and average vectorization:

U_i = (1/N) · Σ_{k=1}^{N} U(word_k)    (3)

where U_i represents the review feature vector of the i-th user node, U(word_k) is the word vector obtained by vectorizing with word2vec the word word_k used by the user, and N is the number of words in the review text.

Similarly:

I_j = (1/N) · Σ_{k=1}^{N} I(word_k)    (4)

where I_j represents the description feature vector of the j-th item node, I(word_k) is the word vector obtained by vectorizing with word2vec the word word_k of the item description, and N is the number of words in the description text.
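The average vectorization of equations (3)/(4) reduces to a mean over word vectors. The tiny word-vector table below is a stand-in for a trained word2vec model, not part of the patent:

```python
import numpy as np

# stand-in for a trained word2vec model's word -> vector lookup
word_vecs = {
    "great":   np.array([0.9, 0.1, 0.0]),
    "quality": np.array([0.2, 0.8, 0.3]),
    "cheap":   np.array([0.1, 0.2, 0.9]),
}

def text_feature(tokens, word_vecs):
    """Equations (3)/(4): the feature vector of a review or description text is
    the mean of the word2vec vectors of its N in-vocabulary words."""
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    return np.mean(vecs, axis=0)

U_i = text_feature(["great", "quality"], word_vecs)  # review feature of one user
```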
Step 1.2.2, compute the degree of correlation between node U_i and its neighbor node I_j:

e_{ij} = LeakyReLU(a_{user}^T [W^{GAT_user} U_i | W^{GAT_user} I_j])    (5)

α_{ij} = exp(e_{ij}) / Σ_{I_k ∈ N(U_i)} exp(e_{ik})    (6)

where α_{ij} is the normalized degree of correlation between node U_i and node I_j; I_j ∈ N(U_i), with N(U_i) representing the set of items purchased by the i-th user; W^{GAT_user} is the weight parameter for the transformation of user-node information, a_{user} is a weight parameter, LeakyReLU(·) is the activation function, and exp(·) represents the exponential function with base e.

Step 1.2.3, aggregate the neighbor-node information to update the node information of user node U_i:

H_i^{GAT_user} = σ( Σ_{I_j ∈ N(U_i)} α_{ij} · W^{GAT_user} I_j )    (7)
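Equations (5)-(7) for a single user node can be sketched as follows; the dimensions, random weights, and neighbor features are illustrative assumptions, not values from the patent:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_update_user(U_i, neighbors, W, a):
    """Equations (5)-(7): score each neighbor item I_j against user U_i with the
    shared attention vector a, softmax-normalize the scores, then aggregate the
    transformed neighbor features and apply ReLU."""
    WU = W @ U_i
    e = np.array([leaky_relu(a @ np.concatenate([WU, W @ I_j]))
                  for I_j in neighbors])                                # eq. (5)
    alpha = np.exp(e) / np.exp(e).sum()                                 # eq. (6)
    return np.maximum(sum(w * (W @ I_j)
                          for w, I_j in zip(alpha, neighbors)), 0.0)    # eq. (7)

rng = np.random.default_rng(1)
U_i = rng.random(4)                            # user review feature (eq. 3)
neighbors = [rng.random(4) for _ in range(3)]  # purchased items' features (eq. 4)
W = rng.random((4, 4))                         # W_GAT_user
a = rng.random(8)                              # attention weight a_user
h_i = gat_update_user(U_i, neighbors, W, a)
```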
step 1.3, constructing a full-connection network, and learning hidden layer representation from the user attribute characteristics, as shown in fig. 3.
Step 1.3.1, using the attribute feature information of the user, normalize the continuous data and one-hot encode the discrete data; the processed attribute feature vector is aligned to the length of the item attribute feature vector by padding zeros at the tail.
Step 1.3.2, pass the processed user attribute feature vector through a fully connected network to obtain the hidden-layer features H^{Dense_user} of each user's attributes:

H^{Dense_user} = σ(P_user · W^{Dense_user} + b^{Dense_user})    (8)

where σ(·) denotes the ReLU activation function; P_user represents the user attribute features processed in step 1.3.1; W^{Dense_user} represents the weight parameter of the fully connected network for processing user features; b^{Dense_user} represents the bias term of the fully connected network for processing user features.
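Steps 1.3.1-1.3.2 can be sketched as below. The attribute names (age, gender), value ranges, and target length are hypothetical examples, not fields specified by the patent:

```python
import numpy as np

def one_hot(index, size):
    v = np.zeros(size)
    v[index] = 1.0
    return v

# hypothetical user attributes: one continuous (age), one discrete (gender)
age, gender = 30.0, 1
age_norm = (age - 18.0) / (80.0 - 18.0)  # min-max normalization, assumed range
p_user = np.concatenate([[age_norm], one_hot(gender, 2)])

# step 1.3.1: pad zeros at the tail so user and item attribute vectors align
target_len = 6
p_user = np.pad(p_user, (0, target_len - len(p_user)))

# step 1.3.2 / equation (8): H_Dense_user = ReLU(P_user . W + b)
rng = np.random.default_rng(2)
W_dense, b_dense = rng.random((target_len, 4)), rng.random(4)
H_dense_user = np.maximum(p_user @ W_dense + b_dense, 0.0)
```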
Step 1.4, splice the hidden-layer features H^{GCN_user}, H^{GAT_user} and H^{Dense_user} learned from the different information in steps 1.1, 1.2 and 1.3 together as the hidden-layer features of the user, and encode them through a fully connected network to obtain the encoder output E_user:

E_user = σ([H^{GCN_user} | H^{GAT_user} | H^{Dense_user}] · W^{E_user} + b^{E_user})    (9)

where σ(·) denotes the ReLU activation function; | represents the splicing operation; W^{E_user} represents the weight parameter of the fully connected network for encoding user information; b^{E_user} represents the bias term of the fully connected network for encoding user information. As shown in FIG. 4.
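The splicing-and-encoding of equation (9) is a single dense layer over the concatenated hidden features; all shapes below are illustrative:

```python
import numpy as np

def encode(h_gcn, h_gat, h_dense, W_e, b_e):
    """Equation (9): splice the three hidden-layer feature blocks and pass them
    through one fully connected layer with ReLU to get the encoder output."""
    h = np.concatenate([h_gcn, h_gat, h_dense], axis=1)
    return np.maximum(h @ W_e + b_e, 0.0)

rng = np.random.default_rng(3)
n_users = 5
h_gcn = rng.random((n_users, 6))    # from the GCN branch
h_gat = rng.random((n_users, 4))    # from the GAT branch
h_dense = rng.random((n_users, 3))  # from the attribute branch
W_e, b_e = rng.random((13, 8)), rng.random(8)
E_user = encode(h_gcn, h_gat, h_dense, W_e, b_e)  # one embedding per user
```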
Step 2, construct an encoder fusing a graph convolutional network, a graph attention network and a fully connected network, and encode the item-related information by aggregating the information of the neighbor nodes.
Step 2.1, construct a GCN graph convolutional network and learn the hidden-layer representation of the item nodes by aggregating the information of the neighbor (user) nodes, as shown in FIG. 5.
Step 2.1.1, using the scores of the user-item interactions, construct an item-user sparse adjacency matrix for the interactions under each score, and normalize the adjacency matrix:

Â_i^{item} = D_{item}^{-1} · A_i^{item}    (10)

where Â_i^{item} represents the normalized item-user adjacency matrix for score i, D_{item} represents the diagonal matrix of the degrees of the item nodes, and A_i^{item} represents the unnormalized item-user adjacency matrix.

Step 2.1.2, for the interactions under each score, construct a GCN layer and perform the convolution operation to obtain the hidden-layer features H^{GCN_item} of each item node:

H^{GCN_item} = σ([Â_1^{item} W_1^{GCN_item} | … | Â_k^{item} W_k^{GCN_item}])    (11)

where σ(·) denotes the ReLU activation function, | represents the splicing operation, and W_i^{GCN_item} represents the weight parameter for the convolution operation on the item nodes under score i.
Step 2.2, construct a GAT graph attention network and learn a hidden-layer representation from the users' reviews of the items, as shown in FIG. 6.
Step 2.2.1, using the review text of the user, express the review text as a review feature vector with word2vec and average vectorization:

U_i = (1/N) · Σ_{k=1}^{N} U(word_k)    (12)

where U_i represents the review feature vector of the i-th user node, U(word_k) is the word vector obtained by vectorizing with word2vec the word word_k used by the user, and N is the number of words in the review text.

Similarly:

I_j = (1/N) · Σ_{k=1}^{N} I(word_k)    (13)

where I_j represents the description feature vector of the j-th item node, I(word_k) is the word vector obtained by vectorizing with word2vec the word word_k of the item description, and N is the number of words in the description text.
Step 2.2.2, compute the degree of correlation between node I_j and its neighbor node U_i:

e_{ji} = LeakyReLU(a_{item}^T [W^{GAT_item} I_j | W^{GAT_item} U_i])    (14)

α_{ji} = exp(e_{ji}) / Σ_{U_k ∈ N(I_j)} exp(e_{jk})    (15)

where α_{ji} is the normalized degree of correlation between node I_j and node U_i; U_i ∈ N(I_j), with N(I_j) representing the set of users who purchased the j-th item; W^{GAT_item} is the weight parameter for the transformation of item-node information, a_{item} is a weight parameter, LeakyReLU(·) is the activation function, and exp(·) represents the exponential function with base e.

Step 2.2.3, aggregate the neighbor-node information to update the node information of item node I_j:

H_j^{GAT_item} = σ( Σ_{U_i ∈ N(I_j)} α_{ji} · W^{GAT_item} U_i )    (16)
and 2.3, constructing a full-connection network, and learning hidden layer representation from the article attribute characteristics, as shown in fig. 7.
And 2.3.1, performing normalization processing on the continuous data by utilizing the attribute feature information of the article, performing one-hot coding on the discrete data, and aligning the length of the processed attribute feature vector with the length of the attribute feature vector of the user in a front zero filling mode.
Step 2.3.2, pass the processed item attribute feature vector through a fully connected network to obtain the hidden-layer features H^{Dense_item} of each item's attributes:

H^{Dense_item} = σ(P_item · W^{Dense_item} + b^{Dense_item})    (17)

where σ(·) denotes the ReLU activation function; P_item represents the item attribute features processed in step 2.3.1; W^{Dense_item} represents the weight parameter of the fully connected network for processing item features; b^{Dense_item} represents the bias term of the fully connected network for processing item features.
Step 2.4, splice the hidden-layer features H^{GCN_item}, H^{GAT_item} and H^{Dense_item} learned from the different information in steps 2.1, 2.2 and 2.3 together as the hidden-layer features of the item, and encode them through a fully connected network to obtain the encoder output E_item:

E_item = σ([H^{GCN_item} | H^{GAT_item} | H^{Dense_item}] · W^{E_item} + b^{E_item})    (18)

where σ(·) denotes the ReLU activation function; | represents the splicing operation; W^{E_item} represents the weight parameter of the fully connected network for encoding item information; b^{E_item} represents the bias term of the fully connected network for encoding item information. As shown in FIG. 8.
Step 3, construct a bilinear decoder and reconstruct the users' scores for the items. The specific calculation is as follows:
y_hat = [embedding1 | embedding2] · W_{classifier}    (19)

where y_hat represents the reconstructed score of the user for the item; | represents the splicing operation, and:

embedding1 = sum(E_user · W_1) * E_item    (20)

embedding2 = sum(E_user · W_2) * E_item    (21)

where * denotes the Hadamard product and sum(·) denotes summing each row of the matrix; W_{classifier}, W_1 and W_2 are the weight parameters of the bilinear decoder.
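One possible reading of equations (19)-(21) — taking sum(·) as the per-row sum, which gives one scalar per user that scales the item embedding — can be sketched as follows; this interpretation and all shapes are assumptions for illustration:

```python
import numpy as np

def bilinear_decode(E_user, E_item, W1, W2, W_cls):
    """Equations (19)-(21): form two gated interactions between aligned user and
    item embeddings (* is the Hadamard product), splice them, and project to a
    predicted score with W_classifier."""
    emb1 = (E_user @ W1).sum(axis=1, keepdims=True) * E_item  # eq. (20)
    emb2 = (E_user @ W2).sum(axis=1, keepdims=True) * E_item  # eq. (21)
    return np.concatenate([emb1, emb2], axis=1) @ W_cls       # eq. (19)

rng = np.random.default_rng(4)
d, n_pairs = 8, 5
E_user, E_item = rng.random((n_pairs, d)), rng.random((n_pairs, d))
W1, W2 = rng.random((d, d)), rng.random((d, d))
W_cls = rng.random(2 * d)
y_hat = bilinear_decode(E_user, E_item, W1, W2, W_cls)  # one score per pair
```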
Step 4, perform a Top-N ranking of the items using the reconstructed scores from step 3, and select the top N items to recommend to the user.
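Step 4 is a plain ranking of the predicted scores. A sketch with toy scores follows; excluding already-interacted items is a common refinement assumed here, not stated by the patent:

```python
import numpy as np

def top_n_recommend(scores, item_ids, seen, n):
    """Step 4: rank items by reconstructed score and return the top N,
    skipping items the user has already interacted with."""
    ranked = sorted(
        (item for item in item_ids if item not in seen),
        key=lambda item: -scores[item_ids.index(item)],
    )
    return ranked[:n]

scores = np.array([4.1, 2.3, 4.8, 3.9, 1.2])  # toy predicted scores y_hat
items = ["i1", "i2", "i3", "i4", "i5"]
rec = top_n_recommend(scores, items, seen={"i1"}, n=2)
```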
Claims (4)
1. A personalized recommendation method based on a graph self-encoder is characterized in that: the method comprises the following specific steps:
step 1, constructing an encoder fusing a graph convolution network, a graph attention network and a full-connection network, and encoding user related information by aggregating information of neighbor nodes;
step 2, constructing an encoder fusing a graph convolution network, a graph attention network and a full-connection network, and encoding article related information by aggregating information of neighbor nodes;
step 3, constructing a bilinear decoder by using the information related to the user and the article coded in the steps 1 and 2, and reconstructing the scoring condition of the user on the article;
and 4, performing Top-N sequencing on the articles by using the scores of the articles reconstructed by the user in the step 3, and selecting the Top N articles to recommend to the user.
2. The personalized recommendation method based on graph self-encoder as claimed in claim 1, wherein: the encoder for constructing the fusion graph convolution network, the graph attention network and the full-connection network in the step 1 encodes the user related information by aggregating the information of the neighbor nodes, and specifically comprises the following steps:
step 1.1, constructing a GCN graph convolution network, and learning hidden layer representation of a user node by aggregating neighbor node information;
step 1.1.1, using the scores of the user-item interactions, establish a user-item sparse adjacency matrix for the interactions under each score, and normalize the adjacency matrix:

Â_i^{user} = D_{user}^{-1} · A_i^{user}    (1)

where Â_i^{user} represents the normalized user-item adjacency matrix for score i, D_{user} represents the diagonal matrix of the degrees of the user nodes, and A_i^{user} represents the unnormalized user-item adjacency matrix;

step 1.1.2, for the interactions under each score, construct a GCN layer and perform the convolution operation to obtain the hidden-layer features H^{GCN_user} of each user node:

H^{GCN_user} = σ([Â_1^{user} W_1^{GCN_user} | … | Â_k^{user} W_k^{GCN_user}])    (2)

where σ(·) denotes the ReLU activation function, | represents the splicing operation, k is the number of score levels, and W_i^{GCN_user} represents the weight parameter for the convolution operation on the user nodes under score i;
step 1.2, constructing a GAT graph attention network, and learning a hidden-layer representation from the users' reviews of the items;
step 1.2.1, expressing the comment text into a comment feature vector by using word2vec and an average vectorization method by using the comment text of a user:
wherein, UiThe node represents the comment feature vector, U (word), of the ith user nodei) Word used by useriUsing word2vec to vectorize to obtain word vectors, wherein N is the number of words in the comment text;
similarly:

I_j = \frac{1}{N} \sum_{k=1}^{N} I(word_k)    (4)

wherein I_j denotes the description feature vector of the j-th item node, I(word_k) is the word vector obtained by vectorizing the word word_k of the item description with word2vec, and N is the number of words in the description text;
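The average vectorization of step 1.2.1 reduces to a mean over word2vec vectors. A sketch with a toy embedding table standing in for a trained word2vec model; skipping out-of-vocabulary words is an added assumption:

```python
def average_word_vectors(words, embeddings, dim):
    """U_i = (1/N) * sum_k U(word_k): mean of the word2vec vectors of the
    words in a review; out-of-vocabulary words are skipped here."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    if not vecs:
        return [0.0] * dim
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]

# toy table standing in for a trained word2vec model
emb = {"good": [1.0, 0.0], "camera": [0.0, 1.0]}
print(average_word_vectors(["good", "camera"], emb, 2))  # [0.5, 0.5]
```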
step 1.2.2, computing the correlation between node U_i and its neighbor node I_j:

e_{ij} = LeakyReLU\left( a_{user}^{T} \left[ w^{GAT\_user} U_i \,\|\, w^{GAT\_user} I_j \right] \right)    (5)

\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{I_k \in N(U_i)} \exp(e_{ik})}    (6)

wherein \alpha_{ij} is the normalized correlation between node U_i and node I_j; I_j \in N(U_i), where N(U_i) denotes the set of items purchased by the i-th user; w^{GAT\_user} is the weight parameter transforming the user-node information, a_{user} is the attention weight vector, LeakyReLU(·) is the activation function, and exp(·) is the exponential function with base e;
step 1.2.3, aggregating the neighbor-node information to update the node information of user node U_i:

H_i^{GAT\_user} = \sigma\left( \sum_{I_j \in N(U_i)} \alpha_{ij}\, w^{GAT\_user} I_j \right)    (7)
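Steps 1.2.2 and 1.2.3 can be sketched for a single user node as follows; `transform` applies the shared weight matrix w^{GAT_user}, and the LeakyReLU slope of 0.2 is an assumed default not stated in the claim:

```python
import math

def transform(v, W):
    """Apply weight matrix W (rows x cols) to vector v."""
    return [sum(W[r][c] * v[c] for c in range(len(v))) for r in range(len(W))]

def leaky_relu(x, slope=0.2):
    return x if x > 0.0 else slope * x

def gat_user_update(u, neighbors, W, a):
    """Eqs. (5)-(7): score each neighbor with LeakyReLU(a^T [W.u || W.i_j]),
    softmax-normalize the scores, then aggregate the transformed neighbors
    with a final ReLU."""
    wu = transform(u, W)
    wns = [transform(i, W) for i in neighbors]
    e = [leaky_relu(sum(x * y for x, y in zip(a, wu + wn))) for wn in wns]
    z = sum(math.exp(v) for v in e)
    alpha = [math.exp(v) / z for v in e]
    h = [max(0.0, sum(al * wn[d] for al, wn in zip(alpha, wns)))
         for d in range(len(wu))]
    return h, alpha
```

With an identity weight matrix and a zero attention vector, the scores tie and the two neighbors each receive attention 0.5.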
step 1.3, constructing a fully connected network, and learning a hidden-layer representation from the user attribute features;
step 1.3.1, using the user attribute feature information, normalizing the continuous features and one-hot encoding the discrete features, and aligning the length of the processed attribute feature vector with that of the item attribute feature vector by zero-padding at the tail;
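A sketch of the preprocessing in step 1.3.1, assuming min-max normalization for the continuous features (the claim does not name the normalization) and a single discrete feature:

```python
def preprocess_attributes(continuous, category, categories, target_len):
    """Min-max normalize continuous features, one-hot encode one discrete
    feature, then zero-pad at the tail to align with the item vector length."""
    lo, hi = min(continuous), max(continuous)
    norm = [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in continuous]
    onehot = [1.0 if c == category else 0.0 for c in categories]
    vec = norm + onehot
    return vec + [0.0] * (target_len - len(vec))

print(preprocess_attributes([18, 30, 42], "F", ["F", "M"], 7))
# [0.0, 0.5, 1.0, 1.0, 0.0, 0.0, 0.0]
```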
step 1.3.2, passing the processed user attribute feature vector through a fully connected network to obtain the hidden-layer feature H_Dense_user of the user attributes:

H_{Dense\_user} = \sigma( P_{user} \cdot W_{Dense\_user} + b_{Dense\_user} )    (8)

wherein \sigma(·) denotes the ReLU activation function; P_{user} denotes the user attribute features processed in step 1.3.1; W_{Dense\_user} denotes the weight parameter of the fully connected network for processing the user features; b_{Dense\_user} denotes the bias term of the fully connected network for processing the user features;
step 1.4, concatenating the hidden-layer features H_GCN_user, H_GAT_user and H_Dense_user learned from the different information in steps 1.1, 1.2 and 1.3 as the hidden-layer feature of the user, and passing it through a fully connected network for encoding to obtain the encoder output E_user:

E_{user} = \sigma( [ H_{GCN\_user} \| H_{GAT\_user} \| H_{Dense\_user} ] \cdot W_{E\_user} + b_{E\_user} )    (9)

wherein \sigma(·) denotes the ReLU activation function; \| denotes the concatenation operation; W_{E\_user} denotes the weight parameter of the fully connected network for encoding the user information; b_{E\_user} denotes the bias term of the fully connected network for encoding the user information.
3. The personalized recommendation method based on a graph self-encoder as claimed in claim 2, wherein in step 2 the encoder fusing a graph convolutional network, a graph attention network and a fully connected network encodes the item-related information by aggregating neighbor-node information, specifically comprising the following steps:
step 2.1, constructing a GCN (graph convolutional network), and learning a hidden-layer representation of each item node by aggregating the information of its neighbor nodes (user nodes);
step 2.1.1, using the users' ratings of the items, constructing a sparse item-user adjacency matrix for the interactions under each rating level, and normalizing each adjacency matrix:

\hat{M}^{item}_i = D_{item}^{-1} M^{item}_i    (10)

wherein \hat{M}^{item}_i denotes the normalized item-user adjacency matrix for rating i, D_{item} denotes the diagonal matrix of the degrees of the item nodes, and M^{item}_i denotes the unnormalized item-user adjacency matrix;
step 2.1.2, for the interactions under each rating level, constructing a GCN layer to perform the convolution operation and obtain the hidden-layer feature H_GCN_item of each item node:

H^{GCN\_item} = \sigma\left( \sum_i \hat{M}^{item}_i W^{GCN\_item}_i \right)    (11)

wherein W^{GCN\_item}_i denotes the weight parameter for rating i used in the convolution operation on the item nodes;
step 2.2, constructing a GAT (graph attention network), and learning a hidden-layer representation from the users' reviews of the items;
step 2.2.1, using the user's review text, representing the review as a review feature vector by word2vec with average vectorization:

U_i = \frac{1}{N} \sum_{k=1}^{N} U(word_k)    (12)

wherein U_i denotes the review feature vector of the i-th user node, U(word_k) is the word vector obtained by vectorizing the word word_k used by the user with word2vec, and N is the number of words in the review text;
similarly:

I_j = \frac{1}{N} \sum_{k=1}^{N} I(word_k)    (13)

wherein I_j denotes the description feature vector of the j-th item node, I(word_k) is the word vector obtained by vectorizing the word word_k of the item description with word2vec, and N is the number of words in the description text;
step 2.2.2, computing the correlation between node I_j and its neighbor node U_i:

e_{ji} = LeakyReLU\left( a_{item}^{T} \left[ w^{GAT\_item} I_j \,\|\, w^{GAT\_item} U_i \right] \right)    (14)

\alpha_{ji} = \frac{\exp(e_{ji})}{\sum_{U_k \in N(I_j)} \exp(e_{jk})}    (15)

wherein \alpha_{ji} is the normalized correlation between node I_j and node U_i; U_i \in N(I_j), where N(I_j) denotes the set of users who purchased the j-th item; w^{GAT\_item} is the weight parameter transforming the item-node information, a_{item} is the attention weight vector, LeakyReLU(·) is the activation function, and exp(·) is the exponential function with base e;
step 2.2.3, aggregating the neighbor-node information to update the node information of item node I_j:

H_j^{GAT\_item} = \sigma\left( \sum_{U_i \in N(I_j)} \alpha_{ji}\, w^{GAT\_item} U_i \right)    (16)
step 2.3, constructing a fully connected network, and learning a hidden-layer representation from the item attribute features;
step 2.3.1, using the item attribute feature information, normalizing the continuous features and one-hot encoding the discrete features, and aligning the length of the processed attribute feature vector with that of the user attribute feature vector by zero-padding at the front;
step 2.3.2, passing the processed item attribute feature vector through a fully connected network to obtain the hidden-layer feature H_Dense_item of the item attributes:

H_{Dense\_item} = \sigma( P_{item} \cdot W_{Dense\_item} + b_{Dense\_item} )    (17)

wherein \sigma(·) denotes the ReLU activation function; P_{item} denotes the item attribute features processed in step 2.3.1; W_{Dense\_item} denotes the weight parameter of the fully connected network for processing the item features; b_{Dense\_item} denotes the bias term of the fully connected network for processing the item features;
step 2.4, concatenating the hidden-layer features H_GCN_item, H_GAT_item and H_Dense_item learned from the different information in steps 2.1, 2.2 and 2.3 as the hidden-layer feature of the item, and passing it through a fully connected network for encoding to obtain the encoder output E_item:

E_{item} = \sigma( [ H_{GCN\_item} \| H_{GAT\_item} \| H_{Dense\_item} ] \cdot W_{E\_item} + b_{E\_item} )    (18)

wherein \sigma(·) denotes the ReLU activation function; \| denotes the concatenation operation; W_{E\_item} denotes the weight parameter of the fully connected network for encoding the item information; b_{E\_item} denotes the bias term of the fully connected network for encoding the item information.
4. The personalized recommendation method based on a graph self-encoder as claimed in claim 3, wherein step 3 constructs a bilinear decoder from the user and item information encoded in steps 1 and 2 and reconstructs the users' ratings of the items, specifically computed as:

\hat{y} = ( embedding_1 \| embedding_2 ) \cdot W_{classifier}    (19)

wherein \hat{y} denotes the reconstructed rating of the item by the user; \| denotes the concatenation operation, and:
embedding_1 = sum( E_{user} W_1 ) * E_{item}    (20)

embedding_2 = sum( E_{user} W_2 ) * E_{item}    (21)

wherein * denotes the Hadamard product and sum(·) denotes summing each row of the matrix; W_{classifier}, W_1 and W_2 are the weight parameters of the bilinear decoder.
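Equations (19)-(21) can be sketched for one user-item pair as below; treating sum(·) as acting over the batch dimension, so that per pair the user embedding is projected by W_k and combined with the item embedding via the Hadamard product, is an interpretive assumption:

```python
def bilinear_decode(e_user, e_item, W1, W2, w_classifier):
    """y_hat = (embedding1 || embedding2) . W_classifier for one user-item
    pair: project the user embedding with each W_k, take the Hadamard
    product with the item embedding (eqs. (20)-(21)), concatenate, and
    apply the classifier weights (eq. (19))."""
    def emb(W):
        proj = [sum(eu * W[i][j] for i, eu in enumerate(e_user))
                for j in range(len(W[0]))]
        return [p * ei for p, ei in zip(proj, e_item)]  # Hadamard product
    x = emb(W1) + emb(W2)  # '||' concatenation
    return sum(xi * wi for xi, wi in zip(x, w_classifier))
```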
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011283015.5A CN112347362B (en) | 2020-11-16 | 2020-11-16 | Personalized recommendation method based on graph self-encoder |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112347362A true CN112347362A (en) | 2021-02-09 |
CN112347362B CN112347362B (en) | 2022-05-03 |
Family
ID=74362932
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113139128A (en) * | 2021-05-07 | 2021-07-20 | 厦门大学 | Blog post recommendation method and system based on graph convolutional neural network |
CN113269647A (en) * | 2021-06-08 | 2021-08-17 | 上海交通大学 | Graph-based transaction abnormity associated user detection method |
CN113377656A (en) * | 2021-06-16 | 2021-09-10 | 南京大学 | Crowd-sourcing recommendation method based on graph neural network |
CN113516379A (en) * | 2021-06-25 | 2021-10-19 | 深圳信息职业技术学院 | Work order scoring method for intelligent quality inspection |
CN114896467A (en) * | 2022-04-24 | 2022-08-12 | 北京月新时代科技股份有限公司 | Neural network-based field matching method and intelligent data entry method |
CN114896468A (en) * | 2022-04-24 | 2022-08-12 | 北京月新时代科技股份有限公司 | File type matching method and intelligent data entry method based on neural network |
CN115186086A (en) * | 2022-06-27 | 2022-10-14 | 长安大学 | Literature recommendation method for embedding expected value in heterogeneous environment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110874439A (en) * | 2019-11-20 | 2020-03-10 | 电子科技大学 | Recommendation method based on comment information |
US20200134804A1 (en) * | 2018-10-26 | 2020-04-30 | Nec Laboratories America, Inc. | Fully convolutional transformer based generative adversarial networks |
CN111127146A (en) * | 2019-12-19 | 2020-05-08 | 江西财经大学 | Information recommendation method and system based on convolutional neural network and noise reduction self-encoder |
WO2020147612A1 (en) * | 2019-01-16 | 2020-07-23 | Alibaba Group Holding Limited | Graph-based convolution network training method, apparatus and system |
CN111782765A (en) * | 2020-06-24 | 2020-10-16 | 安徽农业大学 | Recommendation method based on graph attention machine mechanism |
CN111881363A (en) * | 2020-06-23 | 2020-11-03 | 北京工业大学 | Recommendation method based on graph interaction network |
Non-Patent Citations (2)
Title |
---|
CHAO SHANG: "MGAT: Multimodal Graph Attention Network for Recommendation", Information Processing and Management *
HUANG Liwei: "A Survey of Recommender Systems Based on Deep Learning", Chinese Journal of Computers *
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||