CN115293851A - Recommendation method for introducing item category information into graph neural network - Google Patents


Info

Publication number
CN115293851A
CN115293851A (application CN202210964936.0A)
Authority
CN
China
Prior art keywords
user
vector
item
module
article
Prior art date
Legal status
Pending
Application number
CN202210964936.0A
Other languages
Chinese (zh)
Inventor
鲍军鹏
许宏才
侯力方
Current Assignee
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xian Jiaotong University
Priority to CN202210964936.0A
Publication of CN115293851A
Legal status: Pending (Current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0631 Item recommendations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F 16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/906 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9536 Search customisation based on social or collaborative filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2216/00 Indexing scheme relating to additional aspects of information retrieval not explicitly covered by G06F16/00 and subgroups
    • G06F 2216/03 Data mining

Abstract

A recommendation method that introduces item category information into a graph neural network determines the input data and constructs a potential factor module, a category information module, a category weighting aggregation module and a score prediction module. The input data is a user-item interaction graph representing the interaction relations among users, items and auxiliary information. The potential factor module models the intention with which a user selects items; the category information module obtains the category information of the items; the category weighting aggregation module performs weighted aggregation over the knowledge graph and the item prior knowledge vectors; the score prediction module integrates the vector representations of users and items produced by the potential factor module and the category weighting aggregation module and learns, on the given interaction behavior set and knowledge graph, a function that predicts how likely a user is to purchase or select an item. By introducing item category information into the recommendation scenario, the method not only meets the requirements of practical recommendation scenarios but also improves the interpretability of the model.

Description

Recommendation method for introducing item category information into graph neural network
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a recommendation method for introducing article category information into a graph neural network.
Background
With the rapid development of Internet-related industries, people's lives have entered an information-driven and intelligent era, and various online platforms provide great convenience. When using an online platform, a user is usually interested in only a few specific items, and facing a large amount of irrelevant information degrades the user experience. To ease the difficulty of choice brought by information explosion and to provide users with the information they are interested in within a short time, recommendation methods have gradually become popular on platforms in many fields.
Conventional recommendation methods generally follow two ideas: making recommendations based on content, and making recommendations based on collaborative filtering. The fundamental difference between them is whether auxiliary information about the user or the item is used, such as the user's personal information or the item's category, brand and price. With the progress of computer-related technologies, recommendation methods based on deep learning have gradually become mainstream.
A graph neural network is a new type of neural network model designed for graph-structured data, which can flexibly model scene data that is close to human social life. A knowledge graph is a large data network that represents entities in the world and the relationships between them in the form of a graph. Entities represent things or concepts in various application scenarios, and relations represent some kind of connection between entities. A knowledge graph can therefore structurally represent the scene information of the objective world and, through embedded representations and relational reasoning, provide prior knowledge for other deep learning tasks, thereby improving the accuracy and effect of the overall task.
However, existing recommendation methods based on graph neural networks and knowledge graphs, i.e. knowledge-graph-assisted user-item interaction recommendation systems, cannot mine the interaction information between user intentions and item categories, and they do not establish the relation between items and user intentions from the perspective of item category information, so the preference of different user intentions for items of different categories cannot be effectively reflected in the recommendation.
Disclosure of Invention
To overcome the shortcomings of the prior art, the object of the invention is to provide a recommendation method that introduces item category information into a graph neural network. Introducing item category information into the recommendation scenario not only meets the requirements of practical recommendation scenarios but also improves the interpretability of the model.
In order to achieve the purpose, the invention adopts the technical scheme that:
a recommendation method for introducing item category information in a graph neural network comprises the following steps:
step 1, determining input data
The input data is a user-item interaction graph representing the interaction relations among users, items and auxiliary information; the user-item interaction graph comprises relationship data and a knowledge graph; the relationship data represents the interaction behavior between users and items; the knowledge graph represents the relation between an item and its auxiliary information; the interaction behavior refers to the user's implicit feedback on an item; the auxiliary information refers to a series of feature descriptions related to the item;
step 2, constructing a potential factor module
The potential factor module is used to model the intention with which a user selects items, a user always selecting an item with some intention, and yields vector representations of users and items in this module;
step 3, constructing a category information module
The category information module is used for obtaining category information of the article, and the category information module adopts an article prior knowledge vector as characteristic information of the article;
step 4, constructing a category weighting aggregation module
The category weighting and aggregating module is used for weighting and aggregating the knowledge graph and the prior knowledge vector of the article to obtain the vector representation of the user and the article in the module;
step 5, constructing a scoring prediction module
The scoring prediction module is used for integrating vector representations of the users and the articles obtained by the potential factor module and the category weighting aggregation module, further performing vector calculation, and finally learning a function on a given interaction behavior set and a knowledge graph, wherein the function can predict how likely a user is to buy or select an article.
Compared with the prior art, the invention provides an item recommendation method based on a graph neural network, addressing the problem that existing item interaction recommendation systems cannot mine the interaction information between user intentions and item categories and therefore cannot effectively reflect the correspondence between user intentions and item preferences during recommendation. The method uses the prior knowledge of items, realizes item classification through a clustering operation, and achieves a more personalized recommendation effect through weighted aggregation during the node information aggregation of the graph neural network.
Drawings
FIG. 1 is a schematic diagram of the overall module of the method of the present invention.
FIG. 2 is a structural diagram of the method of the present invention.
Fig. 3 is a diagram illustrating an exemplary calculation of a browsing record vector after personalized weighting.
Fig. 4 is a schematic diagram of a calculation process of the weighted aggregation module.
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the drawings and examples.
Fig. 1 is a schematic diagram of the overall structure of the method of the present invention, which comprises four modules: a potential factor module, a category information module, a category weighting aggregation module and a score prediction module. The knowledge graph and the item information pass through the potential factor module, the category information module and the category weighting aggregation module to obtain aggregation result vectors containing different information; a user-item score is then obtained through the score prediction module, and finally items are recommended to the user according to the ranking of the scores.
As shown in fig. 2, the method of the present invention is implemented as follows:
(1) Description of the data
The user-item interaction graph serves as the input data of the method. It represents the interaction relations among users, items and auxiliary information and comprises two parts: the relationship data and the knowledge graph.
In the present invention, the relationship data represents the interaction behavior between users and items. The interaction behavior is the user's implicit feedback on an item, such as browsing, purchasing, liking, collecting and forwarding. In the relationship data, U denotes the user set, I denotes the item set, u denotes a user and i denotes an item selected by user u; the set of interaction behaviors between users and items is then O^+ = {(u, i) | u ∈ U, i ∈ I}, i.e. the implicit feedback of the users.
The knowledge graph represents the relation between an item and its auxiliary information. V denotes the entity set, R denotes the relation set, i.e. the set of relations r between items and their auxiliary information, h denotes an item and t denotes auxiliary information of the item; the auxiliary information refers to a series of feature descriptions related to the item. With v denoting an entity in the entity set (v may be h or t), the knowledge graph can be expressed as G = {(h, r, t) | h, t ∈ V, r ∈ R}. For example, if the genre of a certain book is "novel", then h represents the book, r represents the relation "genre is" and t represents "novel".
The potential factor module and the category weighting aggregation module each perform their own modeling using the user-item interaction information. The final task is to learn, on the given interaction set O^+ and knowledge graph G, a function that can predict how likely a user is to purchase or select an item.
(2) Potential factor module
In the potential factor module, each user selects an item with some intention, which reflects the user's preference for the item, so the intention behind a user's item selection is treated as a corresponding potential factor p. All potential factors p form a potential factor set P. According to the potential factor set P, each user-item pair (u, i) is decomposed to form {(u, p, i) | p ∈ P}, thereby recombining the original heterogeneous graph. Each potential factor p is matched with the relations in the knowledge graph, and the vector e_p of the potential factor p is constructed by applying an attention mechanism:
$$e_p = \sum_{r \in R} \alpha(r, p)\, e_r \qquad (1)$$
where e_r denotes the ID embedding vector of relation r, and α(r, p) denotes the importance score between the potential factor p and the relation of the item in the knowledge graph, calculated as:
$$\alpha(r, p) = \frac{\exp(w_{rp})}{\sum_{r' \in R} \exp(w_{r'p})} \qquad (2)$$
where w_{rp} denotes a trainable weight corresponding to the specific relation r in the knowledge graph and the specific potential factor p, and r' denotes the other relations of the user-selected item in the knowledge graph. The attention score does not belong to any single user; it applies to every user whose interactions involve that combination of relation and potential factor.
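By way of illustration only, the following is a minimal PyTorch sketch of formulas (1) and (2); the module name, the use of a relation embedding table for e_r with a trainable score matrix w_rp, and the embedding dimension are assumptions made for this example rather than part of the claimed method.

```python
import torch
import torch.nn as nn

class PotentialFactorEmbedding(nn.Module):
    """Builds the potential factor vectors e_p from relation embeddings, per formulas (1)-(2)."""
    def __init__(self, num_relations: int, num_factors: int, dim: int = 64):
        super().__init__()
        self.relation_emb = nn.Embedding(num_relations, dim)      # e_r
        self.w_rp = nn.Parameter(torch.empty(num_relations, num_factors))
        nn.init.xavier_uniform_(self.w_rp)

    def forward(self) -> torch.Tensor:
        # alpha(r, p): softmax of w_{rp} over relations r'          (formula 2)
        alpha = torch.softmax(self.w_rp, dim=0)                    # shape (R, P)
        # e_p = sum_r alpha(r, p) * e_r                             (formula 1)
        e_p = alpha.t() @ self.relation_emb.weight                 # shape (P, dim)
        return e_p
```

Each column of the softmax gives the attention weights of one potential factor over all relations.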
In the potential factor module, since a user may purchase items with several different intentions, the independence between the potential factors needs to be improved in order to better distinguish these intentions. Different potential factors should be independent of each other: independent potential factors provide information describing different user behaviors, so the learned embedding vectors of the potential factors are guided to be as independent as possible. The smaller the distance correlation coefficient of two potential factors, the more independent the two potential factors are. The independence between the potential factors is denoted L_IND and calculated as follows:
$$L_{IND} = \sum_{p, p' \in P,\; p \neq p'} dCor(e_p, e_{p'}) \qquad (3)$$

$$dCor(e_p, e_{p'}) = \frac{dCov(e_p, e_{p'})}{\sqrt{dVar(e_p) \cdot dVar(e_{p'})}} \qquad (4)$$
where e_p and e_{p'} are the vectors of potential factors p and p'; dCor(e_p, e_{p'}) denotes the distance correlation coefficient between potential factors p and p'; dCov(e_p, e_{p'}) denotes the distance covariance of e_p and e_{p'}; dVar(e_p) and dVar(e_{p'}) denote the distance variances of e_p and e_{p'}.
L_IND is one term of the overall loss function. The invention expects the independence of the potential factors to be as large as possible, i.e. the value calculated by the above formula to be as small as possible. Through continuous training, the neural network automatically drives the value of L_IND smaller and smaller.
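A minimal sketch of the distance correlation in formulas (3) and (4) follows; treating the components of each potential factor vector as scalar samples and looping over factor pairs are implementation assumptions made for this illustration.

```python
import torch

def distance_correlation(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """dCor between two embedding vectors, per formula (4)."""
    def centered_dist(v):
        d = torch.abs(v.unsqueeze(0) - v.unsqueeze(1))            # pairwise |v_i - v_j|
        return d - d.mean(0, keepdim=True) - d.mean(1, keepdim=True) + d.mean()
    A, B = centered_dist(x), centered_dist(y)
    dcov2 = (A * B).mean()                                        # squared distance covariance
    dvar_x2, dvar_y2 = (A * A).mean(), (B * B).mean()             # squared distance variances
    return dcov2.clamp(min=0).sqrt() / (dvar_x2 * dvar_y2).clamp(min=eps).sqrt().sqrt()

def independence_loss(e_p: torch.Tensor) -> torch.Tensor:
    """L_IND: sum of dCor over all pairs of distinct potential factor vectors (formula 3)."""
    loss, num_factors = e_p.new_zeros(()), e_p.shape[0]
    for a in range(num_factors):
        for b in range(a + 1, num_factors):
            loss = loss + distance_correlation(e_p[a], e_p[b])
    return loss
```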
In the potential factor module, the vector representations of users and items are obtained through relational path aggregation and knowledge graph aggregation, respectively.
Relational path aggregation uses collaborative filtering. N_u = {(p, i) | (u, p, i) ∈ C} denotes the potential factor history and the first-order connectivity of user u, where C denotes the potential factor history. The potential factor information is then integrated from all historically interacted items, producing the representation of user u in the potential factor module:
$$e_u^{(1)} = \frac{1}{|N_u|} \sum_{(p, i) \in N_u} \beta(u, p)\, e_p \odot e_i^{(0)} \qquad (5)$$
where e_u^{(1)} denotes the embedding vector obtained after aggregating the potential factor history of user u, i.e. the vector representation of the user in the potential factor module; e_i^{(0)} denotes the ID embedding vector of item i in the potential factor module; ⊙ denotes the Hadamard product. The attention score β(u, p) is used to distinguish the importance of each potential factor p and is calculated as:
$$\beta(u, p) = \frac{\exp(e_p^{\top} e_u^{(0)})}{\sum_{p' \in P} \exp(e_{p'}^{\top} e_u^{(0)})} \qquad (6)$$
where e_u^{(0)} denotes the ID embedding vector of user u.
In knowledge graph aggregation, N_i = {(r, v) | (i, r, v) ∈ G} denotes the attributes of item i and its first-order connected entity nodes. The aggregation function takes the relational context into account, that is, each entity has different semantics under different relations. This produces the vector representation of item i:
$$e_i^{(1)} = \frac{1}{|N_i|} \sum_{(r, v) \in N_i} e_r \odot e_v^{(0)} \qquad (7)$$
where e_i^{(1)} denotes the item embedding vector of item i after aggregating its layer-1 adjacency information on the knowledge graph, i.e. the vector representation of the item in the potential factor module; e_v^{(0)} denotes the ID embedding vector of entity v.
(3) Category information module
In the category information module, the most important task is to obtain the item category information and to provide the corresponding category data for the operation of the subsequent modules. Because the user-item interaction graph contains no description of item data or features, the method adopts item prior knowledge vectors as the feature information of the items. Moreover, the data set contains no information related to item labels, so the method obtains the item prior knowledge vectors through clustering. Clustering yields several clusters, each of which represents a group of similar items, i.e. each cluster corresponds to an item category; the cluster center vector of each cluster, i.e. the item prior knowledge vector, represents the common characteristics of one category of items. The serial number of each cluster serves as a pseudo label, which marks and distinguishes the different clusters. Through clustering, each item is attached with the corresponding pseudo label. The pseudo labels serve as additional guiding information and enrich the user representation in the graph neural network recommendation scenario.
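The category information module can be sketched, for example, with k-means clustering; the function name, the use of scikit-learn's KMeans and the number of categories are illustrative assumptions, not specifics fixed by the invention.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_category_info(item_features: np.ndarray, n_categories: int = 8):
    """Cluster item feature vectors into categories and return pseudo labels and prior vectors."""
    km = KMeans(n_clusters=n_categories, n_init=10, random_state=0)
    pseudo_labels = km.fit_predict(item_features)   # cluster index of each item = pseudo label
    prior_vectors = km.cluster_centers_             # cluster centre = item prior knowledge vector
    return pseudo_labels, prior_vectors
```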
(4) Category weighted aggregation module
The main function of the category weighting aggregation module is to use the item category information from the category information module to selectively weight and aggregate information according to different users' preferences for items. The interaction matrix of user-item interactions is defined as follows:
$$A_{fg} = \begin{cases} 1, & \text{if the } f\text{-th user has interacted with the } g\text{-th item} \\ 0, & \text{otherwise} \end{cases} \qquad (8)$$
A_{fg} = 1 indicates that there is an interaction between the f-th user and the g-th item, i.e. the f-th user has selected the g-th item;
The item category information is integrated into the interaction matrix to obtain a reconstructed adjacency matrix A_ic:

$$A_{ic} = \mathrm{PersonalizedWeighting}(A) \qquad (9)$$

where PersonalizedWeighting is the weighting operation, which performs a normalization according to the number of items.
Fig. 3 shows a specific example of the computation of the personalized weighted browsing record vector. Suppose the item library contains items i1, i2, i3, i4 and user u1 has browsed items i1, i2, i3, so that the corresponding vector in the adjacency matrix is (1, 1, 1, 0). Clustering in the category information module yields i1 and i2 as items of the first category, i3 as an item of the second category and i4 as an item of the third category. For user u1, the browsed items cover only two categories: the number of first-category items is 2 and the number of second-category items is 1, so the weight of the first category is larger than that of the second category in proportion to these counts. The vector in the adjacency matrix is weighted accordingly and then normalized, finally giving the personalized weighted browsing record vector.
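A minimal sketch of the PersonalizedWeighting operation in formula (9) follows, under one plausible reading of the example above: each browsed item is weighted by the count of its category in the user's history, and the row is then normalized to sum to one. The exact weighting and normalization used by the invention may differ.

```python
import numpy as np

def personalized_weighting(A: np.ndarray, pseudo_labels: np.ndarray) -> np.ndarray:
    """Weight each user's row of the interaction matrix by item-category counts, then normalize."""
    A_ic = np.zeros_like(A, dtype=float)
    for u in range(A.shape[0]):
        items = np.nonzero(A[u])[0]                       # items the user interacted with
        if items.size == 0:
            continue
        cats, counts = np.unique(pseudo_labels[items], return_counts=True)
        count_of = dict(zip(cats.tolist(), counts.tolist()))
        weights = np.array([count_of[pseudo_labels[i]] for i in items], dtype=float)
        A_ic[u, items] = weights / weights.sum()          # row normalization
    return A_ic

# Worked example: items i1..i4, user u1 browsed i1, i2, i3; categories are (0, 0, 1, 2)
A = np.array([[1, 1, 1, 0]])
labels = np.array([0, 0, 1, 2])
print(personalized_weighting(A, labels))                  # -> [[0.4 0.4 0.2 0. ]]
```

Run on the worked example, this sketch weights items i1 and i2 more heavily than i3 and leaves i4 at zero.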
The calculation process of the category weighting aggregation module is shown in Fig. 4. After the adjacency matrix A_ic is obtained according to formula (9), the item embeddings are aggregated with it to obtain the user vector; the aggregation process is expressed as:

$$e_u^{c(1)} = \sum_{i \in N_u} (A_{ic})_{ui}\, e_i^{(0)} \qquad (10)$$
where e_u^{c(1)} denotes the vector representation of the user in the category weighting aggregation module. The aggregation of the item vectors is expressed as:
$$e_i^{c(1)} = \frac{1}{|N_i|} \sum_{(r, v) \in N_i} e_r \odot e_v^{(0)} \qquad (11)$$
where e_i^{c(1)} denotes the item embedding vector after aggregating the layer-1 adjacency information in the category weighting aggregation module; e_v^{(0)} denotes the ID embedding vector of entity v; e_r denotes the ID embedding vector of relation r.
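The aggregations of formulas (10) and (11) can then be sketched as follows; expressing the user-side aggregation as a single matrix product of A_ic with the item ID embeddings, and reusing the knowledge-graph neighbourhood structure for the item side, are assumptions made for this example.

```python
import torch

def category_weighted_user_agg(A_ic: torch.Tensor, e_i0: torch.Tensor) -> torch.Tensor:
    # e_u^c(1): each user's vector is the A_ic-weighted sum of item ID embeddings   (formula 10)
    return A_ic @ e_i0                                     # shape (num_users, dim)

def category_weighted_item_agg(i, e_r, e_v0, item_neigh):
    # e_i^c(1) = 1/|N_i| * sum (e_r ⊙ e_v^(0)) over the item's KG neighbours        (formula 11)
    terms = torch.stack([e_r[r] * e_v0[v] for r, v in item_neigh[i]])
    return terms.mean(dim=0)
```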
(5) Score prediction module
After the above modules, k layers can be executed iteratively, yielding the vector representations of users and items in the k-th propagation layer of the potential factor module and the category weighting aggregation module, respectively: the potential factor module user vector e_u^{p(k)}, the potential factor module item vector e_i^{p(k)}, the category weighting aggregation module user vector e_u^{c(k)}, and the category weighting aggregation module item vector e_i^{c(k)}. All vectors are 64-dimensional. In the score prediction module, an add operation is performed on them to obtain the user vector e_u^{(k)} and the item vector e_i^{(k)} of this layer:

$$e_u^{(k)} = e_u^{p(k)} + e_u^{c(k)} \qquad (12)$$

$$e_i^{(k)} = e_i^{p(k)} + e_i^{c(k)} \qquad (13)$$
The user and item vectors obtained by the add operation can still be propagated over multiple layers, generating user vectors e_u^{(k)} and item vectors e_i^{(k)} for each layer. To make the embedding vectors contain richer information, an add operation is performed over the user vectors and item vectors generated by all layers:
$$e_u = e_u^{(0)} + \dots + e_u^{(k)} \qquad (14)$$

$$e_i = e_i^{(0)} + \dots + e_i^{(k)} \qquad (15)$$
where e_u denotes the final user vector and e_i denotes the final item vector. The final score is obtained as the inner product of the two vectors:
$$y_{ui} = e_u^{\top} e_i \qquad (16)$$
the loss function employed by the method of the invention is as follows:
$$L = L_{BPR} + \lambda_1 L_{IND} + \lambda_2 \|\Theta\|_2^2 \qquad (17)$$
where Θ denotes the parameters that the neural network of the invention learns automatically; e_r denotes the ID embedding vector of relation r; e_p denotes the vector of potential factor p; λ_1 denotes the parameter controlling the independence loss; λ_2 denotes the regularization term parameter. The L_BPR loss function is as follows:
$$L_{BPR} = \sum_{(u, i, j) \in O} -\ln \sigma(y_{ui} - y_{uj}) \qquad (18)$$
in the formula: o = { (u, i, j) | (u, i) ∈ O + ,(u,j)∈O - Represents the final user vector training set; o is + Represents a positive sample (observed sample); o is - Represent negative samples (samples not observed); σ denotes a sigmoid function, i denotes an item, j denotes another item, y ui And y uj Representing the scores of the same user u for the item i and the item j; (ii) a
The final user-item scores y_ui are sorted to obtain the list of items recommended to the user.
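A minimal sketch of the score prediction module and the loss in formulas (12)-(18) follows; the per-layer vector lists, the illustrative values of λ_1 and λ_2, and the summation over a mini-batch are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def final_embeddings(user_layers_p, user_layers_c, item_layers_p, item_layers_c):
    """Add the two modules' vectors per layer, then sum over layers (formulas 12-15)."""
    e_u = sum(p + c for p, c in zip(user_layers_p, user_layers_c))
    e_i = sum(p + c for p, c in zip(item_layers_p, item_layers_c))
    return e_u, e_i

def score(e_u, e_i):
    return (e_u * e_i).sum(-1)                                        # y_ui = e_u^T e_i  (16)

def bpr_loss(e_u, e_i_pos, e_i_neg):
    # L_BPR = sum of -ln sigmoid(y_ui - y_uj) over (u, i, j) triples   (formula 18)
    return -F.logsigmoid(score(e_u, e_i_pos) - score(e_u, e_i_neg)).sum()

def total_loss(e_u, e_i_pos, e_i_neg, l_ind, params, lam1=1e-4, lam2=1e-5):
    reg = sum(p.pow(2).sum() for p in params)                          # ||Theta||_2^2
    return bpr_loss(e_u, e_i_pos, e_i_neg) + lam1 * l_ind + lam2 * reg # formula (17)
```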
One embodiment of the present invention is as follows.
The following is the user-item interaction behavior relationship data, where the list of numbers after each user name gives the numbers of the items accessed by that user.
Zhang San: 1 14 15 16 17 18 19 20 21 22
Li Si: 2 26 27 28 29 30 31 32 33 34
Wang Mei: 3 49 50 51 52 53 54 55 56
……
The knowledge graph of items and their auxiliary information is as follows, where each row consists of 3 numbers. The first number is the number of the item, e.g. item 40 is the "Shanglong Ridge" film and item 44 is the "My Father and Mother" film. The last number is the number of the item's auxiliary information, e.g. information No. 41 is "war film" and information No. 45 is "art film". The middle number is the relation between the item and the auxiliary information, e.g. 0 denotes the "is a" relation, 1 denotes the "director is" relation and 2 denotes the "release date is" relation.
40 0 41
44 0 45
5 1 6
49 0 50
53 0 54
25 2 26
57 0 58
52 1 6
73 0 62
41 10 42
64 0 65
……
The above data are input into the modules of the invention for training. After training, when Zhang San is given as input, the invention outputs a list of new items recommended to Zhang San, such as item No. 308, item No. 309 and item No. 310.
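A minimal sketch for reading data in the format shown above follows; the file names and the use of a colon after the user name as the separator are assumptions made for this example.

```python
def load_interactions(path="interactions.txt"):
    """Read 'user name: item numbers' lines into a dict of user -> list of item IDs."""
    user_items = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            name, items = line.split(":", 1)
            user_items[name.strip()] = [int(x) for x in items.split()]
    return user_items            # e.g. {"Zhang San": [1, 14, 15, ...], ...}

def load_kg_triples(path="kg.txt"):
    """Read '(item, relation, auxiliary information)' triples, one per line."""
    triples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            h, r, t = map(int, line.split())
            triples.append((h, r, t))   # e.g. (40, 0, 41): item 40 "is a" entity 41 (war film)
    return triples
```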

Claims (8)

1. A recommendation method for introducing item category information in a graph neural network is characterized by comprising the following steps:
step 1, determining input data
The input data is a user-article interaction graph representing interaction relations among users, articles and auxiliary information, and the user-article interaction graph comprises relation data and a knowledge graph; the relationship data represents interaction between the user and the item; the knowledge graph represents a relationship between an item and its ancillary information; the interactive behavior refers to implicit feedback of the user on the item; the auxiliary information refers to a series of characteristic descriptions related to the article;
step 2, constructing a potential factor module
the potential factor module is used to model the intention with which a user selects items, a user always selecting an item with some intention, to obtain vector representations of users and items in this module;
step 3, constructing a category information module
The category information module is used for obtaining category information of the article, and the category information module adopts an article prior knowledge vector as characteristic information of the article;
step 4, constructing a category weighting aggregation module
The category weighting aggregation module is used for carrying out weighting aggregation on the knowledge map and the prior knowledge vector of the article to obtain vector representation of the user and the article in the category weighting aggregation module;
step 5, constructing a score prediction module
The scoring prediction module is used for integrating vector representations of the users and the articles obtained by the potential factor module and the category weighting aggregation module, further performing vector calculation, and finally learning a function on a given interaction behavior set and a knowledge graph, wherein the function can predict how likely a user is to buy or select an article.
2. The recommendation method for introducing item category information into a graph neural network according to claim 1, wherein the implicit feedback of the user on an item comprises browsing, purchasing, liking, collecting and forwarding, and in the relationship data the set of interaction behaviors between users and items is O^+ = {(u, i) | u ∈ U, i ∈ I}, wherein U denotes the user set, I denotes the item set, u denotes a user and i denotes an item selected by user u;
the knowledge graph is expressed as G = {(h, r, t) | h, t ∈ V, r ∈ R}, wherein V denotes the entity set, an entity being either an item h or auxiliary information t of an item, and R denotes the set of relations r between items and their auxiliary information.
3. The recommendation method for introducing item category information into a graph neural network according to claim 2, wherein the potential factor module takes the intention behind a user's item selection as a corresponding potential factor p and, based on the potential factor set P formed by all potential factors, decomposes each user-item pair (u, i) to form {(u, p, i) | p ∈ P}; each potential factor p is matched with the relations in the knowledge graph, and the vector e_p of the potential factor p is constructed by applying an attention mechanism:
$$e_p = \sum_{r \in R} \alpha(r, p)\, e_r$$
wherein e_r denotes the ID embedding vector of relation r, and α(r, p) denotes the importance score between the potential factor p and the relation of the item in the knowledge graph, calculated as:
$$\alpha(r, p) = \frac{\exp(w_{rp})}{\sum_{r' \in R} \exp(w_{r'p})}$$
wherein w_{rp} denotes a trainable weight corresponding to the specific relation r in the knowledge graph and the specific potential factor p, and r' denotes the other relations of the user-selected item in the knowledge graph.
4. The recommendation method for introducing item category information into a graph neural network according to claim 3, wherein in the potential factor module the smaller the distance correlation coefficient of two potential factors is, the more independent the two potential factors are; the independence between the potential factors is denoted L_IND and calculated as follows:
$$L_{IND} = \sum_{p, p' \in P,\; p \neq p'} dCor(e_p, e_{p'})$$

$$dCor(e_p, e_{p'}) = \frac{dCov(e_p, e_{p'})}{\sqrt{dVar(e_p) \cdot dVar(e_{p'})}}$$
wherein e_p and e_{p'} denote the vectors of potential factors p and p'; dCor(e_p, e_{p'}) denotes the distance correlation coefficient between potential factor p and potential factor p'; dCov(e_p, e_{p'}) denotes the distance covariance of e_p and e_{p'}; dVar(e_p) denotes the distance variance of e_p and dVar(e_{p'}) denotes the distance variance of e_{p'}.
5. The recommendation method for introducing item category information into a graph neural network according to claim 3, wherein in the potential factor module the vector representations of users and items are obtained through relational path aggregation and knowledge graph aggregation;
in relational path aggregation, collaborative filtering is used; N_u = {(p, i) | (u, p, i) ∈ C} denotes the potential factor history and the first-order connectivity of user u, wherein C denotes the potential factor history; the potential factor information is integrated from all historically interacted items, producing the vector representation of user u in the potential factor module:
$$e_u^{(1)} = \frac{1}{|N_u|} \sum_{(p, i) \in N_u} \beta(u, p)\, e_p \odot e_i^{(0)}$$
wherein e_u^{(1)} denotes the embedding vector obtained after aggregating the potential factor history of user u, i.e. the vector representation of the user in the potential factor module; e_i^{(0)} denotes the ID embedding vector of item i in the potential factor module; ⊙ denotes the Hadamard product; the attention score β(u, p) is used to distinguish the importance of each potential factor p and is calculated as:
$$\beta(u, p) = \frac{\exp(e_p^{\top} e_u^{(0)})}{\sum_{p' \in P} \exp(e_{p'}^{\top} e_u^{(0)})}$$
wherein e_u^{(0)} denotes the ID embedding vector of user u;
in knowledge graph aggregation, N_i = {(r, v) | (i, r, v) ∈ G} denotes the attributes of item i and its first-order connected entity nodes, and the aggregation function takes the relational context into account, thereby producing the vector representation of item i:
$$e_i^{(1)} = \frac{1}{|N_i|} \sum_{(r, v) \in N_i} e_r \odot e_v^{(0)}$$
wherein e_i^{(1)} denotes the item embedding vector of item i after aggregating its layer-1 adjacency information on the knowledge graph, i.e. the vector representation of the item in the potential factor module; e_v^{(0)} denotes the ID embedding vector of entity v.
6. The recommendation method for introducing item category information into a graph neural network according to claim 2, wherein the category information module obtains the item prior knowledge vectors through clustering; clustering yields several clusters, each cluster representing a group of similar items, i.e. each cluster corresponding to an item category, and the cluster center vector of each cluster, i.e. the item prior knowledge vector, representing the common characteristics of one category of items; the serial number of each cluster serves as a pseudo label, which marks and distinguishes the different clusters.
7. The recommendation method for introducing item category information into a graph neural network according to claim 2, wherein the category weighting aggregation module defines the interaction matrix of user-item interactions as follows:
$$A_{fg} = \begin{cases} 1, & \text{if the } f\text{-th user has interacted with the } g\text{-th item} \\ 0, & \text{otherwise} \end{cases}$$
A_{fg} = 1 indicates that there is an interaction between the f-th user and the g-th item, i.e. the f-th user has selected the g-th item;
the item category information is integrated into the interaction matrix to obtain a reconstructed adjacency matrix A_ic:

$$A_{ic} = \mathrm{PersonalizedWeighting}(A)$$

wherein PersonalizedWeighting is the weighting operation, which performs a normalization according to the number of items;
the adjacency matrix A_ic is used to aggregate the item embeddings into the user vector, the aggregation process being expressed as:

$$e_u^{c(1)} = \sum_{i \in N_u} (A_{ic})_{ui}\, e_i^{(0)}$$

wherein e_u^{c(1)} denotes the vector representation of the user in the category weighting aggregation module;
the aggregation of the item vectors is expressed as:

$$e_i^{c(1)} = \frac{1}{|N_i|} \sum_{(r, v) \in N_i} e_r \odot e_v^{(0)}$$
wherein e_i^{c(1)} denotes the item embedding vector after aggregating the layer-1 adjacency information in the category weighting aggregation module; e_v^{(0)} denotes the ID embedding vector of entity v; e_r denotes the ID embedding vector of relation r.
8. The recommendation method for introducing item category information into a graph neural network according to claim 2, wherein the above modules can be executed iteratively for k layers, yielding the vector representations of users and items in the k-th propagation layer of the potential factor module and the category weighting aggregation module, respectively: the potential factor module user vector e_u^{p(k)}, the potential factor module item vector e_i^{p(k)}, the category weighting aggregation module user vector e_u^{c(k)}, and the category weighting aggregation module item vector e_i^{c(k)}; all vectors are 64-dimensional, and an add operation is performed on them in the score prediction module to obtain the user vector e_u^{(k)} and the item vector e_i^{(k)} of the corresponding layer:

$$e_u^{(k)} = e_u^{p(k)} + e_u^{c(k)}$$

$$e_i^{(k)} = e_i^{p(k)} + e_i^{c(k)}$$
the user and item vectors obtained by the add operation can still be propagated over multiple layers, generating user vectors e_u^{(k)} and item vectors e_i^{(k)} for each layer; an add operation is performed over the user vectors and item vectors generated by all layers:
$$e_u = e_u^{(0)} + \dots + e_u^{(k)}$$

$$e_i = e_i^{(0)} + \dots + e_i^{(k)}$$
wherein e_u denotes the final user vector and e_i denotes the final item vector; the final score is obtained by multiplying the two vectors:
$$y_{ui} = e_u^{\top} e_i$$
the following loss function is used:
$$L = L_{BPR} + \lambda_1 L_{IND} + \lambda_2 \|\Theta\|_2^2$$
wherein Θ denotes the parameters that the neural network learns automatically; e_r denotes the ID embedding vector of relation r; λ_1 denotes the parameter controlling the independence loss; λ_2 denotes the regularization term parameter; the L_BPR loss function is as follows:
$$L_{BPR} = \sum_{(u, i, j) \in O} -\ln \sigma(y_{ui} - y_{uj})$$
in the formula: o = { (u, i, j) | (u, i) ∈ O + ,(u,j)∈O - Represents the final user vector training set; o is + Represents a positive sample, i.e. an observed sample; o is - Represents negative samples, i.e. samples that were not observed; σ denotes sigmoid function, y ui And y uj Representing the scores of the same user u for the item i and the item j;
the final user-item scores y_ui are sorted to obtain the list of items recommended to the user.
CN202210964936.0A 2022-08-12 2022-08-12 Recommendation method for introducing item category information into graph neural network Pending CN115293851A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210964936.0A CN115293851A (en) 2022-08-12 2022-08-12 Recommendation method for introducing item category information into graph neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210964936.0A CN115293851A (en) 2022-08-12 2022-08-12 Recommendation method for introducing item category information into graph neural network

Publications (1)

Publication Number Publication Date
CN115293851A true CN115293851A (en) 2022-11-04

Family

ID=83827195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210964936.0A Pending CN115293851A (en) 2022-08-12 2022-08-12 Recommendation method for introducing item category information into graph neural network

Country Status (1)

Country Link
CN (1) CN115293851A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117038105A (en) * 2023-10-08 2023-11-10 武汉纺织大学 Drug repositioning method and system based on information enhancement graph neural network
CN117038105B (en) * 2023-10-08 2023-12-15 武汉纺织大学 Drug repositioning method and system based on information enhancement graph neural network

Similar Documents

Publication Publication Date Title
CN111222332B (en) Commodity recommendation method combining attention network and user emotion
CN106920147B (en) Intelligent commodity recommendation method based on word vector data driving
WO2021139164A1 (en) Sequential recommendation method based on long-term interest and short-term interest
CN109299396A (en) Merge the convolutional neural networks collaborative filtering recommending method and system of attention model
CN108509573B (en) Book recommendation method and system based on matrix decomposition collaborative filtering algorithm
CN113343125B (en) Academic accurate recommendation-oriented heterogeneous scientific research information integration method and system
CN111737578A (en) Recommendation method and system
CN108763367B (en) Method for recommending academic papers based on deep alignment matrix decomposition model
CN112884551A (en) Commodity recommendation method based on neighbor users and comment information
CN112256866A (en) Text fine-grained emotion analysis method based on deep learning
CN112631560A (en) Method and terminal for constructing objective function of recommendation model
Van Dat et al. Solving distribution problems in content-based recommendation system with gaussian mixture model
CN112699310A (en) Cold start cross-domain hybrid recommendation method and system based on deep neural network
CN110727872A (en) Method and device for mining ambiguous selection behavior based on implicit feedback
CN115456707A (en) Method and device for providing commodity recommendation information and electronic equipment
CN115293851A (en) Recommendation method for introducing item category information into graph neural network
CN113190751B (en) Recommendation method fusing keyword generation
CN111178986A (en) User-commodity preference prediction method and system
CN116823321B (en) Method and system for analyzing economic management data of electric business
CN112990978A (en) Method and system for predicting trend of price limit instruction book
Chatterjee Machine Learning and Its Application: A Quick Guide for Beginners
CN116911949A (en) Article recommendation method based on boundary rank loss and neighborhood perception graph neural network
CN115344794A (en) Scenic spot recommendation method based on knowledge map semantic embedding
CN115829683A (en) Power integration commodity recommendation method and system based on inverse reward learning optimization
CN109800424A (en) It is a kind of based on improving matrix decomposition and the recommended method across channel convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination