CN116127083A - Content recommendation method, device, equipment and storage medium - Google Patents

Content recommendation method, device, equipment and storage medium

Info

Publication number
CN116127083A
CN116127083A
Authority
CN
China
Prior art keywords
node
vector
knowledge graph
neighbor
nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210789296.4A
Other languages
Chinese (zh)
Inventor
杨茂
冯晟
韩卫强
李云彬
耿福明
蒋宁
吴海英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mashang Xiaofei Finance Co Ltd
Original Assignee
Mashang Xiaofei Finance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mashang Xiaofei Finance Co Ltd filed Critical Mashang Xiaofei Finance Co Ltd
Priority to CN202210789296.4A priority Critical patent/CN116127083A/en
Publication of CN116127083A publication Critical patent/CN116127083A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367Ontology

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computational Linguistics (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application provides a content recommendation method, a device, equipment and a storage medium, wherein the content recommendation method comprises the following steps: obtaining a collaborative knowledge graph; inputting the collaborative knowledge graph into a recommendation model for recommendation processing to obtain a target node; and recommending node information corresponding to the target node to the target user. The recommendation model comprises an embedding layer, an embedding propagation layer and a prediction layer. The embedding layer is used for encoding the collaborative knowledge graph to obtain a first encoding vector corresponding to each node of the collaborative knowledge graph and a second encoding vector corresponding to each edge of the collaborative knowledge graph; the embedding propagation layer is used for processing the first encoding vector and the second encoding vector to obtain a first vector corresponding to each node of the collaborative knowledge graph; and the prediction layer is used for performing prediction processing based on the first vector to obtain the target node. By adopting the embodiment of the application, the accuracy of recommendation can be improved.

Description

Content recommendation method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a content recommendation method, apparatus, device, and storage medium.
Background
With the rapid growth of the Internet, the amount of network information has exploded. For example, in the field of electronic commerce, the number and variety of commodities are enormous, and users need to spend a lot of time finding the commodities they need; browsing large amounts of irrelevant information and irrelevant products can lead to a continuous loss of users.
At present, in order to accurately mine data of interest to a user from massive data, a common approach is to analyze the user's preferences according to the user's historical behaviors and recommend data of possible interest to the user, so as to solve the problem of information overload. However, this approach cannot accurately recommend data with which the user has never interacted.
Disclosure of Invention
The embodiment of the application provides a content recommendation method, device, equipment and storage medium, which are used for improving the recommendation accuracy.
In a first aspect, an embodiment of the present application provides a content recommendation method, including: acquiring a collaborative knowledge graph, wherein the collaborative knowledge graph comprises a node of a target user; inputting the collaborative knowledge graph into a recommendation model for recommendation processing to obtain a target node; and recommending node information corresponding to the target node to the target user. The recommendation model comprises an embedding layer, an embedding propagation layer and a prediction layer; the embedding layer is used for encoding the collaborative knowledge graph to obtain a first encoding vector corresponding to each node of the collaborative knowledge graph and a second encoding vector corresponding to each edge of the collaborative knowledge graph; the embedding propagation layer is used for processing the first encoding vector and the second encoding vector to obtain a first vector corresponding to each node of the collaborative knowledge graph, wherein the first vector comprises a fusion result obtained after node information of neighbor nodes of the corresponding node is iteratively aggregated; and the prediction layer is used for carrying out prediction processing based on the first vector to obtain the target node, wherein the matching degree between the first vector corresponding to the target node and the first vector corresponding to the node of the target user is greater than or equal to a preset threshold value.
In a second aspect, an embodiment of the present application provides a content recommendation device, including:
the acquisition module is used for acquiring a collaborative knowledge graph, wherein the collaborative knowledge graph comprises nodes of a target user;
the processing module is used for inputting the collaborative knowledge graph into the recommendation model to conduct recommendation processing to obtain a target node;
and the recommending module is used for recommending the node information corresponding to the target node to the target user.
The recommendation model comprises an embedding layer, an embedding propagation layer and a prediction layer;
the embedded layer is used for encoding the collaborative knowledge graph to obtain a first encoding vector corresponding to each node of the collaborative knowledge graph and a second encoding vector corresponding to each edge of the collaborative knowledge graph;
the embedded propagation layer is used for processing the first coding vector and the second coding vector to obtain a first vector corresponding to each node of the collaborative knowledge graph, wherein the first vector comprises a fusion result after node information of neighbor nodes of the corresponding nodes is iteratively aggregated;
the prediction layer is used for carrying out prediction processing based on the first vector to obtain a target node, and the matching degree of the first vector corresponding to the target node and the first vector corresponding to the node of the target user is larger than or equal to a preset threshold value.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the processor implements the content recommendation method according to any one of the first aspects when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, in which a computer program is stored, which when run on an electronic device causes the electronic device to perform the content recommendation method according to any one of the first aspects.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when run on an electronic device, causes the electronic device to perform the content recommendation method as in any of the first aspects.
It can be seen that in the embodiment of the present application, the collaborative knowledge graph is encoded through the embedding layer of the recommendation model to obtain a first encoding vector corresponding to each node of the collaborative knowledge graph and a second encoding vector corresponding to each edge of the collaborative knowledge graph; the first encoding vector and the second encoding vector are processed through the embedding propagation layer to obtain a first vector corresponding to each node of the collaborative knowledge graph; and the prediction layer performs prediction based on the first vector to obtain a target node, so that accurate node information can be recommended to the target user.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is an application scenario schematic diagram of a content recommendation method provided in the present application;
FIG. 2 is a flowchart illustrating a content recommendation method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of obtaining a collaborative knowledge graph according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a recommendation model according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a content recommendation device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
At present, one method for recommending objects is based on collaborative filtering (Collaborative Filtering, CF): based on all the interaction behaviors of users facing objects, collective wisdom is used to recommend objects. Expressed in data form, the problem to be solved by the idea of collaborative filtering is how to fill in the unknown part of a matrix. A classical matrix-filling algorithm is singular value decomposition (Singular Value Decomposition, SVD), but this approach has the problem that the minimization process is not regularized (only the variance is minimized), so overfitting is easily produced. Another method for recommending objects is a feature-based model, specifically recommendation algorithm models such as the Factorization Machine (FM), MF, SVD, SVD++ and FISM, which only use the interaction information (such as click-through rate) between the user and the object, but do not use the attribute information of users and other side information, for example information about the user himself, such as age, sex and occupation; attribute information of the object, such as classification, description and graphic information; and other contextual information such as location, time and weather. Therefore, the accuracy of the recommended objects cannot be guaranteed, and there is no interpretability.
The knowledge graph can effectively alleviate the problems of data sparsity and cold start in traditional collaborative filtering, where cold start refers to data that has not interacted with the user. As a kind of structured auxiliary information, the knowledge graph stores the relationships between user attributes and object attributes and contains rich semantics, so it can improve the accuracy, diversity and interpretability of the recommended objects. The specific method is to fuse the user-object knowledge graph and the knowledge graph between users or between objects into a collaborative knowledge graph (Collaborative Knowledge Graph, CKG for short), and mine user preferences on the CKG based on the user's historical interaction records. In order to mine the potential interests of users on the CKG, implicit connections between the nodes of the CKG need to be established. Current methods for processing the CKG fall into two categories: path-based methods and graph-embedding-based methods. The path-based method decomposes the CKG into a plurality of independent linear paths, while the graph embedding method only acquires the direct neighbor information of a user or an object; that is, the currently adopted methods either divide the network structure of the knowledge graph into separate paths or only use first-order neighbor information, and neither can establish implicit connections between nodes that are not directly connected in the whole knowledge graph.
In view of the problems that the above methods cannot capture implicit connections among the nodes of the CKG and cannot produce accurate recommended objects, the present application fuses a recommendation model based on collaborative filtering with the collaborative knowledge graph, overcoming the drawback that the interaction records between users and objects are treated as mutually independent, and enabling collaborative information of users based on object attributes to be obtained. In addition, an attention mechanism is adopted to recursively aggregate the feature interaction information and the linear weighting information of neighbor nodes, and the interaction information among the nodes of the CKG is explicitly encoded, so that the recommendation model can capture rich semantics based on the feature interactions among the nodes of the collaborative knowledge graph.
Fig. 1 is a schematic application scenario diagram of the content recommendation method provided in the present application. As shown in fig. 1, the application scenario may include a user terminal 10. User X enters a video page through the user terminal 10, and the page recommends movie A to movie F to user X, wherein movie A, movie B and movie C may be movies that user X has interacted with before but has not watched, and movie D to movie F may be newly-shelved movies (i.e. cold-start nodes). The content recommendation method of the present application determines that movie D matches user X and recommends it to user X.
In addition, fig. 1 is only an exemplary application scenario, and the embodiments of the present application may be applied to recommendation scenarios of various objects such as music, POIs (points of interest), news, education, books, and commodities of shopping sites. For example, purchasing commodities on shopping sites, reading books on reading sites, and so on all require recommending matching objects to the user.
It should be noted that fig. 1 is only a schematic diagram of an application scenario provided in the embodiment of the present application, and the embodiment of the present application is not limited to a specific application scenario. The content recommendation method provided by the embodiment of the application can be applied to a server, and the server can be an independent server or a service cluster or the like.
The following describes the technical scheme of the present application in detail through specific embodiments. It should be noted that the following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 is a flowchart of a content recommendation method according to an embodiment of the present application, where the content recommendation method includes the following steps:
s201, acquiring a collaborative knowledge graph.
The collaborative knowledge graph comprises nodes of the target user, the collaborative knowledge graph is stored in a database in advance, and the collaborative knowledge graph corresponding to the nodes of the target user can be determined and obtained according to the nodes of the target user. The collaborative knowledge graph comprises nodes and edges, the nodes comprise user nodes and object nodes, the edges between the user nodes and the object nodes are used for representing interaction information of corresponding user facing objects, and the edges between the object nodes are used for representing association relations between corresponding objects.
Specifically, the knowledge graph is an important branch technology of artificial intelligence. It is a structured semantic knowledge base used for describing concepts of the physical world and their interrelationships in symbolic form. The basic constituent unit of the knowledge graph is the node-relation-node triplet; nodes and their related attribute-value pairs are connected to each other through relations to form a net-like knowledge structure. As external knowledge with strong readability, the knowledge graph provides great help for improving the interpretability of the algorithm.
Referring to fig. 3, the present application fuses the user-object knowledge graph G1 and the object knowledge graph G2 into a unified collaborative knowledge graph G. G1 is defined as G1 = {(l, interaction, m) | l ∈ L, m ∈ M}, where l represents a user node, L represents the set of user nodes, m represents an object node, M represents the set of object nodes, and interaction represents the interaction information of user l on object m, such as the user nodes l1, l2 and l3 and the object nodes m1 to m7 in fig. 3. Furthermore, G2 represents the association of objects on their attributes by object node-relation-object node triplets and is defined as G2 = {(e1, r, e2) | e1, e2 ∈ E, r ∈ R}, where e1 represents the head object node, e2 represents the tail object node, r is the relation between the two objects, and E is the set of object nodes; each of the object nodes e1 to e8 in fig. 3 can be either a head object node e1 or a tail object node e2.
Further, referring to fig. 3, a collaborative knowledge graph G is constructed in which G1 and G2 are integrated into the same graph. The collaborative knowledge graph G is defined as G = {(h, r, t) | h, t ∈ E′, r ∈ R′}, where E′ = L ∪ E and R′ = {interaction} ∪ R. The collaborative knowledge graph G constructed here can be a combination of an undirected graph and a directed graph, that is, a relation in R′ can have a forward direction and/or a reverse direction, so that information of all adjacent nodes is collected for each node in the graph G. The user nodes u1 to u3 and the object nodes i1 to i9 in fig. 3 can each be an h or a t in the collaborative knowledge graph G.
In the embodiment of the present application, the collaborative knowledge graph corresponding to the target user includes a user node corresponding to the target user. Referring to fig. 3, nodes u1, u2, and u3 in fig. 3 are user nodes, and nodes i1 to i9 are object nodes. Wherein the target user node is e.g. node u1. In addition, the user node represents node information of the user such as age, sex, account number, etc. of the user, and the object node represents node information of the object such as movie name, movie type, movie duration, movie showing time, and director of the movie, etc.
Further, the interaction information of a user facing an object may represent the user's historical interaction record on the object, such as the number of viewings, viewing time and viewing duration, or may represent a new user's preference for the object. For example, when a new user registers on the video website and inputs his or her preference for certain movies or for movies of a certain type, the server may construct the connection relationship between the new user and the corresponding movies according to the data input by the new user and embed the connection relationship into the collaborative knowledge graph, so as to implement object recommendation for the new user. The association relationship between corresponding objects means, for example, that the movie types of two movies are the same, that the director is the same, or that the two movies belong to different parts of the same movie series, etc.
Referring to fig. 3, the edges between nodes include directed edges and undirected edges, where the directed edges are drawn with arrows and the undirected edges are drawn without arrows.
In addition, the collaborative knowledge graph may further include: the edges connecting the user nodes and the user nodes represent relationships between the corresponding users, such as friend relationships, attention relationships, and the like, and are not limited herein.
In the embodiment of the application, the collaborative knowledge graph includes both the interaction information between users and objects and the association relationships between objects, so it contains rich semantics and can improve the accuracy of object recommendation.
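To make the construction of such a collaborative knowledge graph G more concrete, the following is a minimal Python sketch (illustrative only; the class and field names are assumptions, not part of the original disclosure) that merges user-object interaction triplets and object-object attribute triplets into a single graph:

```python
from collections import defaultdict

class CollaborativeKG:
    """Minimal collaborative knowledge graph: nodes are users and objects,
    edges carry either an 'interaction' relation or an object-object relation."""

    def __init__(self):
        self.triples = []                    # (head, relation, tail)
        self.neighbors = defaultdict(list)   # node -> [(relation, neighbor), ...]

    def add_interaction(self, user, item):
        # user-object edge from G1: (l, interaction, m)
        self._add(user, "interaction", item)

    def add_item_relation(self, head_item, relation, tail_item):
        # object-object edge from G2: (e1, r, e2)
        self._add(head_item, relation, tail_item)

    def _add(self, head, relation, tail):
        self.triples.append((head, relation, tail))
        # store both directions so every node can collect all adjacent information
        self.neighbors[head].append((relation, tail))
        self.neighbors[tail].append((relation, head))

# usage loosely matching Fig. 3: users u1..u3, objects i1..i9
ckg = CollaborativeKG()
ckg.add_interaction("u1", "i2")
ckg.add_interaction("u1", "i5")
ckg.add_item_relation("i2", "same_director", "i3")  # hypothetical relation name
print(ckg.neighbors["u1"])
```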
S202, inputting the collaborative knowledge graph into a recommendation model to conduct recommendation processing, and obtaining the target node.
The node of the target user is the user node corresponding to the target user. Specifically, according to each node in the input collaborative knowledge graph and the relationships between the nodes, the recommendation model can determine a target object node whose matching degree with the user node of the target user is greater than the matching degree threshold, and take the target object node as the target node. The object corresponding to the target object node is the mined object that matches the user's interest.
Further, referring to fig. 4, the recommendation model includes an embedding layer 41, an embedding propagation layer 42, and a prediction layer 43: the embedding layer 41 is configured to encode the collaborative knowledge graph to obtain a first encoding vector corresponding to each node of the collaborative knowledge graph and a second encoding vector corresponding to each edge of the collaborative knowledge graph; the embedding propagation layer 42 is configured to process the first encoding vector and the second encoding vector to obtain a first vector corresponding to each node of the collaborative knowledge graph, where the first vector includes the result of iteratively aggregating and fusing the node information of the neighbor nodes of the corresponding node; and the prediction layer 43 is configured to perform prediction processing based on the first vector to obtain a target node, where the matching degree between the first vector corresponding to the target node and the first vector corresponding to the node of the target user is greater than or equal to a preset threshold.
Specifically, referring to fig. 4, the collaborative knowledge graph G is input into the embedding layer 41 of the recommendation model 40 to be encoded, and a first encoding vector X1 and a second encoding vector X2 are obtained. Each node in the collaborative knowledge graph G has a corresponding first encoding vector, and each edge in the collaborative knowledge graph G has a corresponding second encoding vector.
The encoding may be based on the translation distance model TransE (Translating Embedding) or a variant thereof, such as TransR. TransR considers that the same node has different semantics under different relations, and when the distance between two nodes is calculated, the calculation is performed in the feature space of the specific relation.
Furthermore, the embedding layer can generate an embedding vector in a low-dimensional continuous space for each node and each edge in the graph while preserving the structural information of the collaborative knowledge graph as much as possible, so that rich auxiliary information can be provided for the recommendation model.
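As an illustration of how such an embedding layer could be realized, the following sketch scores a triplet in a TransR style (a projection matrix per relation plus a translation); the class, tensor names and dimensions are assumptions for this example, not the original implementation:

```python
import torch
import torch.nn as nn

class TransREmbedding(nn.Module):
    """Embedding layer sketch: node embeddings (first encoding vectors),
    relation embeddings (second encoding vectors) and per-relation projections."""

    def __init__(self, n_nodes, n_relations, dim=128):
        super().__init__()
        self.node_emb = nn.Embedding(n_nodes, dim)            # e_h, e_t
        self.rel_emb = nn.Embedding(n_relations, dim)         # e_r
        self.rel_proj = nn.Embedding(n_relations, dim * dim)  # M_r, flattened
        self.dim = dim

    def score(self, h, r, t):
        # plausibility score of triplet (h, r, t) in the relation space of r
        e_h, e_t, e_r = self.node_emb(h), self.node_emb(t), self.rel_emb(r)
        M_r = self.rel_proj(r).view(-1, self.dim, self.dim)
        h_r = torch.bmm(M_r, e_h.unsqueeze(-1)).squeeze(-1)   # project head into relation space
        t_r = torch.bmm(M_r, e_t.unsqueeze(-1)).squeeze(-1)   # project tail into relation space
        return ((h_r + e_r - t_r) ** 2).sum(dim=-1)           # smaller = more plausible
```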
Further, referring to fig. 4, the first vector corresponding to each node is determined in the same manner. In determining the first vector corresponding to a node, the embedding propagation layer 42 is configured to: determine, by using an attention mechanism, a second vector D2 corresponding to the node according to the first encoding vector X1 and the second encoding vector X2 corresponding to the node, where the second vector D2 includes the node information of the neighbor nodes of the node; iteratively aggregate the second vector D2 corresponding to the node and the first encoding vector X1 corresponding to the node to obtain a plurality of third vectors D3, where each iterative aggregation yields one third vector D3, the third vector D3 obtained by the first iterative aggregation is obtained by aggregating the second vector D2 corresponding to the node and the first encoding vector X1 corresponding to the node, and a third vector D3 obtained by a non-first iterative aggregation is obtained by aggregating the third vector D3 obtained by the previous iterative aggregation and the second vector D2 corresponding to the node; and fuse the first encoding vector X1 corresponding to the node and the plurality of third vectors D3 to obtain the first vector corresponding to the node.
Further, determining, by the embedding propagation layer 42 using the attention mechanism, the second vector D2 corresponding to the node according to the first encoding vector X1 and the second encoding vector X2 corresponding to the node includes: determining a fourth vector D4 of the first neighbor nodes of the node by using the attention mechanism according to the first encoding vector of each first neighbor node of the node and the second encoding vector of the edge between the node and the first neighbor node, where the edge connecting a first neighbor node and the node is a directed (unidirectional) edge; determining a fifth vector D5 according to the first encoding vectors of the second neighbor nodes of the node, where the edge connecting a second neighbor node and the node is an undirected edge; and fusing the fourth vector and the fifth vector to obtain the second vector.
The fourth vector D4 includes a result of association between node information of the corresponding node and node information of the first neighboring node. The fifth vector D5 represents the association result between the node information of the corresponding node and the node information of the second neighboring nodes and between the node information of any two second neighboring nodes.
In the present embodiment, the embedding propagation layer 42 includes a linear collector 421 and a bilinear collector 422. The linear collector 421 obtains the fourth vector D4, and the bilinear collector 422 obtains the fifth vector D5.
Illustratively, referring to fig. 3, for user node u1, the first neighbor node comprises: object node i2 and object node i5, the second neighbor node comprising: object node i1, object node i3, and object node i4.
In the embodiment of the present application, in the following examples, the corresponding node h may be any one of the user nodes u1 to u3 and the object nodes i1 to i9 in fig. 3.
In an alternative embodiment, a first attention score between a node and a first neighboring node is determined from a first encoded vector of the first neighboring node of the node, a second encoded vector of an edge between the node and the first neighboring node; a fourth vector is determined based on the first encoded vector and the first attention score of the first neighbor node.
Wherein the fourth vector is determined by the following formula:

LC(h) = Σ_{t ∈ N(h)} π′(h, r, t) · e_t

In the above formula, LC(h) represents the fourth vector D4, e_t represents the first encoding vector X1 of one of the first neighbor nodes, π′(h, r, t) represents the first attention score, h represents the corresponding node, t represents one of the first neighbor nodes, r represents the edge between the corresponding node h and the first neighbor node t, and N(h) represents the set of all first neighbor nodes of the corresponding node h.
For example, referring to fig. 3, if the corresponding node h is the user node u1, the first neighbor nodes are the object node i2 and the object node i5, and the fourth vector corresponding to the user node u1 is:

LC(u1) = π′(u1, r1, i2) · e_{i2} + π′(u1, r2, i5) · e_{i5}

where r1 represents the edge between the user node u1 and the object node i2, r2 represents the edge between the user node u1 and the object node i5, e_{i2} is the first encoding vector of the object node i2, and e_{i5} is the first encoding vector of the object node i5.
In the embodiment of the present application, the linear collector 421 is first used to collect the linear summation information of all first neighbor nodes in the first neighbor node set N(h) of the node h. Here π′(h, r, t) is the first attention score corresponding to the attention mechanism in the linear collector; it is a numerical scalar that controls the amount of information the first neighbor node t propagates to the corresponding node h along the edge r. The linear collector 421 injects the first-order connectivity information of the corresponding node h into the representation of the fourth vector D4 (LC(h)) and distinguishes the importance of each first neighbor node by the first attention score π′(h, r, t).
Wherein the first attention score is determined using the following formulas:

π(h, r, t) = (M_r · e_t)^T · tanh(M_r · e_h + e_r)

π′(h, r, t) = exp(π(h, r, t)) / Σ_{t″ ∈ N(h)} exp(π(h, r″, t″))

In the above formulas, π′(h, r, t) represents the first attention score, π(h, r, t) represents the second attention score, M_r represents the transformation matrix of the corresponding edge r, e_h represents the first encoding vector of the corresponding node h, e_t represents the first encoding vector of the first neighbor node t, and e_r represents the second encoding vector of the corresponding edge.
In the embodiment of the application, tanh is used as the nonlinear activation function, and M_r is the transformation matrix of the edge r generated in the embedding layer 41. The magnitude of the second attention score π(h, r, t) depends on how close the corresponding node h and the first neighbor node t are in the space of the edge r: if the corresponding node h is close to the first neighbor node t, the first neighbor node t can propagate more information to the corresponding node h. Further, the second attention score π(h, r, t) is normalized by softmax to obtain the corresponding first attention score π′(h, r, t).
Illustratively, referring to fig. 3, for the user node u1, the first attention score corresponding to the object node i2 is π′(u1, r1, i2), and the first attention score corresponding to the object node i5 is π′(u1, r2, i5).
In the embodiment of the application, when the information is propagated and iterated to higher orders, the attention mechanism enables the recommendation model to focus more attention on some of the nodes, thereby reducing the influence of noise.
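As an illustrative sketch of the linear collector and its attention scores described above (the function name and tensor shapes are assumptions made for this example):

```python
import torch
import torch.nn.functional as F

def linear_collect(e_h, neighbor_e_t, neighbor_M_r, neighbor_e_r):
    """Linear collector sketch: attention-weighted sum over first neighbor nodes.

    e_h:           (d,)      first encoding vector of node h
    neighbor_e_t:  (k, d)    first encoding vectors of the k first neighbors
    neighbor_M_r:  (k, d, d) transformation matrices of the connecting edges
    neighbor_e_r:  (k, d)    second encoding vectors of the connecting edges
    """
    h_proj = torch.einsum("kij,j->ki", neighbor_M_r, e_h)            # M_r e_h
    t_proj = torch.einsum("kij,kj->ki", neighbor_M_r, neighbor_e_t)  # M_r e_t
    # second attention score: (M_r e_t)^T tanh(M_r e_h + e_r)
    pi = (t_proj * torch.tanh(h_proj + neighbor_e_r)).sum(dim=-1)    # (k,)
    pi_prime = F.softmax(pi, dim=0)                                  # first attention score
    # LC(h): weighted sum of the neighbors' first encoding vectors
    return (pi_prime.unsqueeze(-1) * neighbor_e_t).sum(dim=0)        # (d,)
```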
Further, the node has a plurality of second neighbor nodes. Determining the fifth vector according to the first encoding vectors of the second neighbor nodes of the node includes: determining the fifth vector according to the first encoding vector of one second neighbor node, the first encoding vector of another second neighbor node, and a target combination number, where the target combination number is the number of combinations obtained by combining any two nodes among the plurality of second neighbor nodes.
Specifically, the fifth vector is determined by the following formula:

BC(h) = (1 / C(d(h), 2)) · Σ_{t1, t2 ∈ Ñ(h), t1 < t2} e_{t1} ⊙ e_{t2}

In the above formula, BC(h) represents the fifth vector, Ñ(h) represents the set of all second neighbor nodes of the corresponding node h, d(h) represents the number of second neighbor nodes of the corresponding node h, C(d(h), 2) is the number of combinations obtained by taking two nodes from Ñ(h), t1 and t2 denote two second neighbor nodes in Ñ(h), e_{t1} represents the first encoding vector of the node t1, e_{t2} represents the first encoding vector of the node t2, ⊙ denotes the element-wise product of two vectors, and t1 < t2 indicates that the number of neighbor nodes of the node t1 is smaller than the number of neighbor nodes of the node t2.
Illustratively, referring to fig. 3, if the corresponding node is the user node u1, the second neighbor nodes are the object node i1, the object node i3 and the object node i4; the number of second neighbor nodes of the user node u1 is therefore 3, the combination number C(3, 2) is 3, and its reciprocal is one third. The number of neighbor nodes of the object node i1 is 2, the number of neighbor nodes of the object node i3 is 3, and the number of neighbor nodes of the object node i4 is 4, so the fifth vector corresponding to the user node u1 is calculated as:

BC(u1) = (1/3) · (e_{i1} ⊙ e_{i3} + e_{i1} ⊙ e_{i4} + e_{i3} ⊙ e_{i4})

where e_{i1} represents the first encoding vector of the object node i1, e_{i3} represents the first encoding vector of the object node i3, and e_{i4} represents the first encoding vector of the object node i4.
Specifically, the bilinear collector 422 collects the feature interaction information between every two nodes in the second neighbor node set Ñ(h) of the corresponding node h. Taking the reciprocal of C(d(h), 2) means assigning an identical and fixed coefficient to the interaction result of each pair of nodes. The element-wise product of the two first encoding vectors emphasizes strong signals shared by the second neighbor node t1 and the second neighbor node t2 on a certain feature and weakens weak signals. The bilinear collector uses element-wise products between nodes to update the second-order feature interactions between the corresponding node h and its second neighbor nodes, and between any two second neighbor nodes, into the representation of the corresponding node h, thereby enriching the semantic representation of the corresponding node h.
Further, the second vector is determined by the following formula:

e_{N(h)} = (1 − α) · LC(h) + α · BC(h)

In the above formula, e_{N(h)} represents the second vector, LC(h) represents the fourth vector, BC(h) represents the fifth vector, and α is a hyper-parameter with value range [0, 1], used for controlling the ratio of the information contributed by the bilinear collector and the linear collector.
In the embodiment of the present application, the second vectors of all the nodes in the collaborative knowledge graph G are computed synchronously: while the second vector e_{N(h)} of the corresponding node h is obtained, all the remaining nodes also each obtain their respective second vectors.
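For illustration, the following sketch computes the bilinear collector and combines it with the output of the linear collector; the function names and the exact placement of the weight α in the combination are assumptions made for this example, not a definitive reading of the original formula:

```python
import itertools
import torch

def bilinear_collect(neighbor_e):
    """Bilinear collector sketch: averaged element-wise products over all pairs
    of second-neighbor first encoding vectors. neighbor_e: (d(h), d) tensor."""
    pairs = list(itertools.combinations(range(neighbor_e.size(0)), 2))
    if not pairs:
        return torch.zeros(neighbor_e.size(1))
    interactions = [neighbor_e[a] * neighbor_e[b] for a, b in pairs]  # element-wise products
    return torch.stack(interactions).sum(dim=0) / len(pairs)          # 1 / C(d(h), 2) factor

def neighborhood_vector(lc_h, bc_h, alpha=0.6):
    # second vector: weighted mix of linear (LC) and bilinear (BC) information
    return (1.0 - alpha) * lc_h + alpha * bc_h
```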
In this embodiment of the present application, the aggregation manner adopted for iteratively aggregating the second vector corresponding to the node and the first encoding vector corresponding to the node includes: a sum aggregation manner or a concatenation aggregation manner.
In an alternative embodiment, referring to fig. 4, the second vector D2 and the first encoding vector X1 of the corresponding node are iteratively aggregated to obtain a plurality of third vectors D3 corresponding to different orders. Specifically, the embedding propagation layer 42 aggregates the first encoding vector X1 of the corresponding node h itself, e_h, with the representation vector of its neighborhood (the second vector D2, e_{N(h)}), and the aggregated representation of the n-th-order information of the node h is denoted as the third vector e_h^(n), defined as:

e_h^(n) = f(e_h^(n−1), e_{N(h)})

where e_h^(n) represents the n-order neighbor information of the corresponding node h, n is an integer greater than or equal to 1, and f(·) represents an aggregator for aggregating a vector e_h^(n−1) (the first encoding vector when n = 1, or the previous third vector when n > 1) with the second vector e_{N(h)}. The aggregator f(·) adopts a sum aggregator or a concatenation aggregator, as follows. In the sum aggregation manner, the third vector is determined in the following way:
e_h^(n) = f_sum(e_h^(n−1), e_{N(h)}) = ReLU(W^(n) · (e_h^(n−1) + e_{N(h)}) + b^(n))

where f_sum denotes the sum aggregator, e_h^(n) is the n-th third vector, W^(n) and b^(n) are the parameters used when generating the n-th third vector; when n is equal to 1, e_h^(n−1) represents the first encoding vector, when n is greater than 1, e_h^(n−1) represents the (n−1)-th third vector, and ReLU represents the nonlinear activation function.
In the embodiment of the application, the sum aggregator sums the vector e_h^(n−1) and the vector e_{N(h)} and then linearly transforms the result; W^(n) ∈ R^{d′×d} is a trainable weight matrix, d′ is the set vector dimension after the linear transformation, and b^(n) is a bias term.
In one possible embodiment, using the concatenation aggregation manner, the third vector is determined in the following way:

e_h^(n) = f_concat(e_h^(n−1), e_{N(h)}) = ReLU(W^(n) · (e_h^(n−1) ∥ e_{N(h)}) + b^(n))

where f_concat denotes the concatenation aggregator, e_h^(n) is the n-th third vector, W^(n) and b^(n) are the parameters used when generating the n-th third vector; when n is equal to 1, e_h^(n−1) represents the first encoding vector, when n is greater than 1, e_h^(n−1) represents the (n−1)-th third vector, and ReLU represents the nonlinear activation function.
Specifically, the concatenation aggregator uses the concatenation symbol "∥" to concatenate e_h^(n−1) and e_{N(h)} and then applies a linear transformation, where W^(n) is the corresponding weight matrix.
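A minimal sketch of the two aggregation manners described above (sum and concatenation), with assumed layer dimensions and class names:

```python
import torch
import torch.nn as nn

class SumAggregator(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)      # W^(n), b^(n)

    def forward(self, e_prev, e_neigh):
        # sum the previous representation and the second vector, then transform
        return torch.relu(self.linear(e_prev + e_neigh))

class ConcatAggregator(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(2 * in_dim, out_dim)  # W^(n) acts on the concatenation

    def forward(self, e_prev, e_neigh):
        # concatenate ("||") the previous representation and the second vector
        return torch.relu(self.linear(torch.cat([e_prev, e_neigh], dim=-1)))
```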
Illustratively, in an embodiment of the present application, when n is 3, the 1st third vector is e_h^(1) = f(e_h^(0), e_{N(h)}), where e_h^(0) is the first encoding vector and e_{N(h)} is the second vector; the 2nd third vector is e_h^(2) = f(e_h^(1), e_{N(h)}); and the 3rd third vector is e_h^(3) = f(e_h^(2), e_{N(h)}).
Referring to fig. 4, the first encoding vector X1 (e_h^(0)) and the plurality of third vectors D3 (e_h^(1), ..., e_h^(n)) are finally fused to obtain the first vector D1, e_h*. The formula for obtaining the first vector D1 by fusion is as follows:

e_h* = e_h^(0) ∥ e_h^(1) ∥ … ∥ e_h^(n)

that is, the first encoding vector and the third vectors of all orders are concatenated to form the first vector of the node h.
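The multi-order propagation and the final fusion into the first vector could then be sketched as follows (the concatenation-based fusion and the argument names are assumptions consistent with the layer sizes given later in this description):

```python
import torch

def propagate_and_fuse(e_0, e_neigh_per_layer, aggregators):
    """e_0: first encoding vector of node h; e_neigh_per_layer: second vector used
    at each order; aggregators: one aggregator per order (e.g. SumAggregator)."""
    reps = [e_0]
    for agg, e_neigh in zip(aggregators, e_neigh_per_layer):
        reps.append(agg(reps[-1], e_neigh))   # e_h^(n) = f(e_h^(n-1), e_N(h))
    return torch.cat(reps, dim=-1)            # first vector e_h* = e^(0) || ... || e^(n)
```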
the prediction layer determines that the product of the transposed vector of the first vector corresponding to the node of the target user and the first vector corresponding to the target node is a matching degree.
Specifically, the matching degree of the first vector corresponding to the target node and the first vector corresponding to the node of the target user is determined by the following formula:
Figure BDA0003733178930000132
wherein, the liquid crystal display device comprises a liquid crystal display device,
Figure BDA0003733178930000133
representing the degree of matching of the target user node u and the object node i,/->
Figure BDA0003733178930000134
A first vector representing the correspondence of the target user node, < >>
Figure BDA0003733178930000135
Transposed vector of first vector corresponding to node of target user, +.>
Figure BDA0003733178930000136
The first vector corresponding to the object node i is represented.
Illustratively, referring to fig. 3, if the target user node u is the user node u1, then e_u* = e_{u1}*, and the matching degrees determined in the embodiment of the present application include the similarity between the user node u1 and each object node in the collaborative knowledge graph G, namely ŷ(u1, i1), ŷ(u1, i2), ..., ŷ(u1, i9).
Further, the collaborative knowledge graph can include cold-start object nodes, so that the matching degree between a cold-start object node and the user node of the target user can be determined by the recommendation model, and the cold-start object can then be recommended to the user. Cold-start object nodes are object nodes that have not been recommended to any user in the historical stage and have not interacted with any user.
In the embodiment of the application, the probability that the user will interact with the corresponding object can thus be determined.
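As a sketch of the prediction layer's scoring and of selecting the target node (the top-k selection, the candidate set and the threshold normalization are assumptions for illustration):

```python
import torch

def recommend(user_vec, item_vecs, item_ids, threshold=0.9, top_k=5):
    """user_vec: first vector e_u* of the target user node, shape (d,);
    item_vecs: first vectors e_i* of candidate object nodes, shape (n, d)."""
    scores = item_vecs @ user_vec                     # y(u, i) = (e_u*)^T e_i*
    order = torch.argsort(scores, descending=True)
    picked = [(item_ids[i], float(scores[i]))
              for i in order[:top_k] if scores[i] >= threshold]
    return picked  # object nodes whose matching degree reaches the preset threshold
```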
S203, recommending node information corresponding to the target node to the target user.
In the embodiment of the present application, the matching degree threshold may be preset, for example, 90%.
The specific implementation of this step is not limited herein.
In this embodiment of the present application, a training process of the recommendation model is further included. Specifically, the data set that can be used is MovieLens-1M (a movie data set) for the movie recommendation scenario, which contains about one million interaction records of 6040 users and 3883 movie objects, and each user has historically interacted with at least 20 movies. In addition to the user-movie object interaction data, related information of the movie objects needs to be gathered to construct the knowledge graph of the movie objects. Since MovieLens-1M already contains the type of each movie, director and actor information of 3745 movies was additionally acquired, and the training data finally obtained includes: user-movie object interaction data (users: 6040; movies: 3883; interactions: 1,000,209) and the object knowledge graph (nodes: 13593; association relations: 3; node-relation-node triplets: 25462).
The acquired data is then labeled: if a user has an interaction record with a movie, the record is labeled 1, and if the user has no interaction record with the movie, it is labeled 0. In the embodiment of the application, for each user, 80% of all interaction records labeled 1 are selected for the training set and the remaining 20% are used as positive samples in the test set; for each positive interaction record in the training set, a negative interaction record is randomly sampled to form the complete training sample set. During testing, for each user, the user's positive samples in the training set are removed from the complete object set, and prediction scores for the user are computed on all the remaining objects.
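A sketch of the labeling and splitting procedure described above, assuming the MovieLens ratings have already been loaded as (user, movie) interaction pairs; the function name and split details are hypothetical:

```python
import random
from collections import defaultdict

def split_interactions(interactions, train_ratio=0.8, seed=42):
    """interactions: list of (user, item) pairs labeled 1 (observed interactions)."""
    random.seed(seed)
    by_user = defaultdict(list)
    for user, item in interactions:
        by_user[user].append(item)

    all_items = {item for _, item in interactions}
    train, test = [], []
    for user, items in by_user.items():
        random.shuffle(items)
        cut = int(len(items) * train_ratio)
        train_pos, test_pos = items[:cut], items[cut:]
        test.extend((user, item, 1) for item in test_pos)
        # one randomly sampled negative (label 0) per positive training record
        for item in train_pos:
            train.append((user, item, 1))
            negative = random.choice(list(all_items - set(items)))
            train.append((user, negative, 0))
    return train, test
```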
Further, the loss function applied in the training process includes the following three parts:

L = L_KG + L_CF + λ‖Θ‖₂²
where L_KG is the loss function corresponding to the embedding layer of the recommendation model, defined as:

L_KG = Σ_{(h,r,t,t′)∈Z} −ln σ(g(h, r, t′) − g(h, r, t))

Here Z is the set of training triplets, (h, r, t′) is a false triplet constructed by randomly replacing the tail object of a real triplet (h, r, t), g(·) is the plausibility score of a triplet produced by the embedding layer, and σ is a nonlinear activation function. The embedding layer models the objects and relations at the level of triplets and directly injects the information of the connected node t into the corresponding node h, thereby improving the representation capability of the recommendation model. L_CF is a collaborative signal loss that adopts the BPR loss (a loss function), which assumes that observed interactions between users and objects should score higher than unobserved interactions; it is defined as follows:
L_CF = Σ_{(u,i,j)∈Ω} −ln σ(ŷ(u, i) − ŷ(u, j))

In the above formula, Ω = {(u, i, j) | (u, i) ∈ I⁺, (u, j) ∈ I⁻} is the training set, I⁺ represents the positive samples with interactions between the user u and the object i, I⁻ represents the sampled negative samples without interactions, and σ(·) is a nonlinear activation function.
In addition, Θ = {E, M_r, W^(n), b^(n)} is the set of model parameters, where E denotes the embedding vectors of all nodes and edges (i.e., the first encoding vectors of the nodes and the second encoding vectors of the edges), M_r is the transformation matrix on a particular edge r, and W^(n) and b^(n) respectively denote the weight matrix and the bias term of the linear transformation performed by the aggregator when generating the n-th third vector e_h^(n). λ is an L2 regularization parameter used to prevent overfitting.
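The overall training objective could be assembled roughly as follows (a sketch: `kg_score` corresponds to g(h, r, t) from the embedding layer and `match` to ŷ(u, i) from the prediction layer; both are assumed callables, not names from the original):

```python
import torch
import torch.nn.functional as F

def total_loss(kg_score, match, kg_batch, cf_batch, params, lam=1e-5):
    # L_KG: real triplets should score as more plausible than corrupted ones
    h, r, t, t_neg = kg_batch
    l_kg = -F.logsigmoid(kg_score(h, r, t_neg) - kg_score(h, r, t)).sum()

    # L_CF (BPR): observed interactions should score higher than unobserved ones
    u, i_pos, i_neg = cf_batch
    l_cf = -F.logsigmoid(match(u, i_pos) - match(u, i_neg)).sum()

    # L2 regularization over the model parameter set Θ
    l_reg = sum((p ** 2).sum() for p in params)
    return l_kg + l_cf + lam * l_reg
```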
Further, in the embodiment of the present application, the number of hidden layers of the recommendation model is set to 3, with dimensions of 128, 64 and 32 respectively; the embedding dimension of the nodes and edges in the collaborative knowledge graph is 128, and the hyper-parameter α is set to 0.6. Batch training is adopted for the recommendation model: the batch size of the collaborative filtering part is 4096, the batch size of the collaborative knowledge graph embedding part is 8092, the initial learning rate is 0.001, the L2 regularization parameter is set to 1e-5, and the maximum number of iterations is set to 100. The training process adopts an early-stopping strategy: training stops when there is no improvement for 10 consecutive iterations.
In the embodiment of the application, an Adam optimizer is adopted to adaptively adjust the learning rate and to update each parameter in the model parameter set Θ.
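A sketch of a training loop with Adam and early stopping under the settings above (the data loaders, evaluation metric and model object are hypothetical placeholders):

```python
import torch

def train(model, kg_loader, cf_loader, evaluate, epochs=100, patience=10, lr=0.001):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # adaptive learning rate
    best_metric, bad_epochs = float("-inf"), 0
    for epoch in range(epochs):
        model.train()
        for kg_batch, cf_batch in zip(kg_loader, cf_loader):
            optimizer.zero_grad()
            loss = model.loss(kg_batch, cf_batch)   # L = L_KG + L_CF + lambda * ||Theta||^2
            loss.backward()
            optimizer.step()
        metric = evaluate(model)                    # e.g. recall@20 on the test split
        if metric > best_metric:
            best_metric, bad_epochs = metric, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:              # early stopping after 10 stale epochs
                break
    return model
```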
The server can push the node information corresponding to the determined target node to the user terminal corresponding to the user, and the node information is displayed on the user terminal, so that accurate recommendation of the node information is achieved.
In the embodiment of the application, accurate node information is recommended to the target user by the recommendation model combined with the collaborative knowledge graph. Since the collaborative knowledge graph includes the association relationships between objects, when a cold-start object exists, accurate recommendation of the cold-start object to the user can be realized according to the association relationship between the cold-start object and existing objects.
In summary, the embodiment of the application can overcome the problem that implicit connections between all the nodes of the whole collaborative knowledge graph cannot be established. By adopting the embodiment of the application, the accuracy of the recommended objects can be improved, and the accuracy rate, recall rate and normalized discounted cumulative gain (NDCG) are all higher than those of current mainstream recommendation algorithms. Specifically, the embodiment of the application integrates the collaborative filtering recommendation method with the collaborative knowledge graph and designs an end-to-end recommendation model over the collaborative knowledge graph; this model overcomes the defect that user-object interaction records are treated as mutually independent and can acquire collaborative signals of users based on object attributes. Further, based on the attention mechanism, a bilinear collector is designed, and combined with the linear collector it recursively aggregates the feature interaction information and the linear weighting information of the neighbor nodes. Explicitly encoding the node interaction relationships in the collaborative knowledge graph enables the recommendation model to capture rich semantics based on the feature interactions between the nodes of the collaborative knowledge graph. The bilinear collector acquires feature interaction information between nodes in the information collection stage, enriching the node representations. The attention mechanism propagates each node representation along the collaborative knowledge graph through a recursive embedding propagation algorithm, so that implicit relations between the nodes of the collaborative knowledge graph can be captured. In the experiments, when the length of the recommendation list is 20, the accuracy rate, the recall rate and the normalized discounted cumulative gain are 29.4%, 24.9% and 67.4% respectively, exceeding the current mainstream recommendation algorithms that fuse knowledge graphs, which proves that the content recommendation method provided by the embodiment of the application can effectively improve the accuracy of the recommendation results.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Fig. 5 is a schematic structural diagram of a content recommendation device according to an embodiment of the present application. The embodiment of the application provides a content recommendation device which can be integrated on electronic equipment such as a server. As shown in fig. 5, the content recommendation device 50 includes: an acquisition module 51, a processing module 52 and a recommendation module 53. Wherein:
an obtaining module 51, configured to obtain a collaborative knowledge graph, where the collaborative knowledge graph includes nodes of a target user;
the processing module 52 is configured to input the collaborative knowledge graph into a recommendation model for recommendation processing, so as to obtain a target node;
a recommending module 53, configured to recommend node information corresponding to a target node to a target user;
the recommendation model comprises an embedding layer, an embedding propagation layer and a prediction layer;
the embedded layer is used for encoding the collaborative knowledge graph to obtain a first encoding vector corresponding to each node of the collaborative knowledge graph and a second encoding vector corresponding to each edge of the collaborative knowledge graph;
the embedded propagation layer is used for processing the first coding vector and the second coding vector to obtain a first vector corresponding to each node of the collaborative knowledge graph, wherein the first vector comprises a fusion result after node information of neighbor nodes of the corresponding nodes is iteratively aggregated;
The prediction layer is used for carrying out prediction processing based on the first vector to obtain a target node, and the matching degree of the first vector corresponding to the target node and the first vector corresponding to the node of the target user is larger than or equal to a preset threshold value.
In one possible implementation manner, the determination manner of the first vector corresponding to each node is the same; in determining the first vector corresponding to the node, the processing module 52 is specifically configured to determine, based on the embedded propagation layer, a second vector corresponding to the node according to the first encoded vector and the second encoded vector corresponding to the node by using an attention mechanism, where the second vector includes node information of neighboring nodes of the node; iteratively aggregating the second vectors corresponding to the nodes and the first coding vectors corresponding to the nodes to obtain a plurality of third vectors; each iterative polymerization obtains a third vector; the third vector obtained by the first iterative aggregation is obtained by aggregation based on the second vector corresponding to the node and the first coding vector corresponding to the node; the third vector obtained by non-first polymerization is obtained by polymerization based on the third vector obtained by the last iteration polymerization and the second vector corresponding to the node; and fusing the first coding vector corresponding to the node and the plurality of third vectors to obtain the first vector corresponding to the node.
In a possible implementation manner, when the attention mechanism is adopted to determine the second vector corresponding to the node according to the first coding vector and the second coding vector corresponding to the node, the processing module 52 is specifically configured to: determining a fourth vector of the first neighbor node of the node by adopting an attention mechanism according to a first coding vector of the first neighbor node of the node and a second coding vector of an edge between the node and the first neighbor node, wherein a connecting edge of the first neighbor node and the node is a unidirectional edge; determining a fifth vector according to a first coding vector of a second neighbor node of the node, wherein a connecting edge of the second neighbor node and the node is an undirected edge; and fusing the fourth vector and the fifth vector to obtain a second vector.
In a possible implementation, the processing module 52 is specifically configured to, when determining the fourth vector of the first neighboring node of the node using the attention mechanism: determining a first attention score between the node and the first neighbor node according to a first coding vector of the first neighbor node of the node and a second coding vector of an edge between the node and the first neighbor node; a fourth vector is determined based on the first encoded vector and the first attention score of the first neighbor node.
In one possible implementation, the node has a plurality of second neighbor nodes; the processing module 52 is specifically configured to, when determining the fifth vector from the first encoding vectors of the second neighbor nodes of the node: determine the fifth vector according to the first encoding vector of one second neighbor node, the first encoding vector of another second neighbor node, and a target combination number, where the target combination number is the number of combinations obtained by combining any two nodes among the plurality of second neighbor nodes.
In a possible implementation manner, the aggregation manner adopted for iteratively aggregating the second vector corresponding to the node and the first encoding vector corresponding to the node includes: a sum aggregation manner or a concatenation aggregation manner.
In a possible implementation manner, the prediction layer determines that the product of the transpose vector of the first vector corresponding to the node of the target user and the first vector corresponding to the target node is a matching degree.
The apparatus provided in the embodiment of the present application may be used to perform the method in the embodiment shown in fig. 2, and its implementation principle and technical effects are similar, and are not described herein again.
It should be noted that the division of the modules of the above apparatus is merely a division of logical functions; in actual implementation, the modules may be fully or partially integrated into one physical entity, or may be physically separated. These modules may all be implemented in the form of software called by a processing element, may all be implemented in hardware, or some modules may be implemented in the form of software called by a processing element while the remaining modules are implemented in hardware. For example, the processing module may be a separately arranged processing element, may be integrated in a chip of the above apparatus, or may be stored in a memory of the above apparatus in the form of program code, with the functions of the processing module called and executed by a processing element of the above apparatus. The implementation of the other modules is similar. In addition, all or part of the modules may be integrated together or implemented independently. The processing element here may be an integrated circuit with signal processing capabilities. In implementation, each step of the above method, or each of the above modules, may be implemented by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more application specific integrated circuits (Application Specific Integrated Circuit, ASIC), one or more digital signal processors (Digital Signal Processor, DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, FPGA). For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU) or another processor that can call the program code. For another example, the modules may be integrated together and implemented in the form of a system-on-a-chip (System-On-a-Chip, SOC).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 6, the electronic device may include: a processor 61, a memory 62, a communication interface 63, and a system bus 64. The memory 62 and the communication interface 63 are connected to the processor 61 through the system bus 64 and communicate with each other; the memory 62 is used for storing instructions, the communication interface 63 is used for communicating with other devices, and the processor 61 is used for calling the instructions in the memory to execute the solution of the content recommendation method embodiments described above.
The system bus 64 mentioned in fig. 6 may be a peripheral component interconnect (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The system bus 64 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the bus is represented by only one bold line in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface 63 is used to enable communication between the database access apparatus and other devices (e.g., clients, read-write libraries, and read-only libraries).
The memory 62 may include a random access memory (Random Access Memory, simply referred to as RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
The processor 61 may be a general-purpose processor including a central processing unit, a network processor (Network Processor, NP) and the like; but may also be a digital signal processor DSP, an application specific integrated circuit ASIC, a field programmable gate array FPGA or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component.
The embodiment of the application also provides a computer readable storage medium, in which a computer program is stored, which when run on an electronic device, causes the electronic device to execute the content recommendation method according to any one of the method embodiments above.
The embodiment of the present application also provides a chip for running instructions, and the chip is configured to execute the content recommendation method of any one of the method embodiments above.
The embodiment of the present application also provides a computer program product, which includes a computer program stored in a computer-readable storage medium; at least one processor may read the computer program from the computer-readable storage medium, and when the at least one processor executes the computer program, the content recommendation method according to any one of the method embodiments above is implemented.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship; in a formula, the character "/" indicates that the associated objects before and after it are in a "division" relationship. "At least one of the following items" or a similar expression means any combination of these items, including a single item or any combination of a plurality of items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may be single or plural.
It will be appreciated that the various numbers referred to in the embodiments of the present application are merely for ease of description and are not intended to limit the scope of the embodiments of the present application. In the embodiments of the present application, the sequence numbers of the processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or replace some or all of the technical features thereof with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A content recommendation method, comprising:
acquiring a collaborative knowledge graph, wherein the collaborative knowledge graph comprises nodes of a target user;
inputting the collaborative knowledge graph into a recommendation model for recommendation processing to obtain a target node;
recommending node information corresponding to the target node to the target user;
the recommendation model comprises an embedding layer, an embedding propagation layer and a prediction layer;
the embedding layer is used for encoding the collaborative knowledge graph to obtain a first encoding vector corresponding to each node of the collaborative knowledge graph and a second encoding vector corresponding to each edge of the collaborative knowledge graph;
the embedded propagation layer is used for processing the first coding vector and the second coding vector to obtain a first vector corresponding to each node of the collaborative knowledge graph, wherein the first vector comprises a fusion result obtained after node information of neighbor nodes of the corresponding node is iteratively aggregated;
the prediction layer is configured to perform prediction processing based on the first vector to obtain the target node, where a matching degree between the first vector corresponding to the target node and the first vector corresponding to the node of the target user is greater than or equal to a preset threshold.
2. The content recommendation method according to claim 1, wherein the determination manner of the first vector corresponding to each node is the same; in determining a first vector corresponding to the node, the embedded propagation layer is configured to:
determining a second vector corresponding to the node according to the first coding vector and the second coding vector corresponding to the node by adopting an attention mechanism, wherein the second vector comprises node information of neighbor nodes of the node;
iteratively aggregating the second vector corresponding to the node and the first coding vector corresponding to the node to obtain a plurality of third vectors, wherein each iterative aggregation obtains one third vector, the third vector obtained by the first iterative aggregation is obtained by aggregation based on the second vector corresponding to the node and the first coding vector corresponding to the node, and the third vector obtained by each non-first iterative aggregation is obtained by aggregation based on the third vector obtained by the previous iterative aggregation and the second vector corresponding to the node; and
fusing the first coding vector corresponding to the node and the plurality of third vectors to obtain the first vector corresponding to the node.
3. The content recommendation method according to claim 2, wherein the determining a second vector corresponding to the node according to the first coding vector and the second coding vector corresponding to the node by adopting an attention mechanism comprises:
determining a fourth vector of a first neighbor node of the node by adopting the attention mechanism according to a first coding vector of the first neighbor node of the node and a second coding vector of an edge between the node and the first neighbor node, wherein a connecting edge between the first neighbor node and the node is a unidirectional edge;
determining a fifth vector according to a first coding vector of a second neighbor node of the node, wherein a connecting edge between the second neighbor node and the node is an undirected edge;
and fusing the fourth vector and the fifth vector to obtain the second vector.
4. The content recommendation method of claim 3 wherein said employing said attention mechanism to determine a fourth vector of a first neighbor node of said node comprises:
determining a first attention score between the node and a first neighbor node according to a first coding vector of the first neighbor node of the node and a second coding vector of an edge between the node and the first neighbor node;
and determining the fourth vector according to the first coding vector of the first neighbor node and the first attention score.
5. The content recommendation method according to claim 3, wherein the node has a plurality of second neighbor nodes, and the determining a fifth vector according to a first coding vector of a second neighbor node of the node comprises:
determining the fifth vector according to the first coding vector of one second neighbor node, the first coding vector of another second neighbor node, and a target combination number, wherein the target combination number is the number of combinations obtained by combining any two nodes among the plurality of second neighbor nodes.
6. The content recommendation method according to any one of claims 2 to 5, wherein an aggregation manner used for iteratively aggregating the second vector corresponding to the node and the first coding vector corresponding to the node comprises: an addition aggregation manner or a combination aggregation manner.
7. The content recommendation method according to any one of claims 2 to 5, wherein the matching degree is a product of a transpose vector of a first vector corresponding to a node of the target user and the first vector corresponding to the target node.
8. A content recommendation device, comprising:
the acquisition module is used for acquiring a collaborative knowledge graph, wherein the collaborative knowledge graph comprises nodes of a target user;
the processing module is used for inputting the collaborative knowledge graph into a recommendation model to conduct recommendation processing to obtain a target node;
and the recommending module is used for recommending the node information corresponding to the target node to the target user;
wherein the recommendation model comprises an embedding layer, an embedding propagation layer and a prediction layer;
the embedding layer is used for encoding the collaborative knowledge graph to obtain a first encoding vector corresponding to each node of the collaborative knowledge graph and a second encoding vector corresponding to each edge of the collaborative knowledge graph;
the embedded propagation layer is used for processing the first coding vector and the second coding vector to obtain a first vector corresponding to each node of the collaborative knowledge graph, wherein the first vector comprises a fusion result obtained after node information of neighbor nodes of the corresponding node is iteratively aggregated;
the prediction layer is configured to perform prediction processing based on the first vector to obtain the target node, where a matching degree between the first vector corresponding to the target node and the first vector corresponding to the node of the target user is greater than or equal to a preset threshold.
9. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, wherein the processor implements the content recommendation method according to any one of claims 1 to 7 when the computer program is executed by the processor.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when run on an electronic device, causes the electronic device to perform the content recommendation method according to any one of claims 1 to 7.
CN202210789296.4A 2022-07-06 2022-07-06 Content recommendation method, device, equipment and storage medium Pending CN116127083A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210789296.4A CN116127083A (en) 2022-07-06 2022-07-06 Content recommendation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210789296.4A CN116127083A (en) 2022-07-06 2022-07-06 Content recommendation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116127083A true CN116127083A (en) 2023-05-16

Family

ID=86301460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210789296.4A Pending CN116127083A (en) 2022-07-06 2022-07-06 Content recommendation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116127083A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117710020A (en) * 2024-02-06 2024-03-15 湖南惟客科技集团有限公司 Big data-based user preference analysis method
CN117710020B (en) * 2024-02-06 2024-05-17 湖南惟客科技集团有限公司 Big data-based user preference analysis method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination