CN113570058A - Recommendation method and device - Google Patents
Recommendation method and device Download PDFInfo
- Publication number
- CN113570058A CN113570058A CN202111104093.9A CN202111104093A CN113570058A CN 113570058 A CN113570058 A CN 113570058A CN 202111104093 A CN202111104093 A CN 202111104093A CN 113570058 A CN113570058 A CN 113570058A
- Authority
- CN
- China
- Prior art keywords
- type
- data
- network
- preference
- representation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present disclosure relates to a recommendation method and apparatus, the recommendation method including: acquiring data to be recommended, wherein the data to be recommended comprises a plurality of user data, a plurality of item data, a user knowledge graph constructed based on the plurality of user data and an item knowledge graph constructed based on the plurality of item data; inputting the data to be recommended into a trained recommendation model to obtain a preference score predicted value of each item data by each user data in the data to be recommended; and acquiring at least one item data for each user data as recommended item data based on the preference score predicted value of each item data by each user data in the data to be recommended. According to the recommendation method and the recommendation device, the TransR model is adopted as the space conversion network in the recommendation model, the correct knowledge expressed in the form of the triples is effectively expressed, and a more efficient fusion mode of the collaborative information and the knowledge transmission is realized.
Description
Technical Field
The present disclosure relates to the field of big data technologies, and more particularly, to a recommendation method and apparatus.
Background
Knowledge maps are widely studied and applied in recommendation systems as auxiliary information. In the application of the recommendation system based on the knowledge-graph technology, various models are adopted for training. Wherein the RippleNet model propagates potential preferences of users along connections in the knowledge graph. The KGCN model and the KGNN-LS model adopt a graph convolution network to acquire embedded representation of items through neighbor entities in a knowledge graph. The KGAT model introduces a Collaborative Knowledge Graph (CKG) that combines a user-project interaction graph with a knowledge graph spectrum, recursively propagating through a graph-convolutional neural network over the CKG. The method presupposes that the items of the user item interaction graph and the associated entities in the knowledge graph belong to the same potential space, but actually both belong to heterogeneous nodes.
A recommendation system CKAN (Collaborative Knowledge-aware Network for Recommendator Systems) of a Collaborative Knowledge embedding attention Network in the related technology cancels independent embedded representation of a user and uses embedded representation of items interacted by the user to form embedded representation of the user, and the CKAN is used as an end-to-end model based on propagation, has a certain recommendation accuracy rate, but ignores processing of complex semantic relation space between entities in a Knowledge graph.
Disclosure of Invention
The present disclosure provides a recommendation method and apparatus to solve at least the problems in the related art described above, and may not solve any of the problems described above.
According to a first aspect of the embodiments of the present disclosure, there is provided a recommendation method, including: acquiring data to be recommended, wherein the data to be recommended comprises a plurality of user data, a plurality of item data, a user knowledge graph constructed based on the plurality of user data and an item knowledge graph constructed based on the plurality of item data; inputting the data to be recommended into a trained recommendation model to obtain a preference score predicted value of each item data by each user data in the data to be recommended; and acquiring at least one item data for each user data as recommended item data based on the preference score predicted value of each item data by each user data in the data to be recommended.
Optionally, the recommendation model includes a spatial transformation network and a preference computation network, and is trained by the following steps: acquiring a user data set based on a user historical knowledge graph and acquiring a project data set based on a project historical knowledge graph, wherein the user data set comprises at least one piece of user data, and the project data set comprises at least one piece of project data; acquiring first-class triples based on the user data set and the project data set, wherein each first-class triplet comprises a head entity, a relation and a tail entity, the head entity and the tail entity of each first-class triplet are user data in the user data set or project data in the project data set, and the head entity and the tail entity of each first-class triplet express correct knowledge through the relation; randomly replacing tail entities in the first type of triples to obtain second type of triples, wherein the second type of triples correspond to the first type of triples one by one; inputting a first type of triple and a second type of triple into the space conversion network to obtain an embedded representation of the first type of triple, wherein the space conversion network is a TransR model; acquiring a loss function of the space transformation network based on the first type of triple and the second type of triple; randomly replacing a head entity, a relation and a tail entity of the first type of triple to obtain a third type of triple, wherein the third type of triple corresponds to the first type of triple one to one; acquiring an embedded representation of a third type of triple based on the embedded representation of the first type of triple; inputting the embedded representation of the first type of triples and the embedded representation of the third type of triples into the preference calculation network, obtaining an attention representation of the user data and the item data in the embedded representation of the first type of triples, and aggregating the attention representations of the user data and the item data layer by layer, obtaining a model representation of the user data and a model representation of the item data, and obtaining a preference score prediction value of each item data for each item data based on the model representation of the user data and the model representation of the item data, wherein the attention representation of the user data is a representation obtained by a head entity for the first type of triples of the user data based on attention weights, and the attention representation of the item data is a representation obtained by the head entity for the first type of triples of the item data based on attention weights, and the preference calculation network is a graph neural network model based on attention mechanisms and comprising multi-layer propagation, propagating in each layer of the preference computation network with the embedded representations of all triples of the first type and all triples of the third type; obtaining a loss function of the preference calculation network according to the preference score predicted value of each item of data of each user; obtaining a loss function of the recommendation model based on the loss function of the spatial transformation network and the loss function of the preference calculation network; and training the recommendation model by adjusting the parameter set of the recommendation model according to the loss function of the recommendation model.
Optionally, the inputting the first type of triplet and the second type of triplet into the space transformation network to obtain the embedded representation of the first type of triplet includes: inputting the first-type triples and the second-type triples into the space conversion network, obtaining a transformation matrix of the relationship in the first-type triples, and obtaining the embedded representation of the first-type triples based on the transformation matrix of the relationship in the first-type triples, wherein the transformation matrix is used for projecting the space of the head entity or the space of the tail entity to the relationship space.
Optionally, the embedded representation of the first type of triplet is represented as:
wherein the content of the first and second substances,
wherein the content of the first and second substances,computing a network's second for the preferenceThe embedded representation of any triplet of the first type in the layer propagation,representing the header entity as user data or the header entity as project data,projecting both the space of the head entity and the space of the tail entity into any first-type triplet of the relationship space for any layer propagation of the preference computation network,a transformation matrix for the relationships in any first-type triplet propagated for any layer of the preference computation network,any first type of triplet propagated for any layer of the preference calculation network,is composed ofThe head entity of (a) is,is composed ofIn the context of (a) or (b),is composed ofThe tail entity of (a) the tail entity,for the user historical knowledge-graph and the project historical knowledge-graph,computing a network's second for the preferenceA recursively defined representation of triples of the first type in the layer propagation,calculating a total number of layers for the network for the preference.
Optionally, the loss function of the spatial transformation network is expressed as:
wherein the content of the first and second substances,
wherein the content of the first and second substances,
wherein the content of the first and second substances,as a function of the loss of the spatial transformation network,any first type of triplet propagated for any layer of the preference calculation network,is composed ofThe head entity of (a) is,is composed ofIn the context of (a) or (b),is composed ofThe tail entity of (a) the tail entity,representing the function of sigmoid and the function of,is prepared by reacting withThe corresponding triplets of the second type are,is composed ofThe tail entity of (a) the tail entity,is composed ofThe similarity score of (a) is calculated,is composed ofThe similarity score of (a) is calculated,for the user historical knowledge-graph and the project historical knowledge-graph,a transformation matrix for the relationships in any first-type triplet propagated for any layer of the preference computation network,is composed ofIs to be used to represent the embedded representation of,is composed ofIs to be used to represent the embedded representation of,is composed ofIs shown embedded.
Optionally, the inputting the embedded representation of the first type of triplet and the embedded representation of the third type of triplet into the preference calculation network, obtaining an attention representation of the user data and the item data in the embedded representation of the first type of triplet, includes: acquiring attention weight of a first type of triple in any layer propagation of the preference calculation network based on a ReLU function and a Sigmoid function, wherein the attention weight is the weight of a head entity to a tail entity; normalizing the attention weight of the first type of triple in any layer propagation of the preference calculation network through a softmax function, and acquiring the normalized attention weight of the first type of triple of which the head entity is user data and item data in any layer propagation of the preference calculation network; acquiring attention representation of the first type of triple of the user data and the item data of the head entity in any layer of propagation of the preference calculation network based on the normalized attention weight of the first type of triple of the user data and the item data of the head entity in any layer of propagation of the preference calculation network; integrating into a set the attention representations of the head entities as the first type of triples of user data and item data in all layer propagations of the preference computation network.
Optionally, the attention weight of the first type of triplet in any layer propagation of the preference calculation network is expressed as:
wherein the content of the first and second substances,
wherein the content of the first and second substances,computing a first in any layer propagation of a network for the preferenceThe first type of triplet being based onThe attention weight of (a) is given,computing a first in any layer propagation of a network for the preferenceThe embedded representation of the head entity of a first type of triplet in the relationship space,computing a first in any layer propagation of a network for the preferenceThe relationship of the first-type triplets,representing the function of sigmoid and the function of,represents the function of the ReLU, and represents,、、、、andall parameters in the parameter group of the recommendation model;
acquiring the normalized attention weight of the first-class triples of the head entity, which are user data and item data, in any layer propagation of the preference calculation network by the following formula based on the attention weight of the first-class triples in any layer propagation of the preference calculation network:
wherein the content of the first and second substances,computing a first in any layer propagation of the network for the normalized preferenceThe first type of triplet being based onThe attention weight of (a) is given,computing a first in any layer propagation of a network for the preferenceBases of triplets of the third typeThe attention weight of (a) is given,computing a first in any layer propagation of a network for the preferenceThe embedded representation of the head entity of the third type of triplet in the relationship space,for any of the triples of the third type,is composed ofThe head entity of (a) is,is composed ofIn the context of (a) or (b),is composed ofThe tail entity of (a) the tail entity,computing a first in any layer propagation of a network for the preferenceA third type of triple relationship.
Optionally, the attention representation of the first type of triplet of user data and item data, where the head entity is propagated in any layer of the preference calculation network, is represented as:
wherein the content of the first and second substances,
wherein the content of the first and second substances,computing a network's second for the preferenceThe head entity in the layer propagation is an attention representation of a first type of triplet of user data or item data,computing a network's second for the preferenceThe header entity in the layer propagation is the total number of triples of the first type of user data or item data,computing a network's second for the preferenceThe embedded representation of any triplet of the first type in the layer propagation,computing a first in any layer propagation of a network for the preferenceThe individual head entity being user data orThe individual head entity is an attention representation of a first type of triplet of item data,computing a first in any layer propagation of a network for the preferenceThe attention representation of the first-type triplet,computing a first in any layer propagation of a network for the preferenceThe embedded representation of the tail entity of the first type of triple in the relation space;
integrating into a set, an attention representation of a first type of triplet of user data and item data for a head entity in all layer propagations of the preference computation network by:
wherein the content of the first and second substances,
wherein the content of the first and second substances,an attention representation set of triples of a first type for user data for a head entity,computing a network's second for the preferenceThe header entity in the layer is an attention representation of a first type of triplet of user data,computing a network's second for the preferenceThe header entity in the layer is an attention representation of a first type of triplet of user data,computing a network's second for the preferenceThe header entity in the layer is an attention representation of a first type of triplet of user data,an attention representation set of first type triples for item data for the head entity,for an initial attention representation of all head entities as first-type triples of item data,computing a network's second for the preferenceMiddle-layer headThe body is an attention representation of a first type of triplet of item data,computing a network's second for the preferenceThe head entity in the layer is an attention representation of a first type of triplet of item data,computing a network's second for the preferenceThe head entity in the layer is an attention representation of a first type of triplet of item data,encoding a set for item data in the item historical knowledge graph,the representative head entity is the item data,representing the head entity as user data.
Optionally, the layer-by-layer aggregation of the attention representations of the user data and the project data is performed by:
wherein the content of the first and second substances,for the attention representation of the layer-by-layer aggregated user data or project data,representing head entities as user data orThe head entity is the item data and,computing a network's second for the preferenceIn layer propagationThe individual head entity being user data orThe individual head entity is an attention representation of a first type of triplet of item data,computing a network's second for the preferenceIn layer propagationThe individual head entity being user data orThe individual head entity is an attention representation of a first type of triplet of item data,computing a network's second for the preferenceIn layer propagationThe individual head entity being user data orThe head entity is the first kind of triple note of the project dataForce means, | is the splicing operation,andare all parameters in the set of parameters of the recommendation model.
Optionally, the preference score prediction value of each user data for each item data is obtained by the following formula:
wherein the content of the first and second substances,a preference score prediction value for any user data to any item data,for the transformed model representation of said any user data,and the model representation of any item data is acquired based on the attention representation of the user data after layer-by-layer aggregation, and the model representation of any item data is acquired based on the attention representation of the item data after layer-by-layer aggregation.
Optionally, the preference calculates a loss function of the network, expressed as:
wherein the content of the first and second substances,a loss function of the network is calculated for the preference,the representative head entity is the item data,the representative header entity is the user data,in the case of the first type of triplet,presentation pairThe cross-entropy loss is solved,a preference score true value for the any user data to the any item data,a preference score prediction value for the any user data for the any item data,is a third type of triplet.
Optionally, the loss function of the recommendation model is expressed as:
wherein the content of the first and second substances,
wherein the content of the first and second substances,is a loss function of the recommendation model,as a function of the loss of the spatial transformation network,a loss function of the network is calculated for the preference,in order to be able to adjust the parameters,for the set of parameters of the recommendation model,is an embedded set of representations of the head and tail entities of a first type of triplet,an embedded representation set of relationships for a first type of triplet,are all made ofThe parameter (1).
Optionally, the obtaining, for each user data, at least one item data as recommended item data based on a preference score prediction value of each user data to each item data in the data to be recommended includes: arranging preference score predicted values of any user data in the data to be recommended to each item data in a descending order according to the size; acquiring item data corresponding to the preference score predicted values sorted from the first place to the Nth place as recommended item data, or acquiring item data corresponding to the preference score predicted values larger than a preset threshold value as recommended item data, wherein N is a preset integer larger than or equal to 1.
According to a second aspect of the embodiments of the present disclosure, there is provided a recommendation apparatus including: a data acquisition unit configured to: acquiring data to be recommended, wherein the data to be recommended comprises a plurality of user data, a plurality of item data, a user knowledge graph constructed based on the plurality of user data and an item knowledge graph constructed based on the plurality of item data; a model prediction unit configured to: inputting the data to be recommended into a trained recommendation model to obtain a preference score predicted value of each item data by each user data in the data to be recommended; an item recommendation unit configured to: and acquiring at least one item data for each user data as recommended item data based on the preference score predicted value of each item data by each user data in the data to be recommended.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform a recommendation method according to the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by at least one processor, cause the at least one processor to perform a recommendation method according to the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer instructions which, when executed by at least one processor, implement a recommendation method according to the present disclosure.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the recommendation method and the recommendation device, the TransR model is adopted as the space conversion network in the recommendation model, the correct knowledge expressed in the form of the triples is effectively expressed, a more efficient fusion mode of the collaborative information and the knowledge transmission is realized, and the transmission of the embedded expression of the first type of triples in the preference calculation network is promoted. The trained TransR model can perform space conversion on the entities, can effectively process complex relationships among the entities, measures the correlation of knowledge, and further effectively describes the relationships of the entities. The loss function of the space transformation network considers the correlation relationship between the first-class triples and the second-class triples, the loss of pairwise sequencing can be reduced on the granularity of the first-class triples, the knowledge representation capability of the recommendation model is improved, and the potential relationship between entities in the knowledge map is enhanced. Compared with CKAN, the influence of the distance of different semantic spaces on the knowledge representation can be reduced.
In addition, according to the recommendation method and device disclosed by the invention, the user data set is obtained based on the historical knowledge map of the user, the project data set is obtained based on the historical knowledge map of the project, and then the first type of triple including the head entity, the relation and the tail entity is obtained, so that the representation capability of knowledge can be improved.
In addition, according to the recommendation method and apparatus of the present disclosure, based on the embedded representation obtained by the space transformation network, the attention weight is obtained in the preference calculation network, and is more differentiated than the attention weight obtained by the CKAN.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a schematic diagram illustrating a CKGAN system framework according to an exemplary embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating a recommendation method according to an exemplary embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a training method of a recommendation model according to an exemplary embodiment of the present disclosure.
Fig. 4 is a diagram illustrating a click rate prediction result based on a last.
Fig. 5 is a diagram illustrating click-through rate prediction results based on a Book-Crossing dataset in a CTR scenario according to an exemplary embodiment of the present disclosure.
Fig. 6 is a diagram illustrating a click-through rate prediction result based on a Dianping-Food data set in a CTR scenario according to an exemplary embodiment of the present disclosure.
Fig. 7 is a diagram illustrating a variation of a Recall @ K value based on a last.
FIG. 8 is a diagram illustrating a change in the Recall @ K value based on the Book-crosslinking dataset in a top-K recommendation scenario according to an illustrative embodiment of the present disclosure.
FIG. 9 is a diagram illustrating the variation of the Recall @ K value based on the Dianning-Food dataset in the top-K recommendation scenario according to an exemplary embodiment of the present disclosure.
Fig. 10 is a block diagram illustrating a recommendation device according to an exemplary embodiment of the present disclosure.
Fig. 11 is a block diagram illustrating a training apparatus of a recommendation model according to an exemplary embodiment of the present disclosure.
Fig. 12 is a block diagram illustrating an electronic device 1200 according to an example embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The embodiments described in the following examples do not represent all embodiments consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In this case, the expression "at least one of the items" in the present disclosure means a case where three types of parallel expressions "any one of the items", "a combination of any plural ones of the items", and "the entirety of the items" are included. For example, "include at least one of a and B" includes the following three cases in parallel: (1) comprises A; (2) comprises B; (3) including a and B. For another example, "at least one of the first step and the second step is performed", which means that the following three cases are juxtaposed: (1) executing the step one; (2) executing the step two; (3) and executing the step one and the step two.
In the era of big data processing, knowledge maps are widely researched and applied as auxiliary information in recommendation methods. In the application of the recommendation system based on the knowledge-graph technology, various models are adopted for training. In an application to user recommendation items, the RippleNet model propagates potential preferences of users along connections in a knowledge graph. Based on the improvement of the accuracy of the neighbor entity in the recommendation of the recommendation system, the KGCN model and the KGNN-LS model adopt a graph volume network to obtain the embedded representation of the item through the neighbor entity in the knowledge graph. However, the KGCN model and the KGNN-LS model are difficult to mine the high-order connectivity of users and items, and therefore the embedded representation information is insufficient. The KGAT model introduces a Collaborative Knowledge Graph (CKG) that combines a user-project interaction graph with a knowledge graph spectrum, recursively propagating through a graph-convolutional neural network over the CKG. The method presupposes that the items of the user item interaction graph and the associated entities in the knowledge graph belong to the same potential space, but actually both belong to heterogeneous nodes.
Collaborative Knowledge embedded attention Network recommendation Systems CKANs (Collaborative Knowledge-aware Network for recommendation Systems) in the related art compose embedded representations of users by canceling the independent embedded representations of the users and using the embedded representations of the items that the users have interacted with. The embedded representations of users and items are processed as combinations of sequences of entities within the knowledge-graph, and then the CKAN propagates these embedded representations directly within the knowledge-graph. Although the method directly represents the embedded representations of the user and the project by using the combination of the embedded representations of the entities in the knowledge graph, the distances between the entities in the user project graph and the knowledge graph in different potential spaces in the traditional method are effectively reduced, CKAN ignores the inside of the knowledge graph, the same entity is in different potential spaces under different relations, and especially the relations between the entities in a recommendation scene are often many-to-many.
As a propagation-based end-to-end model, the CKAN sufficiently utilizes the cooperative information, draws the potential space of the entity in the user project interaction diagram and the entity in the knowledge graph, has a certain recommendation accuracy rate, and ignores the processing of the complex semantic relation space between the entities in the knowledge graph.
In order to solve the problems in the related art, the recommendation method and device are provided by the disclosure, a recommendation model is constructed by adopting a spatial conversion network and a preference calculation network, data from a knowledge graph is trained, and a user preference score predicted value of a project is obtained through the recommendation model for recommendation, so that complex relationships among entities can be effectively processed, and potential relationships among the entities in the knowledge graph are enhanced.
Hereinafter, a recommendation method and apparatus according to the present disclosure will be described in detail with reference to fig. 1 to 12.
Fig. 1 is a schematic diagram illustrating a CKGAN system framework according to an exemplary embodiment of the present disclosure. The recommendation method in the exemplary embodiment of the present disclosure is derived based on the CKGAN system framework shown in fig. 1.
Referring to fig. 1, an exemplary embodiment of the present disclosure proposes a recommendation system CKGAN (Collaborative KGE-Guided attention Network for recommendation Systems) of Collaborative knowledge-graph embedding attention-directing Network, including an embedding representation part, a knowledge embedding propagation part based on a graph neural Network, and an aggregation prediction part. The embedded representation part represents the user data and the item data as an embedded sequence combination, namely, an embedded representation, and the embedded representation part fuses a TransR model and projects the entity space to the relationship space. The knowledge embedding propagation part based on the graph neural network is merged into a graph neural network model based on an attention mechanism and comprising multi-layer propagation, and the multi-hop aggregation propagation is carried out on the neighbor nodes. The preference score of each item data is predicted for each user data in the aggregate prediction section.
According to an exemplary embodiment of the present disclosure, the user data refers to an object that needs to be recommended or data indicative of the object that needs to be recommended, the item data refers to a recommended object or data indicative of the recommended object, for example, the user data is a user ID of a store customer, and the item data is a product number of a store product.
Fig. 2 is a flowchart illustrating a recommendation method according to an exemplary embodiment of the present disclosure.
Referring to fig. 2, in step 201, data to be recommended may be acquired, where the data to be recommended includes a plurality of user data, a plurality of item data, a user knowledge graph constructed based on the plurality of user data, and an item knowledge graph constructed based on the plurality of item data.
According to an exemplary embodiment of the present disclosure, the data to be recommended may further include, but is not limited to, at least one historical interaction data between a plurality of user data and a plurality of item data.
In step 202, the data to be recommended may be input into the trained recommendation model, so as to obtain a preference score prediction value of each item of data by each user data in the data to be recommended.
According to an example embodiment of the present disclosure, the recommendation model may include a spatial transformation network, which may be a TransR model, and a preference computation network, which may be an attention-based graph neural network model that includes multi-layer propagation. The trained recommendation model in the examples of the present disclosure is obtained by training a TransR model together with an attention-based graph neural network model including multi-layer propagation through training samples.
According to an exemplary embodiment of the present disclosure, the preference score prediction value of each user data for each item data may be an acquisition probability prediction value of each user data for each item data. Taking any user data as a user with a user name of A and a commodity B to be purchased, taking any item data as a commodity B as an example, and taking the probability predicted value of any user data for any item data as the probability predicted value of the commodity B purchased by the user A.
According to an exemplary embodiment of the present disclosure, the recommendation method may recommend appropriate item data to each user data with reference to the preference score prediction value of each item data based on each user data. In this way, in step 203, at least one item data may be acquired for each user data as recommended item data based on the preference score prediction value of each user data to each item data in the data to be recommended.
According to the exemplary embodiment of the disclosure, preference score predicted values of any user data in the data to be recommended to each item data can be arranged in descending order according to the size; acquiring item data corresponding to the preference score predicted values sorted from the first place to the Nth place as recommended item data, or acquiring item data corresponding to the preference score predicted values larger than a preset threshold value as recommended item data, wherein N is a preset integer larger than or equal to 1.
According to the exemplary embodiment of the present disclosure, taking any user data as a user with a user name of C who wants to watch a movie, and taking item data as movie D, movie E, movie F, movie G, movie H, movie I and movie J as examples, the preference score prediction values of C for movie D, movie E, movie F, movie G, movie H, movie I and movie J are 0.61, 0.82, 0.93, 0.45, 0.56, 0.77 and 0.95, respectively, obtained by a trained recommendation model, then the preference score prediction values of C for each item data are arranged in descending order of size to obtain: and if the preset threshold is 0.6, recommending the movie J, the movie F, the movie E, the movie I and the movie D to C.
It can be seen that the recommendation method in the exemplary embodiment of the present disclosure is obtained based on the prediction result of the trained recommendation model, and the training process of the recommendation model will be further described below. Fig. 3 is a flowchart illustrating a training method of a recommendation model according to an exemplary embodiment of the present disclosure.
Referring to fig. 3, the recommendation model includes a spatial transformation network and a preference calculation network. In step 301, a user data set may be obtained based on a user historical knowledge-graph and a project data set may be obtained based on a project historical knowledge-graph, wherein the user data set includes at least one user data and the project data set includes at least one project data.
According to an example embodiment of the present disclosure, at least one historical interaction data between a plurality of user data and a plurality of project data may be included, but not limited to, in each of the user historical knowledge-graph and the project historical knowledge-graph.
In step 302, first-type triples may be obtained based on the user data set and the item data set, where each first-type triplet includes a head entity, a relationship, and a tail entity, the head entity and the tail entity of each first-type triplet are both the user data in the user data set or the item data in the item data set, and the head entity and the tail entity of each first-type triplet express correct knowledge through the relationship.
According to an exemplary embodiment of the present disclosure, the fact that the head entity and the tail entity of each first-type triplet express correct knowledge through a relationship may mean that, for any first-type triplet, knowledge conveyed by the head entity and the tail entity based on the relationship may be acquired from a user historical knowledge graph and/or a project historical knowledge graph.
In step 303, the tail entities in the first type of triple may be randomly replaced to obtain a second type of triple, where the second type of triple corresponds to the first type of triple one to one.
According to an exemplary embodiment of the present disclosure, the number of the first type triples and the number of the second type triples may be equal. The second type of triplet, which is a corrupted triplet with respect to the first type of triplet, expresses less than correct knowledge.
In step 304, the first type of triplet and the second type of triplet may be input into a space transformation network, and an embedded representation of the first type of triplet is obtained, where the space transformation network is a TransR model.
According to an exemplary embodiment of the present disclosure, the acquired first-type triples may be used as positive samples of a spatial conversion network training process, and the acquired second-type triples may be used as negative samples of the spatial conversion network training process.
According to an exemplary embodiment of the present disclosure, a first type of triplet and a second type of triplet may be input into a spatial transformation network, a transformation matrix of a relationship in the first type of triplet is obtained, and an embedded representation of the first type of triplet is obtained based on the transformation matrix of the relationship in the first type of triplet, where the transformation matrix is used to project a space of a head entity or a space of a tail entity to a relationship space.
According to an exemplary embodiment of the present disclosure, the embedded representation of the first type of triplet is represented by formula (1):
wherein the content of the first and second substances,
wherein the content of the first and second substances,computing network for preferenceThe embedded representation of any triplet of the first type in the layer propagation,representing the header entity as user data or the header entity as project data,projecting both the space of the head entity and the space of the tail entity into any first-type triplet of the relationship space for any layer propagation of the preference computation network,any first type three in any layer propagation for preference computation networkA transformation matrix of the relations in the tuples,for any first type of triplet in any layer propagation of the preference computation network,is composed ofThe head entity of (a) is,is composed ofIn the context of (a) or (b),is composed ofThe tail entity of (a) the tail entity,for the user historical knowledge graph and the project historical knowledge graph,computing network for preferenceA recursively defined representation of triples of the first type in the layer propagation,the total number of layers of the network is calculated for the preference. It should be noted that, in the following description,along user history knowledge graph and project history for entity sequence without damaging user data and project dataThe knowledge-graph connection propagates to extend the recursively defined representation of the first type of triples with their potential vector representation.,To be composed ofProjecting entities in a dimensional entity space toA space of dimensional relationship space.
In step 305, a loss function of the space transformation network may be obtained based on the first type of triplet and the second type of triplet.
According to an exemplary embodiment of the present disclosure, the loss function of the spatial transformation network is represented by equation (3):
wherein the content of the first and second substances,
wherein the content of the first and second substances,
wherein the content of the first and second substances,as a function of the loss of the spatial transform network,any first type of triplet propagated for any layer of a preference computation network,Is composed ofThe head entity of (a) is,is composed ofIn the context of (a) or (b),is composed ofThe tail entity of (a) the tail entity,representing the function of sigmoid and the function of,is prepared by reacting withThe corresponding triplets of the second type are,is composed ofThe tail entity of (a) the tail entity,is composed ofThe similarity score of (a) is calculated,is composed ofThe similarity score of (a) is calculated,a lower score of (a) represents a higher degree of similarity,for the user historical knowledge graph and the project historical knowledge graph,a transformation matrix for the relationships in any first-type triplet propagating in any layer of the network is computed for preference,is composed ofIs to be used to represent the embedded representation of,is composed ofIs to be used to represent the embedded representation of,is composed ofIs shown embedded.,,Is composed ofThe physical space is maintained in a dimensional manner,is composed ofA dimensional relationship space. Note that the similarity score is an energy score.
According to exemplary embodiments of the present disclosure, whenIs infinitely close to the tail entity, there is a relationship as shown in the following equation (6):
wherein the content of the first and second substances,is composed ofThe representation of the projection in the relationship space,is composed ofA projected representation in a relationship space.
In step 306, the head entity, the relationship, and the tail entity of the first type of triple may be randomly replaced, and a third type of triple is obtained, where the third type of triple corresponds to the first type of triple one to one.
According to an exemplary embodiment of the present disclosure, the number of the first type triples and the number of the third type triples may be equal. The third type of triplet, which is a corrupted triplet with respect to the first type of triplet, expresses no correct knowledge.
At step 307, an embedded representation of a third type of triplet may be obtained based on the embedded representation of the first type of triplet.
According to the exemplary embodiment of the present disclosure, the embedded representation of the first-type triplet is also represented in the form of a triplet, and the embedded representation of the third-type triplet is obtained by correspondingly replacing the embedded representation of the first-type triplet with the replacement relationship in step 306.
At step 308, the embedded representations of the first type of triples and the embedded representations of the third type of triples may be input to a preference computation network, the attention representations of the user data and the item data in the embedded representations of the first type of triples are obtained and aggregated layer by layer, the model representations of the user data and the model representations of the item data are obtained, and a predicted preference score value for each item data by each user data is obtained based on the model representations of the user data and the model representations of the item data, wherein the attention representation of the user data is a representation obtained by the head entity based on attention weights for the first type of triples of the user data, the attention representation of the item data is a representation obtained by the head entity based on attention weights for the first type of triples of the item data, the preference computation network is a graph neural network model based on attention weights comprising multi-layer propagation, propagating in each layer of the preference computation network with embedded representations of all triples of the first type and all triples of the third type.
According to an exemplary embodiment of the present disclosure, the first type of triples may be used as a positive sample of the preference calculation network training process, and the third type of triples may be used as a negative sample of the preference calculation network training process.
According to an exemplary embodiment of the present disclosure, the attention representation of the user data and the item data in the embedded representation of the triples of the first type may be obtained in the following manner.
First, the attention weight of the first type of triple in any layer propagation of the preference calculation network can be obtained based on the ReLU function and the Sigmoid function, wherein the attention weight is the weight of the head entity to the tail entity.
The attention weight of the first type of triplet in any layer propagation of the preference computation network can be represented by equation (7):
wherein the content of the first and second substances,
wherein the content of the first and second substances,computing the middle of any layer propagation of the network for preferencesThe first type of triplet being based onThe attention weight of (a) is given,computing the middle of any layer propagation of the network for preferencesThe embedded representation of the head entity of a first type of triplet in the relationship space,computing the middle of any layer propagation of the network for preferencesThe relationship of the first-type triplets,representing the function of sigmoid and the function of,represents the function of the ReLU, and represents,、、、、andare all parameters in the set of parameters of the recommendation model. The ReLU function is a non-linear activation function, | is a stitching operation,andare all the parameters of the weight matrix,、andare all deviation parameters.
Then, the attention weight of the first type of triple in any layer propagation of the preference calculation network can be normalized through a softmax function, and the attention weight of the first type of triple of which the head entity is the user data and the item data in any layer propagation of the normalized preference calculation network is obtained.
The attention weight of the first type triples of user data and item data of the head entity in any layer of the propagated preference calculation network after normalization can be obtained by the following formula (9):
wherein the content of the first and second substances,computing the first in any layer propagation of the network for the normalized preferencesThe first type of triplet being based onThe attention weight of (a) is given,computing the middle of any layer propagation of the network for preferencesBases of triplets of the third typeThe attention weight of (a) is given,computing the middle of any layer propagation of the network for preferencesThe embedded representation of the head entity of the third type of triplet in the relationship space,for any of the triples of the third type,is composed ofThe head entity of (a) is,is composed ofIn the context of (a) or (b),is composed ofThe tail entity of (a) the tail entity,computing the middle of any layer propagation of the network for preferencesA third type of triple relationship.
Next, an attention representation of the first type of triplet of user data and item data as the head entity in any layer of propagation of the preference calculation network may be obtained based on the normalized attention weights of the first type of triplet of user data and item data as the head entity in any layer of propagation of the preference calculation network.
The attention representation of the first type of triples of user data and item data for the head entity in any layer of propagation of the preference computation network can be represented by equation (10):
wherein the content of the first and second substances,
wherein the content of the first and second substances,computing network for preferenceThe head entity in the layer propagation is an attention representation of a first type of triplet of user data or item data,computing network for preferenceThe header entity in the layer propagation is the total number of triples of the first type of user data or item data,computing network for preferenceThe embedded representation of any triplet of the first type in the layer propagation,computing the middle of any layer propagation of the network for preferencesThe individual head entity being user data orThe individual head entity is an attention representation of a first type of triplet of item data,computing the middle of any layer propagation of the network for preferencesThe attention representation of the first-type triplet,computing the middle of any layer propagation of the network for preferencesAnd (3) the embedded representation of the tail entity of the first type of triple in the relation space.
Finally, the attention representation of the first type of triples of user data and item data that prefer the head entity to propagate through all layers of the computing network may be integrated into a set by the following equation (12):
wherein the content of the first and second substances,
wherein the content of the first and second substances,an attention representation set of triples of a first type for user data for a head entity,computing network for preferenceThe header entity in the layer is an attention representation of a first type of triplet of user data,computing network for preferenceThe header entity in the layer is an attention representation of a first type of triplet of user data,computing network for preferenceThe header entity in the layer is an attention representation of a first type of triplet of user data,an attention representation set of first type triples for item data for the head entity,for an initial attention representation of all head entities as first-type triples of item data,computing network for preferenceThe head entity in the layer is an attention representation of a first type of triplet of item data,computing network for preferenceThe head entity in the layer is an attention representation of a first type of triplet of item data,computing network for preferenceThe head entity in the layer is an attention representation of a first type of triplet of item data,encoding a set for item data in an item historical knowledge graph,the representative head entity is the item data,representing the head entity as user data.The project data encoding method is characterized by comprising entities corresponding to project data in project data encoding sets in a project historical knowledge graph.
According to an exemplary embodiment of the present disclosure, layer-by-layer aggregation may be performed on attention representations of user data and project data based on the following equation (14) by means of a stitching aggregation:
wherein the content of the first and second substances,for attention representation of user data or project data after layer-by-layer aggregation,representing the header entity as user data or the header entity as project data,computing network for preferenceIn layer propagationThe individual head entity being user data orThe individual head entity is an attention representation of a first type of triplet of item data,computing network for preferenceIn layer propagationThe individual head entity being user data orThe individual head entity is an attention representation of a first type of triplet of item data,computing network for preferenceIn layer propagationThe individual head entity being user data orThe head entity is an attention representation of the first type of triplet of item data, | is a stitching operation,andare all parameters in the set of parameters of the recommendation model.
According to an exemplary embodiment of the present disclosure, the preference score prediction value of each user data for each item data may be obtained by the following equation (15):
wherein the content of the first and second substances,a preference score prediction value for any user data to any item data,for the model representation of any user data after the migration,for the model representation of any of the user data,the model representation of any item data is acquired based on the attention representation of the user data after layer-by-layer aggregation, and the model representation of any item data is acquired based on the attention representation of the item data after layer-by-layer aggregation.Representative pairAndand (6) solving the inner product.
Returning to fig. 3, in step 309, a loss function of the preference calculation network may be obtained according to the preference score prediction value of each user data for each item data.
According to an exemplary embodiment of the present disclosure, the preference calculates a loss function of the network, represented as equation (16):
wherein the content of the first and second substances,a loss function of the network is calculated for the preference,the representative head entity is the item data,the representative header entity is the user data,in the case of the first type of triplet,presentation pairThe cross-entropy loss is solved,the actual value of the preference score for any user data versus any item data,a preference score prediction value for any user data to any item data,is a third type of triplet.
In step 310, a loss function of the recommendation model may be obtained based on the loss function of the spatial transformation network and the loss function of the preference computation network.
According to an exemplary embodiment of the present disclosure, a loss function of the model is recommended, represented as equation (17):
wherein the content of the first and second substances,
wherein the content of the first and second substances,for the loss function of the proposed model,as a function of the loss of the spatial transform network,a loss function of the network is calculated for the preference,in order to be able to adjust the parameters,in order to recommend a set of parameters for the model,is an embedded set of representations of the head and tail entities of a first type of triplet,an embedded representation set of relationships for a first type of triplet,are all made ofThe parameter (1).Can beAn adjusted regularization function.
In step 311, the recommendation model may be trained by adjusting the set of parameters of the recommendation model according to the loss function of the recommendation model.
Specifically, the parameter set of the recommended model may be adjusted according to a loss function of the recommended model, and when a value of the loss function of the recommended model is smaller than a specific threshold, the training of the recommended model is ended.
Fig. 4 is a diagram illustrating a click rate prediction result based on a last. Fig. 5 is a diagram illustrating click-through rate prediction results based on a Book-Crossing dataset in a CTR scenario according to an exemplary embodiment of the present disclosure. Fig. 6 is a diagram illustrating a click-through rate prediction result based on a Dianping-Food data set in a CTR scenario according to an exemplary embodiment of the present disclosure.
Referring to fig. 4, 5 and 6, the CTR scenario is a click-through rate prediction scenario, i.e. predicting the probability of each data interaction in a data set. model is a model, AUC and F1 are indexes, and BPRMF, CKE, RippleNet, KGCN-LS, KGAT, CKAN and CKGAN are recommendation models. The AUC values of CKGAN in the exemplary embodiments of the present disclosure reached 0.8473 on the last. fm dataset, 0.7538 on the Book-cross dataset, and 0.8783 on the Dianping-Food dataset.
Fig. 7 is a diagram illustrating a variation of a Recall @ K value based on a last. FIG. 8 is a diagram illustrating a change in the Recall @ K value based on the Book-crosslinking dataset in a top-K recommendation scenario according to an illustrative embodiment of the present disclosure. FIG. 9 is a diagram illustrating the variation of the Recall @ K value based on the Dianning-Food dataset in the top-K recommendation scenario according to an exemplary embodiment of the present disclosure. Wherein the Recall @ K value is the Recall rate.
Referring to fig. 7, 8 and 9, the top-K recommendation scenario is a scenario in which K most likely interactive items are recommended for each user in the dataset. CKGAN is preferred over CKAN in exemplary embodiments of the present disclosure.
Fig. 10 is a block diagram illustrating a recommendation device according to an exemplary embodiment of the present disclosure. Referring to fig. 10, the recommendation apparatus 1000 includes a data acquisition unit 1001, a model prediction unit 1002, and an item recommendation unit 1003, in which:
the data acquisition unit 1001 is configured to: acquiring data to be recommended, wherein the data to be recommended comprises a plurality of user data, a plurality of item data, a user knowledge graph constructed based on the user data and an item knowledge graph constructed based on the item data.
According to an exemplary embodiment of the present disclosure, the data to be recommended may further include, but is not limited to, at least one historical interaction data between a plurality of user data and a plurality of item data.
The model prediction unit 1002 is configured to: and inputting the data to be recommended into the trained recommendation model to obtain the preference score predicted value of each item data by each user data in the data to be recommended.
The recommendation model may include a spatial transformation network, which may be a TransR model, and a preference computation network, which may be an attention-based graph neural network model that includes multi-layer propagation, according to example embodiments of the present disclosure. The trained recommendation model in the examples of the present disclosure is obtained by training a TransR model together with an attention-based graph neural network model including multi-layer propagation through training samples.
According to an exemplary embodiment of the present disclosure, the preference score prediction value of each user data for each item data may be an acquisition probability prediction value of each user data for each item data. Taking any user data as a user with a user name of A and a commodity B to be purchased, taking any item data as a commodity B as an example, and taking the probability predicted value of any user data for any item data as the probability predicted value of the commodity B purchased by the user A.
The item recommendation unit 1003 is configured to: and acquiring at least one item data for each user data as recommended item data based on the preference score predicted value of each item data by each user data in the data to be recommended.
According to an exemplary embodiment of the present disclosure, the item recommendation unit 1003 may arrange preference score prediction values of any user data in the data to be recommended for each item data in descending order of magnitude; acquiring item data corresponding to the preference score predicted values sorted from the first place to the Nth place as recommended item data, or acquiring item data corresponding to the preference score predicted values larger than a preset threshold value as recommended item data, wherein N is a preset integer larger than or equal to 1.
According to the exemplary embodiment of the present disclosure, taking any user data as a user with a user name of C who wants to watch a movie, and taking item data as movie D, movie E, movie F, movie G, movie H, movie I and movie J as examples, the preference score prediction values of C for movie D, movie E, movie F, movie G, movie H, movie I and movie J are 0.61, 0.82, 0.93, 0.45, 0.56, 0.77 and 0.95, respectively, obtained by a trained recommendation model, then the preference score prediction values of C for each item data are arranged in descending order of size to obtain: and if the preset threshold is 0.6, recommending the movie J, the movie F, the movie E, the movie I and the movie D to C.
Fig. 11 is a block diagram illustrating a training apparatus of a recommendation model according to an exemplary embodiment of the present disclosure. Referring to fig. 11, training apparatus 1100 includes a dataset acquisition unit 1101, a triplet acquisition unit 1102, a first replacement unit 1103, a spatial conversion network input unit 1104, a first loss function acquisition unit 1105, a second replacement unit 1106, an embedded representation acquisition unit 1107, a preference calculation network input unit 1108, a second loss function acquisition unit 1109, a model loss function acquisition unit 1110, and a parameter adjustment unit 1111.
The data set acquisition unit 1101 is configured to: a user data set is obtained based on the user historical knowledge-graph, and a project data set is obtained based on the project historical knowledge-graph, wherein the user data set comprises at least one user data and the project data set comprises at least one project data.
According to an example embodiment of the present disclosure, at least one historical interaction data between a plurality of user data and a plurality of project data may be included, but not limited to, in each of the user historical knowledge-graph and the project historical knowledge-graph.
The triplet acquisition unit 1102 is configured to: the method comprises the steps of obtaining first-class triples based on a user data set and a project data set, wherein each first-class triplet comprises a head entity, a relation and a tail entity, the head entity and the tail entity of each first-class triplet are user data in the user data set or project data in the project data set, and the head entity and the tail entity of each first-class triplet express correct knowledge through the relation.
According to an exemplary embodiment of the present disclosure, the fact that the head entity and the tail entity of each first-type triplet express correct knowledge through a relationship may mean that, for any first-type triplet, knowledge conveyed by the head entity and the tail entity based on the relationship may be acquired from a user historical knowledge graph and/or a project historical knowledge graph.
The first replacement unit 1103 is configured to: and randomly replacing tail entities in the first type of triples to obtain second type of triples, wherein the second type of triples correspond to the first type of triples one by one.
According to an exemplary embodiment of the present disclosure, the number of the first type triples and the number of the second type triples may be equal. The second type of triplet, which is a corrupted triplet with respect to the first type of triplet, expresses less than correct knowledge.
The space conversion network input unit 1104 is configured to: and inputting the first-class triples and the second-class triples into a space conversion network to obtain the embedded representation of the first-class triples, wherein the space conversion network is a TransR model.
According to an exemplary embodiment of the present disclosure, the acquired first-type triples may be used as positive samples of a spatial conversion network training process, and the acquired second-type triples may be used as negative samples of the spatial conversion network training process.
According to an example embodiment of the present disclosure, the space transformation network input unit 1104 may input the first type of triplet and the second type of triplet into the space transformation network, obtain a transformation matrix of the relationship in the first type of triplet, and obtain the embedded representation of the first type of triplet based on the transformation matrix of the relationship in the first type of triplet, where the transformation matrix is used to project the space of the head entity or the space of the tail entity to the relationship space.
According to an exemplary embodiment of the present disclosure, the embedded representation of the first type of triplet is represented by equation (1) above.
The first loss function acquisition unit 1105 is configured to: and acquiring a loss function of the space transformation network based on the first type of triple and the second type of triple.
According to an exemplary embodiment of the present disclosure, the loss function of the spatial transformation network is represented by equation (3) above.
According to exemplary embodiments of the present disclosure, whenThe sum of the head entity and the relationship of (a) is infinitely close to the tail entity, there is a relationship as shown in the above equation (6).
The second replacement unit 1106 is configured to: and randomly replacing the head entity, the relation and the tail entity of the first type of triple to obtain a third type of triple, wherein the third type of triple corresponds to the first type of triple one to one.
According to an exemplary embodiment of the present disclosure, the number of the first type triples and the number of the third type triples may be equal. The third type of triplet, which is a corrupted triplet with respect to the first type of triplet, expresses no correct knowledge.
The embedded representation acquisition unit 1107 is configured to: based on the embedded representation of the first type of triple, an embedded representation of a third type of triple is obtained.
According to the exemplary embodiment of the present disclosure, the embedded representation of the first-type triplet is also represented in the form of a triplet, and the embedded representation of the third-type triplet is obtained by correspondingly replacing the embedded representation of the first-type triplet with the replacement relationship in step 306.
The preference calculation network input unit 1108 is configured to: inputting the embedded representation of the first type of triples and the embedded representation of the third type of triples into a preference calculation network, acquiring attention representations of user data and item data in the embedded representation of the first type of triples, and aggregating the attention representations of the user data and the item data layer by layer, acquiring a model representation of the user data and a model representation of the item data, and acquiring a preference score prediction value of each item data for each item data based on the model representation of the user data and the model representation of the item data, wherein the attention representation of the user data is a representation acquired by a head entity based on attention weights for the first type of triples of the user data, the attention representation of the item data is a representation acquired by the head entity based on attention weights for the first type of triples of the item data, the preference calculation network is a graph neural network model based on attention mechanisms and containing multi-layer propagation, propagating in each layer of the preference computation network with embedded representations of all triples of the first type and all triples of the third type.
According to an exemplary embodiment of the present disclosure, the first type of triples may be used as a positive sample of the preference calculation network training process, and the third type of triples may be used as a negative sample of the preference calculation network training process.
According to an example embodiment of the present disclosure, the preference calculation network input unit 1108 may obtain the attention representation of the user data and the item data in the embedded representation of the first type of triple by:
the attention weight of the first type of triplet in the propagation of any layer of the preference calculation network can be obtained based on the ReLU function and the Sigmoid function, wherein the attention weight is the weight of the head entity to the tail entity, and the attention weight of the first type of triplet in the propagation of any layer of the preference calculation network can be expressed as the above formula (7);
the attention weight of the first-type triple in any layer propagation of the preference calculation network can be normalized through a softmax function, the attention weight of the first-type triple of the user data and the item data, which is the head entity in any layer propagation of the normalized preference calculation network, is obtained, and the attention weight of the first-type triple of the user data and the item data, which is the head entity in any layer propagation of the normalized preference calculation network, can be obtained through the formula (9);
obtaining an attention representation of the first-class triples of the user data and the item data of the head entity in any layer of propagation of the preference calculation network based on the normalized attention weight of the first-class triples of the user data and the item data of the head entity in any layer of propagation of the preference calculation network, wherein the attention representation of the first-class triples of the user data and the item data of the head entity in any layer of propagation of the preference calculation network can be represented by the formula (10);
the attention representation of the first-type triples of user data and item data for the head entity in all layer propagations of the preference computation network can be integrated into a collection by equation (12) above.
According to an exemplary embodiment of the present disclosure, the preference calculation network input unit 1108 may perform layer-by-layer aggregation of the attention representations of the user data and the item data based on equation (14) above by way of a stitching aggregation.
According to an exemplary embodiment of the present disclosure, the preference calculation network input unit 1108 may obtain a preference score prediction value of each user data to each item data through equation (15) above.
The second loss function acquisition unit 1109 is configured to: and obtaining a loss function of the preference calculation network according to the preference score predicted value of each item of data of each user.
According to an exemplary embodiment of the present disclosure, the preference calculation network's loss function, which may be represented as equation (16) above.
The model loss function acquisition unit 1110 is configured to: and obtaining the loss function of the recommendation model based on the loss function of the space transformation network and the loss function of the preference calculation network.
According to an exemplary embodiment of the present disclosure, the loss function of the recommendation model may be represented as equation (17) above.
The parameter adjustment unit 1111 is configured to: and training the recommendation model by adjusting the parameter set of the recommendation model according to the loss function of the recommendation model.
Specifically, the parameter set of the recommended model may be adjusted according to a loss function of the recommended model, and when a value of the loss function of the recommended model is smaller than a specific threshold, the training of the recommended model is ended.
Fig. 12 is a block diagram illustrating an electronic device 1200 according to an example embodiment of the present disclosure.
Referring to fig. 12, an electronic device 1200 includes at least one memory 1201 and at least one processor 1202, the at least one memory 1201 having stored therein a set of computer-executable instructions that, when executed by the at least one processor 1202, perform a recommendation method in accordance with an example embodiment of the present disclosure.
By way of example, the electronic device 1200 may be a PC computer, tablet device, personal digital assistant, smartphone, or other device capable of executing the set of instructions described above. Here, the electronic device 1200 need not be a single electronic device, but can be any collection of devices or circuits that can execute the above instructions (or sets of instructions) individually or in combination. The electronic device 1200 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces with local or remote (e.g., via wireless transmission).
In the electronic device 1200, the processor 1202 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a programmable logic device, a special purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processors may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
The processor 1202 may execute instructions or code stored in the memory 1201, where the memory 1201 may also store data. The instructions and data may also be transmitted or received over a network via a network interface device, which may employ any known transmission protocol.
The memory 1201 may be integrated with the processor 1202, for example, by having RAM or flash memory disposed within an integrated circuit microprocessor or the like. Further, memory 1201 may include a stand-alone device, such as an external disk drive, storage array, or any other storage device usable by a database system. The memory 1201 and the processor 1202 may be operatively coupled or may communicate with each other, e.g., through I/O ports, network connections, etc., such that the processor 1202 is able to read files stored in the memory.
In addition, the electronic device 1200 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the electronic device 1200 may be connected to each other via a bus and/or a network.
According to an exemplary embodiment of the present disclosure, there may also be provided a computer-readable storage medium storing instructions, which when executed by at least one processor, cause the at least one processor to perform a recommendation method according to the present disclosure. Examples of the computer-readable storage medium herein include: read-only memory (ROM), random-access programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD + R, CD-RW, CD + RW, DVD-ROM, DVD-R, DVD + R, DVD-RW, DVD + RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or compact disc memory, Hard Disk Drive (HDD), solid-state drive (SSD), card-type memory (such as a multimedia card, a Secure Digital (SD) card or a extreme digital (XD) card), magnetic tape, a floppy disk, a magneto-optical data storage device, an optical data storage device, a hard disk, a magnetic tape, a magneto-optical data storage device, a hard disk, a magnetic tape, a magnetic data storage device, a magnetic tape, a magnetic data storage device, a magnetic tape, a magnetic data storage device, a magnetic tape, a magnetic data storage device, a magnetic tape, a magnetic data storage device, A solid state disk, and any other device configured to store and provide a computer program and any associated data, data files, and data structures to a processor or computer in a non-transitory manner such that the processor or computer can execute the computer program. The computer program in the computer-readable storage medium described above can be run in an environment deployed in a computer apparatus, such as a client, a host, a proxy device, a server, and the like, and further, in one example, the computer program and any associated data, data files, and data structures are distributed across a networked computer system such that the computer program and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
According to an exemplary embodiment of the present disclosure, a computer program product may also be provided, in which instructions are executable by a processor of a computer device to perform a recommendation method according to an exemplary embodiment of the present disclosure.
According to the recommendation method and the recommendation device, the TransR model is adopted as the space conversion network in the recommendation model, the correct knowledge expressed in the form of the triples is effectively expressed, a more efficient fusion mode of the collaborative information and the knowledge transmission is realized, and the transmission of the embedded expression of the first type of triples in the preference calculation network is promoted. The trained TransR model can perform space conversion on the entities, can effectively process complex relationships among the entities, measures the correlation of knowledge, and further effectively describes the relationships of the entities. The loss function of the space transformation network considers the correlation relationship between the first-class triples and the second-class triples, the loss of pairwise sequencing can be reduced on the granularity of the first-class triples, the knowledge representation capability of the recommendation model is improved, and the potential relationship between entities in the knowledge map is enhanced. Compared with CKAN, the influence of the distance of different semantic spaces on the knowledge representation can be reduced.
In addition, according to the recommendation method and device disclosed by the invention, the user data set is obtained based on the historical knowledge map of the user, the project data set is obtained based on the historical knowledge map of the project, and then the first type of triple including the head entity, the relation and the tail entity is obtained, so that the representation capability of knowledge can be improved.
In addition, according to the recommendation method and apparatus of the present disclosure, based on the embedded representation obtained by the space transformation network, the attention weight is obtained in the preference calculation network, and is more differentiated than the attention weight obtained by the CKAN.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (16)
1. A recommendation method, comprising:
acquiring data to be recommended, wherein the data to be recommended comprises a plurality of user data, a plurality of item data, a user knowledge graph constructed based on the plurality of user data and an item knowledge graph constructed based on the plurality of item data;
inputting the data to be recommended into a trained recommendation model to obtain a preference score predicted value of each item data by each user data in the data to be recommended;
and acquiring at least one item data for each user data as recommended item data based on the preference score predicted value of each item data by each user data in the data to be recommended.
2. The recommendation method of claim 1, wherein the recommendation model comprises a spatial transformation network and a preference computation network, the recommendation model being trained by:
acquiring a user data set based on a user historical knowledge graph and acquiring a project data set based on a project historical knowledge graph, wherein the user data set comprises at least one piece of user data, and the project data set comprises at least one piece of project data;
acquiring first-class triples based on the user data set and the project data set, wherein each first-class triplet comprises a head entity, a relation and a tail entity, the head entity and the tail entity of each first-class triplet are user data in the user data set or project data in the project data set, and the head entity and the tail entity of each first-class triplet express correct knowledge through the relation;
randomly replacing tail entities in the first type of triples to obtain second type of triples, wherein the second type of triples correspond to the first type of triples one by one;
inputting a first type of triple and a second type of triple into the space conversion network to obtain an embedded representation of the first type of triple, wherein the space conversion network is a TransR model;
acquiring a loss function of the space transformation network based on the first type of triple and the second type of triple;
randomly replacing a head entity, a relation and a tail entity of the first type of triple to obtain a third type of triple, wherein the third type of triple corresponds to the first type of triple one to one;
acquiring an embedded representation of a third type of triple based on the embedded representation of the first type of triple;
inputting the embedded representation of the first type of triples and the embedded representation of the third type of triples into the preference calculation network, obtaining an attention representation of the user data and the item data in the embedded representation of the first type of triples, and aggregating the attention representations of the user data and the item data layer by layer, obtaining a model representation of the user data and a model representation of the item data, and obtaining a preference score prediction value of each item data for each item data based on the model representation of the user data and the model representation of the item data, wherein the attention representation of the user data is a representation obtained by a head entity for the first type of triples of the user data based on attention weights, and the attention representation of the item data is a representation obtained by the head entity for the first type of triples of the item data based on attention weights, and the preference calculation network is a graph neural network model based on attention mechanisms and comprising multi-layer propagation, propagating in each layer of the preference computation network with the embedded representations of all triples of the first type and all triples of the third type;
obtaining a loss function of the preference calculation network according to the preference score predicted value of each item of data of each user;
obtaining a loss function of the recommendation model based on the loss function of the spatial transformation network and the loss function of the preference calculation network;
and training the recommendation model by adjusting the parameter set of the recommendation model according to the loss function of the recommendation model.
3. The recommendation method of claim 2, wherein said inputting a first type of triplet and a second type of triplet into said spatial transformation network to obtain an embedded representation of the first type of triplet comprises:
inputting the first-type triples and the second-type triples into the space conversion network, obtaining a transformation matrix of the relationship in the first-type triples, and obtaining the embedded representation of the first-type triples based on the transformation matrix of the relationship in the first-type triples, wherein the transformation matrix is used for projecting the space of the head entity or the space of the tail entity to the relationship space.
4. A recommendation method according to claim 3, wherein the embedded representation of the first type of triplet is represented as:
wherein the content of the first and second substances,
wherein the content of the first and second substances,computing a network's second for the preferenceThe embedded representation of any triplet of the first type in the layer propagation,representing the header entity as user data or the header entity as project data,projecting both the space of the head entity and the space of the tail entity into any first-type triplet of the relationship space for any layer propagation of the preference computation network,a transformation matrix for the relationships in any first-type triplet propagated for any layer of the preference computation network,any first type of triplet propagated for any layer of the preference calculation network,is composed ofThe head entity of (a) is,is composed ofIn the context of (a) or (b),is composed ofThe tail entity of (a) the tail entity,for the user historical knowledge graph and the project historical knowledge graph,Computing a network's second for the preferenceA recursively defined representation of triples of the first type in the layer propagation,calculating a total number of layers for the network for the preference.
5. The recommendation method of claim 4, wherein the loss function of the spatial transformation network is expressed as:
wherein the content of the first and second substances,
wherein the content of the first and second substances,
wherein the content of the first and second substances,as a function of the loss of the spatial transformation network,any first type of triplet propagated for any layer of the preference calculation network,is composed ofThe head entity of (a) is,is composed ofIn the context of (a) or (b),is composed ofThe tail entity of (a) the tail entity,representing the function of sigmoid and the function of,is prepared by reacting withThe corresponding triplets of the second type are,is composed ofThe tail entity of (a) the tail entity,is composed ofThe similarity score of (a) is calculated,is composed ofThe similarity score of (a) is calculated,for the user historical knowledge-graph and the project historical knowledge-graph,a transformation matrix for the relationships in any first-type triplet propagated for any layer of the preference computation network,is composed ofIs to be used to represent the embedded representation of,is composed ofIs to be used to represent the embedded representation of,is composed ofIs shown embedded.
6. The recommendation method of claim 5, wherein said inputting the embedded representation of the first type of triplet and the embedded representation of the third type of triplet into the preference calculation network to obtain an attention representation of the user data and the item data in the embedded representation of the first type of triplet comprises:
acquiring attention weight of a first type of triple in any layer propagation of the preference calculation network based on a ReLU function and a Sigmoid function, wherein the attention weight is the weight of a head entity to a tail entity;
normalizing the attention weight of the first type of triple in any layer propagation of the preference calculation network through a softmax function, and acquiring the normalized attention weight of the first type of triple of which the head entity is user data and item data in any layer propagation of the preference calculation network;
acquiring attention representation of the first type of triple of the user data and the item data of the head entity in any layer of propagation of the preference calculation network based on the normalized attention weight of the first type of triple of the user data and the item data of the head entity in any layer of propagation of the preference calculation network;
integrating into a set the attention representations of the head entities as the first type of triples of user data and item data in all layer propagations of the preference computation network.
7. The recommendation method of claim 6, wherein the attention weight of the first type of triplet in any layer propagation of the preference computation network is expressed as:
wherein the content of the first and second substances,
wherein the content of the first and second substances,computing a first in any layer propagation of a network for the preferenceThe first type of triplet being based onThe attention weight of (a) is given,computing a first in any layer propagation of a network for the preferenceThe embedded representation of the head entity of a first type of triplet in the relationship space,computing a first in any layer propagation of a network for the preferenceThe relationship of the first-type triplets,representing the function of sigmoid and the function of,represents the function of the ReLU, and represents,、、、、andall parameters in the parameter group of the recommendation model;
acquiring the normalized attention weight of the first-class triples of the head entity, which are user data and item data, in any layer propagation of the preference calculation network by the following formula based on the attention weight of the first-class triples in any layer propagation of the preference calculation network:
wherein the content of the first and second substances,computing a first in any layer propagation of the network for the normalized preferenceThe first type of triplet being based onThe attention weight of (a) is given,computing a first in any layer propagation of a network for the preferenceBases of triplets of the third typeThe attention weight of (a) is given,computing a first in any layer propagation of a network for the preferenceThe embedded representation of the head entity of the third type of triplet in the relationship space,for any of the triples of the third type,is composed ofThe head entity of (a) is,is composed ofIn the context of (a) or (b),is composed ofThe tail entity of (a) the tail entity,computing a first in any layer propagation of a network for the preferenceA third type of triple relationship.
8. The recommendation method of claim 7, wherein propagating at any layer of the preference computation network an attention representation of a first type of triplet of head entities for user data and item data is represented as:
wherein the content of the first and second substances,
wherein the content of the first and second substances,computing a network's second for the preferenceThe head entity in the layer propagation is an attention representation of a first type of triplet of user data or item data,computing a network's second for the preferenceThe header entity in the layer propagation is the total number of triples of the first type of user data or item data,computing a network's second for the preferenceThe embedded representation of any triplet of the first type in the layer propagation,computing a first in any layer propagation of a network for the preferenceThe individual head entity being user data orThe individual head entity being project dataThe attention representation of the first type of triplet,computing a first in any layer propagation of a network for the preferenceThe attention representation of the first-type triplet,computing a first in any layer propagation of a network for the preferenceThe embedded representation of the tail entity of the first type of triple in the relation space;
integrating into a set, an attention representation of a first type of triplet of user data and item data for a head entity in all layer propagations of the preference computation network by:
wherein the content of the first and second substances,
wherein the content of the first and second substances,an attention representation set of triples of a first type for user data for a head entity,computing a network's second for the preferenceThe header entity in the layer is an attention representation of a first type of triplet of user data,computing a network's second for the preferenceThe header entity in the layer is an attention representation of a first type of triplet of user data,computing a network's second for the preferenceThe header entity in the layer is an attention representation of a first type of triplet of user data,an attention representation set of first type triples for item data for the head entity,for an initial attention representation of all head entities as first-type triples of item data,computing a network's second for the preferenceThe head entity in the layer is an attention representation of a first type of triplet of item data,computing a network's second for the preferenceThe head entity in the layer isAn attention representation of a first type of triplet of item data,computing a network's second for the preferenceThe head entity in the layer is an attention representation of a first type of triplet of item data,encoding a set for item data in the item historical knowledge graph,the representative head entity is the item data,representing the head entity as user data.
9. The recommendation method of claim 8, wherein the attention representations of the user data and the project data are aggregated layer by:
wherein the content of the first and second substances,for the attention representation of the layer-by-layer aggregated user data or project data,representing the header entity as user data or the header entity as project data,computing a network's second for the preferenceIn layer propagationThe individual head entity being user data orThe individual head entity is an attention representation of a first type of triplet of item data,computing a network's second for the preferenceIn layer propagationThe individual head entity being user data orThe individual head entity is an attention representation of a first type of triplet of item data,computing a network's second for the preferenceIn layer propagationThe individual head entity being user data orThe head entity being the attention of the first type of triple of the item dataForce represents, | is the splicing operation,andare all parameters in the set of parameters of the recommendation model.
10. The recommendation method according to claim 9, wherein the preference score prediction value of each user data for each item data is obtained by:
wherein the content of the first and second substances,a preference score prediction value for any user data to any item data,for the transformed model representation of said any user data,and the model representation of any item data is acquired based on the attention representation of the user data after layer-by-layer aggregation, and the model representation of any item data is acquired based on the attention representation of the item data after layer-by-layer aggregation.
11. The recommendation method of claim 10, wherein the preference calculates a loss function for the network expressed as:
wherein the content of the first and second substances,a loss function of the network is calculated for the preference,the representative head entity is the item data,the representative header entity is the user data,in the case of the first type of triplet,presentation pairThe cross-entropy loss is solved,a preference score true value for the any user data to the any item data,a preference score prediction value for the any user data for the any item data,is a third type of triplet.
12. The recommendation method of claim 11, wherein the penalty function of the recommendation model is expressed as:
wherein the content of the first and second substances,
wherein the content of the first and second substances,is a loss function of the recommendation model,as a function of the loss of the spatial transformation network,a loss function of the network is calculated for the preference,in order to be able to adjust the parameters,for the set of parameters of the recommendation model,is an embedded set of representations of the head and tail entities of a first type of triplet,an embedded representation set of relationships for a first type of triplet,are all made ofThe parameter (1).
13. The recommendation method according to claim 1, wherein the obtaining at least one item data for each user data as recommended item data based on a preference score prediction value of each user data to each item data in the data to be recommended comprises:
arranging preference score predicted values of any user data in the data to be recommended to each item data in a descending order according to the size;
acquiring item data corresponding to the preference score predicted values sorted from the first place to the Nth place as recommended item data, or acquiring item data corresponding to the preference score predicted values larger than a preset threshold value as recommended item data, wherein N is a preset integer larger than or equal to 1.
14. A recommendation device, comprising:
a data acquisition unit configured to: acquiring data to be recommended, wherein the data to be recommended comprises a plurality of user data, a plurality of item data, a user knowledge graph constructed based on the plurality of user data and an item knowledge graph constructed based on the plurality of item data;
a model prediction unit configured to: inputting the data to be recommended into a trained recommendation model to obtain a preference score predicted value of each item data by each user data in the data to be recommended;
an item recommendation unit configured to: and acquiring at least one item data for each user data as recommended item data based on the preference score predicted value of each item data by each user data in the data to be recommended.
15. An electronic device, comprising:
at least one processor;
at least one memory storing computer-executable instructions,
wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the recommendation method of any one of claims 1 to 13.
16. A computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform the recommendation method of any of claims 1 to 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111104093.9A CN113570058B (en) | 2021-09-22 | 2021-09-22 | Recommendation method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111104093.9A CN113570058B (en) | 2021-09-22 | 2021-09-22 | Recommendation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113570058A true CN113570058A (en) | 2021-10-29 |
CN113570058B CN113570058B (en) | 2022-01-28 |
Family
ID=78173879
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111104093.9A Active CN113570058B (en) | 2021-09-22 | 2021-09-22 | Recommendation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113570058B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114780867A (en) * | 2022-05-10 | 2022-07-22 | 杭州网易云音乐科技有限公司 | Recommendation method, medium, device and computing equipment |
CN115587875A (en) * | 2022-11-10 | 2023-01-10 | 广州科拓科技有限公司 | Textile e-commerce recommendation method and device based on balanced perception attention network |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111522962A (en) * | 2020-04-09 | 2020-08-11 | 苏州大学 | Sequence recommendation method and device and computer-readable storage medium |
CN112905900A (en) * | 2021-04-02 | 2021-06-04 | 辽宁工程技术大学 | Collaborative filtering recommendation algorithm based on graph convolution attention mechanism |
CN112989064A (en) * | 2021-03-16 | 2021-06-18 | 重庆理工大学 | Recommendation method for aggregating knowledge graph neural network and self-adaptive attention |
CN113010691A (en) * | 2021-03-30 | 2021-06-22 | 电子科技大学 | Knowledge graph inference relation prediction method based on graph neural network |
CN113032618A (en) * | 2021-03-26 | 2021-06-25 | 齐鲁工业大学 | Music recommendation method and system based on knowledge graph |
CA3106283A1 (en) * | 2020-01-21 | 2021-07-21 | Royal Bank Of Canada | System and method for out-of-sample representation learning |
WO2021179834A1 (en) * | 2020-03-10 | 2021-09-16 | 支付宝(杭州)信息技术有限公司 | Heterogeneous graph-based service processing method and device |
-
2021
- 2021-09-22 CN CN202111104093.9A patent/CN113570058B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA3106283A1 (en) * | 2020-01-21 | 2021-07-21 | Royal Bank Of Canada | System and method for out-of-sample representation learning |
WO2021179834A1 (en) * | 2020-03-10 | 2021-09-16 | 支付宝(杭州)信息技术有限公司 | Heterogeneous graph-based service processing method and device |
CN111522962A (en) * | 2020-04-09 | 2020-08-11 | 苏州大学 | Sequence recommendation method and device and computer-readable storage medium |
CN112989064A (en) * | 2021-03-16 | 2021-06-18 | 重庆理工大学 | Recommendation method for aggregating knowledge graph neural network and self-adaptive attention |
CN113032618A (en) * | 2021-03-26 | 2021-06-25 | 齐鲁工业大学 | Music recommendation method and system based on knowledge graph |
CN113010691A (en) * | 2021-03-30 | 2021-06-22 | 电子科技大学 | Knowledge graph inference relation prediction method based on graph neural network |
CN112905900A (en) * | 2021-04-02 | 2021-06-04 | 辽宁工程技术大学 | Collaborative filtering recommendation algorithm based on graph convolution attention mechanism |
Non-Patent Citations (1)
Title |
---|
DGL专栏: "深入理解图注意力机制", 《HTTPS://WWW.JIQIZHIXIN.COM/ARTICLES/2019-02-19-7》 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114780867A (en) * | 2022-05-10 | 2022-07-22 | 杭州网易云音乐科技有限公司 | Recommendation method, medium, device and computing equipment |
CN114780867B (en) * | 2022-05-10 | 2023-11-03 | 杭州网易云音乐科技有限公司 | Recommendation method, medium, device and computing equipment |
CN115587875A (en) * | 2022-11-10 | 2023-01-10 | 广州科拓科技有限公司 | Textile e-commerce recommendation method and device based on balanced perception attention network |
Also Published As
Publication number | Publication date |
---|---|
CN113570058B (en) | 2022-01-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11694122B2 (en) | Distributed machine learning systems, apparatus, and methods | |
US11710071B2 (en) | Data analysis and rendering | |
Emura et al. | A joint frailty-copula model between tumour progression and death for meta-analysis | |
JP6445055B2 (en) | Feature processing recipe for machine learning | |
JP6789934B2 (en) | Learning with transformed data | |
Awan et al. | Feature selection and transformation by machine learning reduce variable numbers and improve prediction for heart failure readmission or death | |
Häggström | Data‐driven confounder selection via Markov and Bayesian networks | |
CN113570058B (en) | Recommendation method and device | |
Weisberg et al. | Post hoc subgroups in clinical trials: Anathema or analytics? | |
US20220188654A1 (en) | System and method for clinical trial analysis and predictions using machine learning and edge computing | |
CN115885297A (en) | Differentiable user-item collaborative clustering | |
Martin et al. | Clinical prediction in defined populations: a simulation study investigating when and how to aggregate existing models | |
Shandilya et al. | Mature-food: Food recommender system for mandatory feature choices a system for enabling digital health | |
CN111047009B (en) | Event trigger probability prediction model training method and event trigger probability prediction method | |
Ge et al. | CausalMGM: an interactive web-based causal discovery tool | |
Zhao et al. | Comparing two machine learning approaches in predicting lupus hospitalization using longitudinal data | |
Mendelevitch et al. | Practical Data Science with Hadoop and Spark: Designing and Building Effective Analytics at Scale | |
CN115080856A (en) | Recommendation method and device and training method and device of recommendation model | |
US20220012236A1 (en) | Performing intelligent affinity-based field updates | |
US20210174912A1 (en) | Data processing systems and methods for repurposing drugs | |
CN113609311A (en) | Method and device for recommending items | |
Meng | Cross-domain information fusion and personalized recommendation in artificial intelligence recommendation system based on mathematical matrix decomposition | |
Inibhunu et al. | Fusing dimension reduction and classification for mining interesting frequent patterns in patients data | |
US20240071623A1 (en) | Patient health platform | |
Chen et al. | Traffic Flow Prediction Based on Interactive Dynamic Spatio-Temporal Graph Convolution with a Probabilistic Sparse Attention Mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |