CN113570058A - Recommendation method and device

Recommendation method and device

Info

Publication number
CN113570058A
CN113570058A
Authority
CN
China
Prior art keywords
type
data
network
preference
representation
Prior art date
Legal status
Granted
Application number
CN202111104093.9A
Other languages
Chinese (zh)
Other versions
CN113570058B (en)
Inventor
王潇茵
师博雅
杜红艳
张家华
郑俊康
Current Assignee
Aerospace Hongkang Intelligent Technology Beijing Co ltd
Original Assignee
Aerospace Hongkang Intelligent Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Aerospace Hongkang Intelligent Technology Beijing Co ltd
Priority to CN202111104093.9A
Publication of CN113570058A
Application granted
Publication of CN113570058B
Legal status: Active (granted)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 - Computing arrangements using knowledge-based models
    • G06N5/02 - Knowledge representation; Symbolic representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods


Abstract

The present disclosure relates to a recommendation method and apparatus. The recommendation method includes: acquiring data to be recommended, where the data to be recommended includes a plurality of user data, a plurality of item data, a user knowledge graph constructed based on the plurality of user data, and an item knowledge graph constructed based on the plurality of item data; inputting the data to be recommended into a trained recommendation model to obtain a preference score prediction value of each user data for each item data in the data to be recommended; and acquiring, for each user data, at least one item data as recommended item data based on the preference score prediction values. According to the recommendation method and apparatus, a TransR model is adopted as the space transformation network in the recommendation model, so that correct knowledge expressed in the form of triples is represented effectively and a more efficient way of fusing collaborative information with knowledge propagation is realized.

Description

Recommendation method and device
Technical Field
The present disclosure relates to the field of big data technologies, and more particularly, to a recommendation method and apparatus.
Background
Knowledge graphs are widely studied and applied in recommendation systems as auxiliary information. Recommendation systems based on knowledge-graph technology adopt various models for training. The RippleNet model propagates the potential preferences of users along connections in the knowledge graph. The KGCN model and the KGNN-LS model adopt a graph convolution network to acquire embedded representations of items through neighbor entities in the knowledge graph. The KGAT model introduces a Collaborative Knowledge Graph (CKG) that combines the user-item interaction graph with the knowledge graph and propagates recursively over the CKG through a graph convolutional neural network. These methods presuppose that the items of the user-item interaction graph and the associated entities in the knowledge graph belong to the same latent space, whereas the two are in fact heterogeneous nodes.
CKAN (Collaborative Knowledge-aware Attentive Network for Recommender Systems), a collaborative knowledge-aware attentive network in the related art, removes the independent embedded representation of the user and instead composes the embedded representation of the user from the embedded representations of the items the user has interacted with. As an end-to-end, propagation-based model, CKAN achieves a certain recommendation accuracy, but it ignores the handling of the complex semantic relation space between entities in the knowledge graph.
Disclosure of Invention
The present disclosure provides a recommendation method and apparatus to solve at least the problems in the related art described above, although it is not required to solve any of the problems described above.
According to a first aspect of the embodiments of the present disclosure, there is provided a recommendation method, including: acquiring data to be recommended, wherein the data to be recommended comprises a plurality of user data, a plurality of item data, a user knowledge graph constructed based on the plurality of user data and an item knowledge graph constructed based on the plurality of item data; inputting the data to be recommended into a trained recommendation model to obtain a preference score predicted value of each item data by each user data in the data to be recommended; and acquiring at least one item data for each user data as recommended item data based on the preference score predicted value of each item data by each user data in the data to be recommended.
Optionally, the recommendation model includes a space transformation network and a preference calculation network, and is trained by the following steps: acquiring a user data set based on a user historical knowledge graph and acquiring an item data set based on an item historical knowledge graph, wherein the user data set comprises at least one piece of user data, and the item data set comprises at least one piece of item data; acquiring first-type triples based on the user data set and the item data set, wherein each first-type triple comprises a head entity, a relation and a tail entity, the head entity and the tail entity of each first-type triple are user data in the user data set or item data in the item data set, and the head entity and the tail entity of each first-type triple express correct knowledge through the relation; randomly replacing tail entities in the first-type triples to obtain second-type triples, wherein the second-type triples correspond to the first-type triples one to one; inputting the first-type triples and the second-type triples into the space transformation network to obtain embedded representations of the first-type triples, wherein the space transformation network is a TransR model; acquiring a loss function of the space transformation network based on the first-type triples and the second-type triples; randomly replacing head entities, relations and tail entities of the first-type triples to obtain third-type triples, wherein the third-type triples correspond to the first-type triples one to one; acquiring embedded representations of the third-type triples based on the embedded representations of the first-type triples; inputting the embedded representations of the first-type triples and the embedded representations of the third-type triples into the preference calculation network, obtaining attention representations of the user data and the item data in the embedded representations of the first-type triples, aggregating the attention representations of the user data and the item data layer by layer, obtaining a model representation of the user data and a model representation of the item data, and obtaining a preference score prediction value of each user data for each item data based on the model representation of the user data and the model representation of the item data, wherein the attention representation of the user data is a representation obtained, based on attention weights, for the first-type triples whose head entities are the user data, the attention representation of the item data is a representation obtained, based on attention weights, for the first-type triples whose head entities are the item data, the preference calculation network is a graph neural network model that is based on an attention mechanism and comprises multi-layer propagation, and the embedded representations of all the first-type triples and all the third-type triples are propagated in each layer of the preference calculation network; obtaining a loss function of the preference calculation network according to the preference score prediction value of each user data for each item data; obtaining a loss function of the recommendation model based on the loss function of the space transformation network and the loss function of the preference calculation network; and training the recommendation model by adjusting the parameter set of the recommendation model according to the loss function of the recommendation model.
Optionally, the inputting the first type of triplet and the second type of triplet into the space transformation network to obtain the embedded representation of the first type of triplet includes: inputting the first-type triples and the second-type triples into the space conversion network, obtaining a transformation matrix of the relationship in the first-type triples, and obtaining the embedded representation of the first-type triples based on the transformation matrix of the relationship in the first-type triples, wherein the transformation matrix is used for projecting the space of the head entity or the space of the tail entity to the relationship space.
Optionally, the embedded representation of the first-type triples is given by a formula that is reproduced as an image in the original publication. The formula is defined in terms of: the embedded representation of any first-type triple in a given layer propagation of the preference calculation network; the head entity being user data or item data; the set of first-type triples in a given layer propagation whose head-entity space and tail-entity space are both projected into the relation space; the transformation matrix of the relation in any first-type triple; any first-type triple propagated in any layer of the preference calculation network, together with its head entity, relation and tail entity; the user historical knowledge graph and the item historical knowledge graph; a recursively defined representation of the first-type triples in a given layer propagation; and the total number of layers of the preference calculation network.
Optionally, the loss function of the space transformation network is expressed by a formula that is reproduced as an image in the original publication. The formula is defined in terms of: the loss function of the space transformation network; any first-type triple propagated in any layer of the preference calculation network, together with its head entity, relation and tail entity; the sigmoid function; the second-type triple corresponding to that first-type triple, together with its tail entity; the similarity score of the second-type triple; the similarity score of the first-type triple; the user historical knowledge graph and the item historical knowledge graph; the transformation matrix of the relation in the first-type triple; and the embedded representations of the head entity, the relation and the tail entity.
Optionally, the inputting the embedded representation of the first type of triplet and the embedded representation of the third type of triplet into the preference calculation network, obtaining an attention representation of the user data and the item data in the embedded representation of the first type of triplet, includes: acquiring attention weight of a first type of triple in any layer propagation of the preference calculation network based on a ReLU function and a Sigmoid function, wherein the attention weight is the weight of a head entity to a tail entity; normalizing the attention weight of the first type of triple in any layer propagation of the preference calculation network through a softmax function, and acquiring the normalized attention weight of the first type of triple of which the head entity is user data and item data in any layer propagation of the preference calculation network; acquiring attention representation of the first type of triple of the user data and the item data of the head entity in any layer of propagation of the preference calculation network based on the normalized attention weight of the first type of triple of the user data and the item data of the head entity in any layer of propagation of the preference calculation network; integrating into a set the attention representations of the head entities as the first type of triples of user data and item data in all layer propagations of the preference computation network.
Optionally, the attention weight of the first-type triples in any layer propagation of the preference calculation network is expressed by a formula that is reproduced as an image in the original publication. The formula is defined in terms of: the attention weight, in any layer propagation, of a given first-type triple; the embedded representation, in the relation space, of the head entity of that first-type triple; the relation of that first-type triple; the sigmoid function; the ReLU function; and weight matrices and bias terms that all belong to the parameter set of the recommendation model.

Based on the attention weights of the first-type triples in any layer propagation of the preference calculation network, the normalized attention weights of the first-type triples whose head entities are user data and item data in any layer propagation are obtained by a softmax formula that is reproduced as an image in the original publication. That formula is defined in terms of: the normalized attention weight, in any layer propagation, of a given first-type triple; the attention weight, in any layer propagation, of a given third-type triple; the embedded representation, in the relation space, of the head entity of that third-type triple; and any third-type triple, together with its head entity, relation and tail entity.
Optionally, the attention representation of the first-type triples whose head entities are user data or item data, in any layer propagation of the preference calculation network, is expressed by a formula that is reproduced as an image in the original publication. The formula is defined in terms of: the attention representation, in a given layer propagation, of the first-type triples whose head entity is user data or item data; the total number, in that layer propagation, of first-type triples whose head entity is user data or item data; the embedded representation of any first-type triple in that layer propagation; the normalized attention weight of a given first-type triple; and the embedded representation, in the relation space, of the tail entity of that first-type triple.

The attention representations of the first-type triples whose head entities are user data and item data in all layer propagations of the preference calculation network are integrated into a set by a formula that is reproduced as an image in the original publication. That formula is defined in terms of: the set of attention representations of the first-type triples whose head entity is user data; the set of attention representations of the first-type triples whose head entity is item data; the attention representations of such triples in each layer of the preference calculation network, from the first layer through the last layer; an initial attention representation of all first-type triples whose head entity is item data; the encoding set of the item data in the item historical knowledge graph; and indicators denoting that the head entity is item data or that the head entity is user data.
Optionally, the layer-by-layer aggregation of the attention representations of the user data and the item data is performed by a formula that is reproduced as an image in the original publication. The formula is defined in terms of: the attention representation of the user data or item data after layer-by-layer aggregation; the head entity being user data or item data; the attention representations, in successive layer propagations of the preference calculation network, of the first-type triples whose head entity is user data or item data; the splicing (concatenation) operation, denoted |; and a weight matrix and a bias that belong to the parameter set of the recommendation model.
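By way of a non-limiting illustration (not part of the original disclosure), the splicing-based aggregation described above could look like the following Python sketch, which concatenates the per-layer attention representations of one user or item and applies a learned linear transformation; the class name, layer count and dimensions are assumptions for illustration only.

    import torch
    import torch.nn as nn

    class LayerAggregator(nn.Module):
        def __init__(self, dim, n_layers):
            super().__init__()
            # weight matrix and bias drawn from the recommendation model's parameter set
            self.linear = nn.Linear(dim * n_layers, dim)

        def forward(self, layer_reprs):
            # layer_reprs: (n_layers, dim) attention representations, one per propagation layer
            spliced = torch.cat(list(layer_reprs), dim=-1)   # splicing operation |
            return self.linear(spliced)                      # aggregated model representation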
Optionally, the preference score prediction value of each user data for each item data is obtained by a formula that is reproduced as an image in the original publication. The formula is defined in terms of: the preference score prediction value of any user data for any item data; the transformed model representation of that user data; and the model representation of that item data, wherein the model representation of any user data is obtained based on the layer-by-layer aggregated attention representation of the user data, and the model representation of any item data is obtained based on the layer-by-layer aggregated attention representation of the item data.
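As a minimal, hypothetical sketch of such a predictor (not the patent's exact formula, which is shown only as an image), the inner product of the aggregated user and item representations can be squashed with a sigmoid; whether a sigmoid is applied in the original is an assumption.

    import torch

    def predict_preference(user_repr, item_repr):
        # user_repr, item_repr: (dim,) aggregated model representations of one user and one item
        return torch.sigmoid(torch.dot(user_repr, item_repr))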
Optionally, the loss function of the preference calculation network is expressed by a formula that is reproduced as an image in the original publication. The formula is defined in terms of: the loss function of the preference calculation network; indicators denoting that the head entity is item data or user data; the first-type triples; a cross-entropy loss taken over the preference scores; the preference score true value of any user data for any item data; the preference score prediction value of that user data for that item data; and the third-type triples.
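The following sketch illustrates one way such a cross-entropy loss over positive (first-type) and negative (third-type) samples might be computed, assuming binary true preference scores of 1 and 0 and predicted scores already in [0, 1]; the function name and data layout are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def preference_loss(pos_scores, neg_scores):
        # pos_scores: predictions for observed user-item interactions (true score 1)
        # neg_scores: predictions for sampled negative interactions (true score 0)
        scores = torch.cat([pos_scores, neg_scores])
        labels = torch.cat([torch.ones_like(pos_scores), torch.zeros_like(neg_scores)])
        return F.binary_cross_entropy(scores, labels)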
Optionally, the loss function of the recommendation model is expressed by a formula that is reproduced as an image in the original publication. The formula is defined in terms of: the loss function of the recommendation model; the loss function of the space transformation network; the loss function of the preference calculation network; an adjustable parameter; the parameter set of the recommendation model; the set of embedded representations of the head entities and tail entities of the first-type triples; and the set of embedded representations of the relations of the first-type triples, all of which belong to the parameter set of the recommendation model.
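A hedged sketch of how such a combined objective might be assembled, assuming the total loss is the sum of the space-transformation (knowledge-embedding) loss, the preference-calculation loss, and an L2 penalty on the parameter set weighted by the adjustable parameter; the specific regularization form is an assumption, since the formula appears only as an image.

    def total_loss(kge_loss, preference_loss, parameters, reg_weight=1e-5):
        # parameters: iterable of tensors in the recommendation model's parameter set
        l2 = sum((p ** 2).sum() for p in parameters)
        return kge_loss + preference_loss + reg_weight * l2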
Optionally, the obtaining, for each user data, at least one item data as recommended item data based on a preference score prediction value of each user data to each item data in the data to be recommended includes: arranging preference score predicted values of any user data in the data to be recommended to each item data in a descending order according to the size; acquiring item data corresponding to the preference score predicted values sorted from the first place to the Nth place as recommended item data, or acquiring item data corresponding to the preference score predicted values larger than a preset threshold value as recommended item data, wherein N is a preset integer larger than or equal to 1.
According to a second aspect of the embodiments of the present disclosure, there is provided a recommendation apparatus including: a data acquisition unit configured to: acquiring data to be recommended, wherein the data to be recommended comprises a plurality of user data, a plurality of item data, a user knowledge graph constructed based on the plurality of user data and an item knowledge graph constructed based on the plurality of item data; a model prediction unit configured to: inputting the data to be recommended into a trained recommendation model to obtain a preference score predicted value of each item data by each user data in the data to be recommended; an item recommendation unit configured to: and acquiring at least one item data for each user data as recommended item data based on the preference score predicted value of each item data by each user data in the data to be recommended.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform a recommendation method according to the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by at least one processor, cause the at least one processor to perform a recommendation method according to the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer instructions which, when executed by at least one processor, implement a recommendation method according to the present disclosure.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the recommendation method and apparatus of the present disclosure, the TransR model is adopted as the space transformation network in the recommendation model, so that the correct knowledge expressed in the form of triples is represented effectively, a more efficient way of fusing collaborative information with knowledge propagation is realized, and the propagation of the embedded representations of the first-type triples in the preference calculation network is promoted. The trained TransR model can perform space transformation on entities, effectively handle complex relations among entities and measure the relevance of knowledge, thereby describing the relations of the entities effectively. The loss function of the space transformation network considers the correlation between the first-type triples and the second-type triples, so the pairwise ranking loss can be reduced at the granularity of the first-type triples, the knowledge representation capability of the recommendation model is improved, and the potential relations between entities in the knowledge graph are strengthened. Compared with CKAN, the influence of the distance between different semantic spaces on the knowledge representation can be reduced.
In addition, according to the recommendation method and apparatus of the present disclosure, the user data set is obtained based on the user historical knowledge graph and the item data set is obtained based on the item historical knowledge graph, and the first-type triples comprising head entities, relations and tail entities are then obtained, so that the knowledge representation capability can be improved.
In addition, according to the recommendation method and apparatus of the present disclosure, the attention weights in the preference calculation network are obtained based on the embedded representations produced by the space transformation network, and are therefore more discriminative than the attention weights obtained by CKAN.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a schematic diagram illustrating a CKGAN system framework according to an exemplary embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating a recommendation method according to an exemplary embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a training method of a recommendation model according to an exemplary embodiment of the present disclosure.
Fig. 4 is a diagram illustrating a click-through rate prediction result based on the Last.FM data set in a CTR scenario according to an exemplary embodiment of the present disclosure.
Fig. 5 is a diagram illustrating click-through rate prediction results based on a Book-Crossing dataset in a CTR scenario according to an exemplary embodiment of the present disclosure.
Fig. 6 is a diagram illustrating a click-through rate prediction result based on a Dianping-Food data set in a CTR scenario according to an exemplary embodiment of the present disclosure.
Fig. 7 is a diagram illustrating the variation of the Recall@K value based on the Last.FM data set in a top-K recommendation scenario according to an exemplary embodiment of the present disclosure.
Fig. 8 is a diagram illustrating the variation of the Recall@K value based on the Book-Crossing data set in a top-K recommendation scenario according to an exemplary embodiment of the present disclosure.
Fig. 9 is a diagram illustrating the variation of the Recall@K value based on the Dianping-Food data set in a top-K recommendation scenario according to an exemplary embodiment of the present disclosure.
Fig. 10 is a block diagram illustrating a recommendation device according to an exemplary embodiment of the present disclosure.
Fig. 11 is a block diagram illustrating a training apparatus of a recommendation model according to an exemplary embodiment of the present disclosure.
Fig. 12 is a block diagram illustrating an electronic device 1200 according to an example embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The embodiments described in the following examples do not represent all embodiments consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Herein, the expression "at least one of the items" covers three parallel cases: "any one of the items", "any combination of a plurality of the items", and "all of the items". For example, "including at least one of A and B" covers the following three parallel cases: (1) including A; (2) including B; (3) including A and B. For another example, "performing at least one of step one and step two" covers the following three parallel cases: (1) performing step one; (2) performing step two; (3) performing step one and step two.
In the era of big data processing, knowledge graphs are widely studied and applied as auxiliary information in recommendation methods. Recommendation systems based on knowledge-graph technology adopt various models for training. In applications that recommend items to users, the RippleNet model propagates the potential preferences of users along connections in the knowledge graph. To improve recommendation accuracy by exploiting neighbor entities, the KGCN model and the KGNN-LS model adopt a graph convolution network to obtain embedded representations of items through neighbor entities in the knowledge graph. However, it is difficult for the KGCN model and the KGNN-LS model to mine the high-order connectivity of users and items, so their embedded representations carry insufficient information. The KGAT model introduces a Collaborative Knowledge Graph (CKG) that combines the user-item interaction graph with the knowledge graph and propagates recursively over the CKG through a graph convolutional neural network. These methods presuppose that the items of the user-item interaction graph and the associated entities in the knowledge graph belong to the same latent space, whereas the two are in fact heterogeneous nodes.
CKAN (Collaborative Knowledge-aware Attentive Network for Recommender Systems), a collaborative knowledge-aware attentive network in the related art, composes the embedded representation of a user from the embedded representations of the items the user has interacted with, instead of learning an independent embedded representation of the user. The embedded representations of users and items are treated as combinations of entity sequences within the knowledge graph, and CKAN then propagates these embedded representations directly within the knowledge graph. Although this approach represents users and items directly by combinations of the embedded representations of entities in the knowledge graph, and thereby effectively reduces the distance, present in traditional methods, between entities of the user-item graph and of the knowledge graph that lie in different latent spaces, CKAN ignores the fact that, inside the knowledge graph, the same entity lies in different latent spaces under different relations; in recommendation scenarios in particular, the relations between entities are often many-to-many.
As a propagation-based end-to-end model, CKAN makes full use of the collaborative information and draws the latent spaces of the entities in the user-item interaction graph and the entities in the knowledge graph closer together, so it achieves a certain recommendation accuracy, but it ignores the handling of the complex semantic relation space between entities in the knowledge graph.
In order to solve the problems in the related art, the present disclosure provides a recommendation method and device: a recommendation model is constructed from a space transformation network and a preference calculation network, the model is trained on data from knowledge graphs, and the predicted preference score of a user for an item is obtained through the recommendation model for recommendation, so that complex relations among entities can be handled effectively and the potential relations among entities in the knowledge graph are strengthened.
Hereinafter, a recommendation method and apparatus according to the present disclosure will be described in detail with reference to fig. 1 to 12.
Fig. 1 is a schematic diagram illustrating a CKGAN system framework according to an exemplary embodiment of the present disclosure. The recommendation method in the exemplary embodiment of the present disclosure is derived based on the CKGAN system framework shown in fig. 1.
Referring to fig. 1, an exemplary embodiment of the present disclosure proposes a recommendation system CKGAN (Collaborative KGE-Guided Attention Network for Recommender Systems), which includes an embedded representation part, a knowledge-embedding propagation part based on a graph neural network, and an aggregation prediction part. The embedded representation part represents the user data and the item data as combinations of embedded sequences, namely embedded representations; this part incorporates a TransR model and projects the entity space into the relation space. The knowledge-embedding propagation part based on a graph neural network incorporates an attention-based graph neural network model comprising multi-layer propagation and performs multi-hop aggregation propagation over neighbor nodes. The aggregation prediction part predicts, for each user data, the preference score of each item data.
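As a non-limiting illustration of how these three parts might fit together (not part of the original disclosure), the following Python sketch outlines a minimal skeleton of such a model; the class name CKGANSketch, the simplified attention score, and all tensor shapes are assumptions made purely for illustration.

    import torch
    import torch.nn as nn

    class CKGANSketch(nn.Module):
        # Illustrative skeleton only: embedding part, knowledge-embedding
        # propagation part, and aggregation/prediction part.
        def __init__(self, n_entities, n_relations, dim, n_layers):
            super().__init__()
            self.dim = dim
            self.n_layers = n_layers
            self.entity_emb = nn.Embedding(n_entities, dim)        # embedded representation part
            self.relation_emb = nn.Embedding(n_relations, dim)
            self.rel_matrix = nn.Embedding(n_relations, dim * dim) # TransR transformation matrices

        def propagate(self, layer_triples):
            # layer_triples: list (one entry per layer) of (head, rel, tail) index tensors
            reprs = []
            for h, r, t in layer_triples:
                m_r = self.rel_matrix(r).view(-1, self.dim, self.dim)
                h_r = torch.bmm(self.entity_emb(h).unsqueeze(1), m_r).squeeze(1)
                t_r = torch.bmm(self.entity_emb(t).unsqueeze(1), m_r).squeeze(1)
                # simplified placeholder attention over the triples of this layer
                att = torch.softmax((h_r * self.relation_emb(r)).sum(-1), dim=0)
                reprs.append((att.unsqueeze(1) * t_r).sum(dim=0))
            return torch.stack(reprs)            # one attention representation per layer

        def predict(self, user_repr, item_repr):
            # aggregation/prediction part: inner product of aggregated representations
            return torch.sigmoid((user_repr * item_repr).sum(-1))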
According to an exemplary embodiment of the present disclosure, user data refers to an object to which recommendations are to be made, or data indicative of that object, and item data refers to an object to be recommended, or data indicative of that object; for example, the user data is the user ID of a store customer and the item data is the product number of a store product.
Fig. 2 is a flowchart illustrating a recommendation method according to an exemplary embodiment of the present disclosure.
Referring to fig. 2, in step 201, data to be recommended may be acquired, where the data to be recommended includes a plurality of user data, a plurality of item data, a user knowledge graph constructed based on the plurality of user data, and an item knowledge graph constructed based on the plurality of item data.
According to an exemplary embodiment of the present disclosure, the data to be recommended may further include, but is not limited to, at least one historical interaction data between a plurality of user data and a plurality of item data.
In step 202, the data to be recommended may be input into the trained recommendation model, so as to obtain a preference score prediction value of each item of data by each user data in the data to be recommended.
According to an example embodiment of the present disclosure, the recommendation model may include a spatial transformation network, which may be a TransR model, and a preference computation network, which may be an attention-based graph neural network model that includes multi-layer propagation. The trained recommendation model in the examples of the present disclosure is obtained by training a TransR model together with an attention-based graph neural network model including multi-layer propagation through training samples.
According to an exemplary embodiment of the present disclosure, the preference score prediction value of each user data for each item data may be the predicted probability that the user data acquires the item data. For example, if any user data is a user with user name A who wants to purchase a commodity, and any item data is a commodity B, the acquisition probability prediction value of that user data for that item data is the predicted probability that user A purchases commodity B.
According to an exemplary embodiment of the present disclosure, the recommendation method may recommend appropriate item data to each user data with reference to the preference score prediction value of each item data based on each user data. In this way, in step 203, at least one item data may be acquired for each user data as recommended item data based on the preference score prediction value of each user data to each item data in the data to be recommended.
According to the exemplary embodiment of the disclosure, preference score predicted values of any user data in the data to be recommended to each item data can be arranged in descending order according to the size; acquiring item data corresponding to the preference score predicted values sorted from the first place to the Nth place as recommended item data, or acquiring item data corresponding to the preference score predicted values larger than a preset threshold value as recommended item data, wherein N is a preset integer larger than or equal to 1.
According to an exemplary embodiment of the present disclosure, take as an example any user data being a user with user name C who wants to watch a movie, and the item data being movie D, movie E, movie F, movie G, movie H, movie I and movie J. Suppose the preference score prediction values of C for movie D, movie E, movie F, movie G, movie H, movie I and movie J obtained by the trained recommendation model are 0.61, 0.82, 0.93, 0.45, 0.56, 0.77 and 0.95, respectively. Arranging the preference score prediction values of C for each item data in descending order gives: movie J (0.95), movie F (0.93), movie E (0.82), movie I (0.77), movie D (0.61), movie H (0.56) and movie G (0.45). If the preset threshold is 0.6, movie J, movie F, movie E, movie I and movie D are recommended to C.
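To make this selection step concrete, the following short Python sketch sorts the predicted preference scores in descending order and returns either the top-N items or all items whose score exceeds a threshold; the function name and data layout are assumptions for illustration only.

    def recommend(scores, top_n=None, threshold=None):
        # scores: dict mapping item id -> predicted preference score for one user
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        if top_n is not None:
            return [item for item, _ in ranked[:top_n]]
        return [item for item, score in ranked if score > threshold]

    # Example from the text above: user C and movies D through J
    scores_c = {"D": 0.61, "E": 0.82, "F": 0.93, "G": 0.45,
                "H": 0.56, "I": 0.77, "J": 0.95}
    print(recommend(scores_c, threshold=0.6))   # ['J', 'F', 'E', 'I', 'D']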
It can be seen that the recommendation method in the exemplary embodiment of the present disclosure is obtained based on the prediction result of the trained recommendation model, and the training process of the recommendation model will be further described below. Fig. 3 is a flowchart illustrating a training method of a recommendation model according to an exemplary embodiment of the present disclosure.
Referring to fig. 3, the recommendation model includes a spatial transformation network and a preference calculation network. In step 301, a user data set may be obtained based on a user historical knowledge-graph and a project data set may be obtained based on a project historical knowledge-graph, wherein the user data set includes at least one user data and the project data set includes at least one project data.
According to an example embodiment of the present disclosure, at least one historical interaction data between a plurality of user data and a plurality of project data may be included, but not limited to, in each of the user historical knowledge-graph and the project historical knowledge-graph.
In step 302, first-type triples may be obtained based on the user data set and the item data set, where each first-type triplet includes a head entity, a relationship, and a tail entity, the head entity and the tail entity of each first-type triplet are both the user data in the user data set or the item data in the item data set, and the head entity and the tail entity of each first-type triplet express correct knowledge through the relationship.
According to an exemplary embodiment of the present disclosure, the fact that the head entity and the tail entity of each first-type triplet express correct knowledge through a relationship may mean that, for any first-type triplet, knowledge conveyed by the head entity and the tail entity based on the relationship may be acquired from a user historical knowledge graph and/or a project historical knowledge graph.
In step 303, the tail entities in the first type of triple may be randomly replaced to obtain a second type of triple, where the second type of triple corresponds to the first type of triple one to one.
According to an exemplary embodiment of the present disclosure, the number of the first-type triples and the number of the second-type triples may be equal. A second-type triple is a corrupted triple with respect to its corresponding first-type triple and does not express correct knowledge.
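A minimal sketch of how such corrupted (second-type) triples might be generated by randomly replacing tail entities, assuming the knowledge graph is given as a list of (head, relation, tail) index triples; the function name and sampling strategy are illustrative assumptions, not the patent's prescribed procedure.

    import random

    def corrupt_tails(triples, n_entities, seed=0):
        # One corrupted triple per original triple (one-to-one correspondence),
        # obtained by replacing the tail entity with a random different entity.
        rng = random.Random(seed)
        corrupted = []
        for h, r, t in triples:
            t_neg = rng.randrange(n_entities)
            while t_neg == t:                  # avoid reproducing the correct fact
                t_neg = rng.randrange(n_entities)
            corrupted.append((h, r, t_neg))
        return corrupted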
In step 304, the first type of triplet and the second type of triplet may be input into a space transformation network, and an embedded representation of the first type of triplet is obtained, where the space transformation network is a TransR model.
According to an exemplary embodiment of the present disclosure, the acquired first-type triples may be used as positive samples of a spatial conversion network training process, and the acquired second-type triples may be used as negative samples of the spatial conversion network training process.
According to an exemplary embodiment of the present disclosure, a first type of triplet and a second type of triplet may be input into a spatial transformation network, a transformation matrix of a relationship in the first type of triplet is obtained, and an embedded representation of the first type of triplet is obtained based on the transformation matrix of the relationship in the first type of triplet, where the transformation matrix is used to project a space of a head entity or a space of a tail entity to a relationship space.
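A minimal sketch of the TransR-style projection described here, assuming each relation has its own transformation matrix that maps entity embeddings into the relation space; the dimensions and names are assumptions for illustration.

    import torch

    def project_to_relation_space(entity_emb, rel_matrix):
        # entity_emb: (batch, d) entity-space embeddings (head or tail entities)
        # rel_matrix: (batch, d, k) transformation matrices of the relations
        # returns:    (batch, k) embeddings projected into the relation space
        return torch.bmm(entity_emb.unsqueeze(1), rel_matrix).squeeze(1)

    # Example: project a head and a tail entity with the same relation matrix
    d, k = 8, 4
    e_h, e_t = torch.randn(1, d), torch.randn(1, d)
    m_r = torch.randn(1, d, k)
    h_r = project_to_relation_space(e_h, m_r)
    t_r = project_to_relation_space(e_t, m_r)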
According to an exemplary embodiment of the present disclosure, the embedded representation of the first-type triples is represented by formula (1) and formula (2), both of which are reproduced as images in the original publication. Formula (1) gives the embedded representation of any first-type triple in a given layer propagation of the preference calculation network: for every first-type triple (head entity, relation, tail entity) taken from the user historical knowledge graph and the item historical knowledge graph, the space of the head entity and the space of the tail entity are both projected into the relation space through the transformation matrix of the relation. Formula (2) gives a recursively defined representation of the first-type triples in each layer propagation, starting from the head entities that are user data or item data and ranging over all layers of the preference calculation network. It should be noted that this recursively defined representation propagates, without damaging the entity sequences of the user data and the item data, along the connections of the user historical knowledge graph and the item historical knowledge graph, so as to extend the representation of the first-type triples with their potential vector representations. The transformation matrix projects entities from the entity space into the relation space, which may have a different dimensionality.
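The recursive, layer-by-layer construction described above can be illustrated with the following sketch, which starts from the entities associated with one user or item and collects, for each propagation layer, the knowledge-graph triples whose head entities were reached in the previous layer; the adjacency format and function name are assumptions made for illustration.

    def build_layer_triples(kg, seed_entities, n_layers):
        # kg: dict mapping head entity -> list of (relation, tail entity)
        # seed_entities: initial entities of one user or item (layer-0 entity set)
        # returns: list of per-layer triple lists [(head, relation, tail), ...]
        layer_triples = []
        current = set(seed_entities)
        for _ in range(n_layers):
            triples = [(h, r, t) for h in current for (r, t) in kg.get(h, [])]
            layer_triples.append(triples)
            current = {t for (_, _, t) in triples}   # tails become next layer's heads
        return layer_triples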
In step 305, a loss function of the space transformation network may be obtained based on the first type of triplet and the second type of triplet.
According to an exemplary embodiment of the present disclosure, the loss function of the space transformation network is represented by formula (3), with auxiliary formulas (4) and (5); all three are reproduced as images in the original publication. Formula (3) defines the loss of the space transformation network over every first-type triple propagated in any layer of the preference calculation network and its corresponding second-type triple, using the sigmoid function and the similarity scores of the two triples. Formula (4) gives the similarity score of the first-type triple and formula (5) gives the similarity score of the corresponding second-type triple; each score is computed from the transformation matrix of the relation and the embedded representations of the head entity, the relation and the tail entity, where the embedded representations of the head entity and the tail entity lie in the entity space and are projected into the relation space, which may have a different dimensionality. A lower similarity score represents a higher degree of similarity, and the similarity score is an energy score.
According to an exemplary embodiment of the present disclosure, when a first-type triple is such that its head entity is infinitely close to its tail entity, the relationship shown in formula (6), which is reproduced as an image in the original publication, holds between the projected representation of the head entity in the relation space and the projected representation of the tail entity in the relation space.
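A hedged sketch of a pairwise loss of this general kind, assuming the commonly used TransR energy score (squared distance between the projected head plus the relation and the projected tail) and a sigmoid-based pairwise ranking term over a correct triple and its corrupted counterpart; the patent's exact formulas appear only as images, so this is an illustration rather than the disclosed formula.

    import torch
    import torch.nn.functional as F

    def transr_score(e_h, e_r, e_t, m_r):
        # Energy score: lower means the projected head plus relation is closer
        # to the projected tail (higher similarity).
        h_r = torch.bmm(e_h.unsqueeze(1), m_r).squeeze(1)
        t_r = torch.bmm(e_t.unsqueeze(1), m_r).squeeze(1)
        return ((h_r + e_r - t_r) ** 2).sum(dim=-1)

    def kge_pairwise_loss(e_h, e_r, e_t, e_t_neg, m_r):
        # Encourages correct (first-type) triples to score lower than their
        # corrupted (second-type) counterparts.
        pos = transr_score(e_h, e_r, e_t, m_r)
        neg = transr_score(e_h, e_r, e_t_neg, m_r)
        return -F.logsigmoid(neg - pos).mean()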
In step 306, the head entity, the relationship, and the tail entity of the first type of triple may be randomly replaced, and a third type of triple is obtained, where the third type of triple corresponds to the first type of triple one to one.
According to an exemplary embodiment of the present disclosure, the number of the first type triples and the number of the third type triples may be equal. The third type of triplet, which is a corrupted triplet with respect to the first type of triplet, expresses no correct knowledge.
At step 307, an embedded representation of a third type of triplet may be obtained based on the embedded representation of the first type of triplet.
According to an exemplary embodiment of the present disclosure, the embedded representation of a first-type triple is also expressed in the form of a triple, and the embedded representation of the corresponding third-type triple is obtained by applying to it the same replacements of head entity, relation and tail entity that were made in step 306.
At step 308, the embedded representations of the first-type triples and the embedded representations of the third-type triples may be input into the preference calculation network; the attention representations of the user data and the item data in the embedded representations of the first-type triples are obtained and aggregated layer by layer; the model representation of the user data and the model representation of the item data are obtained; and a preference score prediction value of each user data for each item data is obtained based on the model representation of the user data and the model representation of the item data. The attention representation of the user data is a representation obtained, based on attention weights, for the first-type triples whose head entities are the user data, and the attention representation of the item data is a representation obtained, based on attention weights, for the first-type triples whose head entities are the item data. The preference calculation network is a graph neural network model that is based on an attention mechanism and comprises multi-layer propagation, and the embedded representations of all the first-type triples and all the third-type triples are propagated in each layer of the preference calculation network.
According to an exemplary embodiment of the present disclosure, the first type of triples may be used as a positive sample of the preference calculation network training process, and the third type of triples may be used as a negative sample of the preference calculation network training process.
According to an exemplary embodiment of the present disclosure, the attention representation of the user data and the item data in the embedded representation of the triples of the first type may be obtained in the following manner.
First, the attention weight of the first type of triple in any layer propagation of the preference calculation network can be obtained based on the ReLU function and the Sigmoid function, wherein the attention weight is the weight of the head entity to the tail entity.
The attention weight of a first-type triple in any layer of propagation of the preference computation network is given by equations (7) and (8). There, the attention weight of the i-th first-type triple in that layer (the weight of its head entity with respect to its tail entity) is computed from the relation-space embedded representation of the head entity of that triple and the relation of that triple, using the sigmoid function and the ReLU non-linear activation function, with || denoting the concatenation (splicing) operation; the weight matrices and the bias (deviation) parameters appearing in equations (7) and (8) all belong to the parameter set of the recommendation model.
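For illustration only, the sketch below shows one commonly used knowledge-aware attention form that matches the description above: the relation-space head-entity embedding and the relation embedding are concatenated, passed through a ReLU inner layer and a sigmoid outer layer. The exact parameterization of equations (7) and (8) differs in the present disclosure, so W1, b1, w2 and b2 are assumed stand-ins for the listed weight matrices and bias parameters.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_weight(e_h_r, e_r, W1, b1, w2, b2):
    """Unnormalized attention weight of one first-type triple (assumed form).

    e_h_r : relation-space embedding of the head entity
    e_r   : relation embedding
    The weight expresses how strongly the head entity attends to the tail entity
    along this relation, combining a ReLU inner layer and a sigmoid outer layer.
    """
    z = np.concatenate([e_h_r, e_r])          # || concatenation
    hidden = np.maximum(0.0, W1 @ z + b1)      # ReLU non-linear activation
    return float(sigmoid(w2 @ hidden + b2))    # sigmoid outer activation

k = 4
rng = np.random.default_rng(1)
e_h_r, e_r = rng.normal(size=k), rng.normal(size=k)
W1, b1 = rng.normal(size=(k, 2 * k)), rng.normal(size=k)
w2, b2 = rng.normal(size=k), 0.1
print(attention_weight(e_h_r, e_r, W1, b1, w2, b2))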
Then, the attention weights of the first-type triples in any layer of propagation of the preference computation network can be normalized through a softmax function to obtain the normalized attention weights of the first-type triples whose head entity is user data or item data in that layer of propagation.
The normalized attention weights are given by equation (9). There, the normalized attention weight of the i-th first-type triple in that layer is obtained from its attention weight together with the attention weights of the third-type triples in that layer, where the attention weight of the i-th third-type triple is computed in the same manner from the relation-space embedded representation of the head entity of that third-type triple and its relation.
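Equation (9) applies a softmax to the raw attention weights. The sketch below shows a numerically stable softmax over the weights of one group of triples; the exact composition of the normalization group in the present disclosure (which also involves the third-type triples) is an assumption not reproduced here.

import numpy as np

def normalize_attention(raw_weights):
    """Softmax-normalize the raw attention weights of a group of triples."""
    w = np.asarray(raw_weights, dtype=float)
    w = w - w.max()                 # shift for numerical stability
    e = np.exp(w)
    return e / e.sum()

print(normalize_attention([0.8, 0.1, 0.5]))  # the normalized weights sum to 1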
Next, an attention representation of the first type of triplet of user data and item data as the head entity in any layer of propagation of the preference calculation network may be obtained based on the normalized attention weights of the first type of triplet of user data and item data as the head entity in any layer of propagation of the preference calculation network.
The attention representation, in any layer of propagation of the preference computation network, of the first-type triples whose head entity is user data or item data is given by equations (10) and (11). In the l-th layer of propagation, the attention representation of a head entity that is user data or item data is obtained by summing, over all first-type triples of that layer whose head entity is that user data or item data, the product of the normalized attention weight of the triple and the relation-space embedded representation of its tail entity.
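For illustration only, the following sketch implements the weighted sum just described: each relation-space tail-entity embedding is weighted by its normalized attention weight and the results are summed. The argument names are assumptions.

import numpy as np

def attention_representation(norm_weights, tail_embeddings):
    """Weighted sum of relation-space tail-entity embeddings.

    norm_weights    : normalized attention weights, one per first-type triple
                      whose head entity is the given user or item
    tail_embeddings : matrix with one relation-space tail embedding per row
    """
    w = np.asarray(norm_weights, dtype=float)
    E = np.asarray(tail_embeddings, dtype=float)
    return w @ E                    # sum_i w_i * e_{t_i}^r

weights = [0.5, 0.3, 0.2]
tails = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(attention_representation(weights, tails))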
Finally, the attention representations, in all layers of propagation of the preference computation network, of the first-type triples whose head entity is user data or item data may be integrated into sets through equations (12) and (13). For a head entity that is user data, the set collects the attention representations produced by the 1st through the last layer of propagation of the preference computation network. For a head entity that is item data, the set additionally contains an initial attention representation, which is formed from the entities that correspond to the item data in the item-data encoding set of the item historical knowledge graph.
According to an exemplary embodiment of the present disclosure, the attention representations of the user data and of the item data may be aggregated layer by layer by concatenation (stitching) aggregation, as shown in equation (14): the attention representations produced by the individual layers of propagation of the preference computation network (for item data, including the initial attention representation) are concatenated with the || splicing operation and then transformed using parameters that belong to the parameter set of the recommendation model, yielding the layer-by-layer aggregated attention representation of the user data or the item data.
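For illustration only, the sketch below mirrors the pipeline of equations (12)-(14): the per-layer attention representations are collected into a list, concatenated, and then linearly transformed. The weight matrix W and bias b are assumed stand-ins for the corresponding recommendation-model parameters.

import numpy as np

def aggregate_by_concatenation(layer_reps, W, b):
    """Concatenation ('stitching') aggregation of per-layer attention representations.

    layer_reps : list of per-layer attention representations for one user or item
                 (for an item this list may also contain the initial representation)
    W, b       : aggregation parameters from the recommendation-model parameter set
    """
    stacked = np.concatenate(layer_reps)   # || concatenation across layers
    return W @ stacked + b

rng = np.random.default_rng(2)
dim, n_layers = 4, 3
layer_reps = [rng.normal(size=dim) for _ in range(n_layers)]
W = rng.normal(size=(dim, dim * n_layers))
b = rng.normal(size=dim)
print(aggregate_by_concatenation(layer_reps, W, b))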
According to an exemplary embodiment of the present disclosure, the preference score prediction value of each user data for each item data may be obtained by the following equation (15):
ŷ(u, v) = e_u · e_v ; (15)
where ŷ(u, v) is the preference score prediction value of any user data u for any item data v, e_u is the transformed model representation of the user data, e_v is the model representation of the item data, and e_u · e_v denotes the inner product of the two model representations. The model representation of any user data is obtained based on its layer-by-layer aggregated attention representation, and the model representation of any item data is obtained based on its layer-by-layer aggregated attention representation.
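A minimal sketch of the prediction step of equation (15): the score is the inner product of the aggregated user model representation and the aggregated item model representation. Any further squashing of the score into the [0, 1] range would be an assumption and is omitted here.

import numpy as np

def preference_score(user_rep, item_rep):
    """Predicted preference score: inner product of the two model representations."""
    return float(np.dot(user_rep, item_rep))

user_rep = np.array([0.2, 0.8, 0.1])
item_rep = np.array([0.5, 0.9, 0.3])
print(preference_score(user_rep, item_rep))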
Returning to fig. 3, in step 309, a loss function of the preference calculation network may be obtained according to the preference score prediction value of each user data for each item data.
According to an exemplary embodiment of the present disclosure, the loss function of the preference computation network is represented as equation (16). This loss accumulates, over the first-type triples whose head entity is user data or item data and over the corresponding third-type triples, the cross-entropy between the actual preference score value of the user data for the item data and its preference score prediction value.
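For illustration only, the sketch below computes a binary cross-entropy between actual and predicted preference scores, treating the first-type triples as positives and the third-type triples as negatives. This specific binary form and the averaging are assumptions consistent with, but not identical to, equation (16).

import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Average cross-entropy between actual labels and predicted preference scores."""
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)
    y_true = np.asarray(y_true, dtype=float)
    return float(-np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)))

# Positive (first-type) pairs carry label 1, negative (third-type) pairs carry label 0.
y_true = np.array([1, 1, 0, 0])
y_pred = np.array([0.9, 0.7, 0.2, 0.4])
print(binary_cross_entropy(y_true, y_pred))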
In step 310, a loss function of the recommendation model may be obtained based on the loss function of the spatial transformation network and the loss function of the preference computation network.
According to an exemplary embodiment of the present disclosure, the loss function of the recommendation model is represented as equations (17) and (18): it combines the loss function of the spatial transformation network, the loss function of the preference computation network, and a regularization term weighted by an adjustable parameter. The regularization term is computed over the parameter set of the recommendation model, which includes the set of embedded representations of the head and tail entities of the first-type triples and the set of embedded representations of the relations of the first-type triples; that is, the last term can be a regularization function adjusted by the adjustable parameter.
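For illustration only, the following sketch combines the two losses with a squared-L2 regularizer over the parameter arrays; the squared-L2 form of the regularization term and the value of the adjustable parameter lam are assumptions.

import numpy as np

def recommendation_model_loss(loss_kg, loss_cf, params, lam=1e-4):
    """Total loss: spatial-transformation loss + preference loss + regularization.

    params : iterable of parameter arrays (e.g. entity and relation embeddings)
    lam    : the adjustable regularization parameter
    """
    reg = sum(float(np.sum(p ** 2)) for p in params)
    return loss_kg + loss_cf + lam * reg

params = [np.ones((3, 2)), np.full((2, 2), 0.5)]
print(recommendation_model_loss(loss_kg=0.35, loss_cf=0.42, params=params))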
In step 311, the recommendation model may be trained by adjusting the set of parameters of the recommendation model according to the loss function of the recommendation model.
Specifically, the parameter set of the recommendation model may be adjusted according to the loss function of the recommendation model, and when the value of the loss function of the recommendation model is smaller than a specific threshold, the training of the recommendation model is ended.
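A minimal sketch of the stopping rule described above: training continues until the recommendation-model loss falls below a specific threshold (or a maximum number of steps is reached). Both compute_loss_and_grads and apply_update are hypothetical placeholders for the actual loss evaluation and parameter update.

def train_until_threshold(params, compute_loss_and_grads, apply_update,
                          threshold=1e-3, max_steps=10_000):
    """Adjust the parameter set until the model loss drops below the threshold."""
    for step in range(max_steps):
        loss, grads = compute_loss_and_grads(params)   # hypothetical evaluation
        if loss < threshold:                           # training ends here
            break
        params = apply_update(params, grads)           # hypothetical update step
    return params

# Toy usage with a quadratic loss, only to show the control flow.
state = 5.0
state = train_until_threshold(
    state,
    compute_loss_and_grads=lambda p: (p * p, 2 * p),
    apply_update=lambda p, g: p - 0.1 * g,
)
print(state)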
Fig. 4 is a diagram illustrating click-through rate prediction results based on the last.fm dataset in a CTR scenario according to an exemplary embodiment of the present disclosure. Fig. 5 is a diagram illustrating click-through rate prediction results based on the Book-Crossing dataset in a CTR scenario according to an exemplary embodiment of the present disclosure. Fig. 6 is a diagram illustrating click-through rate prediction results based on the Dianping-Food dataset in a CTR scenario according to an exemplary embodiment of the present disclosure.
Referring to fig. 4, 5 and 6, the CTR scenario is a click-through rate prediction scenario, i.e. predicting the probability of each data interaction in a dataset. In the figures, model denotes the model name, AUC and F1 are evaluation metrics, and BPRMF, CKE, RippleNet, KGCN-LS, KGAT, CKAN and CKGAN are the compared recommendation models. The AUC value of CKGAN in the exemplary embodiments of the present disclosure reaches 0.8473 on the last.fm dataset, 0.7538 on the Book-Crossing dataset, and 0.8783 on the Dianping-Food dataset.
Fig. 7 is a diagram illustrating the variation of the Recall@K value based on the last.fm dataset in a top-K recommendation scenario according to an exemplary embodiment of the present disclosure. Fig. 8 is a diagram illustrating the variation of the Recall@K value based on the Book-Crossing dataset in a top-K recommendation scenario according to an exemplary embodiment of the present disclosure. Fig. 9 is a diagram illustrating the variation of the Recall@K value based on the Dianping-Food dataset in a top-K recommendation scenario according to an exemplary embodiment of the present disclosure. The Recall@K value is the recall rate.
Referring to fig. 7, 8 and 9, the top-K recommendation scenario is a scenario in which the K items each user is most likely to interact with are recommended for that user in the dataset. CKGAN in the exemplary embodiments of the present disclosure performs better than CKAN.
Fig. 10 is a block diagram illustrating a recommendation device according to an exemplary embodiment of the present disclosure. Referring to fig. 10, the recommendation apparatus 1000 includes a data acquisition unit 1001, a model prediction unit 1002, and an item recommendation unit 1003, in which:
the data acquisition unit 1001 is configured to: acquiring data to be recommended, wherein the data to be recommended comprises a plurality of user data, a plurality of item data, a user knowledge graph constructed based on the user data and an item knowledge graph constructed based on the item data.
According to an exemplary embodiment of the present disclosure, the data to be recommended may further include, but is not limited to, at least one historical interaction data between a plurality of user data and a plurality of item data.
The model prediction unit 1002 is configured to: and inputting the data to be recommended into the trained recommendation model to obtain the preference score predicted value of each item data by each user data in the data to be recommended.
The recommendation model may include a spatial transformation network, which may be a TransR model, and a preference computation network, which may be an attention-based graph neural network model that includes multi-layer propagation, according to example embodiments of the present disclosure. The trained recommendation model in the examples of the present disclosure is obtained by training a TransR model together with an attention-based graph neural network model including multi-layer propagation through training samples.
According to an exemplary embodiment of the present disclosure, the preference score prediction value of each user data for each item data may be a prediction of the probability that the user acquires the item. For example, if any user data corresponds to a user named A who wants to purchase goods, and any item data corresponds to a commodity B, then the preference score prediction value of that user data for that item data is the predicted probability that user A purchases commodity B.
The item recommendation unit 1003 is configured to: and acquiring at least one item data for each user data as recommended item data based on the preference score predicted value of each item data by each user data in the data to be recommended.
According to an exemplary embodiment of the present disclosure, the item recommendation unit 1003 may arrange preference score prediction values of any user data in the data to be recommended for each item data in descending order of magnitude; acquiring item data corresponding to the preference score predicted values sorted from the first place to the Nth place as recommended item data, or acquiring item data corresponding to the preference score predicted values larger than a preset threshold value as recommended item data, wherein N is a preset integer larger than or equal to 1.
According to an exemplary embodiment of the present disclosure, take as an example any user data corresponding to a user named C who wants to watch a movie, and item data corresponding to movie D, movie E, movie F, movie G, movie H, movie I and movie J, where the preference score prediction values of C for movie D, movie E, movie F, movie G, movie H, movie I and movie J obtained by the trained recommendation model are 0.61, 0.82, 0.93, 0.45, 0.56, 0.77 and 0.95, respectively. Arranging the preference score prediction values of C for each item data in descending order gives: movie J (0.95), movie F (0.93), movie E (0.82), movie I (0.77), movie D (0.61), movie H (0.56) and movie G (0.45). If the preset threshold is 0.6, movie J, movie F, movie E, movie I and movie D are recommended to C.
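The selection logic of the item recommendation unit can be sketched as follows: sort one user's predicted preference scores in descending order and keep either the top N items or all items whose score exceeds the preset threshold. The dictionary reuses the movie scores from the worked example above; the function name is an assumption.

def recommend(scores, top_n=None, threshold=None):
    """Pick recommended items from a {item: predicted preference score} mapping."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if top_n is not None:
        return [item for item, _ in ranked[:top_n]]
    return [item for item, s in ranked if s > threshold]

scores = {"D": 0.61, "E": 0.82, "F": 0.93, "G": 0.45,
          "H": 0.56, "I": 0.77, "J": 0.95}
print(recommend(scores, threshold=0.6))   # ['J', 'F', 'E', 'I', 'D']
print(recommend(scores, top_n=3))         # ['J', 'F', 'E']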
Fig. 11 is a block diagram illustrating a training apparatus of a recommendation model according to an exemplary embodiment of the present disclosure. Referring to fig. 11, training apparatus 1100 includes a dataset acquisition unit 1101, a triplet acquisition unit 1102, a first replacement unit 1103, a spatial conversion network input unit 1104, a first loss function acquisition unit 1105, a second replacement unit 1106, an embedded representation acquisition unit 1107, a preference calculation network input unit 1108, a second loss function acquisition unit 1109, a model loss function acquisition unit 1110, and a parameter adjustment unit 1111.
The data set acquisition unit 1101 is configured to: a user data set is obtained based on the user historical knowledge-graph, and a project data set is obtained based on the project historical knowledge-graph, wherein the user data set comprises at least one user data and the project data set comprises at least one project data.
According to an example embodiment of the present disclosure, at least one historical interaction data between a plurality of user data and a plurality of project data may be included, but not limited to, in each of the user historical knowledge-graph and the project historical knowledge-graph.
The triplet acquisition unit 1102 is configured to: the method comprises the steps of obtaining first-class triples based on a user data set and a project data set, wherein each first-class triplet comprises a head entity, a relation and a tail entity, the head entity and the tail entity of each first-class triplet are user data in the user data set or project data in the project data set, and the head entity and the tail entity of each first-class triplet express correct knowledge through the relation.
According to an exemplary embodiment of the present disclosure, the fact that the head entity and the tail entity of each first-type triplet express correct knowledge through a relationship may mean that, for any first-type triplet, knowledge conveyed by the head entity and the tail entity based on the relationship may be acquired from a user historical knowledge graph and/or a project historical knowledge graph.
The first replacement unit 1103 is configured to: and randomly replacing tail entities in the first type of triples to obtain second type of triples, wherein the second type of triples correspond to the first type of triples one by one.
According to an exemplary embodiment of the present disclosure, the number of the first type triples and the number of the second type triples may be equal. The second type of triplet, which is a corrupted triplet with respect to the first type of triplet, does not express correct knowledge.
The space conversion network input unit 1104 is configured to: and inputting the first-class triples and the second-class triples into a space conversion network to obtain the embedded representation of the first-class triples, wherein the space conversion network is a TransR model.
According to an exemplary embodiment of the present disclosure, the acquired first-type triples may be used as positive samples of a spatial conversion network training process, and the acquired second-type triples may be used as negative samples of the spatial conversion network training process.
According to an example embodiment of the present disclosure, the space transformation network input unit 1104 may input the first type of triplet and the second type of triplet into the space transformation network, obtain a transformation matrix of the relationship in the first type of triplet, and obtain the embedded representation of the first type of triplet based on the transformation matrix of the relationship in the first type of triplet, where the transformation matrix is used to project the space of the head entity or the space of the tail entity to the relationship space.
According to an exemplary embodiment of the present disclosure, the embedded representation of the first type of triplet is represented by equation (1) above.
The first loss function acquisition unit 1105 is configured to: and acquiring a loss function of the space transformation network based on the first type of triple and the second type of triple.
According to an exemplary embodiment of the present disclosure, the loss function of the spatial transformation network is represented by equation (3) above.
According to an exemplary embodiment of the present disclosure, when the sum of the head-entity representation and the relation representation of a first-type triple is infinitely close to the tail-entity representation, the relationship shown in equation (6) above holds.
The second replacement unit 1106 is configured to: and randomly replacing the head entity, the relation and the tail entity of the first type of triple to obtain a third type of triple, wherein the third type of triple corresponds to the first type of triple one to one.
According to an exemplary embodiment of the present disclosure, the number of the first type triples and the number of the third type triples may be equal. The third type of triplet, which is a corrupted triplet with respect to the first type of triplet, expresses no correct knowledge.
The embedded representation acquisition unit 1107 is configured to: based on the embedded representation of the first type of triple, an embedded representation of a third type of triple is obtained.
According to an exemplary embodiment of the present disclosure, the embedded representation of a first-type triple is itself in triple form, and the embedded representation of the corresponding third-type triple is obtained by applying the same replacements used to obtain the third-type triples to the embedded representation of that first-type triple.
The preference calculation network input unit 1108 is configured to: input the embedded representations of the first-type triples and of the third-type triples into the preference computation network, obtain the attention representations of the user data and the item data in the embedded representations of the first-type triples and aggregate them layer by layer, obtain the model representation of the user data and the model representation of the item data, and obtain a preference score prediction value of each user data for each item data based on these model representations. Here, the attention representation of the user data is the representation obtained, based on attention weights, for the first-type triples whose head entity is the user data, and the attention representation of the item data is the representation obtained, based on attention weights, for the first-type triples whose head entity is the item data. The preference computation network is an attention-mechanism-based graph neural network model with multi-layer propagation, and the embedded representations of all first-type triples and all third-type triples are propagated in each layer of the network.
According to an exemplary embodiment of the present disclosure, the first type of triples may be used as a positive sample of the preference calculation network training process, and the third type of triples may be used as a negative sample of the preference calculation network training process.
According to an example embodiment of the present disclosure, the preference calculation network input unit 1108 may obtain the attention representation of the user data and the item data in the embedded representation of the first type of triple by:
the attention weight of the first type of triplet in the propagation of any layer of the preference calculation network can be obtained based on the ReLU function and the Sigmoid function, wherein the attention weight is the weight of the head entity to the tail entity, and the attention weight of the first type of triplet in the propagation of any layer of the preference calculation network can be expressed as the above formula (7);
the attention weights of the first-type triples in any layer of propagation of the preference calculation network can be normalized through a softmax function, and the normalized attention weights of the first-type triples whose head entity is user data or item data in that layer of propagation can be obtained through equation (9) above;
obtaining an attention representation of the first-class triples of the user data and the item data of the head entity in any layer of propagation of the preference calculation network based on the normalized attention weight of the first-class triples of the user data and the item data of the head entity in any layer of propagation of the preference calculation network, wherein the attention representation of the first-class triples of the user data and the item data of the head entity in any layer of propagation of the preference calculation network can be represented by the formula (10);
the attention representation of the first-type triples of user data and item data for the head entity in all layer propagations of the preference computation network can be integrated into a collection by equation (12) above.
According to an exemplary embodiment of the present disclosure, the preference calculation network input unit 1108 may perform layer-by-layer aggregation of the attention representations of the user data and the item data based on equation (14) above by way of a stitching aggregation.
According to an exemplary embodiment of the present disclosure, the preference calculation network input unit 1108 may obtain a preference score prediction value of each user data to each item data through equation (15) above.
The second loss function acquisition unit 1109 is configured to: obtain a loss function of the preference calculation network according to the preference score prediction value of each user data for each item data.
According to an exemplary embodiment of the present disclosure, the loss function of the preference calculation network may be represented as equation (16) above.
The model loss function acquisition unit 1110 is configured to: and obtaining the loss function of the recommendation model based on the loss function of the space transformation network and the loss function of the preference calculation network.
According to an exemplary embodiment of the present disclosure, the loss function of the recommendation model may be represented as equation (17) above.
The parameter adjustment unit 1111 is configured to: and training the recommendation model by adjusting the parameter set of the recommendation model according to the loss function of the recommendation model.
Specifically, the parameter set of the recommendation model may be adjusted according to the loss function of the recommendation model, and when the value of the loss function of the recommendation model is smaller than a specific threshold, the training of the recommendation model is ended.
Fig. 12 is a block diagram illustrating an electronic device 1200 according to an example embodiment of the present disclosure.
Referring to fig. 12, an electronic device 1200 includes at least one memory 1201 and at least one processor 1202, the at least one memory 1201 having stored therein a set of computer-executable instructions that, when executed by the at least one processor 1202, perform a recommendation method in accordance with an example embodiment of the present disclosure.
By way of example, the electronic device 1200 may be a PC computer, tablet device, personal digital assistant, smartphone, or other device capable of executing the set of instructions described above. Here, the electronic device 1200 need not be a single electronic device, but can be any collection of devices or circuits that can execute the above instructions (or sets of instructions) individually or in combination. The electronic device 1200 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces with local or remote devices (e.g., via wireless transmission).
In the electronic device 1200, the processor 1202 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a programmable logic device, a special purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processors may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
The processor 1202 may execute instructions or code stored in the memory 1201, where the memory 1201 may also store data. The instructions and data may also be transmitted or received over a network via a network interface device, which may employ any known transmission protocol.
The memory 1201 may be integrated with the processor 1202, for example, by having RAM or flash memory disposed within an integrated circuit microprocessor or the like. Further, memory 1201 may include a stand-alone device, such as an external disk drive, storage array, or any other storage device usable by a database system. The memory 1201 and the processor 1202 may be operatively coupled or may communicate with each other, e.g., through I/O ports, network connections, etc., such that the processor 1202 is able to read files stored in the memory.
In addition, the electronic device 1200 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the electronic device 1200 may be connected to each other via a bus and/or a network.
According to an exemplary embodiment of the present disclosure, there may also be provided a computer-readable storage medium storing instructions, which when executed by at least one processor, cause the at least one processor to perform a recommendation method according to the present disclosure. Examples of the computer-readable storage medium here include: read-only memory (ROM), random-access programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or compact disc memory, a hard disk drive (HDD), a solid-state drive (SSD), card-type memory (such as a multimedia card, a Secure Digital (SD) card or an eXtreme Digital (XD) card), magnetic tape, a floppy disk, a magneto-optical data storage device, an optical data storage device, a magnetic data storage device, and any other device configured to store and provide a computer program and any associated data, data files, and data structures to a processor or computer in a non-transitory manner such that the processor or computer can execute the computer program. The computer program in the computer-readable storage medium described above can be run in an environment deployed in a computer apparatus, such as a client, a host, a proxy device, a server, and the like; further, in one example, the computer program and any associated data, data files, and data structures are distributed across a networked computer system such that the computer program and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
According to an exemplary embodiment of the present disclosure, a computer program product may also be provided, in which instructions are executable by a processor of a computer device to perform a recommendation method according to an exemplary embodiment of the present disclosure.
According to the recommendation method and apparatus of the present disclosure, the TransR model is adopted as the spatial transformation network in the recommendation model, the correct knowledge expressed in the form of triples is effectively represented, a more efficient fusion of collaborative information and knowledge propagation is realized, and the propagation of the embedded representations of the first-type triples in the preference computation network is promoted. The trained TransR model can perform spatial transformation on entities, can effectively handle complex relationships among entities, measures the correlation of knowledge, and thus effectively describes the relationships between entities. The loss function of the spatial transformation network takes into account the correlation between the first-type triples and the second-type triples, so that the pairwise ranking loss can be reduced at the granularity of the first-type triples, the knowledge representation capability of the recommendation model is improved, and the potential relationships between entities in the knowledge graph are strengthened. Compared with CKAN, the influence of the distance between different semantic spaces on the knowledge representation can be reduced.
In addition, according to the recommendation method and apparatus of the present disclosure, the user data set is obtained based on the user historical knowledge graph, the project data set is obtained based on the project historical knowledge graph, and the first-type triples including a head entity, a relation and a tail entity are then obtained, which can improve the representation capability of knowledge.
In addition, according to the recommendation method and apparatus of the present disclosure, the attention weights obtained in the preference computation network are based on the embedded representations produced by the spatial transformation network, and are more discriminative than the attention weights obtained by CKAN.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (16)

1. A recommendation method, comprising:
acquiring data to be recommended, wherein the data to be recommended comprises a plurality of user data, a plurality of item data, a user knowledge graph constructed based on the plurality of user data and an item knowledge graph constructed based on the plurality of item data;
inputting the data to be recommended into a trained recommendation model to obtain a preference score predicted value of each item data by each user data in the data to be recommended;
and acquiring at least one item data for each user data as recommended item data based on the preference score predicted value of each item data by each user data in the data to be recommended.
2. The recommendation method of claim 1, wherein the recommendation model comprises a spatial transformation network and a preference computation network, the recommendation model being trained by:
acquiring a user data set based on a user historical knowledge graph and acquiring a project data set based on a project historical knowledge graph, wherein the user data set comprises at least one piece of user data, and the project data set comprises at least one piece of project data;
acquiring first-class triples based on the user data set and the project data set, wherein each first-class triplet comprises a head entity, a relation and a tail entity, the head entity and the tail entity of each first-class triplet are user data in the user data set or project data in the project data set, and the head entity and the tail entity of each first-class triplet express correct knowledge through the relation;
randomly replacing tail entities in the first type of triples to obtain second type of triples, wherein the second type of triples correspond to the first type of triples one by one;
inputting a first type of triple and a second type of triple into the space conversion network to obtain an embedded representation of the first type of triple, wherein the space conversion network is a TransR model;
acquiring a loss function of the space transformation network based on the first type of triple and the second type of triple;
randomly replacing a head entity, a relation and a tail entity of the first type of triple to obtain a third type of triple, wherein the third type of triple corresponds to the first type of triple one to one;
acquiring an embedded representation of a third type of triple based on the embedded representation of the first type of triple;
inputting the embedded representations of the first-type triples and of the third-type triples into the preference calculation network, obtaining the attention representations of the user data and the item data in the embedded representations of the first-type triples and aggregating them layer by layer, obtaining a model representation of the user data and a model representation of the item data, and obtaining a preference score prediction value of each user data for each item data based on the model representation of the user data and the model representation of the item data, wherein the attention representation of the user data is the representation obtained, based on attention weights, for the first-type triples whose head entity is the user data, the attention representation of the item data is the representation obtained, based on attention weights, for the first-type triples whose head entity is the item data, and the preference calculation network is an attention-mechanism-based graph neural network model comprising multi-layer propagation, with the embedded representations of all first-type triples and all third-type triples propagated in each layer of the preference calculation network;
obtaining a loss function of the preference calculation network according to the preference score prediction value of each user data for each item data;
obtaining a loss function of the recommendation model based on the loss function of the spatial transformation network and the loss function of the preference calculation network;
and training the recommendation model by adjusting the parameter set of the recommendation model according to the loss function of the recommendation model.
3. The recommendation method of claim 2, wherein said inputting a first type of triplet and a second type of triplet into said spatial transformation network to obtain an embedded representation of the first type of triplet comprises:
inputting the first-type triples and the second-type triples into the space conversion network, obtaining a transformation matrix of the relationship in the first-type triples, and obtaining the embedded representation of the first-type triples based on the transformation matrix of the relationship in the first-type triples, wherein the transformation matrix is used for projecting the space of the head entity or the space of the tail entity to the relationship space.
4. A recommendation method according to claim 3, wherein the embedded representation of the first type of triplet is represented as:
the first-type triple in which the space of the head entity and the space of the tail entity have both been projected into the relation space through the transformation matrix of the relation, for each layer of propagation of the preference calculation network, wherein the head entity of the first-type triple is user data or item data, the head entity, the relation, the tail entity and the transformation matrix of the relation are as defined in claim 3, and the first-type triples propagated in each layer, up to the total number of layers of the preference calculation network, are defined recursively over the user historical knowledge graph and the project historical knowledge graph.
5. The recommendation method of claim 4, wherein the loss function of the spatial transformation network is expressed as:
a loss computed over the first-type triples of the user historical knowledge graph and the project historical knowledge graph together with their corresponding second-type triples, wherein for any first-type triple the loss is obtained from the similarity score of that triple, the similarity score of the corresponding second-type triple whose tail entity has been randomly replaced, and the sigmoid function, and wherein the similarity scores are computed from the embedded representations of the head entity, the relation and the tail entity together with the transformation matrix of the relation.
6. The recommendation method of claim 5, wherein said inputting the embedded representation of the first type of triplet and the embedded representation of the third type of triplet into the preference calculation network to obtain an attention representation of the user data and the item data in the embedded representation of the first type of triplet comprises:
acquiring attention weight of a first type of triple in any layer propagation of the preference calculation network based on a ReLU function and a Sigmoid function, wherein the attention weight is the weight of a head entity to a tail entity;
normalizing the attention weight of the first type of triple in any layer propagation of the preference calculation network through a softmax function, and acquiring the normalized attention weight of the first type of triple of which the head entity is user data and item data in any layer propagation of the preference calculation network;
acquiring attention representation of the first type of triple of the user data and the item data of the head entity in any layer of propagation of the preference calculation network based on the normalized attention weight of the first type of triple of the user data and the item data of the head entity in any layer of propagation of the preference calculation network;
integrating into a set the attention representations of the head entities as the first type of triples of user data and item data in all layer propagations of the preference computation network.
7. The recommendation method of claim 6, wherein the attention weight of the first type of triplet in any layer propagation of the preference computation network is expressed as:
an attention weight computed, for the i-th first-type triple in that layer, from the relation-space embedded representation of the head entity of the triple and the relation of the triple, using the sigmoid function, the ReLU non-linear activation function and the concatenation operation ||, with the weight matrices and bias parameters involved all belonging to the parameter set of the recommendation model;
and wherein the normalized attention weight of the first-type triples whose head entity is user data or item data in any layer of propagation of the preference calculation network is obtained by normalizing, through a softmax function, the attention weight of the i-th first-type triple together with the attention weights of the corresponding third-type triples in that layer, the attention weight of the i-th third-type triple being computed in the same manner from the relation-space embedded representation of its head entity and its relation.
8. The recommendation method of claim 7, wherein propagating at any layer of the preference computation network an attention representation of a first type of triplet of head entities for user data and item data is represented as:
Figure 256415DEST_PATH_IMAGE053
wherein the content of the first and second substances,
Figure 204780DEST_PATH_IMAGE054
wherein the content of the first and second substances,
Figure 172736DEST_PATH_IMAGE055
computing a network's second for the preference
Figure 957283DEST_PATH_IMAGE004
The head entity in the layer propagation is an attention representation of a first type of triplet of user data or item data,
Figure 668887DEST_PATH_IMAGE056
computing a network's second for the preference
Figure 534075DEST_PATH_IMAGE004
The header entity in the layer propagation is the total number of triples of the first type of user data or item data,
Figure 610616DEST_PATH_IMAGE057
computing a network's second for the preference
Figure 194044DEST_PATH_IMAGE004
The embedded representation of any triplet of the first type in the layer propagation,
Figure 896289DEST_PATH_IMAGE058
computing a first in any layer propagation of a network for the preference
Figure 615984DEST_PATH_IMAGE031
The individual head entity being user data or
Figure 925742DEST_PATH_IMAGE031
The individual head entity being project dataThe attention representation of the first type of triplet,
Figure 934150DEST_PATH_IMAGE059
computing a first in any layer propagation of a network for the preference
Figure 987556DEST_PATH_IMAGE031
The attention representation of the first-type triplet,
Figure 827336DEST_PATH_IMAGE060
computing a first in any layer propagation of a network for the preference
Figure 16919DEST_PATH_IMAGE031
The embedded representation of the tail entity of the first type of triple in the relation space;
and wherein the attention representations of the first-type triplets whose head entity is user data or item data in all layer propagations of the preference computation network are integrated into sets by:

$$\mathcal{A}_u = \{e_u^{(1)}, e_u^{(2)}, \ldots, e_u^{(L)}\},$$

wherein

$$\mathcal{A}_v = \{e_v^{(0)}, e_v^{(1)}, e_v^{(2)}, \ldots, e_v^{(L)}\}, \qquad e_v^{(0)} \in \mathcal{E},$$

where $\mathcal{A}_u$ is the set of attention representations of the first-type triplets whose head entity is user data; $e_u^{(1)}$, $e_u^{(2)}$ and $e_u^{(L)}$ are the attention representations of the first-type triplets whose head entity is user data in the 1st, 2nd and $L$-th layer of the preference computation network, respectively; $\mathcal{A}_v$ is the set of attention representations of the first-type triplets whose head entity is item data; $e_v^{(0)}$ is the initial attention representation of all first-type triplets whose head entity is item data; $e_v^{(1)}$, $e_v^{(2)}$ and $e_v^{(L)}$ are the attention representations of the first-type triplets whose head entity is item data in the 1st, 2nd and $L$-th layer of the preference computation network, respectively; $\mathcal{E}$ is the encoding set of the item data in the item historical knowledge graph; the subscript $v$ indicates that the head entity is item data; and the subscript $u$ indicates that the head entity is user data.
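For illustration only, not the patented formulation: a minimal numpy sketch of the layer-wise propagation described in claim 8, in which each layer's attention representation is a weighted sum of the tail-entity embeddings of the first-type triplets anchored at the user or item, and the per-layer results are collected into a set. The softmax normalisation and all variable names are assumptions.

```python
# Hypothetical sketch (numpy): one propagation layer forms the attention
# representation as an attention-weighted sum of tail-entity embeddings;
# the per-layer results are then collected into a set (claim 8).
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def propagate_layer(raw_scores, tail_embeddings):
    """Weighted sum of the tail-entity embeddings (in relation space) of the
    first-type triplets whose head entity is the current user or item."""
    attn = softmax(raw_scores)                     # attention of each triplet
    per_triplet = attn[:, None] * tail_embeddings  # attention representation of each triplet
    return per_triplet.sum(axis=0)                 # layer-level attention representation

# Toy usage: collect the representations of 3 propagation layers into a set.
rng = np.random.default_rng(0)
layer_reps = []
for _ in range(3):
    scores = rng.normal(size=5)        # unnormalised attention of 5 triplets
    tails = rng.normal(size=(5, 16))   # tail-entity embeddings in relation space
    layer_reps.append(propagate_layer(scores, tails))
```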
9. The recommendation method of claim 8, wherein the attention representations of the user data and the item data are aggregated layer by layer by:

$$e_p = W\left(\big\Vert_{x \in \mathcal{A}_p}\, x\right) + b, \qquad p \in \{u, v\},$$

where $e_p$ is the layer-by-layer aggregated attention representation of the user data or item data; the subscript $p$ indicates that the head entity is user data ($p = u$) or that the head entity is item data ($p = v$); $x$ ranges over the attention representations, in the successive layer propagations of the preference computation network, of the first-type triplets whose head entity is the user data or whose head entity is the item data; $\Vert$ is the splicing operation; and $W$ and $b$ are both parameters in the parameter set of the recommendation model.
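For illustration only: a sketch of the layer-by-layer aggregation of claim 9, splicing (concatenating) the per-layer attention representations and projecting them back to the embedding size with parameters W and b. The linear form without a nonlinearity and the dimensions chosen here are assumptions.

```python
# Hypothetical sketch, not the patented formulation: aggregate the per-layer
# attention representations by concatenation followed by a learned projection.
import numpy as np

def aggregate_layers(layer_reps, W, b):
    concat = np.concatenate(layer_reps)  # '|' splicing operation over the layer set
    return W @ concat + b                # aggregated user or item representation

dim, n_layers = 16, 3
rng = np.random.default_rng(1)
layer_reps = [rng.normal(size=dim) for _ in range(n_layers)]
W = rng.normal(size=(dim, dim * n_layers)) * 0.1   # trainable projection (assumption)
b = np.zeros(dim)                                  # trainable bias (assumption)
user_rep = aggregate_layers(layer_reps, W, b)
```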
10. The recommendation method of claim 9, wherein the preference score prediction value of each user data for each item data is obtained by:

$$\hat{y}_{uv} = e_u^{\top} e_v,$$

where $\hat{y}_{uv}$ is the preference score prediction value of any user data for any item data; $e_u^{\top}$ is the transpose of the model representation of the any user data; $e_v$ is the model representation of the any item data; the model representation of any user data is acquired based on the attention representation of the user data after layer-by-layer aggregation; and the model representation of any item data is acquired based on the attention representation of the item data after layer-by-layer aggregation.
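For illustration only: the preference score of claim 10 as an inner product of the aggregated user and item representations. The sigmoid squashing to (0, 1) is an added assumption so the score can be compared with a binary interaction label in the loss below.

```python
# Hypothetical sketch: inner-product preference score, squashed to (0, 1).
import numpy as np

def preference_score(user_rep, item_rep):
    return 1.0 / (1.0 + np.exp(-float(user_rep @ item_rep)))

rng = np.random.default_rng(2)
print(preference_score(rng.normal(size=16), rng.normal(size=16)))
```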
11. The recommendation method of claim 10, wherein the loss function of the preference computation network is expressed as:

$$\mathcal{L}_P = \sum_{u}\sum_{v} J\!\left(y_{uv},\, \hat{y}_{uv}\right),$$

where $\mathcal{L}_P$ is the loss function of the preference computation network; the index $v$ indicates that the head entity is item data and the index $u$ indicates that the head entity is user data, the head entities being taken from the first type of triplet $\mathcal{G}_1$ and the third type of triplet $\mathcal{G}_3$; $J(y_{uv}, \hat{y}_{uv})$ denotes solving the cross-entropy loss over $(y_{uv}, \hat{y}_{uv})$; $y_{uv}$ is the preference score true value of the any user data for the any item data; and $\hat{y}_{uv}$ is the preference score prediction value of the any user data for the any item data.
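For illustration only: a sketch of the cross-entropy loss of claim 11 over true interaction labels and predicted preference scores; summing over all observed user-item pairs is an assumption.

```python
# Hypothetical sketch: cross-entropy loss of the preference computation network.
import numpy as np

def preference_loss(y_true, y_pred, eps=1e-12):
    y_pred = np.clip(y_pred, eps, 1.0 - eps)          # guard against log(0)
    return float(-np.sum(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred)))

print(preference_loss(np.array([1.0, 0.0, 1.0]), np.array([0.9, 0.2, 0.7])))
```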
12. The recommendation method of claim 11, wherein the loss function of the recommendation model is expressed as:

$$\mathcal{L} = \mathcal{L}_T + \mathcal{L}_P + \lambda\,\lVert\Theta\rVert_2^2,$$

wherein

$$\Theta = \{\mathbf{E}, \mathbf{R}, \mathbf{W}\},$$

where $\mathcal{L}$ is the loss function of the recommendation model; $\mathcal{L}_T$ is the loss function of the space conversion network; $\mathcal{L}_P$ is the loss function of the preference computation network; $\lambda$ is an adjustable parameter; $\Theta$ is the parameter set of the recommendation model; $\mathbf{E}$ is the set of embedded representations of the head entities and tail entities of the first type of triplet; $\mathbf{R}$ is the set of embedded representations of the relations of the first type of triplet; and $\mathbf{E}$, $\mathbf{R}$ and $\mathbf{W}$ are all parameters in $\Theta$.
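For illustration only: a sketch of the combined loss of claim 12, adding the space conversion (TransR) loss, the preference computation loss and an L2 penalty on the parameter set weighted by the adjustable parameter; the parameter arrays used below are stand-ins.

```python
# Hypothetical sketch: overall loss = TransR loss + preference loss + lam * ||Theta||_2^2.
import numpy as np

def total_loss(transr_loss, pref_loss, params, lam=1e-5):
    l2 = sum(float(np.sum(p ** 2)) for p in params)   # L2 norm over all parameter arrays
    return transr_loss + pref_loss + lam * l2

params = [np.ones((4, 4)), np.ones(4)]                # stand-ins for embedding and relation parameters
print(total_loss(transr_loss=0.8, pref_loss=1.2, params=params))
```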
13. The recommendation method according to claim 1, wherein the acquiring at least one item data for each user data as recommended item data based on the preference score prediction value of each user data for each item data in the data to be recommended comprises:

arranging the preference score prediction values of any user data in the data to be recommended for each item data in descending order of magnitude; and

acquiring the item data corresponding to the preference score prediction values ranked from the first place to the Nth place as the recommended item data, or acquiring the item data corresponding to the preference score prediction values larger than a preset threshold value as the recommended item data, wherein N is a preset integer larger than or equal to 1.
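For illustration only: a sketch of the selection step of claim 13, returning either the items whose predicted scores rank in the top N or the items whose predicted scores exceed a preset threshold; the function and argument names are assumptions.

```python
# Hypothetical sketch: top-N or threshold-based selection of recommended items.
import numpy as np

def recommend(scores, item_ids, top_n=None, threshold=None):
    order = np.argsort(scores)[::-1]                  # descending by predicted score
    if top_n is not None:
        picked = order[:top_n]
    elif threshold is not None:
        picked = [i for i in order if scores[i] > threshold]
    else:
        raise ValueError("give either top_n or threshold")
    return [item_ids[i] for i in picked]

print(recommend(np.array([0.91, 0.15, 0.77]), ["i1", "i2", "i3"], top_n=2))  # ['i1', 'i3']
```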
14. A recommendation device, comprising:
a data acquisition unit configured to: acquiring data to be recommended, wherein the data to be recommended comprises a plurality of user data, a plurality of item data, a user knowledge graph constructed based on the plurality of user data and an item knowledge graph constructed based on the plurality of item data;
a model prediction unit configured to: inputting the data to be recommended into a trained recommendation model to obtain a preference score predicted value of each item data by each user data in the data to be recommended;
an item recommendation unit configured to: and acquiring at least one item data for each user data as recommended item data based on the preference score predicted value of each item data by each user data in the data to be recommended.
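For illustration only: a skeletal Python arrangement mirroring the three units of claim 14; the callables passed in (a data loader, a trained model, and a selection routine) are placeholders, not the patented implementation.

```python
# Hypothetical skeleton, not from the patent: the recommendation device of
# claim 14 as three cooperating units.
class RecommendationDevice:
    def __init__(self, load_data, model, select_items):
        self.load_data = load_data        # data acquisition unit
        self.model = model                # model prediction unit (trained recommendation model)
        self.select_items = select_items  # item recommendation unit

    def run(self):
        data = self.load_data()                       # users, items and the two knowledge graphs
        scores = self.model(data)                     # preference score predictions per user
        return {u: self.select_items(s) for u, s in scores.items()}

# Toy usage with stand-in callables.
device = RecommendationDevice(
    load_data=lambda: {"users": ["u1"], "items": ["i1", "i2"]},
    model=lambda data: {"u1": {"i1": 0.9, "i2": 0.2}},
    select_items=lambda s: [i for i, v in sorted(s.items(), key=lambda kv: -kv[1])[:1]],
)
print(device.run())   # {'u1': ['i1']}
```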
15. An electronic device, comprising:
at least one processor;
at least one memory storing computer-executable instructions,
wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the recommendation method of any one of claims 1 to 13.
16. A computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform the recommendation method of any of claims 1 to 13.
CN202111104093.9A 2021-09-22 2021-09-22 Recommendation method and device Active CN113570058B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111104093.9A CN113570058B (en) 2021-09-22 2021-09-22 Recommendation method and device


Publications (2)

Publication Number Publication Date
CN113570058A true CN113570058A (en) 2021-10-29
CN113570058B CN113570058B (en) 2022-01-28

Family

ID=78173879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111104093.9A Active CN113570058B (en) 2021-09-22 2021-09-22 Recommendation method and device

Country Status (1)

Country Link
CN (1) CN113570058B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3106283A1 (en) * 2020-01-21 2021-07-21 Royal Bank Of Canada System and method for out-of-sample representation learning
WO2021179834A1 (en) * 2020-03-10 2021-09-16 支付宝(杭州)信息技术有限公司 Heterogeneous graph-based service processing method and device
CN111522962A (en) * 2020-04-09 2020-08-11 苏州大学 Sequence recommendation method and device and computer-readable storage medium
CN112989064A (en) * 2021-03-16 2021-06-18 重庆理工大学 Recommendation method for aggregating knowledge graph neural network and self-adaptive attention
CN113032618A (en) * 2021-03-26 2021-06-25 齐鲁工业大学 Music recommendation method and system based on knowledge graph
CN113010691A (en) * 2021-03-30 2021-06-22 电子科技大学 Knowledge graph inference relation prediction method based on graph neural network
CN112905900A (en) * 2021-04-02 2021-06-04 辽宁工程技术大学 Collaborative filtering recommendation algorithm based on graph convolution attention mechanism

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DGL专栏: "In-depth Understanding of the Graph Attention Mechanism", HTTPS://WWW.JIQIZHIXIN.COM/ARTICLES/2019-02-19-7 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114780867A (en) * 2022-05-10 2022-07-22 杭州网易云音乐科技有限公司 Recommendation method, medium, device and computing equipment
CN114780867B (en) * 2022-05-10 2023-11-03 杭州网易云音乐科技有限公司 Recommendation method, medium, device and computing equipment
CN115587875A (en) * 2022-11-10 2023-01-10 广州科拓科技有限公司 Textile e-commerce recommendation method and device based on balanced perception attention network

Also Published As

Publication number Publication date
CN113570058B (en) 2022-01-28

Similar Documents

Publication Publication Date Title
US11694122B2 (en) Distributed machine learning systems, apparatus, and methods
US11710071B2 (en) Data analysis and rendering
Emura et al. A joint frailty-copula model between tumour progression and death for meta-analysis
JP6445055B2 (en) Feature processing recipe for machine learning
JP6789934B2 (en) Learning with transformed data
Awan et al. Feature selection and transformation by machine learning reduce variable numbers and improve prediction for heart failure readmission or death
Häggström Data‐driven confounder selection via Markov and Bayesian networks
CN113570058B (en) Recommendation method and device
Weisberg et al. Post hoc subgroups in clinical trials: Anathema or analytics?
US20220188654A1 (en) System and method for clinical trial analysis and predictions using machine learning and edge computing
CN115885297A (en) Differentiable user-item collaborative clustering
Martin et al. Clinical prediction in defined populations: a simulation study investigating when and how to aggregate existing models
Shandilya et al. Mature-food: Food recommender system for mandatory feature choices a system for enabling digital health
CN111047009B (en) Event trigger probability prediction model training method and event trigger probability prediction method
Ge et al. CausalMGM: an interactive web-based causal discovery tool
Zhao et al. Comparing two machine learning approaches in predicting lupus hospitalization using longitudinal data
Mendelevitch et al. Practical Data Science with Hadoop and Spark: Designing and Building Effective Analytics at Scale
CN115080856A (en) Recommendation method and device and training method and device of recommendation model
US20220012236A1 (en) Performing intelligent affinity-based field updates
US20210174912A1 (en) Data processing systems and methods for repurposing drugs
CN113609311A (en) Method and device for recommending items
Meng Cross-domain information fusion and personalized recommendation in artificial intelligence recommendation system based on mathematical matrix decomposition
Inibhunu et al. Fusing dimension reduction and classification for mining interesting frequent patterns in patients data
US20240071623A1 (en) Patient health platform
Chen et al. Traffic Flow Prediction Based on Interactive Dynamic Spatio-Temporal Graph Convolution with a Probabilistic Sparse Attention Mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant