CN114519097B - Academic paper recommendation method for heterogeneous information network enhancement - Google Patents

Academic paper recommendation method for heterogeneous information network enhancement

Info

Publication number
CN114519097B
CN114519097B (application CN202210418401.3A)
Authority
CN
China
Prior art keywords
matrix
thesis
papers
features
paper
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210418401.3A
Other languages
Chinese (zh)
Other versions
CN114519097A (en
Inventor
刘柏嵩
吴俊超
沈小烽
张雪垣
王冰源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo University
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN202210418401.3A priority Critical patent/CN114519097B/en
Publication of CN114519097A publication Critical patent/CN114519097A/en
Application granted granted Critical
Publication of CN114519097B publication Critical patent/CN114519097B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/335 Filtering based on additional data, e.g. user or group profiles
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods

Abstract

The invention discloses a heterogeneous-information-network-enhanced academic paper recommendation method, which comprises the following steps: step 1, constructing a heterogeneous information network that contains 3 types of nodes (users, papers and tags) and 3 types of relations (the interaction relation between users and papers, the citation relation between papers, and the subordination relation between papers and tags); step 2, learning the interaction features of users and papers with a matrix factorization algorithm; step 3, feeding the interaction features into a heterogeneous graph attention network to learn the high-order features of papers in the heterogeneous information network; step 4, fusing the features learned in steps 2 and 3 by computing their outer product; and step 5, feeding the features fused in step 4 into a deep recommendation model to predict scores. The invention alleviates the problem of sparse interaction data by using a heterogeneous information network and can improve recommendation accuracy.

Description

Academic paper recommendation method for heterogeneous information network enhancement
Technical Field
The invention relates to an academic paper recommendation method, in particular to an academic paper recommendation method based on heterogeneous information network enhancement.
Background
With the explosive growth in the volume of published academic output and the rapid iteration of knowledge, researchers find it difficult to locate academic papers that meet their needs and face an increasingly serious problem of paper information overload. Academic Paper Recommendation Systems (APRS), which recommend papers to researchers accurately, are becoming an indispensable tool. Collaborative Filtering (CF) is widely used in recommendation systems; it predicts personalized user preferences by mining users' historical interactions. However, CF cannot produce robust performance when the interaction matrix is very sparse. In recent years, many methods have been proposed that exploit various kinds of auxiliary information to improve recommendation performance. In paper recommendation, two kinds of auxiliary information are widely adopted: textual information and structural information.
Textual information enhances paper features with titles and abstracts, but the recommendation lists it produces tend to contain papers on similar topics and lack diversity and novelty. Structural information can be divided into citation networks and Heterogeneous Information Networks (HIN). A citation network reflects the relations between papers, but suffers from the cold-start problem of newly published papers with zero citations. A heterogeneous information network fuses multi-source data, and its rich semantic information can effectively alleviate these problems; how to enhance paper recommendation with a heterogeneous information network has therefore attracted the interest and attention of researchers. The basic idea of existing HIN-based paper recommendation methods, such as PRHNE and HGRec, is to learn user and paper features with a graph embedding method, compute user-paper scores from the learned features, and generate recommendations by ranking the scores. Although these methods effectively improve paper recommendation performance, they have the following problems:
1) when learning features, only the first-order or second-order similarity between network nodes is considered, and the high-order connection relations between nodes are not mined;
2) when generating recommendations, only the cosine similarity between user and paper features is considered; the complex interaction relations between users and papers are not mined, so the enhancement that the heterogeneous information network could bring to recommendation performance is not fully exploited.
Therefore, there is a need to develop a new academic paper recommendation method based on heterogeneous information network enhancement to solve the existing problems.
Disclosure of Invention
The invention aims to provide a heterogeneous-information-network-enhanced academic paper recommendation method. The invention alleviates the problem of sparse interaction data by using a heterogeneous information network and can improve recommendation accuracy.
The technical scheme of the invention is as follows. A heterogeneous-information-network-enhanced academic paper recommendation method comprises the following steps:
step 1, constructing a heterogeneous information network that contains 3 types of nodes (users, papers and tags) and 3 types of relations (the interaction relation between users and papers, the citation relation between papers, and the subordination relation between papers and tags);
step 2, learning the interaction features of users and papers with a matrix factorization algorithm;
step 3, feeding the interaction features into a heterogeneous graph attention network to learn the high-order features of papers in the heterogeneous information network;
step 4, fusing the features learned in steps 2 and 3 by computing their outer product;
step 5, feeding the features fused in step 4 into a deep recommendation model to predict scores.
In the heterogeneous-information-network-enhanced academic paper recommendation method, step 1 constructs the heterogeneous information network from the CiteULike data set; the 3 node types user, paper and tag are denoted by the symbols U, P and T respectively. Step 1 specifically comprises the following sub-steps:
sub-step 1.1, converting the data in the source files into triples (h, t, r), where h is the id of the head node, t is the id of the tail node, and r is the type of relation between head node h and tail node t;
sub-step 1.2, building the relation matrices from the triples. There are 3 relation matrices: a |U| × |P| user-paper interaction matrix, a |P| × |P| paper-paper citation matrix, and a |P| × |T| paper-tag subordination matrix. Each matrix is built as follows: first initialize it with zeros, then locate the matrix element addressed by each triple, using the head node as the row index and the tail node as the column index, and set that element to 1;
sub-step 1.3, aligning the paper id numbers across the relation matrices and building the heterogeneous information network.
In the aforementioned heterogeneous-information-network-enhanced academic paper recommendation method, step 2 specifically comprises the following sub-steps:
sub-step 2.1, randomly initializing the user interaction feature matrix P and the paper interaction feature matrix Q;
sub-step 2.2, updating P and Q according to a loss function that measures the reconstruction error of the user-paper interaction matrix from the product of P and Q and adds a regularization term, where the u-th row of P is the interaction feature of user u, the p-th row of Q is the interaction feature of paper p, d denotes the dimension of the interaction features, and the regularization coefficient prevents overfitting; P and Q are updated iteratively until the loss no longer decreases.
In the aforementioned heterogeneous-information-network-enhanced academic paper recommendation method, step 3 specifically comprises the following sub-steps:
sub-step 3.1, given the set of paper-related meta-paths (PUP, PP and PTP), computing a meta-path neighbor matrix for each meta-path from the relation matrices obtained in sub-step 1.2: the PUP neighbor matrix is computed from the user-paper interaction matrix and its transpose, the PP neighbor matrix from the paper-paper citation matrix, and the PTP neighbor matrix from the paper-tag matrix and its transpose. Each computed neighbor matrix is then converted into a binary matrix: a threshold is set, and the element in row i and column j of the neighbor matrix is set to 1 if it is greater than the threshold and to 0 otherwise. This yields 3 binary neighbor matrices, one per meta-path, in which a value of 1 indicates a neighbor relation and a value of 0 indicates no neighbor relation;
sub-step 3.2, aggregating the neighbor features based on the binary neighbor matrices. Node-level attention is introduced so that the meaningful neighbor features are aggregated when learning the target node features: the weight coefficient of paper node j with respect to target paper node i is obtained by applying an activation function to the product of the transposed node-attention query vector and the concatenation of the interaction features of nodes i and j, normalized with the exponential (softmax) function; the neighbor information is then aggregated according to these weight coefficients, giving the feature of paper i aggregated over its neighbors under the specific meta-path;
sub-step 3.3, the node attention layer yields the paper features under the different meta-paths; meta-path-level attention is introduced to fuse the paper features under the different meta-paths and learn the first-order features of papers in the heterogeneous information network G, using the weight matrix, bias and query vector of the meta-path attention to produce a weight coefficient for each of the meta-paths;
sub-step 3.4, learning the high-order paper features by iterating through L layers of the heterogeneous graph attention network, combining the paper features obtained by the attention layers at the different depths.
In the aforementioned heterogeneous-information-network-enhanced academic paper recommendation method, before step 4 is performed, steps 2 and 3 have produced the user interaction features, the paper interaction features and the paper network node features; the paper interaction features and the paper network node features are summed to obtain the new paper features.
In the aforementioned heterogeneous-information-network-enhanced academic paper recommendation method, step 4 takes the new paper features and fuses the user and paper features with an outer product to obtain an interaction map, which is a two-dimensional matrix whose entries are the products of the entries of the feature vectors of the specific user and paper, indexed by their positions in the two vectors.
In the aforementioned heterogeneous-information-network-enhanced academic paper recommendation method, step 5 specifically comprises the following sub-steps:
sub-step 5.1, constructing a neural network of 6 convolutional layers and 1 fully connected layer, where each convolutional layer has 32 kernels of size 2 × 2 with stride 2 and the fully connected layer has dimension 32 × 1; the interaction map obtained in step 4 is then fed into the convolutional network to predict the score. Each layer applies its convolution kernels and bias term followed by the ReLU activation function, the output of the last convolutional layer is flattened from a matrix into a vector, and the weight and bias of the fully connected layer map this vector to the predicted score;
sub-step 5.2, choosing BPR as the loss function, which optimizes the model parameters by maximizing the scoring distance between positive and negative samples: over the training set, for each user, the score the model predicts for one of the user's positive-sample papers is compared with the score predicted for one of the user's negative-sample papers through the activation function, and a regularization coefficient prevents overfitting.
Compared with the prior art, the invention has the following beneficial effects: it provides a new approach to academic paper recommendation that mines the high-order relations between nodes and the complex interaction relations between users and papers, which existing paper recommendation algorithms do not, and it alleviates the problem of sparse user-paper interaction data by means of the heterogeneous information network.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is an exemplary diagram of a heterogeneous information network;
FIG. 3 is one part of the diagram of the academic paper recommendation model enhanced by the heterogeneous information network;
FIG. 4 is the other part of the diagram of the academic paper recommendation model enhanced by the heterogeneous information network.
Detailed Description
The invention is further described with reference to the following figures and examples, which are not to be construed as limiting the invention.
Embodiment: a heterogeneous-information-network-enhanced academic paper recommendation method, whose flow is shown in FIG. 1.
Step 1, constructing a heterogeneous information network.
FIG. 2 shows the heterogeneous information network constructed from the CiteULike data set: part (a) of FIG. 2 shows the node types, (b) the heterogeneous information network, (c) the meta-paths, and (d) the meta-path neighbors. The network contains 3 types of nodes (user U, paper P and tag T) and 3 types of relations (the user-paper interaction relation, the citation relation between papers, and the paper-tag subordination relation). CiteULike is a public data set suited to the paper recommendation field; three files, user.dat, positions.dat and item-tag.dat, are selected as the raw data, where user.dat records the papers each user has clicked, positions.dat records the citations of each paper, and item-tag.dat records the tags each paper owns. The specific steps are as follows:
sub-step 1.1, converting the data in the three files into triples (h, t, r), where h is the id of the head node, t is the id of the tail node, and r is the type of relation between head node h and tail node t;
sub-step 1.2, building the relation matrices from the triples. There are 3 relation matrices: the |U| × |P| user-paper interaction matrix (UP), the |P| × |P| paper-paper citation matrix (PP) and the |P| × |T| paper-tag subordination matrix (PT). Each matrix is built as follows: first initialize it with zeros, then locate the matrix element addressed by each triple, using the head node as the row index and the tail node as the column index, and set that element to 1.
Sub-step 1.3, aligning the paper id numbers across the relation matrices and building the heterogeneous information network. A minimal sketch of this construction is given below.
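The following Python sketch illustrates sub-steps 1.1-1.3 under assumed file formats; the file names follow the embodiment, but the column layout of each file, the load_pairs helper and all variable names are assumptions made for illustration, not details taken from the patent.

```python
import numpy as np

def load_pairs(path):
    """Assumed format: one 'head_id tail_id' pair per line (illustrative only)."""
    with open(path) as f:
        return [tuple(map(int, line.split()[:2])) for line in f if line.strip()]

def build_relation_matrix(pairs, n_rows, n_cols):
    """0/1 relation matrix: head id -> row index, tail id -> column index (sub-step 1.2)."""
    m = np.zeros((n_rows, n_cols), dtype=np.int8)
    for h, t in pairs:
        m[h, t] = 1
    return m

up_pairs = load_pairs("user.dat")       # user clicked-paper records
pp_pairs = load_pairs("positions.dat")  # paper citation records
pt_pairs = load_pairs("item-tag.dat")   # paper tag records

# Node counts inferred from the aligned id spaces (sub-step 1.3).
n_users  = 1 + max(h for h, _ in up_pairs)
n_papers = 1 + max(max(t for _, t in up_pairs), max(max(p) for p in pp_pairs))
n_tags   = 1 + max(t for _, t in pt_pairs)

UP = build_relation_matrix(up_pairs, n_users, n_papers)   # |U| x |P| interaction matrix
PP = build_relation_matrix(pp_pairs, n_papers, n_papers)  # |P| x |P| citation matrix
PT = build_relation_matrix(pt_pairs, n_papers, n_tags)    # |P| x |T| subordination matrix
```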
Step 2, learning the interaction features of users and papers with a matrix factorization algorithm.
Matrix factorization is a classical machine-learning recommendation model: it decomposes the user-paper history interaction matrix R into a user interaction feature matrix P and a paper interaction feature matrix Q. The specific steps are as follows:
sub-step 2.1, randomly initializing the user interaction feature matrix P and the paper interaction feature matrix Q;
sub-step 2.2, updating P and Q according to a loss function that measures the reconstruction error of R from the product of P and Q and adds a regularization term, where the u-th row of P is the interaction feature of user u, the p-th row of Q is the interaction feature of paper p, d denotes the dimension of the interaction features, and the regularization coefficient prevents overfitting; P and Q are updated iteratively until the loss no longer decreases. A minimal sketch of this factorization step is given below.
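The following Python sketch shows one way to realize sub-steps 2.1 and 2.2 with gradient descent on a regularized squared-error objective; the exact loss in the patent is given only as an image, so the squared-error form, the learning rate and the feature dimension d = 64 used here are assumptions for illustration (d = 64 is chosen to be consistent with the 64 × 64 interaction map assumed for the convolutional network in step 5).

```python
import numpy as np

def matrix_factorization(R, d=64, lr=0.01, lam=0.01, max_epochs=200, tol=1e-4):
    """Factorize the interaction matrix R (|U| x |P|) into P (|U| x d) and Q (|P| x d).

    Sub-step 2.1: random initialization; sub-step 2.2: iterate until the
    (assumed) regularized squared-error loss stops decreasing.
    """
    n_users, n_papers = R.shape
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, d))
    Q = rng.normal(scale=0.1, size=(n_papers, d))

    prev_loss = np.inf
    for _ in range(max_epochs):
        E = R - P @ Q.T                        # reconstruction error
        loss = np.sum(E ** 2) + lam * (np.sum(P ** 2) + np.sum(Q ** 2))
        if prev_loss - loss < tol:             # loss no longer decreases
            break
        prev_loss = loss
        # Gradient steps for the squared-error objective with L2 regularization.
        grad_P = -2 * E @ Q + 2 * lam * P
        grad_Q = -2 * E.T @ P + 2 * lam * Q
        P -= lr * grad_P
        Q -= lr * grad_Q
    return P, Q
```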
And 3, inputting the interactive features into the attention network of the heterogeneous graph to learn the high-order features of the paper in the heterogeneous information network.
Existing paper recommendation methods based on graph neural networks do not consider the inherent differences between nodes and can lose important heterogeneous information. A meta-path is a specific path connecting network nodes, and different meta-paths reflect different semantic information; as shown in part (c) of FIG. 2, two papers can be connected through several meta-paths, such as paper-user-paper (PUP) and paper-tag-paper (PTP), where PUP indicates that the two papers were interacted with by the same user and PTP indicates that the two papers share the same tag. A paper node has different neighbors under different meta-paths (see FIG. 3). On one hand, the neighbors under the same meta-path may have different importance; for example, paper-paper (PP) reflects citation relations, and a paper may cite other papers for different reasons. On the other hand, different meta-paths may have different effects on the target node; for example, paper-user-paper (PUP) may provide more information for learning paper features than paper-tag-paper (PTP). Considering the influence of a paper's neighbors on the target node under different meta-paths, the paper node features are learned with a heterogeneous graph attention network, with the following specific steps:
Sub-step 3.1: a graph neural network propagates information between nodes according to neighbor matrices. Given the set of paper-related meta-paths (PUP, PP and PTP), the meta-path neighbor matrices are computed from the relation matrices obtained in sub-step 1.2: the PUP neighbor matrix is computed from the user-paper interaction matrix UP and its transpose, the PP neighbor matrix from the paper-paper citation matrix PP, and the PTP neighbor matrix from the paper-tag matrix PT and its transpose. The neighbor matrices computed in this way suffer from an unbalanced distribution of values and are therefore converted into binary matrices: a threshold is set, and the element in row i and column j of a neighbor matrix is set to 1 if it is greater than the threshold and to 0 otherwise. Applying this to the three meta-paths yields 3 binary neighbor matrices, one for PUP, one for PP and one for PTP, in which a value of 1 indicates a neighbor relation and a value of 0 indicates no neighbor relation. A sketch of this computation follows.
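A Python sketch of sub-step 3.1, continuing from the relation matrices UP, PP and PT built earlier; the exact products used in the patent are shown only as images, so the compositions UPᵀ·UP for PUP and PT·PTᵀ for PTP, the symmetrization of the citation matrix and the threshold value are assumptions consistent with the meta-path definitions.

```python
import numpy as np

def binarize(neighbor, threshold):
    """Convert a meta-path neighbor matrix into a 0/1 matrix with a custom threshold."""
    return (neighbor > threshold).astype(np.int8)

def meta_path_neighbors(UP, PP, PT, threshold=0):
    """Meta-path neighbor matrices over papers (assumed compositions).

    PUP: papers interacted with by the same user  -> UP^T @ UP
    PP : direct citation relations                -> PP (made symmetric here, an assumption)
    PTP: papers sharing the same tag              -> PT @ PT^T
    """
    m_pup = UP.T @ UP
    m_pp = PP + PP.T
    m_ptp = PT @ PT.T
    return (binarize(m_pup, threshold),
            binarize(m_pp, threshold),
            binarize(m_ptp, threshold))

# neighbors_pup[i, j] == 1 means papers i and j are meta-path neighbors under PUP.
neighbors_pup, neighbors_pp, neighbors_ptp = meta_path_neighbors(UP, PP, PT, threshold=0)
```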
Sub-step 3.2: the neighbor features are aggregated on the basis of the binary neighbor matrices. A conventional graph convolutional network ignores the different importance of the target node's neighbors. Taking the meta-path PP as an example, which reflects the citation relation between two papers (see FIG. 3), the two citations p2 and p4 of target node p3 may have been cited by the authors for different purposes. To distinguish the importance of different nodes to the target node, node-level attention is introduced and the meaningful neighbor features are aggregated to learn the target node features: the weight coefficient of paper node j with respect to target paper node i is obtained by applying an activation function to the product of the transposed node-attention query vector and the concatenation of the interaction features of nodes i and j, normalized with the exponential (softmax) function; the neighbor information is then aggregated according to these weight coefficients, giving the feature of paper i aggregated over its neighbors under the specific meta-path. A sketch of this node-level attention follows.
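A PyTorch sketch of the node-level attention described above; the softmax-over-LeakyReLU form follows standard graph attention layers and is an assumption, since the patent gives the formula only as an image, and the module name, the self-loop guard and all parameter names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NodeAttention(nn.Module):
    """Aggregate neighbor features under one meta-path with node-level attention."""

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(2 * dim, 1, bias=False)   # node-attention query vector

    def forward(self, h, neighbor):
        # h: (n_papers, dim) paper interaction features
        # neighbor: (n_papers, n_papers) binary meta-path neighbor matrix
        n = h.size(0)
        neighbor = neighbor.clone()
        neighbor.fill_diagonal_(1)                       # keep a self-loop so every row is non-empty
        h_i = h.unsqueeze(1).expand(n, n, -1)            # target paper i
        h_j = h.unsqueeze(0).expand(n, n, -1)            # neighbor paper j
        e = F.leaky_relu(self.query(torch.cat([h_i, h_j], dim=-1))).squeeze(-1)
        e = e.masked_fill(neighbor == 0, float("-inf"))  # only meta-path neighbors attend
        alpha = torch.softmax(e, dim=1)                  # weight of neighbor j for target i
        return torch.relu(alpha @ h)                     # aggregated feature of each paper i
```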
Sub-step 3.3: the node attention layer yields the paper features under the different meta-paths. Meta-path-level attention is then introduced to fuse the paper features under the different meta-paths and learn the first-order features of papers in the heterogeneous information network G: a weight matrix and bias transform the per-meta-path features, the meta-path-attention query vector scores each meta-path, and a weight coefficient is obtained for the i-th of the meta-paths; the per-meta-path paper features are combined according to these weights. A sketch of this meta-path-level attention follows.
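A PyTorch sketch of the meta-path-level attention; averaging the per-node scores before the softmax over meta-paths mirrors common heterogeneous-graph attention designs and is an assumption, as are the layer names and the hidden size.

```python
import torch
import torch.nn as nn

class MetaPathAttention(nn.Module):
    """Fuse the paper features learned under each meta-path into one feature."""

    def __init__(self, dim, hidden=128):
        super().__init__()
        self.project = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh())  # weight matrix and bias
        self.query = nn.Linear(hidden, 1, bias=False)                    # meta-path query vector

    def forward(self, z_per_path):
        # z_per_path: (n_meta_paths, n_papers, dim) features from the node-attention layers
        scores = self.query(self.project(z_per_path)).mean(dim=1)  # one score per meta-path
        beta = torch.softmax(scores, dim=0)                        # weight of each meta-path
        return (beta.unsqueeze(1) * z_per_path).sum(dim=0)         # fused first-order paper features
```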
Sub-step 3.4: the high-order paper features are learned by iterating through L layers of the heterogeneous graph attention network, combining the paper features obtained by the attention layers at the different depths into the final paper network node feature. A sketch of this stacking follows.
Step 4, fusing the features learned in steps 2 and 3 by computing their outer product.
Steps 2 and 3 provide the user interaction features, the paper interaction features and the paper network node features; the paper interaction features and the paper network node features are summed to obtain the new paper features. The user and paper features are then fused with an outer product to obtain an interaction map: a two-dimensional matrix whose entries are the products of the entries of the feature vectors of the specific user and paper, indexed by their positions in the two vectors. Compared with the inner product and with concatenation, the outer product has the following advantages: 1) the inner product only yields the diagonal elements of the interaction map, whereas the outer product carries more modelable information and retains rich semantics even on sparse data; 2) concatenation ignores the correlations between different feature dimensions, whereas the outer product models the interactions between dimensions, so the features in the heterogeneous information network can be fully exploited; 3) the two-dimensional matrix structure helps the convolutional neural network learn complex interaction relations, and a convolutional network has fewer parameters than a multilayer perceptron of the same scale, so the model can be stacked into a deeper network, the enhancement that the heterogeneous information network features bring to recommendation can be fully mined, and the generalization ability is higher. A sketch of this fusion follows.
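A short PyTorch sketch of the outer-product fusion; the feature dimension and tensor names are illustrative.

```python
import torch

def interaction_map(p_u, q_v, e_h_v):
    """Fuse one user's and one paper's features into a 2-D interaction map (step 4)."""
    q_new = q_v + e_h_v              # new paper feature: interaction feature + HIN node feature
    return torch.outer(p_u, q_new)   # E[i, j] = p_u[i] * q_new[j]

# Example: 64-dimensional features give a 64 x 64 map that is fed to the CNN in step 5.
E = interaction_map(torch.randn(64), torch.randn(64), torch.randn(64))
```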
Step 5, the recommendation model. Splicing FIG. 3 and FIG. 4 together gives the complete diagram of the heterogeneous-information-network-enhanced academic paper recommendation model; the e^H in FIG. 3 points to the e^H in FIG. 4.
Traditional paper recommendation methods predict scores with cosine similarity, sort the scores and return a top-k list of papers; a simple cosine operation cannot fit the complex interaction relations between users and papers. Because a neural network can fit arbitrary functions, it can mine these complex interaction relations, so the recommendation model is built with a convolutional neural network. The specific steps are as follows:
Sub-step 5.1: a neural network of 6 convolutional layers and 1 fully connected layer is constructed; each convolutional layer has 32 kernels of size 2 × 2 with stride 2, and the fully connected layer has dimension 32 × 1. The interaction map obtained in step 4 is then fed into the convolutional network to predict the score: each layer applies its convolution kernels and bias term followed by the ReLU activation function, the output of the last convolutional layer is flattened from a matrix into a vector, and the weight and bias of the fully connected layer map this vector to the predicted score. A sketch of this network follows.
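A PyTorch sketch of the convolutional scoring network with the stated hyper-parameters (6 convolutional layers, 32 kernels of size 2 × 2, stride 2, a 32 × 1 fully connected layer); the 64 × 64 input-map size is an assumption chosen so that six stride-2 convolutions reduce the map exactly to 1 × 1, and the class name is illustrative.

```python
import torch
import torch.nn as nn

class ScoreCNN(nn.Module):
    """6 convolutional layers (32 kernels, 2x2, stride 2) + 1 fully connected layer (32 -> 1)."""

    def __init__(self):
        super().__init__()
        layers, in_ch = [], 1
        for _ in range(6):
            layers += [nn.Conv2d(in_ch, 32, kernel_size=2, stride=2), nn.ReLU()]
            in_ch = 32
        self.conv = nn.Sequential(*layers)
        self.fc = nn.Linear(32, 1)

    def forward(self, e):
        # e: (batch, 1, 64, 64) interaction maps; six stride-2 convolutions reduce 64x64 to 1x1
        x = self.conv(e)
        return self.fc(torch.flatten(x, start_dim=1)).squeeze(-1)  # one predicted score per example

model = ScoreCNN()
scores = model(torch.randn(8, 1, 64, 64))  # example batch of 8 interaction maps
```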
Sub-step 5.2: since the invention focuses on top-K performance, BPR is chosen as the loss function; it optimizes the model parameters by maximizing the scoring distance between positive and negative samples. Over the training set, for each user the loss compares the score the model predicts for one of the user's positive-sample papers with the score predicted for one of the user's negative-sample papers through the activation function, and a regularization coefficient prevents overfitting. A sketch of this loss follows.
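A PyTorch sketch of the BPR loss; the -log-sigmoid form of the pairwise objective is the standard BPR formulation and is assumed here, since the patent's formula is given only as an image, and the regularization weight is illustrative.

```python
import torch

def bpr_loss(pos_scores, neg_scores, params, lam=1e-4):
    """Bayesian Personalized Ranking loss with L2 regularization.

    pos_scores / neg_scores: model scores for each user's positive and negative paper.
    """
    pairwise = -torch.log(torch.sigmoid(pos_scores - neg_scores)).mean()
    reg = sum(p.pow(2).sum() for p in params)
    return pairwise + lam * reg

# Example usage: loss = bpr_loss(model(e_pos), model(e_neg), model.parameters())
```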
The flow of the HIN-APR algorithm of the invention is as follows (lines that appear only as formula images in the original are summarized in brackets):
Algorithm: HIN-APR
Input: HIN G = (V, E); meta-path set MP; depth L; interaction matrix R
Output: predicted scoring function
Initialize all parameters;
P, Q = matrix-factorization(R);
For number of training epochs do
  For batch (u, v) from R do
    [compute the paper HIN features with the heterogeneous graph attention function below];
    p = P(u);
    q = Q(v);
    [fuse p with the enhanced paper feature by outer product and predict the score with the convolutional network];
    [compute the BPR loss];
    Update parameters by gradient descent;
  End for
End for
Return [the trained scoring function];
Function [heterogeneous graph attention over the paper features]:
  [initialize the layer input with the paper interaction features];
  For l = 0 … L do:
    For mp in MP do:
      y = node-attention(neighbor(mp), [current paper features]);
      Y.append(y);
    End for
    [paper features] = meta-path-attention(Y);
    Y = [];
  End for
  Return [the paper HIN features];
In order to fully demonstrate the advantages of the invention in academic paper recommendation, experiments were carried out on the CiteULike academic paper recommendation data sets. The data sets are citeulike-a and citeulike-t, with interaction-data sparsity of 0.22% and 0.07% respectively. The data sets are split leave-one-out: for each user, the last interaction is held out as the test positive sample and the remaining interactions are used as training positive samples; for each user, 999 papers the user has not interacted with are randomly selected as test negative samples, and training negative samples are randomly sampled at a 1:1 ratio with the positive samples during training. The experiments compare against 6 current mainstream models, BPR-MF, Neu-MF, NGCF, HE-Rec, LGRec and CGPRec, using Hit Rate (HR) and Normalized Discounted Cumulative Gain (NDCG) as evaluation metrics. The experimental results are summarized in a table (shown as an image in the original).
From the results table, it can be seen that:
1) The HIN-APR algorithm outperforms the other models on both data sets; compared with the best of the other models, HR improves by 1.85% on average and NDCG by 3.42% on average, verifying the effectiveness of HIN-APR's modeling of high-order connections and complex interactions in enhancing paper recommendation performance.
2) Among the other models, the HIN-based recommendation methods are generally superior to the collaborative filtering methods, showing that adding a HIN effectively alleviates the data sparsity problem and improves recommendation performance. Among the HIN-based methods, CGPRec is superior to HE-Rec and LGRec on all metrics, which shows the limitation of modeling only low-order connections between nodes and the effectiveness of modeling high-order relations with a GCN; furthermore, HIN-APR outperforms CGPRec because HIN-APR introduces node-level and meta-path-level attention layers, which propagate neighbor information more accurately while also taking high-order relations into account.
3) Among the collaborative filtering methods, Neu-MF and NGCF improve over MF-BPR to different degrees on both data sets, showing the drawback that matrix factorization alone cannot mine complex interactions. On the sparser citeulike-t data set, the improvement of Neu-MF and NGCF over MF-BPR is smaller than on citeulike-a, which indicates that collaborative filtering methods face an overfitting problem under sparse data, while HIN-APR still improves substantially, showing that the outer-product and convolution computations make good use of the heterogeneous information network features and alleviate the data sparsity problem. A sketch of the leave-one-out evaluation protocol is given below.
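The following Python sketch illustrates the leave-one-out evaluation with HR@K and NDCG@K over 1 test positive plus 999 sampled negatives per user; the helper names and the value of K are illustrative.

```python
import numpy as np

def hr_ndcg_at_k(scores, k=10):
    """scores: 1000 model scores for one user, with the test positive at index 0."""
    rank = int((scores > scores[0]).sum())        # number of negatives ranked above the positive
    hit = 1.0 if rank < k else 0.0
    ndcg = 1.0 / np.log2(rank + 2) if rank < k else 0.0
    return hit, ndcg

def evaluate(score_fn, test_cases, k=10):
    """test_cases: iterable of (user, [positive_paper] + 999 negative papers)."""
    hits, ndcgs = [], []
    for user, candidates in test_cases:
        scores = np.array([score_fn(user, paper) for paper in candidates])
        hit, ndcg = hr_ndcg_at_k(scores, k)
        hits.append(hit)
        ndcgs.append(ndcg)
    return np.mean(hits), np.mean(ndcgs)
```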
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above example; all technical solutions that fall under the idea of the present invention belong to its protection scope. It should be noted that modifications and refinements made by those skilled in the art without departing from the principle of the invention are also considered to be within the protection scope of the invention.

Claims (7)

1. A heterogeneous-information-network-enhanced academic paper recommendation method, characterized by comprising the following steps:
step 1, constructing a heterogeneous information network that contains 3 types of nodes (users, papers and tags) and 3 types of relations (the interaction relation between users and papers, the citation relation between papers, and the subordination relation between papers and tags);
step 2, learning the interaction features of users and the interaction features of papers with a matrix factorization algorithm;
step 3, feeding the interaction features into a heterogeneous graph attention network to learn the high-order features of papers in the heterogeneous information network;
step 4, fusing the features learned in steps 2 and 3 by computing their outer product;
step 5, feeding the features fused in step 4 into a deep recommendation model to predict scores.
2. The heterogeneous-information-network-enhanced academic paper recommendation method according to claim 1, characterized in that step 1 constructs the heterogeneous information network from the CiteULike data set, the 3 node types user, paper and tag are denoted by the symbols U, P and T respectively, and step 1 specifically comprises the following sub-steps:
sub-step 1.1, converting the data in the source files into triples (h, t, r), where h is the id of the head node, t is the id of the tail node, and r is the type of relation between head node h and tail node t;
sub-step 1.2, building the relation matrices from the triples. There are 3 relation matrices: a |U| × |P| user-paper interaction matrix, a |P| × |P| paper-paper citation matrix, and a |P| × |T| paper-tag subordination matrix. Each matrix is built as follows: first initialize it with zeros, then locate the matrix element addressed by each triple, using the head node as the row index and the tail node as the column index, and set that element to 1;
sub-step 1.3, aligning the paper id numbers across the relation matrices and building the heterogeneous information network.
3. The heterogeneous-information-network-enhanced academic paper recommendation method according to claim 2, characterized in that step 2 specifically comprises the following sub-steps:
sub-step 2.1, randomly initializing the user interaction feature matrix P and the paper interaction feature matrix Q;
sub-step 2.2, updating P and Q according to a loss function that measures the reconstruction error of the user-paper history interaction matrix R from the product of P and Q and adds a regularization term, where the u-th row of P is the interaction feature of user u, the p-th row of Q is the interaction feature of paper p, d denotes the dimension of the interaction features, and the regularization coefficient prevents overfitting; P and Q are updated iteratively until the loss no longer decreases.
4. The heterogeneous-information-network-enhanced academic paper recommendation method according to claim 3, characterized in that step 3 specifically comprises the following sub-steps:
sub-step 3.1, given the set of paper-related meta-paths (PUP, PP and PTP), computing a meta-path neighbor matrix for each meta-path from the relation matrices obtained in sub-step 1.2: the PUP neighbor matrix is computed from the user-paper interaction matrix and its transpose, the PP neighbor matrix from the paper-paper citation matrix, and the PTP neighbor matrix from the paper-tag matrix and its transpose. Each computed neighbor matrix is converted into a binary matrix: a threshold is set, and the element in row i and column j of the neighbor matrix is set to 1 if it is greater than the threshold and to 0 otherwise. This yields 3 binary neighbor matrices, one per meta-path, in which a value of 1 indicates a neighbor relation and a value of 0 indicates no neighbor relation;
sub-step 3.2, aggregating the neighbor features based on the binary neighbor matrices, introducing node-level attention, and aggregating the meaningful neighbor features to learn the target node features: the weight coefficient of paper node j with respect to target paper node i is obtained by applying an activation function to the product of the transposed node-attention query vector and the concatenation of the interaction features of nodes i and j, normalized with the exponential (softmax) function; the neighbor information is then aggregated according to these weight coefficients, giving the feature of paper i aggregated over its neighbors under the specific meta-path;
sub-step 3.3, the node attention layer yields the paper features under the different meta-paths; meta-path-level attention is introduced to fuse the paper features under the different meta-paths and learn the first-order features of papers in the heterogeneous information network G, using the weight matrix, bias and query vector of the meta-path attention to produce a weight coefficient for each of the meta-paths;
sub-step 3.4, learning the high-order paper features by iterating through L layers of the heterogeneous graph attention network, combining the paper features obtained by the attention layers at the different depths.
5. The heterogeneous-information-network-enhanced academic paper recommendation method according to claim 4, characterized in that before step 4 is performed, steps 2 and 3 have produced the user features, the paper interaction features and the paper network node features, and the paper interaction features and the paper network node features are summed to obtain the new paper features.
6. The heterogeneous-information-network-enhanced academic paper recommendation method according to claim 5, characterized in that step 4 takes the new paper features and fuses the user and paper features with an outer product to obtain an interaction map, which is a two-dimensional matrix whose entries are the products of the entries of the feature vectors of the specific user and paper, indexed by their positions in the two vectors.
7. The heterogeneous-information-network-enhanced academic paper recommendation method according to claim 6, characterized in that step 5 specifically comprises the following sub-steps:
sub-step 5.1, constructing a neural network of 6 convolutional layers and 1 fully connected layer, where each convolutional layer has 32 kernels of size 2 × 2 with stride 2 and the fully connected layer has dimension 32 × 1, and then feeding the interaction map obtained in step 4 into the convolutional network to predict the score: each layer applies its convolution kernels and bias term followed by the ReLU activation function, the output of the last convolutional layer is flattened from a matrix into a vector, and the weight and bias of the fully connected layer map this vector to the predicted score;
sub-step 5.2, choosing BPR as the loss function, which optimizes the model parameters by maximizing the scoring distance between positive and negative samples: over the training set, for each user the score the model predicts for a positive-sample paper is compared with the score predicted for a negative-sample paper through the activation function, and a regularization coefficient prevents overfitting.
CN202210418401.3A 2022-04-21 2022-04-21 Academic paper recommendation method for heterogeneous information network enhancement Active CN114519097B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210418401.3A CN114519097B (en) 2022-04-21 2022-04-21 Academic paper recommendation method for heterogeneous information network enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210418401.3A CN114519097B (en) 2022-04-21 2022-04-21 Academic paper recommendation method for heterogeneous information network enhancement

Publications (2)

Publication Number Publication Date
CN114519097A CN114519097A (en) 2022-05-20
CN114519097B true CN114519097B (en) 2022-07-19

Family

ID=81600510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210418401.3A Active CN114519097B (en) 2022-04-21 2022-04-21 Academic paper recommendation method for heterogeneous information network enhancement

Country Status (1)

Country Link
CN (1) CN114519097B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116578884B (en) * 2023-07-07 2023-10-31 北京邮电大学 Scientific research team identification method and device based on heterogeneous information network representation learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105843799A (en) * 2016-04-05 2016-08-10 电子科技大学 Academic paper label recommendation method based on multi-source heterogeneous information graph model
CN106815297A (en) * 2016-12-09 2017-06-09 宁波大学 A kind of academic resources recommendation service system and method
CN111400591A (en) * 2020-03-11 2020-07-10 腾讯科技(北京)有限公司 Information recommendation method and device, electronic equipment and storage medium
CN113505294A (en) * 2021-06-15 2021-10-15 黄萌 Heterogeneous network representation recommendation algorithm fusing meta-paths

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180240015A1 (en) * 2017-02-21 2018-08-23 Scriyb LLC Artificial cognitive declarative-based memory model to dynamically store, retrieve, and recall data derived from aggregate datasets
CN108763367B (en) * 2018-05-17 2020-07-10 南京大学 Method for recommending academic papers based on deep alignment matrix decomposition model
CN111061856B (en) * 2019-06-06 2022-05-27 北京理工大学 Knowledge perception-based news recommendation method
CN111061935B (en) * 2019-12-16 2022-04-12 北京理工大学 Science and technology writing recommendation method based on self-attention mechanism
CN111897974B (en) * 2020-08-12 2024-04-16 吉林大学 Heterogeneous knowledge graph learning method based on multilayer attention mechanism
CN112380434B (en) * 2020-11-16 2022-09-16 吉林大学 Interpretable recommendation method fusing heterogeneous information network
CN113392319B (en) * 2021-05-13 2022-06-24 宁波大学 Academic paper recommendation method based on network representation and auxiliary information embedding
CN113420221B (en) * 2021-07-01 2022-09-09 宁波大学 Interpretable recommendation method integrating implicit article preference and explicit feature preference of user

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105843799A (en) * 2016-04-05 2016-08-10 电子科技大学 Academic paper label recommendation method based on multi-source heterogeneous information graph model
CN106815297A (en) * 2016-12-09 2017-06-09 宁波大学 A kind of academic resources recommendation service system and method
CN111400591A (en) * 2020-03-11 2020-07-10 腾讯科技(北京)有限公司 Information recommendation method and device, electronic equipment and storage medium
CN113505294A (en) * 2021-06-15 2021-10-15 黄萌 Heterogeneous network representation recommendation algorithm fusing meta-paths

Also Published As

Publication number Publication date
CN114519097A (en) 2022-05-20

Similar Documents

Publication Publication Date Title
Wang et al. MCNE: An end-to-end framework for learning multiple conditional network representations of social network
Parvin et al. TCFACO: Trust-aware collaborative filtering method based on ant colony optimization
CN112232925A (en) Method for carrying out personalized recommendation on commodities by fusing knowledge maps
Su et al. Attention-based knowledge graph representation learning for predicting drug-drug interactions
Wang et al. Multitask feature learning approach for knowledge graph enhanced recommendations with RippleNet
CN114519097B (en) Academic paper recommendation method for heterogeneous information network enhancement
CN115221413B (en) Sequence recommendation method and system based on interactive graph attention network
Wang et al. CANE: community-aware network embedding via adversarial training
CN114817712A (en) Project recommendation method based on multitask learning and knowledge graph enhancement
CN116401542A (en) Multi-intention multi-behavior decoupling recommendation method and device
Lu et al. A recommendation algorithm based on fine-grained feature analysis
Song et al. Coarse-to-fine: A dual-view attention network for click-through rate prediction
Sridhar et al. Content-Based Movie Recommendation System Using MBO with DBN.
Sun et al. Graph force learning
Gan et al. DeepInteract: Multi-view features interactive learning for sequential recommendation
CN111291260A (en) Multi-information-driven approximate fusion network recommendation propagation method
CN113342994A (en) Recommendation system based on non-sampling cooperative knowledge graph network
CN116842260A (en) Knowledge enhancement recommendation method based on graphic neural network multi-space interaction modeling
Shu et al. Multi-task feature and structure learning for user-preference based knowledge-aware recommendation
Sangeetha et al. Predicting personalized recommendations using GNN
CN115391555A (en) User-perceived knowledge map recommendation system and method
WO2022011652A1 (en) Multi-graph convolution collaborative filtering
Gu et al. Combining user-end and item-end knowledge graph learning for personalized recommendation
CN114510642A (en) Book recommendation method, system and equipment based on heterogeneous information network
Joshi et al. Interest-aware collaborative filtering recommendation model based on graph neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant