CN113742597A - Interest point recommendation method based on LBSN (location-based social network) and multi-graph fusion - Google Patents


Info

Publication number
CN113742597A
CN113742597A (application CN202111103851.5A)
Authority
CN
China
Prior art keywords
user, interest point, space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111103851.5A
Other languages
Chinese (zh)
Inventor
方金凤 (Fang Jinfeng)
孟祥福 (Meng Xiangfu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Technical University
Original Assignee
Liaoning Technical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Technical University
Priority to CN202111103851.5A
Publication of CN113742597A
Legal status: Pending

Classifications

    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G06F16/9536 Search customisation based on social or collaborative filtering
    • G06F16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/23213 Non-hierarchical clustering techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06F18/25 Fusion techniques
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods


Abstract

The invention discloses an interest point recommendation method based on LBSN and multi-graph fusion, comprising the following steps. Modeling the internal features of users and interest points: a user-interest point rating matrix is split by a matrix factorization algorithm into the product of a user matrix and an interest point matrix, which serve as the internal potential vectors of users and interest points. Modeling the external features of users and interest points: the feature vectors of users in the interest point space and the social space, and the feature vectors of interest points in the user space and the location space, are learned through multi-graph fusion and an improved k-means clustering algorithm, yielding the external characterization vectors of users and interest points. The final vectors of the user and the interest points are then input into a neural network for learning, and the top k interest points with the highest predicted scores are recommended to the user. The method learns the user-interest point interaction graph and the user social-relation graph through multi-graph fusion, providing a new approach to interest point recommendation; it effectively reduces recommendation error and improves the accuracy of recommendation results.

Description

Interest point recommendation method based on LBSN (location-based social network) and multi-graph fusion
Technical Field
The invention belongs to the technical fields of natural language processing and geographic information, and particularly relates to an interest point recommendation method based on LBSN (location-based social network) and multi-graph fusion.
Background
The development of geographic information systems and mobile networks has promoted the rapid growth of location-aware social media. With the increasing number of spatial Web objects (also called interest points), interest point recommendation, one of the important services of location-based social networks, has become a hot topic in Web querying, natural language processing, and location-based social network (LBSN) analysis. Collaborative filtering was the earliest interest point recommendation method: it learns the latent features of users and interest points, represents them as vectors, and predicts user preference for interest points from those vectors. Matrix factorization methods use a user's ID information as the user's representative vector.
With the advent of word embedding, researchers began to represent users and interest points by embedding their feature information. However, this type of approach fails to capture the collaborative signal in user-interest point interaction records. Graph neural networks were therefore proposed to learn vector representations from graph data; they can integrate node information, edge information, and topological structure, and have made great progress in representation learning. In addition, the recommendation problem is essentially a matrix completion problem, which can also be understood as link prediction in a bipartite graph; the dataset used by a recommender system can thus be converted into graph data, and combining a graph neural network with the recommender system to learn that graph data improves recommendation performance.
Disclosure of Invention
Given the defects of the prior art, the technical problem to be solved by the invention is to provide an interest point recommendation method based on LBSN and multi-graph fusion that learns the features of users and interest points from both internal and external perspectives, thereby obtaining a comprehensive description of users and interest points; achieves better clustering through an improved clustering algorithm; introduces graph neural networks into interest point recommendation and learns the user-interest point interaction graph and the user social-relation graph through multi-graph fusion, providing a new approach to interest point recommendation; and effectively reduces recommendation error while improving the accuracy of recommendation results.
In order to solve the technical problems, the invention is realized by the following technical scheme: the invention provides an interest point recommendation method based on LBSN and multi-graph fusion, which comprises the following steps:
s1, modeling the internal characteristics of the user and the interest points: splitting a user-interest point scoring matrix into a product of a user matrix and an interest point matrix through a matrix decomposition algorithm, wherein the product is used as an internal potential vector of the user and the interest point;
s2, modeling external characteristics of the user and the interest points: learning the feature vectors of the user in an interest point space and a social space and the feature vectors of the interest points in a user space and a position space through multi-graph fusion and an improved k-means clustering algorithm, and further obtaining external characterization vectors of the user and the interest points;
and S3, inputting the final vectors of the user and the interest points into a neural network for learning, thereby obtaining the score of the user for each interest point, and recommending the top k interest points with the highest scores to the user according to the scores.
Further, the specific steps of modeling the internal features of the user and the interest points in step S1 are as follows:
two optimal sub-matrixes can be obtained by carrying out matrix decomposition operation on a scoring matrix R with m rows and n columns: user matrix Um*dAnd a point of interest matrix Vn*dM is the number of users in the scoring matrix, and n is the number of interest points in the scoring matrix; mapping the users and the interest points to a d-dimensional space respectively, wherein m rows of d-dimensional vectors in the U matrix are projections of m users on the d-dimensional space, and the preference degree of the users to the d potential features is reflected; the d-dimensional data of each row constitutes an internal potential vector F for each userui(ii) a The d-dimensional vectors of n rows in the V matrix are projections of n interest points on the d-dimensional space, and the closeness degree of the interest points to the d potential features is reflected; the d-dimensional data of each row forms each interest pointInner potential vector F ofpi
Further, the specific method in step S2 is as follows:
given a point-of-interest set POI ═ v1,v2,…,vnFirstly, calculating the probability density of each interest point v according to the following formula (a) based on a probability density estimation method of a Gaussian kernel function, and taking the probability density as the typical degree of the interest point;
Figure BDA0003269441470000031
wherein,
Figure BDA0003269441470000032
representing points of interest v and vjThe overall distance between the two elements is,
Figure BDA0003269441470000033
is a Gaussian kernel function, and n represents the number of interest points;
According to the computed typicality, select the interest point with the highest typicality as the first initial cluster center; then, from the remaining interest points, select the one farthest from the currently selected cluster centers as the next initial cluster center, and repeat until all initial cluster centers are found. Next, cluster the interest points around the selected initial centers, assigning each interest point to its nearest class center; clustering is complete when the results of two consecutive rounds are unchanged. Finally, embed the interest points into the location space according to each interest point's class label to obtain the feature vectors of the interest points in the location space.
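The improved initialization and clustering loop just described can be sketched as below; this is a simplified illustration in which interest points are 2-D coordinates, the "overall distance" D is taken as plain Euclidean distance, and no kernel bandwidth is applied (both are assumptions, since the patent leaves them unspecified here).

```python
import numpy as np

def gaussian_kernel(x):
    # K(x) = (1/sqrt(2*pi)) * exp(-x^2 / 2)
    return np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

def typicality(points):
    """Density estimate per point in the style of formula (a)."""
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return gaussian_kernel(dist).mean(axis=1)

def init_centers(points, k):
    """Most typical point first, then repeatedly the point farthest
    from the already-chosen centers."""
    centers = [int(np.argmax(typicality(points)))]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(points[:, None, :] - points[centers], axis=-1), axis=1)
        centers.append(int(np.argmax(d)))
    return points[centers]

def kmeans(points, k, iters=100):
    centers = init_centers(points, k).copy()
    labels = None
    for _ in range(iters):
        new = np.argmin(np.linalg.norm(points[:, None, :] - centers, axis=-1), axis=1)
        if labels is not None and np.array_equal(new, labels):
            break  # assignments unchanged between two consecutive rounds
        labels = new
        for c in range(k):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return labels, centers
```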
A user-interest point interaction graph is constructed, and the feature vector of the user in the interest point space and the feature vector of the interest point in the user space are learned from it through the aggregation function of formula (b); a user social-relation graph is constructed, and the feature vector of the user in the social space is learned from it through the same aggregation function:

F_i^T = σ(W · Agg({X_ij | j ∈ C(i)}) + b)    (b)

where F_i^T is the characterization vector obtained after aggregation, σ is a nonlinear activation function, W and b are the weights and biases of the neural network, Agg is an aggregator, C(i) is the set of neighbor nodes of node i in the user-interest point interaction graph or the user social-relation graph, and X_ij is the rating-aware interaction vector between nodes i and j in the graph.
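A minimal sketch of formula (b) follows. The mean aggregator stands in for Agg (the patent does not fix a particular aggregator here), and the interaction vectors X_ij are supplied as a plain dictionary; both choices are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def aggregate(X, neighbors, W, b):
    """Formula (b) with a mean aggregator: F_i = sigma(W . mean_j X_ij + b).
    X[(i, j)] is the rating-aware interaction vector between node i and neighbor j;
    neighbors maps each node i to its neighbor set C(i)."""
    out = []
    for i, nbrs in neighbors.items():
        agg = np.mean([X[(i, j)] for j in nbrs], axis=0)  # Agg over C(i)
        out.append(relu(W @ agg + b))                     # sigma(W . agg + b)
    return np.stack(out)
```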
Then, the user's feature vectors in the interest point space and the social space are concatenated through formulas (c)-(e) to obtain the user's external characterization vector; likewise, the interest point's feature vectors in the user space and the location space are concatenated to obtain the interest point's external characterization vector:

c_1 = concat(F_i^{T_1}, F_i^{T_2})    (c)
c_2 = σ(W_2 · c_1 + b_2)    (d)
F_i = σ(W_l · c_{l−1} + b_l)    (e)

where T_i, i ∈ {1, 2}, denotes the embedding space: when T_1 and T_2 are the interest point space and the social space respectively, F_i is the user's external characterization vector F_uo; when T_1 and T_2 are the location space and the user space respectively, F_i is the interest point's external characterization vector F_po.
Further, in step S3 the interest points are sorted by score, and the top k interest points with the highest scores form a recommendation list recommended to the user, as follows:
First, the user's internal potential vector F_ui and external characterization vector F_uo are concatenated to obtain the user's final vector F_u; the interest point's internal potential vector F_pi and external characterization vector F_po are concatenated to obtain the interest point's final vector F_p. The final vectors of the user and the interest points are input into a neural network, and score prediction is carried out in a multilayer perceptron (MLP) with the ReLU activation function, as in formulas (f)-(i):

g_1 = concat(F_u, F_p)    (f)
g_2 = σ(W_2 · g_1 + b_2)    (g)
g_l = σ(W_l · g_{l−1} + b_l)    (h)
r'_ij = W^T · g_l    (i)

The model parameters are learned with formula (j) as the objective function:

L = (1 / 2|O|) Σ_{(i,j)∈O} (r'_ij − r_ij)²    (j)

where |O| is the number of interest points scored by users in the dataset, r_ij is user u_i's true score for interest point v_j, and r'_ij is the model's predicted score of user u_i for interest point v_j.
After the user's score for each interest point is obtained, the interest points are sorted in descending order of score, and the top k interest points with the highest scores form a recommendation list recommended to the user.
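Formulas (f)-(i) and the top-k step above can be sketched as follows. The layer shapes and weights are placeholders supplied by the caller, not values from the patent, and a single linear output stands in for W^T · g_l.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def predict_scores(F_u, F_p_list, layers, w_out):
    """Formulas (f)-(i): concatenate the user's final vector with each interest
    point's final vector, pass through an MLP with ReLU, project to a scalar."""
    scores = []
    for f_p in F_p_list:
        g = np.concatenate([F_u, f_p])      # (f) g_1 = concat(F_u, F_p)
        for W, b in layers:                 # (g)-(h) g_l = relu(W_l . g_{l-1} + b_l)
            g = relu(W @ g + b)
        scores.append(float(w_out @ g))     # (i) r'_ij = W^T . g_l
    return np.array(scores)

def top_k(scores, k):
    """Indices of the k highest-scoring interest points (the recommendation list)."""
    return np.argsort(scores)[::-1][:k]
```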
According to the method, users and interest points are analyzed from internal and external perspectives: the internal potential vectors of users and interest points are obtained by matrix factorization of the user-interest point rating matrix; the external characterization vectors of users and interest points are learned through multi-graph fusion and an improved k-means clustering algorithm; and the final vector descriptions of users and interest points are obtained by combining the internal and external features. The final vectors of the user and the interest points are input into a neural network for score prediction, and the top k interest points with the highest scores are recommended to the user.
The interest point recommendation method based on the location-based social network (LBSN) and multi-graph fusion provided by the invention analyzes users and interest points from internal and external aspects. The internal module learns the rating matrix of users and interest points through matrix factorization to obtain their internal potential vectors. In the external module, a user-interest point interaction graph is first constructed, and the characterization vector of the user in the interest point space and the characterization vector of the interest point in the user space are learned from it. Then a user social-relation graph is constructed to model the information-diffusion phenomenon in users' social interactions and capture users' friend relationships, yielding the characterization vector of the user in the social space; the interest points are clustered by geographic location, and the clustering result is embedded into the location space to obtain the characterization vector of the interest point in the location space. Finally, the user's characterization vectors in the interest point space and the social space are combined into the user's external characterization vector, and the interest point's characterization vectors in the user space and the location space are combined into the interest point's external characterization vector. The vectors of users and interest points from the internal and external modules are fused to obtain their final vector representations, which are input into the multilayer neural network model for score prediction, and interest points are recommended according to the scores.
The proposed model is verified on a real dataset, and the results show that the method effectively reduces recommendation error and improves the accuracy of recommendation results.
The foregoing description is only an overview of the technical solutions of the present invention, and in order to make the technical means of the present invention more clearly understood, the technical solutions of the present invention can be implemented according to the content of the description, and in order to make the above and other objects, features, and advantages of the present invention more concise and understandable, the following detailed description is given with reference to the preferred embodiments and accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings of the embodiments will be briefly described below.
FIG. 1 is a block diagram of the overall structure of the present invention;
FIG. 2 is a flow chart of a clustering algorithm of the present invention;
FIG. 3 is an exemplary diagram of a multi-graph fusion architecture of the present invention;
FIG. 4 is a schematic view of the attention mechanism of the present invention;
FIG. 5 shows comparison graphs of the experimental results of the present invention: (a) compares the RMSE index with the PMF algorithm, and (b) compares the MAE index with the PMF algorithm.
Detailed Description
The following describes in detail a specific embodiment of the interest point recommendation method based on LBSN and multi-graph fusion according to the present invention, with reference to the accompanying drawings.
As shown in figs. 1 to 5, the interest point recommendation method based on LBSN and multi-graph fusion of the present invention is mainly applied in the currently popular fields of natural language processing, geographic information systems, and spatio-temporal data analysis, and comprises the following steps:
(1) Modeling the internal features of users and interest points: perform matrix factorization on the user-interest point rating matrix, optimizing with the goal of minimizing the mean square error between users' true and predicted scores for interest points, and obtain two optimal latent matrices U and V that serve as the internal potential vectors of users and interest points.
Performing matrix factorization on a rating matrix R with m rows and n columns yields two optimal sub-matrices: a user matrix U_{m×d} and an interest point matrix V_{n×d}, where m is the number of users and n is the number of interest points in the rating matrix. Users and interest points are thus each mapped into a d-dimensional space: the m rows of d-dimensional vectors in U are the projections of the m users onto the d-dimensional space and reflect each user's degree of preference for the d latent features; each row of d-dimensional data constitutes a user's internal potential vector F_ui. The n rows of d-dimensional vectors in V are the projections of the n interest points onto the d-dimensional space and reflect how closely each interest point matches the d latent features; each row of d-dimensional data constitutes an interest point's internal potential vector F_pi.
(2) Modeling the external features of users and interest points: four embedding spaces are set: the interest point space, the social space, the user space, and the location space. The feature vectors of the user in the interest point space and the social space, and of the interest point in the user space and the location space, are learned through an aggregation function. The user's feature vector in the interest point space and the interest point's feature vector in the user space are learned from the user-interest point interaction graph; the user's feature vector in the social space is learned from the user social graph; and the interest points are clustered by geographic location using the improved k-means clustering algorithm and embedded into the location space according to their class labels, yielding the feature vectors of the interest points in the location space.
Given an interest point set POI = {v_1, v_2, …, v_n}, first calculate the probability density of each interest point v by the Gaussian-kernel probability density estimation method of formula (a), and take it as the typicality of the interest point:

f(v) = (1/n) Σ_{j=1}^{n} K(D(v, v_j))    (a)

where D(v, v_j) denotes the overall distance between interest points v and v_j, K(x) = (1/√(2π)) exp(−x²/2) is the Gaussian kernel function, and n is the number of interest points.
According to the computed typicality, select the interest point with the highest typicality as the first initial cluster center; then, from the remaining interest points, select the one farthest from the currently selected cluster centers as the next initial cluster center, and repeat until all initial cluster centers are found. Next, cluster the interest points around the selected initial centers, assigning each interest point to its nearest class center; clustering is complete when the results of two consecutive rounds are unchanged. Finally, embed the interest points into the location space according to each interest point's class label to obtain the feature vectors of the interest points in the location space.
A user-interest point interaction graph is constructed, and the feature vector of the user in the interest point space and the feature vector of the interest point in the user space are learned from it through the aggregation function of formula (b); a user social-relation graph is constructed, and the feature vector of the user in the social space is learned from it through the same aggregation function:

F_i^T = σ(W · Agg({X_ij | j ∈ C(i)}) + b)    (b)

where F_i^T is the characterization vector obtained after aggregation, σ is a nonlinear activation function, W and b are the weights and biases of the neural network, Agg is an aggregator, C(i) is the set of neighbor nodes of node i in the user-interest point interaction graph or the user social-relation graph, and X_ij is the rating-aware interaction vector between nodes i and j in the graph.
Then, the user's feature vectors in the interest point space and the social space are concatenated through formulas (c)-(e) to obtain the user's external characterization vector; likewise, the interest point's feature vectors in the user space and the location space are concatenated to obtain the interest point's external characterization vector:

c_1 = concat(F_i^{T_1}, F_i^{T_2})    (c)
c_2 = σ(W_2 · c_1 + b_2)    (d)
F_i = σ(W_l · c_{l−1} + b_l)    (e)

where T_i, i ∈ {1, 2}, denotes the embedding space: when T_1 and T_2 are the interest point space and the social space respectively, F_i is the user's external characterization vector F_uo; when T_1 and T_2 are the location space and the user space respectively, F_i is the interest point's external characterization vector F_po.
(3) Score prediction: concatenate the user's internal potential vector and external characterization vector to obtain the user's final vector; concatenate the interest point's internal potential vector and external characterization vector to obtain the interest point's final vector. Input the final characterization vectors of the user and the interest points into a multilayer perceptron (MLP) for score prediction, rank the interest points from high to low by score, and form the top k interest points with the highest scores into a recommendation list recommended to the user.
The interest points are sorted according to the scores of the interest points, the top k interest points with the highest scores are selected to form a recommendation list to be recommended to a user, and the method comprises the following steps:
First, the user's internal potential vector F_ui and external characterization vector F_uo are concatenated to obtain the user's final vector F_u; the interest point's internal potential vector F_pi and external characterization vector F_po are concatenated to obtain the interest point's final vector F_p. The final vectors of the user and the interest points are input into a neural network, and score prediction is carried out in a multilayer perceptron (MLP) with the ReLU activation function, as in formulas (f)-(i):

g_1 = concat(F_u, F_p)    (f)
g_2 = σ(W_2 · g_1 + b_2)    (g)
g_l = σ(W_l · g_{l−1} + b_l)    (h)
r'_ij = W^T · g_l    (i)

The model parameters are learned with formula (j) as the objective function:

L = (1 / 2|O|) Σ_{(i,j)∈O} (r'_ij − r_ij)²    (j)

where |O| is the number of interest points scored by users in the dataset, r_ij is user u_i's true score for interest point v_j, and r'_ij is the model's predicted score of user u_i for interest point v_j.
After the user's score for each interest point is obtained, the interest points are sorted in descending order of score, and the top k interest points with the highest scores form a recommendation list recommended to the user.
In order to achieve the above object, the method of the present invention is performed as follows:
step 1: modeling the internal characteristics of the user and the interest points. Performing matrix decomposition on the user-interest point scoring matrix, optimizing by taking the minimum mean square error in the formula (1) as a target to obtain two optimal implicit matrixes U and V which are used as internal potential vectors F of the user and the interest pointuiAnd Fpi
Figure BDA0003269441470000101
U and V are d-dimensional implicit matrixes obtained after embedding of the user and the interest points respectively, and d is the dimension of the implicit matrix. r isi,jIs user uiFor points of interest vjThe real data to be scored is then taken,
Figure BDA0003269441470000102
indicating predicted user uiFor points of interest vjThe score of (1).
Step 2: location feature modeling of points of interest
Step 2.1: the traditional k-means clustering is improved, and the initial clustering center selection process of the k-means clustering algorithm is improved. Giving a POI (point of interest) set (POI ═ { v) by adopting a probability density estimation method based on a Gaussian kernel function1,v2,…,vnV points of interestjCan be defined by a probability density function f (v).
Figure BDA0003269441470000103
Wherein,
Figure BDA0003269441470000104
representing points of interest v and vjThe overall distance between the two elements is,
Figure BDA0003269441470000105
is a gaussian kernel function, and n represents the number of interest points.
The typical degree of each interest point in the interest point set is calculated one by one through the process, and the interest point with the highest typical degree is selected as the first initial clustering center. And then selecting the interest point with the largest distance from the currently selected cluster center from the rest interest points as the next initial cluster center, and so on until all the initial cluster centers are found.
Step 2.2: clustering the interest points, and embedding the class labels into the position space to obtain the characterization vectors of the interest points in the position space
Figure BDA0003269441470000106
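The label-embedding step above can be sketched as a lookup table with one row per cluster; in the full model this role would be played by a learned embedding layer, so the random table and its dimension here are purely illustrative assumptions.

```python
import numpy as np

def location_embeddings(labels, k, dim=8, seed=0):
    """Map each interest point's cluster label to a location-space vector
    via a shared embedding table (one dim-dimensional row per cluster)."""
    rng = np.random.default_rng(seed)
    table = rng.normal(scale=0.1, size=(k, dim))  # stand-in for a trained embedding
    return table[labels]                          # one row per interest point
```

Interest points in the same cluster share the same location-space vector, which is exactly what embedding by class label implies.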
Step 3: fuse the multiple graphs.
Step 3.1: and constructing a user-interest point interaction graph and a user social relation graph.
Step 3.2: construct the aggregation function. The characterization vector of the points of interest in the user space, the characterization vector of the users in the point-of-interest space, and the characterization vector of the users in the social space are all obtained through the aggregation function of formula (3).

Fi^T = σ(W·Agg({Xij, ∀j ∈ C(i)}) + b)    (3)

wherein Fi^T represents the characterization vector obtained after aggregation (when Fi^T is Pi^U, it represents the characterization vector of a point of interest in the user space; otherwise it represents the characterization vector of a user in the point-of-interest space or in the social space), σ is a nonlinear activation function, W and b are the weights and biases of the neural network, Agg is an aggregator, C(i) is the set of adjacent nodes of node i in the user-point-of-interest interaction graph and the user social relationship graph, and Xij is the evaluation-aware interaction vector between nodes i and j in the graph.
Step 4: final vector representations of the users and points of interest.
Step 4.1: compute the external characterization vectors of the users and points of interest. The feature vectors of a user in the point-of-interest space and in the social space are spliced together to form the external characterization vector of the user; the feature vectors of a point of interest in the location space and in the user space are spliced together to form the external characterization vector of the point of interest.
Step 4.2: compute the final vector representations of the users and points of interest. The internal potential vector of a user is spliced with its external characterization vector to obtain the final characterization vector Fu of the user; the internal potential vector of a point of interest is spliced with its external characterization vector to obtain the final characterization vector Fp of the point of interest.
Step 5: score prediction. The final characterization vectors Fu and Fp of the users and points of interest are input into a multi-layer perceptron (MLP) for score prediction.
After the score of each interest point of the user is obtained, the interest points are ranked from high to low according to the score, and the top k interest points with the highest score form a recommendation list to be recommended to the user.
The internal modeling of users and points of interest is realized with the classical matrix factorization (MF) algorithm. MF decomposes the high-dimensional scoring matrix into two low-dimensional implicit matrices, takes their product, and optimizes the mean square error between the original matrix and the product matrix to obtain the two optimal implicit matrices, with the minimization of formula (4) as the objective.

L = Σi,j (rij − r'ij)²    (4)

U and V are the d-dimensional implicit matrices obtained after embedding the users and the points of interest, respectively, and d is the dimension of the implicit matrices. rij is the real score of user ui for point of interest vj, and r'ij denotes the predicted score of user ui for point of interest vj. For better generalization, an L2 regular term is added to the loss function to constrain the parameters, and the two implicit matrices are updated by gradient descent, as shown in formula (5):

L = Σi,j (rij − r'ij)² + β(‖U‖² + ‖V‖²)
Ui ← Ui + α[(rij − r'ij)Vj − βUi]
Vj ← Vj + α[(rij − r'ij)Ui − βVj]    (5)

wherein α and β are parameters used in the algorithm optimization process, with α = 20 and β = 0.2.
By performing the above decomposition on a scoring matrix R with m rows and n columns, two optimal sub-matrices are obtained: the user matrix Um×d and the point-of-interest matrix Vn×d, where m is the number of users and n is the number of points of interest in the scoring matrix. Users and points of interest are thereby each mapped into a d-dimensional space. The m rows of d-dimensional vectors in the U matrix are the projections of the m users onto the d-dimensional space and reflect each user's degree of preference for the d latent features; the d-dimensional data of each row constitutes the internal potential vector Fui of a user. The n rows of d-dimensional vectors in the V matrix are the projections of the n points of interest onto the d-dimensional space and reflect how closely each point of interest matches the d latent features; the d-dimensional data of each row constitutes the internal potential vector Fpi of a point of interest.
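As a concrete illustration, the matrix factorization with the regularized gradient-descent updates of formula (5) can be sketched as follows. This is a minimal sketch: zero entries of R are treated as unobserved, and the learning rate and regularization strength are small illustrative values, not the α = 20 and β = 0.2 stated above.

```python
import numpy as np

def matrix_factorization(R, d=8, alpha=0.01, beta=0.02, epochs=800):
    """SGD matrix factorization following the update rules of formula (5).

    Zero entries of R are treated as unobserved; alpha (learning rate)
    and beta (L2 strength) are illustrative, not the patent's settings.
    """
    m, n = R.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(m, d))
    V = rng.normal(scale=0.1, size=(n, d))
    rows, cols = np.nonzero(R)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]          # rij - r'ij
            ui = U[i].copy()                      # keep pre-update U_i for the V_j step
            U[i] += alpha * (err * V[j] - beta * U[i])
            V[j] += alpha * (err * ui - beta * V[j])
    return U, V
```

The product U @ V.T then approximates the observed entries of R, and each row of U (resp. V) is the internal potential vector of a user (resp. point of interest).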
Fig. 2 depicts a flow chart of the improved clustering algorithm. The stability of a clustering result depends heavily on the choice of the initial cluster centers, and the traditional k-means algorithm, with its randomly chosen initial centers, carries considerable contingency, so it cannot meet the clustering requirements here. The invention therefore proposes an improved k-means clustering method that selects the initial cluster centers by probability density estimation instead of generating them randomly, so that the clusters stabilize as soon as possible and a more ideal clustering result is obtained. Using a probability density estimation method based on a Gaussian kernel function, given a point-of-interest set POI = {v1, v2, …, vn}, the typicality of a point of interest v can be defined by the probability density function f(v):

f(v) = (1/n)·Σj=1..n K(d(v, vj))    (6)

wherein d(v, vj) represents the overall distance between points of interest v and vj, K(·) is a Gaussian kernel function, and n represents the number of points of interest.
The typical degree of each interest point in the interest point set is calculated one by one through the process, and the interest point with the highest typical degree is selected as the first initial clustering center. And then selecting the interest point with the largest distance from the currently selected cluster center from the rest interest points as the next initial cluster center, and so on until all the initial cluster centers are found. Algorithm 1 presents pseudo code for the improved k-means clustering algorithm to determine the initial centroid. After the initial clustering centers are determined, the distances from the residual interest points to k clustering centers (k represents the number of clusters) are respectively calculated and are classified into the nearest class, and the first round of clustering is finished after all the interest points are classified. And respectively calculating the average value of the interest points in each class as a new clustering center of the class. And sequentially calculating the distance from each interest point to the new clustering center, and classifying the interest points into the closest class to complete the second round of clustering. By analogy, new class centers are continuously generated in each class, and the interest points are classified according to the new class centers until the clustering results of two adjacent clusters are the same (the clustering centers are unchanged), so that the clustering is stable.
Algorithm 1. The improved k-means clustering algorithm for determining the initial class centers
Input: the point-of-interest set V = {v1, …, vn}
Output: k initial centroids Tk = {vi1, …, vik}
The method comprises the following steps:
(1) Tk ← ∅
(2) calculate the probability density of each point of interest in V through formula (6), and select the point of interest vi1 ∈ V with the maximum probability density
(3) Tk ← Tk ∪ {vi1}
(4) V ← V − {vi1}
(5) for all j = 2 to k do
(6)   vij ← argmax v∈V min t∈Tk d(v, t)
(7)   Tk ← Tk ∪ {vij}, V ← V − {vij}
(8) end for
(9) return Tk = {vi1, …, vik}
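A minimal sketch of the initial-center selection of Algorithm 1, assuming Euclidean distance for d(·,·) and an illustrative Gaussian-kernel bandwidth `sigma` (the patent specifies neither): the first center is the point with the highest kernel density, and each subsequent center is the point farthest from all centers chosen so far.

```python
import numpy as np

def initial_centers(points, k, sigma=1.0):
    """Return the indices of k initial k-means centers per Algorithm 1.

    Assumptions: Euclidean distance, Gaussian kernel exp(-d^2 / 2 sigma^2);
    sigma is a hypothetical bandwidth parameter for illustration.
    """
    pts = np.asarray(points, dtype=float)
    # Pairwise distance matrix d(v_i, v_j).
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    # Formula (6): kernel density of each point, used as its typicality.
    density = np.exp(-(d ** 2) / (2 * sigma ** 2)).sum(axis=1) / len(pts)
    centers = [int(np.argmax(density))]
    for _ in range(1, k):
        dist_to_sel = d[:, centers].min(axis=1)   # distance to nearest chosen center
        dist_to_sel[centers] = -1.0               # never re-pick a chosen center
        centers.append(int(np.argmax(dist_to_sel)))
    return centers
```

After these centers are fixed, the usual k-means assignment/update iterations described above proceed unchanged.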
After clustering ends, each point of interest obtains a class label; embedding the cluster label of each point of interest into a vector yields the feature vector of the point of interest in the location space.
To verify the effectiveness of the improved k-means clustering algorithm, taking the Yelp dataset as an example, the traditional and the improved k-means clustering algorithms are each used to cluster the Yelp data into 10 classes, and the two are compared by two common clustering evaluation criteria, the intra-cluster variation Eintra and the inter-cluster variation Einter, whose calculations are shown in formulas (7) and (8). The comparison results are shown in Table 1.

Eintra = Σk Σi,j∈C(k) Dij    (7)
Einter = Σi≠j D(Ii, Ij)    (8)

wherein C(k) represents the set of data points contained in the kth class, Dij represents the distance between data points i and j, and Ii and Ij represent the ith and jth class centers.
TABLE 1 Intra-Cluster variance and inter-Cluster variance for both algorithms
The experimental results show that both the intra-cluster variation and the inter-cluster variation obtained by the improved k-means clustering algorithm are smaller, indicating that the improved clustering makes data points within the same cluster more similar and points of interest in different clusters more distinct; the clustering results of the improved k-means are therefore more reasonable and its effect is better.
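Under one plausible reading of formulas (7) and (8) (sum of pairwise distances within each cluster, and sum of distances between class centers), the two criteria can be computed as:

```python
import numpy as np

def cluster_variations(points, labels):
    """Intra-cluster variation (pairwise distances within each cluster)
    and inter-cluster variation (distances between class centers).

    One plausible reading of formulas (7) and (8), for illustration.
    """
    pts = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    intra, centers = 0.0, []
    for k in np.unique(labels):
        members = pts[labels == k]
        diff = members[:, None] - members[None, :]
        intra += np.linalg.norm(diff, axis=2).sum() / 2  # count each pair once
        centers.append(members.mean(axis=0))
    centers = np.asarray(centers)
    cdiff = centers[:, None] - centers[None, :]
    inter = np.linalg.norm(cdiff, axis=2).sum() / 2      # count each pair once
    return intra, inter
```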
FIG. 3 is an exemplary diagram of the multi-graph fusion architecture. ui represents a user and vj represents a point of interest. A connection between user ui and point of interest vj indicates that ui has visited vj, and the value on the edge represents the user's score for that point of interest; connections between users indicate friend relationships. Suppose user u1 has visited points of interest v1, v2, v3; then u1 is connected to each of v1, v2, v3, and the resulting graph structure is called the user-point-of-interest interaction graph, as shown in FIG. 3(a). Suppose u1 has friendships with u2 and u3, u3 with u5, u5 with u2, and u4 with u2 and u6; combining these friend relationships yields the user social relationship graph, as shown in FIG. 3(b). Integrating the user-point-of-interest interaction graph and the user social relationship graph together constitutes the multi-graph fusion.
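The two graphs of FIG. 3 can be represented with ordinary adjacency structures. The sketch below uses the example edges from the figure description; the score values on u1's edges are hypothetical, since the figure itself gives none.

```python
# Sketch of the two graph structures of FIG. 3 (assumed data layout):
# the interaction graph maps (user, poi) -> score, and the social graph
# stores undirected friend edges as adjacency sets.
from collections import defaultdict

interactions = {("u1", "v1"): 4, ("u1", "v2"): 5, ("u1", "v3"): 3}  # hypothetical scores
friend_pairs = [("u1", "u2"), ("u1", "u3"), ("u3", "u5"),
                ("u5", "u2"), ("u4", "u2"), ("u4", "u6")]

social = defaultdict(set)
for a, b in friend_pairs:
    social[a].add(b)
    social[b].add(a)

def neighbors(node):
    """C(i): the adjacent nodes of `node` across both graphs."""
    pois = {v for (u, v) in interactions if u == node}
    return pois | social.get(node, set())
```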
After the two graphs are obtained, they are input into a neural network, and the features of each graph are extracted through an aggregation function. The characterization vector of the points of interest in the user space, the characterization vector of the users in the point-of-interest space, and the characterization vector of the users in the social space are all obtained through the aggregation function of formula (9).
Fi^T = σ(W·Agg({Xij, ∀j ∈ C(i)}) + b)    (9)

wherein Fi^T represents the characterization vector obtained after aggregation (when Fi^T is Pi^U, it represents the characterization vector of a point of interest in the user space; otherwise it represents the characterization vector of a user in the point-of-interest space or in the social space), σ is a nonlinear activation function, W and b are the weights and biases of the neural network, Agg is an aggregator, C(i) is the set of adjacent nodes of node i in the user-point-of-interest interaction graph and the user social relationship graph, and Xij is the evaluation-aware interaction vector between nodes i and j in the graph, whose specific form is given by formula (10).
Xij = gv(qj ⊕ er)    (10)

wherein gv denotes a multi-layer perceptron; ⊕ denotes the join (concatenation) operation of two vectors; qj denotes, in the user space, the embedding vector of a user who visited point of interest i, and, in the point-of-interest space, the embedding vector of a point of interest j visited by user i; er ∈ R^d denotes the evaluation embedding vector of the user's score for the point of interest; and pj denotes the embedding vector of a friend (neighbor node) of user i in the user social graph.
To alleviate the limitations of the mean aggregator, the invention personalizes the aggregation of each interaction by setting weights, i.e., Agg = Σj∈C(i) αij·Xij, allowing each interaction to contribute differently to the characterization vector of the user/point of interest. The characterization vector of a user/point of interest is thus:

Fi^T = σ(W·Σj∈C(i) αij·Xij + b)    (11)
wherein αij is the attention weight of an interaction, learned through an attention mechanism. The attention mechanism here is a two-layer neural network that takes the evaluation-aware interaction vector Xij and the embedding vector pi of the target object as input to parameterize αij; its structure is shown in FIG. 4, and its specific form is given by formula (12):

αij* = w2ᵀ·σ(W1·(Xij ⊕ pi) + b1) + b2    (12)

The attention scores are normalized by a Softmax function to obtain the attention weights, which represent the contributions of different interactions to the user/point-of-interest feature vectors:

αij = exp(αij*) / Σj'∈C(i) exp(αij'*)    (13)
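A numerical sketch of the attention aggregation of formulas (11)-(13), with randomly initialized weights standing in for the learned two-layer attention network (all shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(5, d))        # interaction vectors X_ij for 5 neighbors j in C(i)
p_i = rng.normal(size=d)           # embedding of the target object (formula (12))
W1 = rng.normal(size=(d, 2 * d))   # first layer of the two-layer attention network
b1 = np.zeros(d)
w2 = rng.normal(size=d)            # second layer (scalar output)
b2 = 0.0

relu = lambda x: np.maximum(x, 0.0)
# Formula (12): unnormalized attention score for each neighbor.
scores = np.array([w2 @ relu(W1 @ np.concatenate([x, p_i]) + b1) + b2 for x in X])
# Formula (13): softmax normalization over C(i).
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()
# Agg = sum_j alpha_ij * X_ij, the personalized aggregation of formula (11).
aggregated = (alpha[:, None] * X).sum(axis=0)
```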
In this way, the feature vector of the user in the point-of-interest space, the feature vector of the user in the social space, the feature vector Pi^U of the points of interest in the user space, and the feature vector of the points of interest in the location space can all be obtained. The feature vectors of the user in the point-of-interest space and in the social space are spliced together to form the external characterization vector Fuo of the user; the feature vector Pi^U of the points of interest in the user space and their feature vector in the location space are spliced together to form the external characterization vector Fpo of the points of interest. The specific process is shown in formulas (14)-(16).
c1 = Fi^T1 ⊕ Fi^T2    (14)
c2 = σ(W2·c1 + b2)    (15)
......
Fi = σ(Wl·c(l-1) + bl)    (16)

wherein Ti, i ∈ {1,2}, denotes the embedding space: when T1 and T2 are the point-of-interest space and the social space, respectively, Fi is the external characterization vector Fuo of the user; when T1 and T2 are the location space and the user space, respectively, Fi is the external characterization vector Fpo of the point of interest.
After the internal embedding vectors Fui and Fpi and the external characterization vectors Fuo and Fpo of the users and points of interest are obtained, each internal embedding vector is spliced with the corresponding external characterization vector to obtain the final vector representations Fu and Fp of the users and points of interest.
The final characterization vectors Fu and Fp of the users and points of interest are input into a multi-layer perceptron (MLP) for score prediction, using the ReLU activation function; the calculation is given in formulas (17)-(20).
g1 = Fu ⊕ Fp    (17)
g2 = σ(W2·g1 + b2)    (18)
gl = σ(Wl·g(l-1) + bl)    (19)
r'ij = Wᵀ·gl    (20)
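The MLP score prediction of formulas (17)-(20) can be sketched as follows; the layer sizes and random weights are illustrative assumptions.

```python
import numpy as np

def mlp_score(F_u, F_p, weights, biases):
    """Score prediction following formulas (17)-(20): concatenate the final
    user/POI vectors, pass them through ReLU layers, then a linear output."""
    g = np.concatenate([F_u, F_p])             # (17): g1 = Fu (+) Fp
    for W, b in zip(weights[:-1], biases):     # (18)-(19): hidden ReLU layers
        g = np.maximum(W @ g + b, 0.0)
    return float(weights[-1] @ g)              # (20): r'ij = W^T g_l

rng = np.random.default_rng(0)
d = 8                                          # illustrative embedding size
weights = [rng.normal(size=(16, 2 * d)),       # hidden layer 1
           rng.normal(size=(8, 16)),           # hidden layer 2
           rng.normal(size=8)]                 # final linear projection
biases = [np.zeros(16), np.zeros(8)]
score = mlp_score(rng.normal(size=d), rng.normal(size=d), weights, biases)
```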
The parameters of the model are learned with formula (21) as the objective function.

L = (1/(2|O|))·Σ(i,j)∈O (r'ij − rij)²    (21)

wherein |O| is the number of points of interest scored by the users in the dataset, rij is the real score of user ui for point of interest vj, and r'ij is the score of user ui for point of interest vj predicted by the model.
after the score of each interest point of the user is obtained, the interest points are ranked from high to low according to the score, and the top k interest points with the highest score form a recommendation list to be recommended to the user.
The method adopts the root mean square error (RMSE) and the mean absolute error (MAE) as evaluation indexes to assess the accuracy of the point-of-interest recommendation method; the smaller both values are, the better the result.

RMSE = √((1/N)·Σi=1..N (r'i − ri)²)
MAE = (1/N)·Σi=1..N |r'i − ri|

wherein N is the total number of points of interest, r'i is the predicted value, and ri is the real value.
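The two evaluation indexes can be computed directly:

```python
import numpy as np

def rmse_mae(pred, true):
    """RMSE and MAE as defined above; lower is better for both."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    err = pred - true
    return float(np.sqrt(np.mean(err ** 2))), float(np.mean(np.abs(err)))
```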
The method adopts the Yelp dataset (Yelp is one of the largest review websites worldwide), intercepting the records with longitude between -112.0 and -111.9 and latitude between 33.3 and 33.45 as experimental data. 80% and 60% of the data are randomly selected as training sets, with the remaining 20% and 40% as test sets, so as to examine the influence of different training/test proportions on the recommendation results.
First, Table 2 shows the experimental results comparing the traditional k-means clustering algorithm and the improved k-means clustering algorithm.
TABLE 2 Performance of different clustering algorithms
From the results in Table 2, whether the training set accounts for 80% or 60%, the recommendation model obtained with the improved k-means clustering algorithm performs better. This is because the improved algorithm determines the initial cluster centers more reasonably, and the clustering result depends to a great extent on those initial centers; the improved k-means algorithm therefore obtains more accurate clustering results, which has a positive effect on the later computation of the feature vectors of the points of interest in the location space.
Second, the invention verifies the benefit of using the attention mechanism in place of the traditional mean operation to aggregate the features of users/points of interest in each embedding space. Three cases are compared: (1) all aggregators use the mean operation (i.e., all friends of a user are assumed to influence the user equally, and users/points of interest are assumed to contribute equally to the points of interest/users they have interacted with); (2) only user intimacy is considered (i.e., different friends influence the user to different degrees, while interaction contributions remain equal); (3) user intimacy and the attention mechanism are considered simultaneously (i.e., both the influence of different friends and the contributions of different interactions differ). The experimental results are shown in Table 3.
TABLE 3 different polymerizers Performance
From the RMSE and MAE values, the recommendation performance when all aggregators use the mean operation is clearly worse than when user intimacy is considered, which means that taking the intimacy between friends into account has a positive influence on the recommendation results. Likewise, considering only user intimacy performs worse than the model that also considers the attention mechanism, reflecting the effectiveness of the attention mechanism in capturing the interaction information between users and points of interest.
Finally, the method is compared with the classical probabilistic matrix factorization (PMF) algorithm from the point-of-interest recommendation field; the experimental results are shown in Table 4.
TABLE 4 different recommended model Performance
As can be seen from Table 4, when 80% of the data set is used as the training set, the algorithm of the present invention reduces the RMSE and MAE indexes by 31.84% and 34.51% respectively compared with the PMF; when 60% of the data set was used as the training set, the algorithm of the present invention reduced the RMSE and MAE indices by 34.89% and 38.22% respectively over the PMF. In order to more intuitively show the comparison results, the experimental results obtained by the two methods are plotted as a bar chart as shown in fig. 5. The method provided by the invention obviously reduces the scoring prediction error, and can effectively improve the accuracy of recommendation, thereby proving the effectiveness of the algorithm provided by the invention.
Finally, it should be noted that: while the foregoing is directed to the preferred embodiment of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (4)

1. An interest point recommendation method based on LBSN and multi-graph fusion is characterized by comprising the following steps:
s1, modeling the internal characteristics of the user and the interest points: splitting a user-interest point scoring matrix into a product of a user matrix and an interest point matrix through a matrix decomposition algorithm, wherein the product is used as an internal potential vector of the user and the interest point;
s2, modeling external characteristics of the user and the interest points: learning the feature vectors of the user in an interest point space and a social space and the feature vectors of the interest points in a user space and a position space through multi-graph fusion and an improved k-means clustering algorithm, and further obtaining external characterization vectors of the user and the interest points;
and S3, inputting the final vectors of the user and the interest points into a neural network for learning, thereby obtaining the score of the user for each interest point, and recommending the top k interest points with the highest scores to the user according to the scores.
2. The point-of-interest recommendation method based on LBSN and multi-graph fusion as claimed in claim 1, wherein the modeling of the internal characteristics of the users and points of interest in step S1 comprises the following specific steps:
performing a matrix decomposition operation on a scoring matrix R with m rows and n columns to obtain two optimal sub-matrices: the user matrix Um×d and the point-of-interest matrix Vn×d, m being the number of users in the scoring matrix and n being the number of points of interest in the scoring matrix; mapping the users and the points of interest respectively into a d-dimensional space, wherein the m rows of d-dimensional vectors in the U matrix are the projections of the m users onto the d-dimensional space and reflect the degree of preference of the users for the d latent features, the d-dimensional data of each row constituting the internal potential vector Fui of each user; and the n rows of d-dimensional vectors in the V matrix are the projections of the n points of interest onto the d-dimensional space and reflect the closeness of the points of interest to the d latent features, the d-dimensional data of each row constituting the internal potential vector Fpi of each point of interest.
3. The point-of-interest recommendation method based on LBSN and multi-graph fusion as claimed in claim 2, wherein step S2 is embodied as follows:
given a point-of-interest set POI = {v1, v2, …, vn}, based on a probability density estimation method with a Gaussian kernel function, first calculating the probability density of each point of interest v according to the following formula (a) and taking it as the typicality of the point of interest;

f(v) = (1/n)·Σj=1..n K(d(v, vj))    (a)

wherein d(v, vj) represents the overall distance between points of interest v and vj, K(·) is a Gaussian kernel function, and n represents the number of points of interest;
selecting an interest point with the highest typical degree as a first initial clustering center according to the calculated typical degree, then selecting an interest point with the largest distance from the currently selected clustering center from the rest interest points as a next initial clustering center, repeating the steps until all the initial clustering centers are found out, then clustering the interest points according to the selected initial clustering centers, dividing each interest point into the class center which is closest to the interest point, completing clustering until the clustering results of the two adjacent times are unchanged, and finally embedding the interest points into a position space according to the class label of each interest point to obtain the feature vector of the interest points in the position space;
constructing a user-point-of-interest interaction graph, and learning the feature vector of the user in the point-of-interest space and the feature vector of the points of interest in the user space from it through the aggregation function of the following formula (b); constructing a user social relationship graph, and learning the feature vector of the user in the social space from it through the aggregation function of the following formula (b);

Fi^T = σ(W·Agg({Xij, ∀j ∈ C(i)}) + b)    (b)

wherein Fi^T represents the characterization vector obtained after aggregation, σ is a nonlinear activation function, W and b are the weights and biases of the neural network, Agg is an aggregator, C(i) is the set of adjacent nodes of node i in the user-point-of-interest interaction graph and the user social relationship graph, and Xij is the evaluation-aware interaction vector between nodes i and j in the graph;
then, splicing the feature vectors of the user in the interest point space and the social space through the following three formulas (c-e) to obtain an external characterization vector of the user; splicing the feature vectors of the interest points in the user space and the position space to obtain external characterization vectors of the interest points;
c1 = Fi^T1 ⊕ Fi^T2    (c)
c2 = σ(W2·c1 + b2)    (d)
Fi = σ(Wl·c(l-1) + bl)    (e)

wherein Ti, i ∈ {1,2}, denotes the embedding space: when T1 and T2 are the point-of-interest space and the social space, respectively, Fi is the external characterization vector Fuo of the user; when T1 and T2 are the location space and the user space, respectively, Fi is the external characterization vector Fpo of the point of interest.
4. The point-of-interest recommendation method based on LBSN and multi-graph fusion as claimed in claim 3, wherein in step S3 the points of interest are sorted according to their scores and the top k points of interest with the highest scores are selected to form a recommendation list recommended to the user, by the following steps:
first concatenating the internal potential vector Fui and the external characterization vector Fuo of the user to obtain the final vector Fu of the user, and concatenating the internal potential vector Fpi and the external characterization vector Fpo of the point of interest to obtain the final vector Fp of the point of interest; inputting the final vectors of the user and the points of interest into a neural network, and carrying out score prediction in the multi-layer perceptron MLP with the ReLU activation function, the calculation being as in formulas (f)-(i);

g1 = Fu ⊕ Fp    (f)
g2=σ(W2·g1+b2) (g)
gl=σ(Wl·gl-1+bl) (h)
r′ij=WT·gl (i)
learning the parameters of the model by taking the following formula (j) as an objective function;
L = (1/(2|O|))·Σ(i,j)∈O (r'ij − rij)²    (j)
wherein |O| is the number of points of interest scored by the users in the dataset, rij is the real score of user ui for point of interest vj, and r'ij is the score of user ui for point of interest vj predicted by the model;
after the score of each interest point of the user is obtained, the interest points are arranged according to the score in a descending order, and the top k interest points with the highest scores are selected to form a recommendation list to be recommended to the user.
CN202111103851.5A 2021-09-18 2021-09-18 Interest point recommendation method based on LBSN (location based service) and multi-graph fusion Pending CN113742597A (en)


Publications (1)

Publication Number Publication Date
CN113742597A true CN113742597A (en) 2021-12-03




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (Application publication date: 20211203)