CN112632404A - Multi-granularity self-attention-based next interest point recommendation method - Google Patents
- Publication number
- CN112632404A (application number CN202011616569.2A)
- Authority
- CN
- China
- Prior art keywords
- poi
- sequence
- attention
- self
- encoding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F16/9536—Search customisation based on social or collaborative filtering
- G06N3/047—Probabilistic or stochastic networks
- G06N3/048—Activation functions
- G06N3/08—Learning methods
- G06Q50/01—Social networking
Abstract
The invention discloses a multi-granularity self-attention-based next point-of-interest recommendation method, which comprises the following steps: constructing a multi-granularity self-attention network model comprising a first encoder for time perception encoding, a second encoder for space perception encoding, and a decoder for decoding POI characteristics; collecting historical check-in data of a plurality of users, training the network model, and predicting a user's next point of interest with the trained network model. The training and prediction process includes: extracting, from a user's historical check-in data, a POI sequence containing temporal and spatial features, and extracting a POI business circle sequence containing temporal and spatial features; and taking the POI sequence and the POI business circle sequence as input sequences and processing them through the network model to obtain the user's next POI id. By comprehensively considering both the POI sequence and the POI business circle sequence and then applying the multi-granularity self-attention network, the invention can recommend the next point of interest more accurately.
Description
Technical Field
The invention relates to the technical field of information processing. More particularly, the present invention relates to a multi-granularity self-attention-based next point of interest recommendation method.
Background
With the rapid development of intelligent devices, social networks such as WeChat, Weibo and Facebook attract billions of users to exchange and share information. In recent years, a significant advance in social networking applications has been the introduction of location-based services, which has led to the emergence of location-based social networks (LBSNs) such as Foursquare and Yelp. Point-of-interest (POI) recommendation systems play a crucial role in social networking: they help users find interesting local attractions or facilities, and they help service providers deliver personalized services (e.g., advertisements) based on a user's check-in sequence. However, most next point-of-interest recommendation methods focus only on the transition patterns within the POI sequence and ignore the transition patterns between POI business circles, resulting in low accuracy of the next point-of-interest recommendation.
Disclosure of Invention
An object of the present invention is to solve at least the above problems and to provide at least the advantages described later.
The invention also aims to provide a multi-granularity self-attention-based next interest point recommendation method, which can recommend a next interest point more accurately by applying a multi-granularity self-attention network after comprehensively considering the POI sequence and the POI business circle sequence.
To achieve these objects and other advantages in accordance with the purpose of the invention, there is provided a multi-granularity self-attention based next point of interest recommendation method, comprising:
constructing a multi-granularity self-attention network model, wherein the multi-granularity self-attention network model comprises: a first encoder for time perception encoding, a second encoder for space perception encoding, and a decoder for decoding POI characteristics;
collecting historical sign-in data of a plurality of users, taking the historical sign-in data of a part of users as a training set to train the multi-granularity self-attention network model, taking the historical sign-in data of another part of users as a test set, and predicting the next interest point of the users by using the trained multi-granularity self-attention network model;
wherein, the training process of the training set and the testing process of the testing set comprise: extracting a POI sequence containing time characteristics and space characteristics according to historical sign-in data of a user, and extracting a POI business circle sequence containing time characteristics and space characteristics according to the historical sign-in data of the user;
and taking the POI sequence and the POI business circle sequence as input sequences, and processing the input sequences through the multi-granularity self-attention network model to obtain the next POI id of the user.
Preferably, the user historical check-in data includes the user id, the POI id, the check-in time, the category to which the POI belongs, the POI category, and the geographical location of the POI.
Preferably, the decoders include a first decoder for decoding a category of POI, a second decoder for decoding a category to which the POI belongs, and a third decoder for decoding an id of the POI.
Preferably, the first encoder and the second encoder are each composed of a feature layer, an aggregation layer, and a self-attention network.
Preferably, the process of taking the POI sequence and the POI business circle sequence as input sequences and processing them through the multi-granularity self-attention network model to obtain the user's next POI id comprises:
the method comprises the steps that a first encoder is used for encoding time characteristics in a POI sequence to obtain time perception encoding of the POI sequence, a second encoder is used for encoding space characteristics in the POI sequence to obtain space perception encoding of the POI sequence, and the time perception encoding of the POI sequence and the space perception encoding of the POI sequence are integrated to obtain POI sequence representation;
the method comprises the steps that a first encoder is used for encoding time features in a POI business circle sequence to obtain time perception encoding of the POI business circle sequence, a second encoder is used for encoding space features in the POI business circle sequence to obtain space perception encoding of the POI business circle sequence, and the time perception encoding of the POI business circle sequence and the space perception encoding of the POI business circle sequence are integrated to obtain POI business circle sequence representation;
taking the time perception code of the POI sequence and the time perception code of the POI business circle sequence as input quantities, and processing the input quantities through a first decoder to obtain the predicted value of the next POI category;
taking the aggregated representation obtained while encoding the temporal features of the POI sequence and the aggregated representation obtained while encoding the temporal features of the POI business circle sequence as input quantities, and processing the input quantities through a second decoder to obtain the predicted value of the category to which the next POI belongs;
and taking the POI sequence representation, the POI business circle sequence representation, the next POI category predicted value and the next POI belonged category predicted value as input quantities, and processing by a third decoder to obtain the next POI id of the user.
Preferably, a layer of feedforward neural network is added to the self-attention network, and the activation function used is the ReLU function.
Preferably, the activation function of the first decoder is a sigmoid function, and the loss function is a cross-entropy loss function.
Preferably, the loss function of the second decoder is a BPR loss function.
Preferably, the loss function of the third decoder is a BPR loss function.
Preferably, an AdaGrad optimizer is used in the multi-granularity self-attention network model training process.
The invention at least includes the following beneficial effects: the POI sequence is used as a fine-grained sequence and the POI business circle sequence as a coarse-grained sequence; modeling on both the coarse-grained and fine-grained levels can effectively improve the model's expressive capacity; a multi-granularity self-attention network is adopted to capture the ordered transition patterns of the two granularities; a task of predicting the next point of interest's category and an auxiliary task of predicting the category to which the next point of interest belongs are introduced; and finally the sequence representations of the two granularities and the results of the two tasks are integrated to recommend the next point of interest.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
FIG. 1 is a flow chart of a multi-granularity self-attention-based next point of interest recommendation method according to the present invention;
FIG. 2 is a flow chart of a multi-granular self-attention network model training and testing process according to the present invention;
FIG. 3 is a flow chart of a multi-granular self-attention network model encoding and decoding process according to the present invention;
FIG. 4 is a diagram of a historical access record of a user Alice according to one embodiment of the invention.
Detailed Description
The present invention is further described in detail below with reference to the attached drawings so that those skilled in the art can implement the invention by referring to the description text.
It should be noted that the embodiments described in this application are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As shown in fig. 1, the present invention provides a multi-granularity self-attention-based next point of interest recommendation method, which includes:
s101, constructing a multi-granularity self-attention network model, wherein the multi-granularity self-attention network model comprises: a first encoder for time perception encoding, a second encoder for space perception encoding, and a decoder for decoding POI characteristics;
here, the first encoder and the second encoder are each composed of a feature layer, an aggregation layer, and a self-attention network.
The self-attention network utilizes an attention mechanism to calculate the relevance between each element and all other elements. To calculate the self-attention score, there are mainly the following processes:
firstly, obtaining Q, K and V by multiplying the input matrix X with weight matrices:
Q = XW_Q;
K = XW_K;
V = XW_V;
secondly, performing the attention operation on Q, K and V and normalizing the result with a softmax function:
Z = softmax(QK^T/√d_k)V;
wherein the factor 1/√d_k scales the result. Since the previous calculations are all linear, in order to give the model better nonlinear fitting capability, a layer of feedforward neural network is added, and the activation function used is the ReLU function:
F = FFN(Z) = ReLU(ZW_1 + b_1);
finally, the result F of the self-attention network is obtained.
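The computation above can be sketched in NumPy; the dimensions and random weights below are purely illustrative, and the single feedforward layer with ReLU follows the description:

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention over a sequence X of shape (n, d)."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(Q.shape[-1])        # QK^T / sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ V                             # Z

def san(X, W_q, W_k, W_v, W_1, b_1):
    """Self-attention followed by a one-layer feedforward network with ReLU."""
    Z = self_attention(X, W_q, W_k, W_v)
    return np.maximum(0.0, Z @ W_1 + b_1)          # F = ReLU(Z W_1 + b_1)

rng = np.random.default_rng(0)
n, d = 5, 8                                        # toy sequence length and dim
X = rng.normal(size=(n, d))
W_q, W_k, W_v, W_1 = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
F = san(X, W_q, W_k, W_v, W_1, np.zeros(d))
print(F.shape)  # (5, 8)
```

The ReLU guarantees a non-negative output, and each attention row sums to one by construction of the softmax.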
Here, the POI feature includes POI id, category to which the POI belongs, category of the POI, and geographical position of the POI, and the decoder for decoding the POI feature includes a first decoder for decoding the category of the POI, a second decoder for decoding the category to which the POI belongs, and a third decoder for decoding the POI id.
S102, collecting historical sign-in data of a plurality of users, taking the historical sign-in data of a part of users as a training set to train the multi-granularity self-attention network model, taking the historical sign-in data of the other part of users as a test set, and predicting the next interest point of the users by using the trained multi-granularity self-attention network model;
say there are 10 users, each with some check-in history p_1 to p_10. During training, for each user in the training set, p_1 to p_9 are used to predict p_10. Assuming the ratio of the training set to the test set is 8:2, the histories of 8 users are used for training to obtain the learnable parameters of the multi-granularity self-attention network model, and the histories of the remaining 2 users are used as a test set and tested independently to predict each user's next interest point. Of course, this is merely an example; in the practical application process, the size of the training set grows continuously with the number of users, so the prediction results of the multi-granularity self-attention network model become more and more accurate.
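A minimal sketch of this user-level split; the user and POI names below are toy placeholders:

```python
# Toy check-in histories: 10 users, each with POIs p1..p10.
users = {f"u{i}": [f"p{j}" for j in range(1, 11)] for i in range(1, 11)}

items = list(users.items())
train_users, test_users = dict(items[:8]), dict(items[8:])  # 8:2 split over users

def make_example(history):
    """All but the last check-in serve as input; the last one is the target."""
    return history[:-1], history[-1]

inputs, target = make_example(train_users["u1"])
print(len(train_users), len(test_users), len(inputs), target)  # 8 2 9 p10
```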
Here, the training set training process and the test set testing process include:
s201, extracting a POI sequence containing time characteristics and space characteristics according to historical sign-in data of a user, and extracting a POI business circle sequence containing time characteristics and space characteristics according to the historical sign-in data of the user;
here, the user historical check-in data includes the user id (u), the POI id (p), the check-in time (t), the category to which the POI belongs (c), the POI category (y), and the geographical location of the POI (g); a check-in record is thus represented as the tuple (u, p, t, c, y, g).
Here, the user id (u), the POI id (p), the check-in time (t), and the POI geographical location (g) are self-explanatory; the category to which the POI belongs (c) can be roughly classified into catering, hospitals, education, and so on. The POI category (y) is explained with the following example:
as shown in FIG. 4, the user Alice has the historical access record p_2 → p_3 → … → p_8 on the electronic map, where the individual POIs p_2, p_3, p_4 and p_5 lie within the business circle POI p_1. In this model, the POI sequence (fine-grained sequence) is p_2 → p_3 → … → p_8, and the POI business circle sequence (coarse-grained sequence) is p_1 → p_6 → p_7 → p_8, in which p_2 → p_3 → p_4 → p_5 is simplified to the business circle POI p_1. Because the individual POIs p_6, p_7 and p_8 have no corresponding business circle POI at the coarser granularity, they are retained as-is to keep the sequence context semantically complete. If POI categories (y) are distinguished by 0 and 1, then in the POI sequence the category (y) of p_2 to p_8 has the value 0, and in the POI business circle sequence the category (y) of p_1 has the value 1.
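Alice's example can be sketched as follows, where the poi_to_circle mapping is a hypothetical stand-in for the real assignment of POIs to business circles:

```python
poi_seq = ["p2", "p3", "p4", "p5", "p6", "p7", "p8"]         # fine-grained
poi_to_circle = {p: "p1" for p in ("p2", "p3", "p4", "p5")}  # p2..p5 lie in p1

circle_seq = []
for p in poi_seq:
    c = poi_to_circle.get(p, p)          # POIs without a circle are kept as-is
    if not circle_seq or circle_seq[-1] != c:
        circle_seq.append(c)             # collapse consecutive same-circle visits
print(circle_seq)  # ['p1', 'p6', 'p7', 'p8']
```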
In the POI sequence, t_n^u denotes the temporal feature of user u at step n and g_n^u denotes the spatial feature of user u at step n. Similarly, in the POI business circle sequence, t'_n^u denotes the temporal feature of user u at step n and g'_n^u denotes the spatial feature of user u at step n.
S202, the POI sequence and the POI business circle sequence are used as input sequences, and the next POI id of the user is obtained through processing of the multi-granularity self-attention network model.
The steps specifically include the following processes:
s301, a first encoder is used for encoding the time characteristics in the POI sequence to obtain the time perception encoding of the POI sequence.
In the POI sequence, t_n^u is the temporal feature of user u at step n; the first encoder consists of a feature layer, an aggregation layer, and a self-attention network.
First, the temporal feature aggregation is represented as:
a_n^T = e_y + e_t + e_c + b_1;
wherein e_y is the embedding of the POI category, e_t is the embedding of the timestamp mapped to 24 hours, e_c is the embedding of the category to which the POI belongs, and b_1 is a learnable parameter.
Then, the aggregated representation of the temporal features is sent into the self-attention network, resulting in the time perception encoding of the POI sequence:
E^T = SAN(a^T);
wherein SAN stands for the self-attention network and E^T is the time perception encoding of the POI sequence.
A second encoder is then used to encode the spatial features in the POI sequence to obtain the spatial perception encoding of the POI sequence, and the time perception encoding and the spatial perception encoding of the POI sequence are combined to obtain the POI sequence representation.
In the POI sequence, g_n^u is the spatial feature of user u at step n. Note that the feature-extraction process does not use g_n^u, i.e. the spatial coordinates, directly; instead, the distance between the current position and the previous position, denoted d_n^u, is used. The second encoder likewise consists of a feature layer, an aggregation layer, and a self-attention network.
First, the spatial feature aggregation is represented as:
a_n^S = e_p + e_d + b_2;
wherein e_p is the embedding corresponding to the POI id, e_d is the embedding of the distance d_n^u between the current position and the previous position, and b_2 is a learnable parameter.
Then, d_n^u is cast to an integer type to reduce computational complexity, and the aggregated representation of the spatial features is sent into the self-attention network to obtain the spatial perception encoding of the POI sequence:
E^S = SAN(a^S);
wherein SAN stands for the self-attention network and E^S is the spatial perception encoding of the POI sequence. Finally, the time perception encoding and the spatial perception encoding of the POI sequence are combined into the POI sequence representation:
R = E^T + E^S;
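One plausible reading of the feature and aggregation layers, sketched with embedding tables summed per check-in step; the table sizes, the summation, and the distance bucketing are assumptions, not taken verbatim from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
emb_poi  = rng.normal(size=(100, d))  # POI-id embeddings
emb_hour = rng.normal(size=(24, d))   # timestamp mapped to 24 hours
emb_cat  = rng.normal(size=(10, d))   # category to which the POI belongs
emb_dist = rng.normal(size=(50, d))   # bucketed distance to previous position
emb_y    = rng.normal(size=(2, d))    # POI category flag (0 = individual, 1 = circle)

def temporal_agg(y, hour, cat, b):
    # Sum of category, hour-of-day and belonging-category embeddings plus bias.
    return emb_y[y] + emb_hour[hour % 24] + emb_cat[cat] + b

def spatial_agg(poi_id, dist_km, b):
    # POI-id embedding plus a distance embedding; the distance is cast to an
    # integer bucket, as the description suggests, to reduce complexity.
    bucket = min(int(dist_km), emb_dist.shape[0] - 1)
    return emb_poi[poi_id] + emb_dist[bucket] + b

t_vec = temporal_agg(0, 14, 2, np.zeros(d))
s_vec = spatial_agg(3, 2.7, np.zeros(d))
print(t_vec.shape, s_vec.shape)  # (8,) (8,)
```

The aggregated per-step vectors are what would then be fed into the self-attention network.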
s302, a first encoder is used to encode the temporal features in the POI business circle sequence to obtain the time perception encoding of the POI business circle sequence.
In the POI business circle sequence, t'_n^u is the temporal feature of user u at step n. The first encoder consists of a feature layer, an aggregation layer, and a self-attention network.
First, the temporal feature aggregation is represented as:
a'_n^T = e'_y + e'_t + e'_c + b_3;
wherein e'_y is the embedding of the POI category, e'_t is the embedding of the timestamp mapped to 24 hours, e'_c is the embedding of the category to which the POI belongs, and b_3 is a learnable parameter.
Then, the aggregated representation of the temporal features is sent into the self-attention network to obtain the time perception encoding of the POI business circle sequence:
E'^T = SAN(a'^T);
wherein SAN stands for the self-attention network and E'^T is the time perception encoding of the POI business circle sequence.
Coding the spatial features in the POI business circle sequence with a second encoder gives the spatial perception encoding of the POI business circle sequence, and the time perception encoding and the spatial perception encoding of the POI business circle sequence are combined to obtain the POI business circle sequence representation.
In the POI business circle sequence, g'_n^u is the spatial feature of user u at step n. Note that the feature-extraction process does not use g'_n^u, i.e. the spatial coordinates, directly; instead, the distance between the current position and the previous position, denoted d'_n^u, is used. The second encoder likewise consists of a feature layer, an aggregation layer, and a self-attention network.
First, the spatial feature aggregation is represented as:
a'_n^S = e'_p + e'_d + b_4;
wherein e'_p is the embedding corresponding to the POI id, e'_d is the embedding of the distance d'_n^u between the current position and the previous position, and b_4 is a learnable parameter.
Then, d'_n^u is cast to an integer type to reduce computational complexity, and the aggregated representation of the spatial features is sent into the self-attention network to obtain the spatial perception encoding of the POI business circle sequence:
E'^S = SAN(a'^S);
wherein SAN stands for the self-attention network and E'^S is the spatial perception encoding of the POI business circle sequence. Finally, the time perception encoding and the spatial perception encoding of the POI business circle sequence are combined into the POI business circle sequence representation:
R' = E'^T + E'^S;
s303, the time perception encoding of the POI sequence and the time perception encoding of the POI business circle sequence are taken as input quantities and processed through a first decoder to obtain the predicted value of the next POI category;
to predict the next POI category, we use the time perception encodings of the two sequences:
ŷ = σ(W_y T_rep + b_y);
wherein T_rep is an aggregation of the time perception encodings of the two sequences, σ is the sigmoid function, ŷ is the predicted value of the next POI category, and W_y and b_y are learnable parameters.
For the prediction of the next POI category, we use the cross-entropy loss function:
L_y = −[y log ŷ + (1 − y) log(1 − ŷ)];
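A sketch of the first decoder under stated assumptions: the concatenation-based aggregation of the two time perception encodings and the parameter names W_y and b_y are illustrative, not taken from the patent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_next_poi_category(T_poi, T_circle, W_y, b_y):
    """Sigmoid over the aggregated time-aware encodings of both sequences."""
    T_rep = np.concatenate([T_poi, T_circle])
    return sigmoid(W_y @ T_rep + b_y)          # scalar probability in (0, 1)

def cross_entropy(y, y_hat, eps=1e-12):
    y_hat = np.clip(y_hat, eps, 1.0 - eps)     # avoid log(0)
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

rng = np.random.default_rng(2)
d = 8
y_hat = predict_next_poi_category(rng.normal(size=d), rng.normal(size=d),
                                  rng.normal(size=2 * d) * 0.1, 0.0)
loss = cross_entropy(1.0, y_hat)
print(0.0 < y_hat < 1.0, loss > 0.0)  # True True
```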
s304, the aggregated representation obtained while encoding the temporal features of the POI sequence and the aggregated representation obtained while encoding the temporal features of the POI business circle sequence are used as input quantities and processed through a second decoder to obtain the predicted value of the category to which the next POI belongs;
to predict the category to which the next POI belongs, we first compute the temporally encoded aggregated representations of the two sequences, then compute their aggregation A_rep, and finally compute the category to which the next POI belongs:
ĉ = W_c [A_rep; U];
wherein A_rep is the time-aware representation, U is the user representation, ĉ is the predicted value of the category to which the next POI belongs, and W_c is a learnable parameter.
For the prediction of the category to which the next POI belongs, we use the BPR loss function:
L_c = −Σ ln σ(ĉ_i − ĉ_j);
wherein i indexes the observed category and j a sampled negative category.
s305, the POI sequence representation, the POI business circle sequence representation, the predicted value of the next POI category, and the predicted value of the category to which the next POI belongs are used as input quantities and processed through a third decoder to obtain the user's next POI id.
In order to recommend the next POI, the POI sequence representation, the POI business circle sequence representation, the predicted value of the next POI category, and the predicted value of the category to which the next POI belongs are aggregated, the aggregation P_rep of the spatial perception encodings of the two sequences is computed, and the next POI id is finally recommended:
p̂ = W_p [M; P_rep; ŷ; ĉ; U];
wherein M is the multi-granularity sequence representation obtained by aggregating the first-encoder and second-encoder results of the two sequences, P_rep is the spatial perception representation, U is the user representation, p̂ is the predicted value of the user's next POI id, and W_p is a learnable parameter.
For recommending the next POI, we use the BPR loss function:
L_p = −Σ ln σ(p̂_i − p̂_j);
wherein i indexes the ground-truth next POI and j a sampled negative POI.
For the total loss function, there is the following expression:
L = λ_y·L_y + λ_c·L_c + λ_p·L_p + λ‖Θ(W, b)‖²;
wherein λ_y, λ_c and λ_p adjust the different losses with λ_y + λ_c + λ_p = 1, λ is the regularization coefficient, and Θ(W, b) is the set of learnable parameters.
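The BPR loss used for recommendation and a weighted total loss with L2 regularization can be sketched as follows; the specific λ values and the parameter set below are illustrative assumptions:

```python
import numpy as np

def bpr_loss(pos_score, neg_score):
    """BPR: push the positive item's score above a sampled negative's."""
    return -np.log(1.0 / (1.0 + np.exp(-(pos_score - neg_score))))

def total_loss(L_y, L_c, L_p, params, lam_y=0.2, lam_c=0.3, lam_p=0.5, lam=1e-4):
    """L = lam_y*L_y + lam_c*L_c + lam_p*L_p + lam*||params||^2."""
    reg = lam * sum(np.sum(W ** 2) for W in params)  # L2 over learnable params
    return lam_y * L_y + lam_c * L_c + lam_p * L_p + reg

L_p = bpr_loss(2.0, 0.5)  # positive scored above the negative: small loss
L = total_loss(0.3, 0.4, L_p, [np.ones((2, 2))])
print(L_p > 0.0, L > 0.0)  # True True
```

Note that the loss shrinks as the positive item's score pulls further above the negative's, which is the intended pairwise ranking behaviour.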
Here, when steps S303 to S305 are performed during the training process, an AdaGrad optimizer is also employed to optimize the learnable parameters.
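A minimal AdaGrad update, for illustration only; the toy quadratic objective stands in for the model's real loss and gradients:

```python
import numpy as np

def adagrad_step(w, grad, G, lr=0.1, eps=1e-8):
    """AdaGrad: per-parameter learning rates from accumulated squared gradients."""
    G = G + grad ** 2
    w = w - lr * grad / (np.sqrt(G) + eps)
    return w, G

w, G = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(5):
    grad = 2.0 * w                 # gradient of the toy objective ||w||^2
    w, G = adagrad_step(w, grad, G)
# Both entries shrink in magnitude toward the minimum at 0.
```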
According to the method in this embodiment, the POI sequence is used as a fine-grained sequence and the POI business circle sequence as a coarse-grained sequence; modeling on both the coarse-grained and fine-grained levels effectively improves the model's expressive capacity. A multi-granularity self-attention network is adopted to capture the ordered transition patterns of the two granularities, a task of predicting the next point of interest's category and an auxiliary task of predicting the category to which the next point of interest belongs are introduced, and finally the sequence representations of the two granularities and the results of the two tasks are integrated to recommend the next point of interest.
While embodiments of the invention have been described above, it is not limited to the applications set forth in the description and the embodiments, which are fully applicable in various fields of endeavor to which the invention pertains, and further modifications may readily be made by those skilled in the art, it being understood that the invention is not limited to the details shown and described herein without departing from the general concept defined by the appended claims and their equivalents.
Claims (10)
1. The next interest point recommendation method based on multi-granularity self-attention is characterized by comprising the following steps:
constructing a multi-granularity self-attention network model, wherein the multi-granularity self-attention network model comprises: a first encoder for time perception encoding, a second encoder for space perception encoding, and a decoder for decoding POI characteristics;
collecting historical sign-in data of a plurality of users, taking the historical sign-in data of a part of users as a training set to train the multi-granularity self-attention network model, taking the historical sign-in data of another part of users as a test set, and predicting the next interest point of the users by using the trained multi-granularity self-attention network model;
wherein, the training process of the training set and the testing process of the testing set comprise: extracting a POI sequence containing time characteristics and space characteristics according to historical sign-in data of a user, and extracting a POI business circle sequence containing time characteristics and space characteristics according to the historical sign-in data of the user;
and taking the POI sequence and the POI business circle sequence as input sequences, and processing the input sequences through the multi-granularity self-attention network model to obtain the next POI id of the user.
2. The multi-granularity self-attention-based next point-of-interest recommendation method according to claim 1, wherein the user historical check-in data comprises user id, POI id, check-in time, category to which the POI belongs, POI category and POI geographic location.
3. The multi-granularity self-attention-based next point-of-interest recommendation method according to claim 2, wherein the decoders comprise a first decoder for decoding categories of POIs, a second decoder for decoding categories to which POIs belong, and a third decoder for decoding POI ids.
4. The multi-granularity self-attention-based next point-of-interest recommendation method of claim 3, wherein the first encoder and the second encoder are each composed of a feature layer, an aggregation layer, and a self-attention network.
5. The multi-granularity self-attention-based next point-of-interest recommendation method according to claim 4, wherein the process of using the POI sequence and the POI business circle sequence as input sequences and processing them through the multi-granularity self-attention network model to obtain the user's next POI id comprises the following steps:
encoding the temporal features in the POI sequence with the first encoder to obtain a time-aware encoding of the POI sequence, encoding the spatial features in the POI sequence with the second encoder to obtain a space-aware encoding of the POI sequence, and combining the time-aware encoding and the space-aware encoding of the POI sequence to obtain the POI sequence representation;
encoding the temporal features in the POI business circle sequence with the first encoder to obtain a time-aware encoding of the POI business circle sequence, encoding the spatial features in the POI business circle sequence with the second encoder to obtain a space-aware encoding of the POI business circle sequence, and combining the two to obtain the POI business circle sequence representation;
using the time-aware encoding of the POI sequence and the time-aware encoding of the POI business circle sequence as input quantities, processing them through the first decoder to obtain a predicted value of the next POI category;
using the aggregation representation obtained while encoding the temporal features of the POI sequence and the aggregation representation obtained while encoding the temporal features of the POI business circle sequence as input quantities, processing them through the second decoder to obtain a predicted value of the category to which the next POI belongs;
and using the POI sequence representation, the POI business circle sequence representation, the predicted next POI category, and the predicted category to which the next POI belongs as input quantities, processing them through the third decoder to obtain the user's next POI id.
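The fusion and final-decoding steps of claim 5 can be loosely sketched as below. The patent does not specify how the time-aware and space-aware encodings are combined or how the third decoder scores candidates, so this sketch assumes element-wise addition for the fusion and a dot-product score with an additive category-prediction bonus; both are illustrative choices, not the claimed method.

```python
def fuse(time_enc, space_enc):
    """Combine the time-aware and space-aware encodings of one position
    (element-wise addition is an assumption, not the patent's operator)."""
    return [t + s for t, s in zip(time_enc, space_enc)]

def score_pois(seq_repr, poi_embs, cat_bonus):
    """Score each candidate POI id against the sequence representation and
    pick the best; cat_bonus stands in for the category predictions that
    claim 5 feeds into the third decoder."""
    scores = {}
    for pid, emb in poi_embs.items():
        dot = sum(a * b for a, b in zip(seq_repr, emb))
        scores[pid] = dot + cat_bonus.get(pid, 0.0)
    return max(scores, key=scores.get)

repr_ = fuse([0.5, 0.5], [0.1, -0.1])
# hypothetical POI embeddings; POI 9 gets a category-prediction bonus
best = score_pois(repr_, {7: [1.0, 0.0], 9: [0.0, 1.0]}, {9: 0.5})
```

The point of the sketch is only the data flow: two granularity-specific representations plus two category-level predictions jointly determine the next POI id.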
6. The multi-granularity self-attention-based next point-of-interest recommendation method according to claim 4, wherein a feedforward neural network layer is added after the self-attention network, and the activation function used is the ReLU function.
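The feed-forward layer of claim 6 is the standard position-wise map applied after self-attention; a minimal sketch, with toy weights rather than learned parameters:

```python
def relu(x):
    # the activation named in claim 6
    return x if x > 0.0 else 0.0

def feed_forward(vec, W, b):
    """One position-wise feed-forward layer: ReLU(W @ vec + b)."""
    return [relu(sum(w * v for w, v in zip(row, vec)) + bi)
            for row, bi in zip(W, b)]

out = feed_forward([1.0, -2.0], W=[[1.0, 0.0], [0.0, 1.0]], b=[0.0, 0.5])
```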
7. The multi-granularity self-attention-based next point-of-interest recommendation method of claim 5, wherein the activation function of the first decoder is the sigmoid function, and the loss function is a cross-entropy loss function.
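Claim 7 pairs a sigmoid activation with a cross-entropy loss in the first decoder. The scalar binary form is sketched here; the patent's decoder presumably operates over many POI categories, so this is a simplification of the same loss, not the claimed multi-class setup.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cross_entropy(logit, label):
    """Binary cross-entropy on a sigmoid output: label is 0 or 1."""
    p = sigmoid(logit)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

loss = cross_entropy(0.0, 1)  # p = 0.5, so the loss is ln 2
```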
8. The multi-granularity self-attention-based next point-of-interest recommendation method of claim 5, wherein the loss function of the second decoder is a BPR loss function.
9. The multi-granularity self-attention-based next point-of-interest recommendation method of claim 5, wherein the loss function of the third decoder is a BPR loss function.
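Claims 8 and 9 use a BPR (Bayesian Personalized Ranking) loss for the second and third decoders. Its standard pairwise form is -ln σ(s_pos − s_neg), which pushes the positive item's score above a sampled negative's; a sketch of that form (batching, negative sampling, and regularization omitted):

```python
import math

def bpr_loss(pos_score, neg_score):
    """Pairwise BPR loss: -ln(sigmoid(pos_score - neg_score))."""
    return -math.log(1.0 / (1.0 + math.exp(-(pos_score - neg_score))))

# the larger the margin between positive and negative, the smaller the loss
high_margin = bpr_loss(3.0, 0.0)
low_margin = bpr_loss(0.5, 0.0)
```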
10. The multi-granularity self-attention-based next point-of-interest recommendation method according to claim 5, wherein an AdaGrad optimizer is used during training of the multi-granularity self-attention network model.
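The AdaGrad optimizer named in claim 10 scales each parameter's learning rate by the root of its accumulated squared gradients, so frequently-updated parameters take progressively smaller steps. A single-parameter sketch of the update rule (hyperparameter values are illustrative):

```python
import math

def adagrad_step(param, grad, accum, lr=0.1, eps=1e-8):
    """One AdaGrad update: accumulate grad^2, then divide the step by its root."""
    accum = accum + grad * grad
    param = param - lr * grad / (math.sqrt(accum) + eps)
    return param, accum

p, acc = 1.0, 0.0
for g in [0.5, 0.5, 0.5]:       # identical gradients yield shrinking steps
    p, acc = adagrad_step(p, g, acc)
```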
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011616569.2A CN112632404A (en) | 2020-12-30 | 2020-12-30 | Multi-granularity self-attention-based next interest point recommendation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112632404A true CN112632404A (en) | 2021-04-09 |
Family
ID=75287137
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011616569.2A Pending CN112632404A (en) | 2020-12-30 | 2020-12-30 | Multi-granularity self-attention-based next interest point recommendation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112632404A (en) |
- 2020-12-30 CN CN202011616569.2A patent/CN112632404A/en active Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113205427A (en) * | 2021-06-07 | 2021-08-03 | 广西师范大学 | Recommendation method for next interest point of social network |
CN113205427B (en) * | 2021-06-07 | 2022-09-16 | 广西师范大学 | Recommendation method for next interest point of social network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Cui et al. | Personalized travel route recommendation using collaborative filtering based on GPS trajectories | |
CN104737523B (en) | The situational model in mobile device is managed by assigning for the situation label of data clustering | |
CN106776928B (en) | Position recommendation method based on memory computing framework and fusing social contact and space-time data | |
CN111091196B (en) | Passenger flow data determination method and device, computer equipment and storage medium | |
CN105825297A (en) | Markov-model-based position prediction method | |
CN113139140B (en) | Tourist attraction recommendation method based on space-time perception GRU and combined with user relationship preference | |
CN110008414B (en) | Method and device for determining geographic information point | |
Liu et al. | Holiday passenger flow forecasting based on the modified least-square support vector machine for the metro system | |
CN111949877B (en) | Personalized interest point recommendation method and system | |
Hawelka et al. | Collective prediction of individual mobility traces for users with short data history | |
CN115774819B (en) | Point of interest recommendation method and system based on hierarchical cyclic neural network | |
Li et al. | Urban mobility analytics: A deep spatial–temporal product neural network for traveler attributes inference | |
Moon et al. | A Large-Scale Study in Predictability of Daily Activities and Places. | |
CN111639961A (en) | Information prediction method, information prediction device, electronic equipment and computer readable medium | |
CN114741614A (en) | Position recommendation method based on position encoder and space-time embedding | |
CN111340522A (en) | Resource recommendation method, device, server and storage medium | |
CN112632404A (en) | Multi-granularity self-attention-based next interest point recommendation method | |
CN116541608B (en) | House source recommendation method and device, electronic equipment and storage medium | |
CN112954066A (en) | Information pushing method and device, electronic equipment and readable storage medium | |
CN113032688B (en) | Method for predicting access position of social network user at given future time | |
CN116049887A (en) | Privacy track release method and device based on track prediction | |
Guo et al. | How to pay less: a location‐specific approach to predict dynamic prices in ride‐on‐demand services | |
CN112765493B (en) | Method for obtaining time preference fusion sequence preference for point of interest recommendation | |
CN110781929A (en) | Training method, prediction device, medium, and apparatus for credit prediction model | |
CN117828193B (en) | Multi-interest semi-joint learning interest recommendation method, system, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||