CN114461906A - Sequence recommendation method and device focusing on user core interests - Google Patents

Sequence recommendation method and device focusing on user core interests

Info

Publication number
CN114461906A
CN114461906A (application CN202210024433.5A)
Authority
CN
China
Prior art keywords
sequence
attention
matrix
user
self
Prior art date
Legal status
Pending
Application number
CN202210024433.5A
Other languages
Chinese (zh)
Inventor
艾正阳
王树鹏
贾思宇
王振宇
王勇
Current Assignee
Institute of Information Engineering of CAS
Original Assignee
Institute of Information Engineering of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Information Engineering of CAS filed Critical Institute of Information Engineering of CAS
Priority to CN202210024433.5A
Publication of CN114461906A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/953 - Querying, e.g. by the use of web search engines
    • G06F16/9535 - Search customisation based on user profiles and personalisation
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a sequence recommendation method and device focusing on user core interests. The method comprises: obtaining a user-item interaction sequence and the timestamp corresponding to each interaction in the sequence; obtaining an embedding matrix of the interaction sequence; performing self-attention calculation on the embedding matrix to obtain, for each query, the probability distribution of its attention values over all keys; obtaining a predefined fixed default probability distribution for each query; obtaining an activity measure for each query from the similarity of the two probability distributions; computing the attention value of each key based on the activity measure so as to construct a self-attention matrix; and obtaining item recommendation results for the user from the self-attention matrix. By adding time intervals and the activity measure to the embedding layer, the invention can adaptively measure the relevance between items and the user's core interests, improving the expressive power of the model and the accuracy of the recommendation results.

Description

Sequence recommendation method and device focusing on user core interests
Technical Field
The invention relates to the field of recommendation systems, in particular to a sequence recommendation method and device focusing on user core interests.
Background
Traditional recommendation systems, such as collaborative filtering, model user and item interactions in a static manner. In contrast, the sequence recommendation system treats user item interactions as a dynamic sequence and takes into account its sequential relevance. The focus of research on sequence recommendations is to compactly capture useful patterns from continuous dynamic behavior to obtain accurate recommendations.
Markov-chain-based approaches are a typical example; they make the simplifying assumption that the next action is conditioned only on one or a few recent actions. The drawback is obvious: they may fail to capture complex dynamics in more complicated scenarios. Another representative line of work applies recurrent neural networks to sequence recommendation. Given a user's historical interaction sequence, recurrent-network-based sequence recommendation attempts to predict the user's next interaction by modeling sequential dependencies. However, limited by their strictly unidirectional architecture, recurrent-network-based recommenders are prone to spurious dependencies and are difficult to train in parallel.
In recent years, inspired by the Transformer model for machine translation, it has become a trend to adopt a self-attention mechanism for sequence recommendation. Models based on self-attention mechanisms can emphasize truly relevant and important interactions in a sequence while reducing irrelevant interactions. Therefore, they have higher flexibility and expressive power than models based on markov chains and recurrent neural networks.
In general, when modeling a user's interaction sequence, the invention aims to generate a characterization of the user's interests and make predictions based on it. However, in real scenarios, not all interactions between the user and items reflect the user's interests. On the one hand, the interaction sequence typically contains drift in user interest caused by accidental clicks. On the other hand, users may find that they have no actual interest in an item they interacted with, for example after watching a movie they do not like or purchasing an ill-fitting garment. Thus, treating all such items equally may contribute nothing positive, or may even be detrimental, when generating the characterization of the user's interests. The invention refers to these interactions, which neither represent the user's real interests nor influence the user's subsequent behavior, as noise interactions. In contrast, core interests reflect the user's deep preferences for items and dominate the selection among candidates. Therefore, finding the interactions in a user's interaction sequence that represent the user's core interests is crucial for generating the user interest characterization and for predicting candidates. Models based on the classical self-attention mechanism achieve some emphasis on important items by computing scaled dot products between the queries and keys of all items. However, current research still lacks a method that explicitly distinguishes core-interest-related interactions from noise interactions so as to directly eliminate the negative effects of the latter. Furthermore, most previous models make the simplifying assumption of treating the interaction history as an ordered sequence, regardless of the time interval between interactions. This can lose valid information, because the time intervals between interactions are also part of the user's behavior pattern and should be included in the user interest characterization. Therefore, existing sequence recommendation methods have a number of shortcomings.
Disclosure of Invention
In order to overcome the defects of existing sequence recommendation methods, the invention provides a sequence recommendation method focusing on the user's core interests, which can explicitly distinguish core-interest-related interactions from noise interactions so as to directly eliminate the negative influence of the latter, while taking the time intervals between interactions into account to preserve valid information.
The technical content of the invention comprises:
a method for sequence recommendation focusing on a user's core interest, comprising the steps of:
acquiring an interaction sequence between a user and items, and a timestamp corresponding to each interaction behavior in the interaction sequence;
acquiring an embedding matrix of the interaction sequence in combination with the timestamps;
performing self-attention calculation on the embedding matrix to obtain, for each query q_i, the probability distribution p of its attention values over all keys, and calculating for each query q_i a predefined fixed default probability distribution q by setting a scaled exponential function;
obtaining an activity measure for each query q_i according to the similarity between the attention-value probability distribution p and the predefined fixed default probability distribution q;
respectively calculating the attention value of each key based on the activity measurement so as to construct a self-attention matrix;
and obtaining the item recommendation result of the user according to the self-attention matrix.
Further, an embedded matrix of the interaction sequence is obtained by:
1) converting the interactive sequence into a detection sequence with a fixed length of l;
2) constructing a timestamp sequence based on the timestamp and the detection sequence;
3) for each item in the detection sequence, mapping it to a vector X_i through a one-dimensional convolution filter, and superposing the vectors X_i to obtain an item embedding matrix;
4) obtaining a position embedding matrix according to the position of each item in the detection sequence;
5) obtaining a time interval embedded matrix by calculating time intervals among time stamps in the time stamp sequence;
6) and acquiring the embedding matrix of the interaction sequence based on the item embedding matrix, the position embedding matrix and the time interval embedding matrix.
Further, the time interval embedding matrix is obtained by the following steps:
1) obtaining the shortest time interval in the time intervals among the timestamps;
2) dividing each time interval by the shortest time interval to obtain an individualized time interval;
3) constructing a time interval sequence with the length of l-1 based on the personalized time interval;
4) and filling 0 to the rightmost side of the time interval sequence with the length of l-1, and then obtaining a time interval embedded matrix through projection and superposition.
Further, the similarity is calculated by measuring it with the KL divergence.
Further, the activity measure is
M(q_i, K) = ln Σ_{j=1}^{L_K} e^{q_i k_j^T/√d} - Σ_{j=1}^{L_K} q(k_j|q_i) · (q_i k_j^T/√d),
wherein K denotes the key matrix, L_K × d is the dimension of K, k_j is the j-th row vector of K, and μ is a constant that controls the importance of the most recent behavior.
Further, a self-attention matrix is constructed by:
1) based on each query q_i, obtaining the active queries attended to by each key;
2) calculating a self-attention value of the active query;
3) for an inactive query, using the predefined fixed default probability distribution q as its self-attention value;
4) and recombining the self-attention values of the active query and the inactive query according to the original positions to obtain a self-attention matrix.
Further, when the self-attention network that constructs the self-attention matrix is trained, parameter feedback is performed through a two-layer feedforward neural network.
Further, the item recommendation result of the user is obtained through the following steps:
1) processing the self-attention matrix through layer normalization, residual connection and dropout to obtain the user interest representation;
2) calculating preference scores of the user for the items based on the user interest representation and the item embedding matrix of the items;
3) and sequencing the items according to the preference scores, and taking the items with the highest scores as item recommendation results.
A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the above method when executed.
An electronic device comprising a memory and a processor, wherein the memory stores a program that performs the above described method.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention proposes a new attention model that can directly and explicitly eliminate the influence of irrelevant items, thereby focusing more attention on items that are truly relevant to the user's interests. A novel activity measure is designed, which can adaptively measure the relevance between items and the user's core interests.
2. The invention takes time intervals into account in the embedding layer, which significantly improves the expressive power of the model without incurring large additional computational cost.
3. Evaluation results on several benchmark datasets show that the method outperforms existing sequence recommendation models, reaching the state of the art, and that each of the proposed components plays an important role.
Drawings
FIG. 1 is a flowchart of a sequence recommendation method focusing on user core interests according to the present invention.
FIG. 2 is a graph of recommendation effectiveness under different latent dimensions and a comparison with other models according to an embodiment of the present invention.
FIG. 3 is a graph of recommendation effectiveness under different sampling factors according to an embodiment of the present invention.
FIG. 4 is a visualization of a user's attention matrix at the 1st head under the multi-head view according to an embodiment of the present invention.
FIG. 5 is a visualization of a user's attention matrix at the 2nd head under the multi-head view according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely some embodiments of the present invention, rather than all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention without making creative efforts, fall within the protection scope of the present invention.
As shown in FIG. 1, the present invention can be divided into 5 steps in total:
S1: Acquire the interaction sequence between the user and items and the timestamp corresponding to each interaction behavior, and truncate or pad the interaction sequence and the timestamp sequence to a fixed length.
The data preprocessing module comprises the following specific construction steps:
S1.1: For user u's interaction sequence S^u = (S_1^u, S_2^u, …, S_{n_u}^u), where n_u is the number of interacted items, take the sub-sequence (S_1^u, S_2^u, …, S_{n_u-1}^u) as training data and convert it into a sequence (s_1, s_2, …, s_l) of fixed length l. If the original sequence is longer than l, keep the most recent l interacted items; if it is shorter than l, pad the left side of the sequence with 0.
S1.2: For user u's timestamp sequence T^u = (t_1^u, t_2^u, …, t_{n_u}^u), where n_u is the number of interacted items, take the sub-sequence (t_1^u, t_2^u, …, t_{n_u-1}^u) as training data and convert it into a sequence (t_1, t_2, …, t_l) of fixed length l. If the original sequence is longer than l, keep the timestamps of the most recent l interactions; if it is shorter than l, pad the left side of the sequence with the earliest timestamp.
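As an illustration of the preprocessing in S1.1 and S1.2, the following Python sketch truncates or left-pads a sequence and its timestamps to a fixed length l. The function name, the reserved padding id 0 for items, and padding timestamps with the earliest timestamp are assumptions made for this sketch, not details fixed by the text above.

```python
from typing import List, Tuple

def to_fixed_length(items: List[int], timestamps: List[int], l: int) -> Tuple[List[int], List[int]]:
    """Truncate or left-pad an interaction sequence and its timestamps to length l.

    Items are padded with 0 (a reserved 'empty' id); timestamps are padded with the
    earliest timestamp so padded positions later produce zero-length intervals.
    """
    assert len(items) == len(timestamps)
    if len(items) >= l:
        # keep only the most recent l interactions
        return items[-l:], timestamps[-l:]
    pad = l - len(items)
    pad_ts = timestamps[0] if timestamps else 0
    return [0] * pad + items, [pad_ts] * pad + timestamps

# Example: a user with 3 interactions, fixed length l = 5.
seq, ts = to_fixed_length([17, 42, 8], [1000, 1060, 1300], l=5)
print(seq)  # [0, 0, 17, 42, 8]
print(ts)   # [1000, 1000, 1000, 1060, 1300]
```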
S2: and establishing an embedding layer, which mainly comprises three parts of scalar mapping embedding, position embedding and time interval embedding.
The specific construction steps of the embedding matrix of each part are as follows:
S2.1: Scalar mapping. For each item s_i in the sequence (s_1, s_2, …, s_l), map it to a d-dimensional vector X_i through a one-dimensional convolution filter, where d is the latent dimension. Superposing all item vectors together yields the item embedding matrix E_X ∈ R^{l×d}.
S2.2: Position embedding. Since the self-attention model contains no recurrent or convolutional module, it cannot determine the actual position of an item in the sequence, so a position embedding module is added to identify the specific position of each item. Here a learnable matrix E_P ∈ R^{l×d} is used as the position embedding matrix.
S2.3: Time-interval embedding. For the timestamp sequence (t_1, t_2, …, t_l) obtained in S1.2, calculate the time difference between every two adjacent timestamps. For each user, the invention is only concerned with the relative lengths of the time intervals in the sequence. Therefore, all time intervals are divided by the shortest time interval in the user's sequence, r_min^u = min_i (t_{i+1} - t_i), to obtain the personalized intervals
r_i^u = (t_{i+1} - t_i) / r_min^u,  i = 1, …, l-1.
After the time-interval sequence of length l-1 is obtained, a 0 is padded at its rightmost side until the length reaches l, and the user's time-interval embedding matrix E_R ∈ R^{l×d} is then obtained through projection and superposition.
S2.4: The final embedding matrix is the sum of the three parts:
E = E_X + E_P + E_R,  where E ∈ R^{l×d}.
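A minimal Python (NumPy) sketch of the embedding layer in S2.1-S2.4 might look as follows. A simple embedding-table lookup stands in for the one-dimensional convolution filter, and a learnable projection vector stands in for the interval projection; all parameter names and the random initialization are illustrative assumptions rather than the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
l, d, n_items = 5, 8, 100          # sequence length, latent dimension, item vocabulary size

# Parameters (learned in practice; randomly initialised here for the sketch).
item_table = rng.normal(size=(n_items + 1, d))   # row 0 is the padding item; stands in for the 1-D conv filter
pos_table  = rng.normal(size=(l, d))             # learnable position embedding E_P
interval_w = rng.normal(size=(1, d))             # projection of a scalar interval to d dimensions

def embed(seq, ts):
    seq, ts = np.asarray(seq), np.asarray(ts, dtype=float)
    E_X = item_table[seq]                        # item embedding, shape (l, d)
    E_P = pos_table                              # position embedding, shape (l, d)

    diffs = np.diff(ts)                          # l-1 adjacent time differences
    r_min = diffs[diffs > 0].min() if (diffs > 0).any() else 1.0
    r = diffs / r_min                            # personalised intervals r_i^u
    r = np.append(r, 0.0)                        # pad one 0 on the right -> length l
    E_R = r[:, None] * interval_w                # time-interval embedding, shape (l, d)

    return E_X + E_P + E_R                       # final embedding matrix E, shape (l, d)

E = embed([0, 0, 17, 42, 8], [1000, 1000, 1000, 1060, 1300])
print(E.shape)  # (5, 8)
```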
S3: a core interest focused self-attention network layer is established. A novel evaluation index (activity measure) is designed, and the relevance between the item and the core interest of the user can be measured in an adaptive mode. According to the evaluation index, the interactive items of the user can be divided into two parts, namely noise interaction and core interest related interaction, and attention values of the two parts are obtained in different calculation modes. And fusing the attention values of the two parts to obtain the user interest representation.
The specific steps of constructing the core-interest-focused self-attention network layer are as follows:
S3.1: The standard self-attention model has the form
Attention(Q, K, V) = softmax(QK^T / √d) V,
where Q, K, V denote the queries, keys and values respectively, with dimensions Q ∈ R^{L_Q×d}, K ∈ R^{L_K×d}, V ∈ R^{L_V×d}. Letting q_i, k_i, v_i denote the i-th row vectors of Q, K, V, the attention value of q_i can be written in the probabilistic form of kernel smoothing:
A(q_i, K, V) = Σ_j [ k(q_i, k_j) / Σ_l k(q_i, k_l) ] v_j,
where p(k_j|q_i) = k(q_i, k_j) / Σ_l k(q_i, k_l) is the probability distribution of the i-th query's attention values over all keys, and the self-attention mechanism combines all the values according to this probability distribution as the final output. The kernel k(q_i, k_j) = exp(q_i k_j^T / √d) represents the correlation between q_i and k_j.
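For reference, the standard scaled dot-product self-attention of S3.1 can be sketched in a few lines of NumPy; each output row is the expectation of the value vectors under p(k_j|q_i). The function and variable names are illustrative only.

```python
import numpy as np

def standard_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V: each output row averages the values under p(k_j|q_i)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # q_i k_j^T / sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(scores)
    p /= p.sum(axis=-1, keepdims=True)            # p(k_j | q_i)
    return p @ V                                  # expectation of the values under p

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(5, 8))               # in self-attention Q, K, V come from the same input
print(standard_attention(Q, K, V).shape)          # (5, 8)
```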
S3.2: and querying the liveness measure. Query activity is a concept proposed by the present invention to represent the relevance between items and the core interests of a user. First, a scaling index function is set as the default distribution:
Figure BDA00034588266400000511
where μ is a constant that controls the importance of recent behavior. p (k)j|qi) Is the probability distribution, q (k), actually calculatedj|qi) Is to predefine a fixed default probability distribution if qiThe corresponding item can represent the core interest of the user, and the actually calculated probability distribution p is different from the fixed distribution q. KL divergence was used to measure the similarity between distributions q and p:
Figure BDA0003458826640000061
removing the last constant term, the invention defines the activity measure of the ith query as:
Figure BDA0003458826640000062
the core interest of the user in the item forces the attention probability distribution of the corresponding query away from the fixed distribution. If an item corresponds to a query with a larger M (q)iK), the greater the probability that it corresponds to the core interest of the user.
S3.3: based on the proposed activity metric, the present invention brings each key to focus on only m active queries, and then obtains the self-attention values of the core interest focus:
Figure BDA0003458826640000063
wherein
Figure BDA0003458826640000064
Is a sampling matrix of matrix Q that contains only m active queries. m is calculated by
Figure BDA0003458826640000067
Where c ∈ (0,1) is the sampling factor. It is worth mentioning that in a multi-head view, this attention extracts a different active query-key pair for each head, thereby avoiding severe information loss. For the rest (L)Q-m) queries, not computed but directly using the default distribution q (k)j|qi) As its value of attention. Finally, the two parts of attention values are recombined according to the original positions to obtain a final attention matrix S.
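Putting S3.2 and S3.3 together, a sketch of the core-interest-focused attention could select the m most active queries, compute exact attention for them, and fall back to the default distribution for the rest. Both the choice m = ceil(c·L_Q) and the form of the default distribution are assumptions of this sketch, since they appear only as formula images in the original text.

```python
import numpy as np

def core_interest_attention(Q, K, V, c=0.8, mu=0.1):
    """Sparse self-attention: exact attention for the m most active queries,
    default-distribution output for the rest. m = ceil(c * L_Q) is an assumption;
    the text only states that c in (0, 1) is a sampling factor."""
    d = Q.shape[-1]
    L_Q, L_K = Q.shape[0], K.shape[0]

    # Assumed recency-weighted default distribution q(k_j|q_i).
    j = np.arange(1, L_K + 1)
    q_def = np.exp(mu * j); q_def /= q_def.sum()

    # Activity measure for every query (see the sketch after S3.2).
    scores = Q @ K.T / np.sqrt(d)
    lse = np.log(np.exp(scores - scores.max(1, keepdims=True)).sum(1)) + scores.max(1)
    M = lse - scores @ q_def

    m = int(np.ceil(c * L_Q))
    active = np.argsort(-M)[:m]                        # indices of the m most active queries

    S = np.tile(q_def @ V, (L_Q, 1))                   # inactive queries: default-distribution output
    p = np.exp(scores[active] - scores[active].max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    S[active] = p @ V                                  # active queries: exact attention
    return S                                           # rows stay in their original positions

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
print(core_interest_attention(X, X, X).shape)  # (5, 8)
```

Under the multi-head view this selection would simply be repeated per head, so different heads can pick different active query-key pairs, as FIG. 4 and FIG. 5 illustrate.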
S4: and establishing a point-by-point feedforward neural network, wherein a Relu activation function is used for endowing the model with nonlinearity, and layer normalization, residual error connection and dropout technologies are respectively introduced aiming at the problems of overfitting, gradient disappearance and low training speed possibly existing in the model.
The point-by-point feedforward neural network is established by the following steps:
s4.1: after each attention layer, two layers of feedforward neural networks are employed, and Relu is employed as the activation function, which may render the model non-linear and take into account the interaction between different potential dimensions:
FFN(S)=ReLU(SW(1)+b(1))W(2)+b(2)
where S is the attention matrix, W, obtained in step S3.3(1)、W(2)As weight matrix, dimensions are all
Figure BDA0003458826640000065
b(1)、b(2)As offset vectors, dimensions are
Figure BDA0003458826640000066
S4.2: as the stack of self-attention and feedforward layers and the network go deeper, some problems become more severe, including overfitting, gradient extinction, and slower training processes. The invention respectively introduces layer normalization, residual connection and dropout technologies to solve the problems and obtain a user interest representation S'
S′=S+Dropout(FFN(LayerNorm(S)))
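A compact NumPy sketch of the point-wise feed-forward block with layer normalization, dropout and the residual connection of S4.1-S4.2 is given below; the dropout rate, the absence of learnable layer-norm gain and bias, and the weight initialization are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W1, b1 = rng.normal(size=(d, d)) * 0.1, np.zeros(d)
W2, b2 = rng.normal(size=(d, d)) * 0.1, np.zeros(d)

def layer_norm(x, eps=1e-8):
    # simple per-row normalization (no learnable gain/bias in this sketch)
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def ffn(S):
    # FFN(S) = ReLU(S W1 + b1) W2 + b2
    return np.maximum(S @ W1 + b1, 0.0) @ W2 + b2

def dropout(x, rate=0.2, training=True):
    if not training:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def block(S):
    # S' = S + Dropout(FFN(LayerNorm(S))), i.e. the residual connection of S4.2
    return S + dropout(ffn(layer_norm(S)))

S = rng.normal(size=(5, d))
print(block(S).shape)  # (5, 8)
```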
S5: and generating a recommendation list for the user according to the finally obtained user interest characterization and the candidate item set.
The specific construction steps of the prediction layer are as follows:
S5.1: Through steps S1-S4, the invention adaptively and hierarchically extracts the user interest representation S' from the previous items. To predict the next item, the invention uses a latent factor model to calculate the user's preference score for item i:
R_{i,t} = S' X_i,
where X_i ∈ R^d is the embedding vector of item i obtained in step S2.1.
S5.2: and sorting the candidate items according to the calculated preference scores of the user to the items. And selecting the k items with the highest scores to recommend to the user.
In the embodiment of the invention, the effectiveness and feasibility of the sequence recommendation system focusing on the user's core interests are verified through three experiments.
First, the influence of the latent dimension d is considered. As shown in FIG. 2, NDCG@10 is reported as the latent dimension d varies from 10 to 100, keeping the other best hyperparameters unchanged. As the latent dimension increases, the recommendation performance improves and gradually approaches a convergence point, and the model of the invention is consistently superior to the other baseline models.
Second, the impact of the query sampling factor c is considered. FIG. 3 shows the results of varying c from 0.2 to 1.0 on the ML-1m and Beauty datasets. On ML-1m the model performance reaches its best point when c is 0.8, while on Beauty the best point is reached when c is 0.5, which indicates that the sparser dataset contains more noise interactions.
In addition, the experiments of this embodiment also verify that the proposed attention mechanism can extract different active query-key pairs for each head under the multi-head view. FIG. 4 and FIG. 5 visualize the matrices of a user's attention scores over the candidates under a two-head attention mechanism. It can be clearly seen that different active query-key pairs are extracted on different heads.
Finally, it should be noted that: the described embodiments are only some embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

Claims (10)

1. A method for sequence recommendation focusing on a user's core interest, comprising the steps of:
acquiring an interaction sequence between a user and items, and a timestamp corresponding to each interaction behavior in the interaction sequence;
acquiring an embedding matrix of the interaction sequence in combination with the timestamps;
performing self-attention calculation on the embedding matrix to obtain, for each query q_i, the probability distribution p of its attention values over all keys, and calculating for each query q_i a predefined fixed default probability distribution q by setting a scaled exponential function;
obtaining an activity measure for each query q_i according to the similarity between the attention-value probability distribution p and the predefined fixed default probability distribution q;
respectively calculating the attention value of each key based on the activity measurement so as to construct a self-attention matrix;
and obtaining the item recommendation result of the user according to the self-attention matrix.
2. The method of claim 1, wherein the embedded matrix of interaction sequences is obtained by:
1) converting the interactive sequence into a detection sequence with a fixed length of l;
2) constructing a timestamp sequence based on the timestamp and the detection sequence;
3) for each item in the detection sequence, mapping it to a vector X_i through a one-dimensional convolution filter, and superposing the vectors X_i to obtain an item embedding matrix;
4) obtaining a position embedding matrix according to the position of each item in the detection sequence;
5) obtaining a time interval embedded matrix by calculating time intervals among time stamps in the time stamp sequence;
6) and acquiring the embedding matrix of the interaction sequence based on the item embedding matrix, the position embedding matrix and the time interval embedding matrix.
3. The method of claim 2, wherein the time interval embedding matrix is obtained by:
1) obtaining the shortest time interval in the time intervals among the timestamps;
2) dividing each time interval by the shortest time interval to obtain an individualized time interval;
3) constructing a time interval sequence with the length of l-1 based on the personalized time interval;
4) and filling 0 to the rightmost side of the time interval sequence with the length of l-1, and then obtaining a time interval embedded matrix through projection and superposition.
4. The method of claim 1, wherein the similarity is calculated by measuring it with the KL divergence.
5. The method of claim 1, wherein the activity measure is
M(q_i, K) = ln Σ_{j=1}^{L_K} e^{q_i k_j^T/√d} - Σ_{j=1}^{L_K} q(k_j|q_i) · (q_i k_j^T/√d),
wherein K denotes the key matrix, L_K × d is the dimension of K, k_j is the j-th row vector of K, and μ is a constant that controls the importance of the most recent behavior.
6. The method of claim 1, wherein the self-attention matrix is constructed by:
1) based on each query q_i, obtaining the active queries attended to by each key;
2) calculating a self-attention value of the active query;
3) for an inactive query, using the predefined fixed default probability distribution q as its self-attention value;
4) and recombining the self-attention values of the active query and the inactive query according to the original positions to obtain a self-attention matrix.
7. The method of claim 1, wherein, when the self-attention network that constructs the self-attention matrix is trained, parameter feedback is performed through a two-layer feedforward neural network.
8. The method of claim 1, wherein the item recommendation of the user is obtained by:
1) processing the self-attention matrix through layer normalization, residual connection and dropout to obtain the user interest representation;
2) calculating preference scores of the user for the items based on the user interest representation and the item embedding matrix of the items;
3) and sequencing the items according to the preference scores, and taking the items with the highest scores as item recommendation results.
9. A storage medium having a computer program stored thereon, wherein the computer program is arranged to, when run, perform the method of any of claims 1-8.
10. An electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the method according to any of claims 1-8.
CN202210024433.5A 2022-01-06 2022-01-06 Sequence recommendation method and device focusing on user core interests Pending CN114461906A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210024433.5A CN114461906A (en) 2022-01-06 2022-01-06 Sequence recommendation method and device focusing on user core interests

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210024433.5A CN114461906A (en) 2022-01-06 2022-01-06 Sequence recommendation method and device focusing on user core interests

Publications (1)

Publication Number Publication Date
CN114461906A true CN114461906A (en) 2022-05-10

Family

ID=81409189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210024433.5A Pending CN114461906A (en) 2022-01-06 2022-01-06 Sequence recommendation method and device focusing on user core interests

Country Status (1)

Country Link
CN (1) CN114461906A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200288205A1 (en) * 2019-05-27 2020-09-10 Beijing Dajia Internet Information Technology Co., Ltd. Method, apparatus, electronic device, and storage medium for recommending multimedia resource
CN110633789A (en) * 2019-08-27 2019-12-31 苏州市职业大学 Self-attention network information processing method for streaming media recommendation
CN111259243A (en) * 2020-01-14 2020-06-09 中山大学 Parallel recommendation method and system based on session
CN112950325A (en) * 2021-03-16 2021-06-11 山西大学 Social behavior fused self-attention sequence recommendation method
CN113762477A (en) * 2021-09-08 2021-12-07 中山大学 Method for constructing sequence recommendation model and sequence recommendation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
He Haiyang; Wang Yong; Cai Guoyong: "Recommendation algorithm based on user category preference similarity and joint matrix factorization", Journal of Data Acquisition and Processing, no. 01, 15 January 2018 (2018-01-15) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116186309A (en) * 2023-04-21 2023-05-30 江西财经大学 Graph convolution network recommendation method based on interaction interest graph fusing user intention
CN116992155A (en) * 2023-09-20 2023-11-03 江西财经大学 User long tail recommendation method and system utilizing NMF with different liveness
CN116992155B (en) * 2023-09-20 2023-12-12 江西财经大学 User long tail recommendation method and system utilizing NMF with different liveness

Similar Documents

Publication Publication Date Title
Zhou et al. Atrank: An attention-based user behavior modeling framework for recommendation
US10783361B2 (en) Predictive analysis of target behaviors utilizing RNN-based user embeddings
CN111581520B (en) Item recommendation method and system based on item importance in session
CN106462608A (en) Knowledge source personalization to improve language models
CN110781409B (en) Article recommendation method based on collaborative filtering
CN109376222A (en) Question and answer matching degree calculation method, question and answer automatic matching method and device
CN114461906A (en) Sequence recommendation method and device focusing on user core interests
CN111401219B (en) Palm key point detection method and device
CN109685104B (en) Determination method and device for recognition model
CN110633421A (en) Feature extraction, recommendation, and prediction methods, devices, media, and apparatuses
CN113609388B (en) Sequence recommendation method based on anti-facts user behavior sequence generation
CN113656699B (en) User feature vector determining method, related equipment and medium
KR20200047006A (en) Method and system for constructing meta model based on machine learning
CN117892011B (en) Intelligent information pushing method and system based on big data
Chen et al. Deciphering the noisy landscape: Architectural conceptual design space interpretation using disentangled representation learning
CN113705792A (en) Personalized recommendation method, device, equipment and medium based on deep learning model
CN117390289B (en) House construction scheme recommending method, device and equipment based on user portrait
CN114880709B (en) E-commerce data protection method and server applying artificial intelligence
CN116467466A (en) Knowledge graph-based code recommendation method, device, equipment and medium
WO2023154351A2 (en) Apparatus and method for automated video record generation
Wang et al. Multi‐feedback Pairwise Ranking via Adversarial Training for Recommender
Cao et al. Fuzzy emotional semantic analysis and automated annotation of scene images
CN113469819A (en) Recommendation method of fund product, related device and computer storage medium
JP2022080367A (en) Model evaluation device, model evaluation method, and program
de Oliveira Monteiro et al. Market prediction in criptocurrency: A systematic literature mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination