WO2023108324A1 - Contrastive-learning-enhanced dual-stream model recommendation system and algorithm - Google Patents
Contrastive-learning-enhanced dual-stream model recommendation system and algorithm
- Publication number
- WO2023108324A1 (PCT/CN2021/137367)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- learning
- gcn
- contrastive
- layer
- transformer
- Prior art date
- 2021-12-13
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9537—Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
Definitions
- the invention relates to the technical field of recommendation systems, in particular to a dual-stream model recommendation system and algorithm enhanced by contrastive learning.
- Session-based recommendation refers to an algorithm that predicts the user's next action relying only on anonymous sessions while the user is not logged in; it plays an important role in many fields (such as e-commerce, short video, and live streaming). Recommender systems are effective information-filtering tools that have become very common due to increased Internet access, personalization trends, and the changing habits of computer users. Although existing recommender systems are successful in generating decent recommendations, they still face challenges such as accuracy, scalability, and cold start. In the past few years, deep learning, the state-of-the-art machine learning technique used in many complex tasks, has been applied to recommender systems to improve the quality of recommendations.
- SR-GNN: Session-based Recommendation with Graph Neural Networks
- based on the session graph, the GNN can capture item transitions and accordingly generate accurate item embedding vectors, which is difficult for traditional sequential methods such as Markov-chain-based and RNN-based methods.
- based on accurate item embedding vectors, SR-GNN builds a more reliable session representation from which the probability of the next clicked item can be inferred.
- SR-GNN models separated session sequences as graph-structured data and uses a graph neural network to capture complex item relationships, providing a novel perspective for modeling in session-based recommendation scenarios. To generate session-based recommendations, SR-GNN does not rely on user representations but uses session embeddings, so recommendations can be derived solely from the item embeddings involved in each session.
- a graph neural network is used to model a session.
- graph-neural-network-based methods mostly pass information between adjacent items, so the information of items that are not directly connected is ignored.
- multi-layer GNNs are then used to propagate information between items without direct connections, which can easily lead to overfitting.
- the related art also discloses the SGNN-HN algorithm, which proposes a star graph neural network with a highway network for session-based recommendation.
- the star graph neural network (SGNN) is used to model the complex transition patterns within a period of time; a star node is added on top of the gated graph neural network to take non-adjacent items into account, thus solving the problem of long-distance information propagation.
- DCN-SR algorithm (Dynamic Co-attention Network for Session-based Recommendation)
- a session-based dynamic co-attention network model is designed, which is able to integrate users' long-term and short-term preferences.
- a context-gated recurrent unit (CGRU) is designed to integrate different types of short-term user behaviors to better estimate users' next consumption interests. DCN-SR was found to consistently meet or exceed the state of the art, especially with regard to short sessions and active users.
- NISER-GNN Normalized Item and Session Representations with Graph Neural Networks
- the goal of the session-based recommendation (SR) model is to use information from past actions (e.g., item/product clicks) in the session to recommend the item the user is likely to click next.
- graph neural networks can learn useful representations of such session graphs and have been shown to improve over sequential models such as recurrent neural networks.
- GNN-based recommendation models suffer from a popularity bias: the models are biased towards recommending popular items while failing to recommend relevant long-tail items (less popular or less frequent items). Thus, in practical online settings, these models perform poorly on the less popular new items arriving every day. This problem has been shown to relate to the magnitude, or norm, of the learned item and session-graph representations (embedding vectors).
- a training procedure is proposed to alleviate this problem by using normalized representations. Models using normalized item and session-graph representations perform better, both on less popular long-tail items in the offline setting and on less popular newly introduced items in the online setting.
- the present invention proposes a dual-stream model recommendation system and algorithm enhanced by contrastive learning, which can obtain the user's long-term preferences and interests, combine the user's long-term and short-term preferences, and use implicit preferences to capture the user's dynamic preference data, thereby better improving the accuracy of recommendations.
- the present invention provides a dual-stream model recommendation algorithm enhanced by contrastive learning, which includes: first, using the properties of the Transformer to learn the time-series features in the data and obtain the user's long-term interests; then using a GCN to learn the feature information of the spatial structure in the item-transition process; finally, combining the feature information obtained by the Transformer and the GCN using positional encoding and global graph encoding, while using a contrastive learning method to assist the representation learning of the model.
- the Encoder structure of the Transformer includes a Self-Attention module; the data passes through the Self-Attention module to obtain a weighted feature vector Z, the feature vector Z being Attention(Q, K, V) = softmax(QK^T/√d_k)V, where:
- Q is the Query matrix
- K is the Key matrix
- V is the Value matrix
- d_k is the dimension of the Query and Key vectors.
- the structure of the Transformer's Decoder includes an Encoder-Decoder Attention module, which is used to calculate the weights of input and output, that is, the relationship between the currently translated and encoded feature vectors.
- the formula of the GCN is expressed as x_i^(l+1) = σ(∑_{j∈N_i} (1/c_ij) x_j^(l) w^(l) + b^(l)), where:
- x_i^(l+1) is the output of layer l+1
- σ is the nonlinear activation function
- c_ij is the square root of the product of the degree d_i of node i and the degree d_j of node j
- x_j^(l) is the output of layer l
- w^(l) is the weight of layer l
- b^(l) is the bias value of layer l
- j ranges over the set N_i
- N_i is the set of neighbor nodes of node i.
- the GCN performs a feature transformation on the nodes from one hidden layer to the next: X^(l+1) = f(X^(l), A), where:
- X^(l+1) is the output of layer l+1
- X^(l) is the output of layer l
- A is the adjacency matrix
- f is a function.
- a concrete implementation is X^(l+1) = σ(AX^(l)W^(l) + b^(l)), where W^(l) is the weight matrix of layer l;
- the goal of the contrastive learning is to learn an encoder f such that score(f(x), f(x+)) >> score(f(x), f(x-)), where:
- x+ is a positive sample similar to x
- x- is a negative sample dissimilar to x
- score is a metric function that measures the similarity between samples.
- if the inner product of vectors is used to compute similarity, the loss function of the contrastive learning is expressed as L = -log[exp(f(x)^T f(x+)) / ∑_{j=1}^{N} exp(f(x)^T f(x_j))], where the corresponding sample x has 1 positive sample and N-1 negative samples, and T denotes transpose.
- the present invention also provides a dual-stream model recommendation system enhanced by contrastive learning, which is used to implement the above-mentioned dual-stream model recommendation algorithm enhanced by contrastive learning, including:
- the Transformer unit is used to learn the time series features in the data and obtain the long-term interest of the user;
- the GCN unit is used to learn the feature information of the spatial structure in the item-transition process;
- the combination unit is used to combine the feature information obtained by the Transformer unit and the GCN unit using positional encoding and global graph encoding;
- the contrastive learning unit is used to assist the representation learning of the model.
- the present invention proposes a session-based recommendation system consisting of a Transformer unit, a GCN unit, a combination unit, and a contrastive learning unit; the Transformer extracts and learns the time-series features in the data, the GCN (graph convolutional network) obtains the spatial-structure features, the two kinds of information are then integrated, and the contrastive learning method is used as an auxiliary, thereby better improving the accuracy of recommendation.
- the Transformer mechanism is used to learn the time-series information in the data, and the contrastive learning method is used to assist the representation learning of the model built from the Transformer and the GCN.
- the session-based recommendation method combined with the contrastive learning method simultaneously captures the time-series information and the spatial-structure information of the user's historical session data, and also considers the collaborative-filtering representation that the prior art does not consider.
- the present invention uses a contrastive learning method to strengthen the fusion of the two representations.
- the present invention can obtain the user's long-term preference and interest, and can combine the user's long-term and short-term preference, and use the implicit preference to capture the user's dynamic preference data, thereby better improving the accuracy of recommendation.
- Fig. 1 is a structural diagram of the Transformer of the present invention;
- Fig. 2 shows the changes of HIT and MRR after testing the recommendation algorithm of the present invention on six datasets;
- Fig. 3 shows the changes of the weight matrix after testing the recommendation algorithm of the present invention on six datasets;
- Fig. 4 shows the comparison results of experiments using the present invention and existing recommendation algorithms.
- the session-based recommendation algorithm refers to an algorithm that only relies on anonymous sessions to predict the user's next behavior when the user is not logged in. It plays an important role in many fields, such as e-commerce, short video, and live broadcast.
- the present invention provides a dual-stream model recommendation system enhanced by contrastive learning, including: a Transformer unit, used to learn the time-series features in the data and obtain the user's long-term interests; a GCN unit, used to learn the feature information of the spatial structure in the item-transition process; a combination unit, used to combine the feature information obtained by the Transformer unit and the GCN unit using positional encoding and global graph encoding; and a contrastive learning unit, used to assist the representation learning of the model.
- Transformer is a model that uses the attention mechanism to improve the speed of model training.
- the Transformer can be described as a deep learning model based entirely on the self-attention mechanism; because it lends itself to parallel computation, and given the capacity of the model itself, it exceeds the previously popular RNN (recurrent neural network) in both accuracy and performance.
- the three matrices of Q (Query), K (Key) and V (Value) all come from the same input.
- first the dot product between Q and K is computed; then, to prevent the result from becoming too large, it is divided by a scaling factor √d_k, where d_k is the dimension of a Query/Key vector; the Softmax operation then normalizes the result into a probability distribution, which is finally multiplied by the matrix V to obtain the weighted-sum representation.
- this operation can be expressed as Attention(Q, K, V) = softmax(QK^T/√d_k)V.
- after Z is obtained, it is sent to the next module of the Encoder, namely the Feed Forward Neural Network.
- the fully connected part of the Feed Forward Neural Network module has two layers.
- the activation function of the first layer is ReLU and the second layer is a linear activation function; it can be expressed as FFN(Z) = max(0, ZW_1 + b_1)W_2 + b_2, where:
- W_1 is weight matrix 1
- W_2 is weight matrix 2
- b_1 is bias value 1
- b_2 is bias value 2
- max is the maximum function.
- the difference between the Transformer's Decoder structure and the Encoder is that the Decoder has an additional Encoder-Decoder Attention; the two Attention modules are used to compute the input and output weights:
- Self-Attention: the relationship between the current translation and the previously translated text;
- Encoder-Decoder Attention: the relationship between the current translation and the encoded feature vectors.
- the formula of the GCN is expressed as x_i^(l+1) = σ(∑_{j∈N_i} (1/c_ij) x_j^(l) w^(l) + b^(l)), where:
- x_i^(l+1) is the output of layer l+1
- σ is the nonlinear activation function
- c_ij is the square root of the product of the degree d_i of node i and the degree d_j of node j
- x_j^(l) is the output of layer l
- w^(l) is the weight of layer l
- b^(l) is the bias value of layer l
- j ranges over the set N_i
- N_i is the set of neighbor nodes of node i.
- GCN performs a feature transformation on the nodes from one hidden layer to the next: X^(l+1) = f(X^(l), A), where:
- X^(l+1) is the output of layer l+1
- X^(l) is the output of layer l
- A is the adjacency matrix
- f is a function
- a concrete implementation is X^(l+1) = σ(AX^(l)W^(l) + b^(l)), where W^(l) is the weight matrix of layer l
- the normalization of the adjacency matrix A can be achieved through the degree matrix D, giving X^(l+1) = σ(D^(-1)AX^(l)W^(l) + b^(l)).
- in practice it is more effective to use symmetric normalization, which becomes the following formula: X^(l+1) = σ(D^(-1/2)AD^(-1/2)X^(l)W^(l) + b^(l)).
- the goal of contrastive learning is to learn an encoder f such that score(f(x), f(x+)) >> score(f(x), f(x-)), where:
- x+ is a positive sample similar to x
- x- is a negative sample dissimilar to x
- score is a metric function that measures the similarity between samples.
- if the inner product of vectors is used to compute similarity, the loss function of contrastive learning can be expressed as L = -log[exp(f(x)^T f(x+)) / ∑_{j=1}^{N} exp(f(x)^T f(x_j))], where the corresponding sample x has 1 positive sample and N-1 negative samples, and T denotes transpose. This form is similar to the cross-entropy loss function, and the goal of learning is to make the features of x more similar to the features of the positive sample and less similar to the features of the N-1 negative samples. In the contrastive learning literature this loss function is called the InfoNCE loss; some other works call it the multi-class n-pair loss or ranking-based NCE.
- the present invention uses the Transformer mechanism to learn the time-series information in the data and thereby obtain the features of the user's long-term interest, solving the problem of long-term interest acquisition, and uses the contrastive learning method to assist the representation learning of the model.
- the Transformer and GCN are used to capture, respectively, the time-series information and the spatial-structure information of the user's historical session data.
- the present invention also considers the collaborative-filtering representation that is not considered in the prior art, which is a further advantage.
- the present invention uses a contrastive learning method to strengthen the fusion of the two representations, further improving the accuracy of the recommendation system.
- the present invention combines the Transformer, GCN, and contrastive learning methods to perform session-based recommendation.
- the algorithm of the present invention was tested and validated on six datasets; the results are shown in Figure 2 and Figure 3. Figure 2 shows that both the HIT and MRR indicators rise, and Figure 3 shows how the weight matrix changes during training. HIT and MRR are commonly used evaluation metrics for recommendation algorithms.
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
Abstract
A contrastive-learning-enhanced dual-stream model recommendation system and algorithm, which can obtain the user's long-term preferences and interests, can combine the user's long-term and short-term preferences, and uses implicit preferences to capture the user's dynamic preference data, thereby better improving the accuracy of recommendations, comprising: first, using the properties of the Transformer to learn the time-series features in the data and obtain the user's long-term interests; then using a GCN to learn the feature information of the spatial structure in the item-transition process; finally, combining the feature information obtained by the Transformer and the GCN using positional encoding and global graph encoding, while using a contrastive learning method to assist the representation learning of the model.
Description
The invention relates to the technical field of recommendation systems, and in particular to a contrastive-learning-enhanced dual-stream model recommendation system and algorithm.
Session-based recommendation refers to an algorithm that predicts a user's next action relying only on an anonymous session while the user is not logged in; it plays an important role in many fields (such as e-commerce, short video, and live streaming). Recommender systems are effective information-filtering tools that have become ubiquitous due to increased Internet access, the trend toward personalization, and the changing habits of computer users. Although existing recommender systems succeed in generating decent recommendations, they still face challenges such as accuracy, scalability, and cold start. In the past few years, deep learning, the state-of-the-art machine learning technique used in many complex tasks, has been applied to recommender systems to improve recommendation quality. Today, many online vendors equip their systems with recommendation engines, and most Internet users rely on these services in daily activities such as reading, listening to music, and shopping. In a typical recommender system, the term "item" refers to the product or service the system recommends to its users. Generating a list of recommended items for a user, or predicting how much a user will like a particular item, requires the recommender system to analyze the past preferences of like-minded users or to exploit descriptive information about the items. In recent years, thanks to increased computing power and big-data storage facilities, artificial neural networks have attracted great attention; researchers have successfully built and trained deep neural network models (Hinton et al.), which fostered deep learning as an emerging field of computer science. Many state-of-the-art techniques in image processing, object recognition, natural language processing, and speech recognition now use deep neural networks as their main tool, and the potential of deep learning has also encouraged researchers to adopt deep architectures for recommendation tasks. Recommender systems face four major challenges: accuracy, data sparsity, cold start, and scalability.
Session-based recommendation has been widely studied. The related art discloses the SR-GNN algorithm (Session-based Recommendation with Graph Neural Networks), which is mainly used to explore the complex relations between items and to generate accurate item feature embeddings. For session-based recommendation, SR-GNN first constructs directed graphs from historical session sequences. Based on the session graph, the GNN can capture item transitions and accordingly generate accurate item embedding vectors, which is difficult for traditional sequential methods such as Markov-chain-based and RNN-based methods. Based on accurate item embedding vectors, SR-GNN builds a more reliable session representation from which the probability of the next clicked item can be inferred. First, all session sequences are modeled as directed session graphs, where each session sequence can be regarded as a subgraph. Each session graph is then processed in turn, and the embeddings of all nodes in each session graph are obtained through a gated graph neural network. Each session is then represented as the combination of the user's global preference and current interest in that session, where both the global and local session embeddings are composed of node embeddings. Finally, for each session, the probability of each item being clicked next is predicted. SR-GNN models separated session sequences as graph-structured data and uses a graph neural network to capture complex item relations, providing a novel perspective for modeling in session-based recommendation scenarios. To generate session-based recommendations, SR-GNN does not rely on user representations but uses session embeddings, so recommendation results can be derived solely from the item embeddings involved in each session.
To model item-transition patterns more accurately, graph neural networks have been used to model a session. However, GNN-based methods mostly pass information between adjacent items, so the information of items that are not directly connected is ignored; multi-layer GNNs are then used to propagate information between items without direct connections, which easily causes overfitting. The related art also discloses the SGNN-HN algorithm, which proposes a star graph neural network with a highway network for session-based recommendation. First, a star graph neural network (SGNN) models the complex transition patterns within a period of time; a star node is added on top of the gated graph neural network to take non-adjacent items into account, thereby solving the problem of long-distance information propagation. Then, to avoid the overfitting problem of graph neural networks, a highway network (HN) dynamically selects between the item vectors before and after the SGNN, which helps to explore the complex transition relations between items. Finally, the item vectors generated by the SGNN in an ongoing session are aggregated with attention to represent the user's preference over items for recommendation. It is the first graph neural network for session-based recommendation that considers long-distance relations between items in a session for information propagation; it proposes a star graph neural network (SGNN) to model the complex transition relations between items in an ongoing session, and applies a highway network (HN) to handle the overfitting problem in graph neural networks.
Today's session-based recommender systems successfully capture users' short-term decision processes, but they fail to capture how the relative importance of users' long-term and short-term interests differs for session-based recommendation. Even in the same session context, users with different shopping preferences may prefer different next items, so better capturing an individual user's dynamic consumption motivation is crucial. The related art therefore proposes the DCN-SR algorithm (Dynamic Co-attention Network for Session-based Recommendation), which assumes that the relative importance of events in a user's long-term interaction history depends on the events in the short-term interaction history, and vice versa. Take a user who has searched for cameras in the current session: when deciding what to recommend next, the user's long-term interactions related to electronics should probably be weighted more heavily than those related to clothing. Conversely, if the user's past interactions show a strong interest in a particular brand, then in the current session the interactions related to that brand may matter more than other interactions when predicting the next item. Beyond the relations between past and present interactions, there is more to model: different user actions, e.g., clicks, add-to-cart, or purchases, provide different types of information about user interests and should therefore trigger different follow-up actions. For example, clicking a camera may indicate that the current suggestions are unsatisfactory, so alternative products should be recommended; adding an item to the cart may reveal a strong consumption motivation for that item; and although repeat purchases matter, a purchase involving a camera should probably lead to recommending complementary items. DCN-SR designs a session-based dynamic co-attention network model that can integrate users' long-term and short-term preferences, and a context-gated recurrent unit (CGRU) that integrates different types of short-term user behaviors to better estimate the user's next consumption interest. DCN-SR was found to consistently match or exceed the state of the art, especially for short sessions and active users.
The related art also discloses the NISER-GNN algorithm (Normalized Item and Session Representations with Graph Neural Networks). The goal of a session-based recommendation (SR) model is to use information from past actions (e.g., item/product clicks) in the session to recommend the item the user is likely to click next. Recent research has shown that the sequence of item interactions in a session can be modeled as graph-structured data to better account for complex item transitions. Graph neural networks (GNNs) can learn useful representations of such session graphs and have been shown to improve over sequential models such as recurrent neural networks. However, these GNN-based recommendation models suffer from popularity bias: they are biased toward recommending popular items and fail to recommend relevant long-tail items (less popular or less frequent items). Hence, in practical online settings, these models perform poorly on the less popular new items arriving every day. It has been demonstrated that this problem is related to the magnitude, or norm, of the learned item and session-graph representations (embedding vectors). NISER proposes a training procedure that alleviates the problem by using normalized representations: models using normalized item and session-graph representations perform better, both on less popular long-tail items in the offline setting and on less popular newly introduced items in the online setting.
However, the related art leaves open problems such as how to obtain long-term preferences and interests, how to combine a user's long-term and short-term preferences, and how to use implicit preferences to capture the user's dynamic preference data; recommendation accuracy remains to be improved.
Summary of the Invention
To solve the problems in the prior art, the present invention proposes a contrastive-learning-enhanced dual-stream model recommendation system and algorithm, which can obtain the user's long-term preferences and interests, combine the user's long-term and short-term preferences, and use implicit preferences to capture the user's dynamic preference data, thereby better improving recommendation accuracy.
To achieve the above objectives, the present invention provides a contrastive-learning-enhanced dual-stream model recommendation algorithm, comprising: first, using the properties of the Transformer to learn the time-series features in the data and obtain the user's long-term interests; then using a GCN to learn the feature information of the spatial structure in the item-transition process; finally, combining the feature information obtained by the Transformer and the GCN using positional encoding and global graph encoding, while using a contrastive learning method to assist the representation learning of the model.
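To make the data flow of the two streams concrete, the following Python/NumPy sketch shows one possible forward pass under stated assumptions: the function names (positional_encoding, dual_stream_forward), the sinusoidal choice of positional encoding, and the additive fusion of the two streams are illustrative assumptions, not details specified by the invention, which states only that positional encoding and global graph encoding are used to combine the two feature sets.

```python
import numpy as np

def positional_encoding(seq_len, d):
    """Sinusoidal positional encoding (an assumed, conventional choice)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))  # (seq_len, d)

def dual_stream_forward(session_items, item_emb, transformer_stream, gcn_stream, adj):
    """One hypothetical dual-stream forward pass.

    session_items: indices of the items in the current session, shape (L,)
    item_emb:      item embedding table, shape (num_items, d)
    adj:           global item-transition adjacency matrix, (num_items, num_items)
    transformer_stream / gcn_stream: the two learned networks (passed as callables)
    """
    x = item_emb[session_items]                                   # sequence view, (L, d)
    h_time = transformer_stream(x + positional_encoding(len(session_items), x.shape[1]))
    h_space = gcn_stream(item_emb, adj)[session_items]            # global-graph view, (L, d)
    return h_time + h_space  # assumed additive fusion of temporal and spatial features
```

With transformer_stream = lambda x: x and a one-layer GCN such as the gcn_layer sketch given later in the description, the function runs end to end; in the actual model both streams would be trained networks, and a contrastive loss (see the InfoNCE sketch later in the description) would assist the representation learning.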
Further, the Encoder structure of the Transformer includes a Self-Attention module; the data passes through the Self-Attention module to obtain a weighted feature vector Z, the feature vector Z being Attention(Q, K, V) = softmax(QK^T/√d_k)V, where Q is the Query matrix, K is the Key matrix, V is the Value matrix, and d_k is the dimension of the Query and Key vectors.
Further, the Encoder structure of the Transformer also includes a Feed Forward Neural Network module, whose fully connected part comprises a first layer with a ReLU activation function and a second layer with a linear activation function, FFN(Z) = max(0, ZW_1 + b_1)W_2 + b_2, where W_1 is parameter matrix 1, W_2 is parameter matrix 2, b_1 is bias value 1, b_2 is bias value 2, and max is the maximum function.
Further, the Decoder structure of the Transformer includes an Encoder-Decoder Attention module, which is used to compute the weights of input and output, that is, the relationship between the current translation and the encoded feature vectors.
Further, the formula of the GCN is expressed as:

x_i^(l+1) = σ(∑_{j∈N_i} (1/c_ij) x_j^(l) w^(l) + b^(l))

where x_i^(l+1) is the output of layer l+1, σ is the nonlinear activation function, c_ij is the square root of the product of the degree d_i of node i and the degree d_j of node j, x_j^(l) is the output of layer l, w^(l) is the weight of layer l, b^(l) is the bias value of layer l, j ranges over the set N_i, and N_i is the set of neighbor nodes of node i.
Further, the GCN performs a feature transformation on the nodes from one hidden layer to the next:

X^(l+1) = f(X^(l), A)

where X^(l+1) is the output of layer l+1, X^(l) is the output of layer l, A is the adjacency matrix, and f is a function.
Further, the feature transformation of the GCN on the nodes is concretely implemented as X^(l+1) = σ(AX^(l)W^(l) + b^(l)), where W^(l) is the weight of layer l;

normalizing the adjacency matrix A gives X^(l+1) = σ(D^(-1)AX^(l)W^(l) + b^(l)), where D is the degree matrix;
Further, in the contrastive learning, for any data x, the goal of the contrastive learning is to learn an encoder f such that:

score(f(x), f(x+)) >> score(f(x), f(x-))

where x+ is a positive sample similar to x, x- is a negative sample dissimilar to x, and score is a metric function that measures the similarity between samples.
Further, in the contrastive learning, if the inner product of vectors is used to compute the similarity of two samples, the loss function of the contrastive learning is expressed as:

L = -log[ exp(f(x)^T f(x+)) / ∑_{j=1}^{N} exp(f(x)^T f(x_j)) ]

where the corresponding sample x has 1 positive sample and N-1 negative samples, and T denotes transpose.
The present invention also provides a contrastive-learning-enhanced dual-stream model recommendation system for implementing the above contrastive-learning-enhanced dual-stream model recommendation algorithm, comprising:

a Transformer unit, used to learn the time-series features in the data and obtain the user's long-term interests;

a GCN unit, used to learn the feature information of the spatial structure in the item-transition process;

a combination unit, used to combine the feature information obtained by the Transformer unit and the GCN unit using positional encoding and global graph encoding;

and a contrastive learning unit, used to assist the representation learning of the model.
Compared with the prior art, the present invention proposes a session-based recommendation system consisting of a Transformer unit, a GCN unit, a combination unit, and a contrastive learning unit: the Transformer extracts and learns the time-series features in the data, the GCN (graph convolutional network) then obtains the spatial-structure features, the two kinds of information are integrated, and the contrastive learning method is used as an auxiliary, thereby better improving recommendation accuracy. The method combining the Transformer, GCN, and contrastive learning explores session-based recommendation: the properties of the Transformer are used to learn the time-series features in the data and obtain the user's long-term interests; the GCN then learns the feature information of the spatial structure in the item-transition process; the two sets of features are combined using positional encoding and global graph encoding, while the contrastive learning method assists the representation learning of the model to increase recommendation effectiveness. Using the Transformer to learn the time-series information in the data yields the features of the user's long-term interests, solving the difficulty of long-term interest acquisition, and the contrastive learning method assists the learning of the model's representations. The session-based recommendation method combining the Transformer, GCN, and contrastive learning simultaneously captures the temporal information and the spatial-structure information of the user's historical session data, and also considers the collaborative-filtering representation not considered in the prior art. Finally, the present invention uses the contrastive learning method to strengthen the fusion of the two representations. The present invention can obtain the user's long-term preferences and interests, combine the user's long-term and short-term preferences, and use implicit preferences to capture the user's dynamic preference data, thereby better improving recommendation accuracy.
Fig. 1 is a structural diagram of the Transformer of the present invention;
Fig. 2 shows the changes of HIT and MRR after testing the recommendation algorithm of the present invention on six datasets;
Fig. 3 shows the changes of the weight matrix after testing the recommendation algorithm of the present invention on six datasets;
Fig. 4 shows the comparison results of experiments using the present invention and existing recommendation algorithms.
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the drawings herein may be arranged and designed in various configurations. Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Session-based recommendation refers to an algorithm that predicts a user's next action relying only on an anonymous session while the user is not logged in; it plays an important role in many fields such as e-commerce, short video, and live streaming. The present invention provides a contrastive-learning-enhanced dual-stream model recommendation system, comprising: a Transformer unit, used to learn the time-series features in the data and obtain the user's long-term interests; a GCN unit, used to learn the feature information of the spatial structure in the item-transition process; a combination unit, used to combine the feature information obtained by the Transformer unit and the GCN unit using positional encoding and global graph encoding; and a contrastive learning unit, used to assist the representation learning of the model.
The Transformer is a model that uses the attention mechanism to speed up model training. It can be described as a deep learning model based entirely on the self-attention mechanism; because it lends itself to parallel computation, and given the capacity of the model itself, it exceeds the previously popular RNN (recurrent neural network) in both accuracy and performance.
For the self-attention mechanism, the three matrices Q (Query), K (Key), and V (Value) all come from the same input. First the dot product between Q and K is computed; then, to prevent the result from becoming too large, it is divided by a scaling factor √d_k, where d_k is the dimension of a Query/Key vector. The Softmax operation then normalizes the result into a probability distribution, which is multiplied by the matrix V to obtain the weighted-sum representation. This operation can be expressed as:

Attention(Q, K, V) = softmax(QK^T/√d_k)V

Referring to Fig. 1, in the Encoder structure of the Transformer, the data first passes through the Self-Attention module to obtain a weighted feature vector Z, and this Z is Attention(Q, K, V).
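As a minimal, self-contained illustration (not code from the patent), the NumPy sketch below evaluates Attention(Q, K, V) exactly as in the formula above; the toy shapes (4 tokens, 8 dimensions) are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract the max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # similarity of every query to every key
    return softmax(scores, axis=-1) @ V       # probability-weighted sum of the values

# Toy self-attention: Q, K, and V all come from the same 4-token, 8-dimensional input.
X = np.random.randn(4, 8)
Z = attention(X, X, X)                        # weighted feature vector Z, shape (4, 8)
```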
After Z is obtained, it is sent to the next module of the Encoder, namely the Feed Forward Neural Network. The fully connected part of the Feed Forward Neural Network module has two layers: the activation function of the first layer is ReLU and the second layer is a linear activation function, which can be expressed as:

FFN(Z) = max(0, ZW_1 + b_1)W_2 + b_2

where W_1 is weight matrix 1, W_2 is weight matrix 2, b_1 is bias value 1, b_2 is bias value 2, and max is the maximum function.
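The feed-forward sub-layer above transcribes directly to NumPy; the hidden width of 32 below is an arbitrary assumption.

```python
import numpy as np

def ffn(Z, W1, b1, W2, b2):
    """Position-wise feed-forward network: FFN(Z) = max(0, Z W1 + b1) W2 + b2
    (ReLU in the first layer, linear in the second)."""
    return np.maximum(0.0, Z @ W1 + b1) @ W2 + b2

d, hidden = 8, 32                                      # assumed dimensions
Z = np.random.randn(4, d)                              # output of the Self-Attention module
W1, b1 = np.random.randn(d, hidden), np.zeros(hidden)
W2, b2 = np.random.randn(hidden, d), np.zeros(d)
out = ffn(Z, W1, b1, W2, b2)                           # shape (4, 8)
```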
The Decoder structure of the Transformer differs from the Encoder in that the Decoder has an additional Encoder-Decoder Attention; the two Attention modules are used to compute the input and output weights respectively:
1. Self-Attention: the relationship between the current translation and the previously translated text;
2. Encoder-Decoder Attention: the relationship between the current translation and the encoded feature vectors.
The formula of the GCN (graph convolutional network) is expressed as:

x_i^(l+1) = σ(∑_{j∈N_i} (1/c_ij) x_j^(l) w^(l) + b^(l))

where x_i^(l+1) is the output of layer l+1, σ is the nonlinear activation function, c_ij is the square root of the product of the degree d_i of node i and the degree d_j of node j, x_j^(l) is the output of layer l, w^(l) is the weight of layer l, b^(l) is the bias value of layer l, j ranges over the set N_i, and N_i is the set of neighbor nodes of node i.
Specifically, the GCN performs a feature transformation on the nodes from one hidden layer to the next:

X^(l+1) = f(X^(l), A)

where X^(l+1) is the output of layer l+1, X^(l) is the output of layer l, A is the adjacency matrix, and f is a function (implemented differently by different models).
A concrete implementation of the previous step is:

X^(l+1) = σ(AX^(l)W^(l) + b^(l))

where W^(l) is the weight matrix of layer l;
Normalizing the adjacency matrix A (so that each row sums to 1) gives:

X^(l+1) = σ(D^(-1)AX^(l)W^(l) + b^(l))
The normalization of the adjacency matrix A can be achieved through the degree matrix D; in practice, symmetric normalization is more effective and interesting, which becomes the following formula:

X^(l+1) = σ(D^(-1/2)AD^(-1/2)X^(l)W^(l) + b^(l))
Adding self-loops (each node starts from itself and points back to itself), i.e., Ã = A + I, which in effect changes the diagonal entries of the adjacency matrix from 0 to 1, and taking the relation between each node and its neighbor nodes into account, the final GCN formula is obtained:

X^(l+1) = σ(D̃^(-1/2)ÃD̃^(-1/2)X^(l)W^(l) + b^(l))

where Ã = A + I and D̃ is the degree matrix of Ã.
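The derivation above condenses into a few lines of NumPy. The sketch implements exactly X^(l+1) = σ(D̃^(-1/2)ÃD̃^(-1/2)X^(l)W^(l) + b^(l)); the choice of ReLU for σ and the toy 4-node graph are assumptions for demonstration.

```python
import numpy as np

def gcn_layer(X, A, W, b):
    """One GCN layer with self-loops and symmetric normalization."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops: diagonal 0 -> 1
    d = A_hat.sum(axis=1)                       # node degrees of the self-looped graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^(-1/2)
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization
    return np.maximum(0.0, A_norm @ X @ W + b)  # sigma = ReLU (assumed)

# 4-node toy graph with 8-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
X = np.random.randn(4, 8)
W, b = np.random.randn(8, 8), np.zeros(8)
X_next = gcn_layer(X, A, W, b)                  # output of the next hidden layer
```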
The general paradigm of contrastive learning: for any data x, the goal of contrastive learning is to learn an encoder f such that:

score(f(x), f(x+)) >> score(f(x), f(x-))

where x+ is a positive sample similar to x, x- is a negative sample dissimilar to x, and score is a metric function that measures the similarity between samples.
If the inner product of vectors is used to compute the similarity of two samples, the loss function of contrastive learning can be expressed as:

L = -log[ exp(f(x)^T f(x+)) / ∑_{j=1}^{N} exp(f(x)^T f(x_j)) ]

where the corresponding sample x has 1 positive sample and N-1 negative samples, and T denotes transpose. It can be seen that this form is similar to the cross-entropy loss function: the goal of learning is to make the features of x more similar to the features of the positive sample and less similar to the features of the N-1 negative samples. In the contrastive learning literature this loss function is called the InfoNCE loss; some other works call it the multi-class n-pair loss or ranking-based NCE.
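When similarity is measured by the inner product as above, the InfoNCE loss translates directly to NumPy; the toy setup of one anchor with 1 positive and 7 negatives below is an assumption for demonstration.

```python
import numpy as np

def info_nce(f_x, f_pos, f_negs):
    """InfoNCE loss for one anchor x with 1 positive and N-1 negatives:
    L = -log( exp(f(x).f(x+)) / sum_j exp(f(x).f(x_j)) ),
    where the sum runs over the positive and all negatives."""
    logits = np.concatenate([[f_x @ f_pos], f_negs @ f_x])  # inner-product similarities
    logits = logits - logits.max()                          # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

f_x = np.random.randn(8)        # encoded anchor f(x)
f_pos = np.random.randn(8)      # encoded positive f(x+)
f_negs = np.random.randn(7, 8)  # 7 encoded negatives, i.e. N-1 with N = 8
loss = info_nce(f_x, f_pos, f_negs)
```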
The present invention uses the Transformer to learn the time-series information in the data, thereby obtaining the features of the user's long-term interests and solving the difficulty of long-term interest acquisition; it uses the contrastive learning method to assist the learning of the model's representations; and it uses the Transformer and the GCN, respectively, to capture the temporal information and the spatial-structure information of the user's historical session data. In addition, the present invention considers the collaborative-filtering representation not considered in the prior art, which is also an advantage. Finally, the present invention uses the contrastive learning method to strengthen the fusion of the two representations, further improving the accuracy of the recommendation system. The present invention combines the Transformer, GCN, and contrastive learning methods for session-based recommendation.
To verify the superiority of the algorithm of the present invention, it was tested and validated on six datasets; the results are shown in Fig. 2 and Fig. 3. Fig. 2 shows that both the HIT and MRR indicators rise, and Fig. 3 shows how the weight matrix changes during training. HIT and MRR are commonly used evaluation metrics for recommendation algorithms.
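For reference, HIT@K and MRR@K can be computed as below; this is the standard definition of the two metrics, not code from the patent: HIT@K is the fraction of test cases whose target item appears among the top-K recommendations, and MRR@K is the mean reciprocal rank of the target (counted as 0 when it falls outside the top K).

```python
import numpy as np

def hit_and_mrr(ranked_lists, targets, k=20):
    """ranked_lists: for each session, item ids sorted by predicted score (descending);
    targets: the true next item of each session."""
    hits, rranks = [], []
    for ranked, target in zip(ranked_lists, targets):
        topk = list(ranked[:k])
        hits.append(target in topk)
        rranks.append(1.0 / (topk.index(target) + 1) if target in topk else 0.0)
    return float(np.mean(hits)), float(np.mean(rranks))  # HIT@K, MRR@K
```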
In addition, experiments were conducted with the present invention and existing recommendation algorithms; the comparison results are shown in Fig. 4, from which it can be seen that the experimental results of the present invention are clearly superior to those of the other models.
The above descriptions are only preferred embodiments of the present invention and are not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (10)
- A contrastive-learning-enhanced dual-stream model recommendation algorithm, characterized by comprising: first, using the properties of the Transformer to learn the time-series features in the data and obtain the user's long-term interests; then using a GCN to learn the feature information of the spatial structure in the item-transition process; finally, combining the feature information obtained by the Transformer and the GCN using positional encoding and global graph encoding, while using a contrastive learning method to assist the representation learning of the model.
- The contrastive-learning-enhanced dual-stream model recommendation algorithm according to claim 2, characterized in that the Encoder structure of the Transformer further includes a Feed Forward Neural Network module, whose fully connected part comprises a first layer with a ReLU activation function and a second layer with a linear activation function, FFN(Z) = max(0, ZW_1 + b_1)W_2 + b_2, where W_1 is weight matrix 1, W_2 is weight matrix 2, b_1 and b_2 are bias values, and max is the maximum function.
- The contrastive-learning-enhanced dual-stream model recommendation algorithm according to claim 3, characterized in that the Decoder structure of the Transformer includes an Encoder-Decoder Attention module, used to compute the weights of input and output, that is, the relationship between the current translation and the encoded feature vectors.
- The contrastive-learning-enhanced dual-stream model recommendation algorithm according to claim 5, characterized in that the GCN performs a feature transformation on the nodes from one hidden layer to the next: X^(l+1) = f(X^(l), A), where X^(l+1) is the output of layer l+1, X^(l) is the output of layer l, A is the adjacency matrix, and f is a function.
- The contrastive-learning-enhanced dual-stream model recommendation algorithm according to claim 1, characterized in that, in the contrastive learning, for any data x the goal of the contrastive learning is to learn an encoder f such that score(f(x), f(x+)) >> score(f(x), f(x-)), where x+ is a positive sample similar to x, x- is a negative sample dissimilar to x, and score is a metric function that measures the similarity between samples.
- A contrastive-learning-enhanced dual-stream model recommendation system, characterized in that it is used to implement the contrastive-learning-enhanced dual-stream model recommendation algorithm according to any one of claims 1 to 9, and comprises: a Transformer unit, used to learn the time-series features in the data and obtain the user's long-term interests; a GCN unit, used to learn the feature information of the spatial structure in the item-transition process; a combination unit, used to combine the feature information obtained by the Transformer unit and the GCN unit using positional encoding and global graph encoding; and a contrastive learning unit, used to assist the representation learning of the model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/137367 WO2023108324A1 (zh) | 2021-12-13 | 2021-12-13 | Contrastive-learning-enhanced dual-stream model recommendation system and algorithm |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/137367 WO2023108324A1 (zh) | 2021-12-13 | 2021-12-13 | Contrastive-learning-enhanced dual-stream model recommendation system and algorithm |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023108324A1 true WO2023108324A1 (zh) | 2023-06-22 |
Family
ID=86775254
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/137367 WO2023108324A1 (zh) | Contrastive-learning-enhanced dual-stream model recommendation system and algorithm | 2021-12-13 | 2021-12-13 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023108324A1 (zh) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN110119467A (zh) * | 2019-05-14 | 2019-08-13 | 苏州大学 | Session-based item recommendation method, apparatus, device, and storage medium
- EP3862892A1 (en) * | 2019-12-09 | 2021-08-11 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Session recommendation method and apparatus, and electronic device
- CN111932336A (zh) * | 2020-07-17 | 2020-11-13 | 重庆邮电大学 | Product list recommendation method based on long- and short-term interest preferences
- CN112364976A (zh) * | 2020-10-14 | 2021-02-12 | 南开大学 | User preference prediction method based on a session recommendation system
- CN112733018A (zh) * | 2020-12-31 | 2021-04-30 | 哈尔滨工程大学 | Session recommendation method based on graph neural networks (GNN) and multi-task learning
Non-Patent Citations (1)
Title |
---|
WANG, LU ET AL.: "TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning", HTTPS://ARXIV.ORG, 17 May 2021 (2021-05-17), XP093071570 * |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN116993821A (zh) * | 2023-06-25 | 2023-11-03 | 哈尔滨理工大学 | Real-time ship attitude prediction method based on a Transformer-AdaRNN model
- CN116911953A (zh) * | 2023-09-12 | 2023-10-20 | 深圳须弥云图空间科技有限公司 | Item recommendation method and apparatus, electronic device, and computer-readable storage medium
- CN116911953B (zh) * | 2023-09-12 | 2024-01-05 | 深圳须弥云图空间科技有限公司 | Item recommendation method and apparatus, electronic device, and computer-readable storage medium
- CN116992919A (zh) * | 2023-09-28 | 2023-11-03 | 之江实验室 | Multi-omics-based plant phenotype prediction method and apparatus
- CN116992919B (zh) * | 2023-09-28 | 2023-12-19 | 之江实验室 | Multi-omics-based plant phenotype prediction method and apparatus
- CN117390300A (zh) * | 2023-10-09 | 2024-01-12 | 中国测绘科学研究院 | Method and apparatus for constructing a multi-channel interactive-learning point-of-interest recommendation model
- CN117312679B (zh) * | 2023-11-28 | 2024-02-09 | 江西财经大学 | Long-tail recommendation method and system with dual-branch information collaborative enhancement
- CN117312679A (zh) * | 2023-11-28 | 2023-12-29 | 江西财经大学 | Long-tail recommendation method and system with dual-branch information collaborative enhancement
- CN117474637A (zh) * | 2023-12-28 | 2024-01-30 | 中国海洋大学 | Personalized product recommendation method and system based on a temporal graph convolutional network
- CN117474637B (zh) * | 2023-12-28 | 2024-04-16 | 中国海洋大学 | Personalized product recommendation method and system based on a temporal graph convolutional network
- CN117933304A (zh) * | 2024-02-08 | 2024-04-26 | 哈尔滨工业大学 | Self-supervised multi-view session recommendation bridging model
- CN118036654A (zh) * | 2024-03-19 | 2024-05-14 | 江苏大学 | Session recommendation method based on graph neural networks and an attention mechanism
- CN118364910A (zh) * | 2024-04-16 | 2024-07-19 | 重庆理工大学 | Recommendation method fusing angular distance and robust scale metric learning
- CN118134606A (zh) * | 2024-05-06 | 2024-06-04 | 烟台大学 | User-preference-based service recommendation method, system, device, and storage medium
- CN118568681A (zh) * | 2024-07-22 | 2024-08-30 | 齐鲁工业大学(山东省科学院) | Deep-learning-based refrigeration system energy consumption prediction method and system
- CN118568358A (zh) * | 2024-07-30 | 2024-08-30 | 江西师范大学 | Recommendation method fusing regional negative sampling and graph contrastive learning
- CN118643221A (zh) * | 2024-08-14 | 2024-09-13 | 江苏亿友慧云软件股份有限公司 | Point-of-interest recommendation method based on contrastive learning
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023108324A1 (zh) | Contrastive-learning-enhanced dual-stream model recommendation system and algorithm | |
CN112364976B (zh) | User preference prediction method based on a session recommendation system | |
CN112733018B (zh) | Session recommendation method based on graph neural networks (GNN) and multi-task learning | |
CN110032630A (zh) | Talk-script recommendation device and method, and model training device | |
WO2021139415A1 (zh) | Data processing method and apparatus, computer-readable storage medium, and electronic device | |
CN113361928B (zh) | Crowdsourcing task recommendation method based on a heterogeneous graph attention network | |
CN111506821B (zh) | Recommendation model, method, apparatus, device, and storage medium | |
CN114595383A (zh) | Marine environment data recommendation method and system based on session sequences | |
CN115470406A (zh) | Graph neural network session recommendation method based on dual-channel information fusion | |
CN112883170A (zh) | User-feedback-guided adaptive conversational recommendation method and system | |
CN114117229A (zh) | Item recommendation method using graph neural networks based on directed and undirected structural information | |
CN115238191A (zh) | Object recommendation method and apparatus | |
Zeng et al. | Collaborative filtering via heterogeneous neural networks | |
CN113344648B (zh) | Machine-learning-based advertisement recommendation method and system | |
CN114625969A (zh) | Recommendation method based on interactive nearest-neighbor sessions | |
CN115952360B (zh) | Domain-adaptive cross-domain recommendation method and system based on user and item commonality modeling | |
CN116263794A (zh) | Contrastive-learning-enhanced dual-stream model recommendation system and algorithm | |
CN117076763A (zh) | Hypergraph-learning-based session recommendation method, apparatus, electronic device, and medium | |
CN116701766A (zh) | Graph-coupled time-interval network for sequential recommendation | |
CN114880576A (zh) | Prediction method based on time-aware hypergraph convolution | |
Lu | Knowledge distillation-enhanced multitask framework for recommendation | |
Lin et al. | Bi-directional self-attention with relative positional encoding for video summarization | |
Gao et al. | TE-Spikformer: Temporal-enhanced spiking neural network with transformer | |
Sun et al. | DeepPRFM: Pairwise Ranking Factorization Machine Based on Deep Neural Network Enhancement | |
CN112749332A (zh) | Data processing method, apparatus, and computer-readable medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21967469; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |