WO2020233245A1 - Bias tensor decomposition method based on automatic encoding of regression tree context features - Google Patents

Bias tensor decomposition method based on automatic encoding of regression tree context features

Info

Publication number
WO2020233245A1
Authority
WO
WIPO (PCT)
Prior art keywords: mik, context, bias, user, item
Prior art date
Application number
PCT/CN2020/082641
Other languages
English (en)
French (fr)
Inventor
赵建立
王伟
吴文敏
杨尚成
Original Assignee
Shandong University of Science and Technology (山东科技大学)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University of Science and Technology (山东科技大学)
Publication of WO2020233245A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9535: Search customisation based on user profiles and personalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/10: Text processing
    • G06F40/12: Use of codes for handling textual entities
    • G06F40/14: Tree-structured documents
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/30: Semantic analysis

Definitions

  • the invention belongs to the field of personalized recommendation, and specifically relates to a bias tensor decomposition method based on automatic encoding of context features of regression trees.
  • the main task of the recommendation system is to integrate users' historical behavior and other information and provide users with personalized information services.
  • the principle is to analyze and mine the binary relationship between users and items, and then help users find the information they are most likely to be interested in from a large amount of information, thereby greatly reducing the time for users to find useful information and improving user experience.
  • Tensor decomposition is a more commonly used context recommendation algorithm. By modeling data into a user-item-context N-dimensional tensor, tensor decomposition can flexibly integrate context information. Then by decomposing the tensor based on the known data, the model parameters can be obtained and the new data can be predicted based on the model.
  • the existing tensor decomposition algorithms have the following problems:
  • they do not consider the influence on the score of factors such as the user bias, the item bias, the context bias, and the global average score;
  • the model parameters of the tensor decomposition model increase exponentially with the context categories, and the computational cost is high.
  • the present invention proposes a bias tensor decomposition method based on automatic encoding of context features of regression trees, which is reasonable in design, overcomes the shortcomings of the prior art, and has good effects.
  • a bias tensor decomposition method based on automatic encoding of context features of regression trees including the following steps:
  • Step 1. Input: b, U, V, C, λ, α;
  • b represents the bias information
  • U represents the user feature matrix
  • V represents the item feature matrix
  • C represents the context feature matrix
  • λ represents the regularization parameter
  • α represents the learning rate
  • Step 2. Calculate μ, b_m, b_i and construct {(feature_1, target_1), …, (feature_n, target_n)};
  • μ represents the global average score
  • b_m represents the user bias
  • b_i represents the item bias
  • feature_n represents the context feature in training sample n
  • target_n is the part of the user score that remains after removing the global average score, the user bias, and the item bias
  • Step 3. Train the regression tree T and construct the new context features;
  • Step 4. Randomly initialize b_m, b_i, b_k, U_m, V_i, C_k;
  • Step 5. When y_mik ∈ Y′, calculate the objective function min Σ_{y_mik∈Y′} [(y_mik − f_mik)² + λ·(b_m² + b_i² + b_k² + ‖U_m‖² + ‖V_i‖² + ‖C_k‖²)];
  • Y′ represents the non-empty part of the original score tensor Y
  • y_mik and f_mik represent the actual and predicted scores of user m on item i under context k
  • b_k represents the context bias
  • U_md represents the dth element of the D-dimensional latent semantic vector of user m
  • V_id represents the dth element of the D-dimensional latent semantic vector of item i
  • C_kd represents the dth element of the D-dimensional latent semantic vector of context k;
  • Step 6. Iterate each factor in the objective function according to the following formulas:
    b_m ← b_m + α·(y_mik − f_mik − λ·b_m);
    b_i ← b_i + α·(y_mik − f_mik − λ·b_i);
    b_k ← b_k + α·(y_mik − f_mik − λ·b_k);
    U_m ← U_m + α·(V_i ⊙ C_k·(y_mik − f_mik) − λ·U_m);
    V_i ← V_i + α·(U_m ⊙ C_k·(y_mik − f_mik) − λ·V_i);
    C_k ← C_k + α·(U_m ⊙ V_i·(y_mik − f_mik) − λ·C_k);
  • ⊙ represents the operation of multiplying the corresponding elements of two vectors
  • Step 7. Optimize the objective function with the stochastic gradient descent (SGD) method: traverse each score in the training set, update the parameters of the objective function in step 6, and then compute the root mean squared error (RMSE) to determine whether the training model has converged;
  • if the difference between the RMSEs obtained in two successive optimizations is smaller than a set minimum value, the model is judged to have converged, and step 8 is executed;
  • otherwise, if the difference between the RMSEs obtained in two successive optimizations is greater than or equal to the set minimum value, the model is judged not to have converged, and step 5 is executed again;
  • Step 8 Output: b, U, V, C and regression tree T;
  • this application first proposes a bias tensor decomposition model for context-aware recommendation.
  • this application proposes an automatic context feature encoding algorithm based on regression trees, combines it with the bias tensor decomposition algorithm, and thereby proposes a bias tensor decomposition algorithm based on regression-tree context auto-encoding.
  • this application improves the recommendation accuracy of the recommender system and solves the problem of excessively many context dimensions.
  • Figure 1 is a schematic diagram of automatic context feature coding based on regression trees.
  • Figure 2 is a flow chart of the method of the present invention.
  • This application records the scores of N items from M users under K contexts as a tensor Y.
  • Y contains M×N×K records, and each record represents the score of user m for item i under context k, denoted as y_mik.
  • the idea of the matrix factorization model is to use a low-dimensional matrix to approximate the original interaction matrix.
  • This application uses tensor decomposition to model the user-item-context interaction information, storing the latent semantic features in three matrices U ∈ ℝ^{M×D}, V ∈ ℝ^{N×D}, and C ∈ ℝ^{K×D}.
  • U_m represents the D-dimensional latent semantic vector of user m
  • V_i and C_k represent the D-dimensional latent semantic vectors of item i and context k, respectively.
  • the CP decomposition algorithm is used to decompose the tensor, and user m's rating of item i under context k is modeled as f_mik = Σ_{d=1}^{D} U_md·V_id·C_kd:
  • f_mik represents the predicted score of user m on item i under context k
  • U_md represents the dth element of the D-dimensional latent semantic vector of user m
  • V_id represents the dth element of the D-dimensional latent semantic vector of item i
  • C_kd represents the dth element of the D-dimensional latent semantic vector of context k
  • This application improves on model (1) by adding the global average score, the user bias, the item bias, and the context bias. The improved model is f_mik = μ + b_m + b_i + b_k + Σ_{d=1}^{D} U_md·V_id·C_kd:
  • μ represents the global average score
  • b_m, b_i, and b_k represent the user bias, the item bias, and the context bias, respectively.
  • the observed score is decomposed into five parts: the global average score, the user bias, the item bias, the context bias, and the user-item-context interaction, which makes each component explain only the part of the score correlated with it.
  • to prevent overfitting, an L2 norm penalty is added, giving the optimization objective min Σ_{y_mik∈Y′} [(y_mik − f_mik)² + λ·(b_m² + b_i² + b_k² + ‖U_m‖² + ‖V_i‖² + ‖C_k‖²)]
  • y_mik represents the actual score of user m on item i under context k
  • f_mik represents the predicted score of user m on item i under context k
  • U_m represents the D-dimensional latent semantic vector of user m
  • the corresponding V_i and C_k denote the D-dimensional latent semantic vectors of item i and context k
  • b_m, b_i, and b_k represent the user bias, the item bias, and the context bias, and λ is the regularization parameter.
  • This application uses the stochastic gradient descent (SGD) method to optimize the objective function.
  • the SGD method traverses each score in the training set and updates the parameters in the model.
  • To address the exponential growth of the number of parameters of traditional tensor models with the context dimension, this application proposes a regression-tree-based context feature encoding mechanism. By controlling the depth of the regression tree, it can both effectively control the context dimension and improve the accuracy of the algorithm.
  • the automatic context feature encoding is shown in Figure 1, where feature_i represents the context feature in training sample i.
  • the target value target_i of a regression tree training sample is the part of the user score that remains after removing the global average score, the user bias, and the item bias, namely: y_mik ← y_mik − μ − b_m − b_i
  • y_mik is the actual score of user m for item i under context k
  • μ is the global average score, and b_m and b_i are the user bias and the item bias, respectively.
  • finally, bias tensor decomposition is combined with automatic context feature encoding, and the bias tensor decomposition method based on automatic encoding of regression tree context features is proposed.
  • the process is shown in Figure 2, where α represents the learning rate and λ represents the regularization parameter.
  • the values of the hyperparameters α and λ can be obtained through cross-validation, which includes the following steps.


Abstract

The invention discloses a bias tensor decomposition method based on automatic encoding of regression tree context features, belonging to the field of personalized recommendation. This application first proposes a bias tensor decomposition model for context-aware recommendation; then, to address the problem that the number of parameters of the tensor decomposition model grows exponentially with the context categories, it proposes a bias tensor decomposition algorithm based on regression-tree context auto-encoding, which improves the recommendation accuracy of the recommender system and solves the problem of excessively many context dimensions.

Description

Bias tensor decomposition method based on automatic encoding of regression tree context features

Technical Field

The invention belongs to the field of personalized recommendation, and specifically relates to a bias tensor decomposition method based on automatic encoding of regression tree context features.

Background

The development of the Internet and mobile communication devices has made information ever easier to produce and obtain. To help users cope with information overload, two Internet technologies were born: search engines and recommender systems.

The main task of a recommender system is to integrate a user's historical behavior and other information and provide the user with personalized information services. Its principle is to analyze and mine the binary relationship between users and items, and then help users find the information they are most likely to be interested in from a huge amount of information, thereby greatly reducing the time users spend finding useful information and improving the user experience.

Traditional recommendation algorithms exploit only users' behavior data to mine user interests. Such algorithms rest on the assumption that user interests do not change in the short term, so that a model trained on historical data can predict a user's future interests. In fact, this assumption holds only in some scenarios. Although a user's general interests may be relatively stable, they are also affected by many additional factors. For example, in a movie recommender system, a user's demand for movies depends on the viewing time (e.g., Spring Festival, Christmas, Valentine's Day) and on the viewing companions (e.g., partner, parents, classmates, colleagues). Using context information in the recommendation process helps provide users with more personalized recommendations.

Tensor decomposition is a commonly used context-aware recommendation algorithm. By modeling the data as a user-item-context N-dimensional tensor, tensor decomposition can flexibly integrate context information. Decomposing the tensor based on the known data then yields the model parameters, from which new data can be predicted. However, existing tensor decomposition algorithms have the following problems:

1. They do not consider the influence on the score of factors such as the user bias, the item bias, the context bias, and the global average score;

2. The number of parameters of the tensor decomposition model grows exponentially with the context categories, and the computational cost is high.
Summary of the Invention

In view of the above technical problems in the prior art, the invention proposes a bias tensor decomposition method based on automatic encoding of regression tree context features, which is reasonable in design, overcomes the shortcomings of the prior art, and achieves good results.

To achieve the above purpose, the invention adopts the following technical solution:

A bias tensor decomposition method based on automatic encoding of regression tree context features, comprising the following steps:

Step 1. Input: b, U, V, C, λ, α;

where b represents the bias information, U represents the user feature matrix, V represents the item feature matrix, C represents the context feature matrix, λ represents the regularization parameter, and α represents the learning rate;

Step 2. Calculate μ, b_m, b_i and construct {(feature_1, target_1), …, (feature_n, target_n)};

where μ represents the global average score, b_m represents the user bias, b_i represents the item bias, feature_n represents the context feature in training sample n, and target_n is the part of the user score that remains after removing the global average score, the user bias, and the item bias;

Step 3. Train the regression tree T and construct the new context features;

Step 4. Randomly initialize b_m, b_i, b_k, U_m, V_i, C_k;

Step 5. When y_mik ∈ Y′, calculate the objective function

min Σ_{y_mik∈Y′} [(y_mik − f_mik)² + λ·(b_m² + b_i² + b_k² + ‖U_m‖² + ‖V_i‖² + ‖C_k‖²)]

where Y′ represents the non-empty part of the original score tensor Y, y_mik and f_mik represent the actual score and the predicted score of user m on item i under context k respectively, b_k represents the context bias, U_md represents the dth element of the D-dimensional latent semantic vector of user m, V_id represents the dth element of the D-dimensional latent semantic vector of item i, and C_kd represents the dth element of the D-dimensional latent semantic vector of context k;
Step 6. Iterate each factor in the objective function according to the following formulas:

b_m ← b_m + α·(y_mik − f_mik − λ·b_m);
b_i ← b_i + α·(y_mik − f_mik − λ·b_i);
b_k ← b_k + α·(y_mik − f_mik − λ·b_k);
U_m ← U_m + α·(V_i ⊙ C_k·(y_mik − f_mik) − λ·U_m);
V_i ← V_i + α·(U_m ⊙ C_k·(y_mik − f_mik) − λ·V_i);
C_k ← C_k + α·(U_m ⊙ V_i·(y_mik − f_mik) − λ·C_k);

where ⊙ denotes element-wise multiplication of vectors;
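As an illustration, the six update rules of step 6 can be sketched as a single SGD step in NumPy; the learning rate and regularization defaults below are illustrative values, not values prescribed by the patent:

```python
import numpy as np

# One SGD update for a single observed rating y_mik, following the six
# update rules of step 6. alpha is the learning rate, lam the
# regularization parameter (both illustrative defaults).
def sgd_step(y, m, i, k, mu, b_m, b_i, b_k, U, V, C, alpha=0.01, lam=0.05):
    # f_mik = mu + b_m + b_i + b_k + sum_d U_md * V_id * C_kd
    f = mu + b_m[m] + b_i[i] + b_k[k] + float(np.sum(U[m] * V[i] * C[k]))
    e = y - f                          # prediction error y_mik - f_mik
    b_m[m] += alpha * (e - lam * b_m[m])
    b_i[i] += alpha * (e - lam * b_i[i])
    b_k[k] += alpha * (e - lam * b_k[k])
    Um = U[m].copy()                   # keep old U_m for the V and C updates
    Vi = V[i].copy()                   # keep old V_i for the C update
    U[m] += alpha * (V[i] * C[k] * e - lam * U[m])
    V[i] += alpha * (Um * C[k] * e - lam * V[i])
    C[k] += alpha * (Um * Vi * e - lam * C[k])
    return e
```

Repeated calls on the same rating drive the prediction toward the observed score, with the λ terms shrinking the parameters toward zero.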
Step 7. Optimize the objective function with the stochastic gradient descent (SGD) method: traverse each score in the training set, update the parameters of the objective function in step 6, and then compute the root mean squared error (RMSE) to determine whether the training model has converged.

If the difference between the RMSEs obtained in two successive optimizations is smaller than a set minimum value, the model is judged to have converged, and step 8 is executed;

otherwise, if the difference between the RMSEs obtained in two successive optimizations is greater than or equal to the set minimum value, the model is judged not to have converged, and step 5 is executed again;
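The convergence test of steps 7 and 8 amounts to comparing the RMSE of successive passes against a small threshold. A schematic loop, assuming a hypothetical helper `epoch_fn` that runs one SGD pass over the training set and returns the squared errors, might look like:

```python
import math

# Keep optimizing until the change in RMSE between two successive passes
# falls below a small threshold eps (names and values are illustrative).
def train_until_converged(epoch_fn, eps=1e-4, max_epochs=1000):
    """epoch_fn() runs one SGD pass and returns the list of squared errors."""
    prev_rmse = float("inf")
    for _ in range(max_epochs):
        sq_errors = epoch_fn()
        rmse = math.sqrt(sum(sq_errors) / len(sq_errors))
        if abs(prev_rmse - rmse) < eps:   # converged: proceed to step 8
            return rmse
        prev_rmse = rmse                  # not converged: repeat from step 5
    return prev_rmse
```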
Step 8. Output: b, U, V, C and the regression tree T;

Step 9. End.
Beneficial technical effects of the invention:

1. To address the problem that current tensor-decomposition-based context-aware recommendation algorithms do not consider the influence on the score of factors such as the user bias, the item bias, the context bias, and the global average score, this application first proposes a bias tensor decomposition model for context-aware recommendation.

2. To address the problem that the number of parameters of the tensor decomposition model grows exponentially with the context categories, this application proposes an automatic context feature encoding algorithm based on regression trees, combines it with the bias tensor decomposition algorithm, and thereby proposes a bias tensor decomposition algorithm based on regression-tree context auto-encoding.

3. Compared with existing tensor decomposition algorithms, this application improves the recommendation accuracy of the recommender system and solves the problem of excessively many context dimensions.

Brief Description of the Drawings

Figure 1 is a schematic diagram of automatic context feature encoding based on regression trees.

Figure 2 is a flow chart of the method of the invention.

Detailed Description

The invention is described in further detail below with reference to the drawings and specific embodiments:

1. Problem formalization

This application records the scores given by M users to N items under K contexts as a tensor Y. Y contains M×N×K records, and each record represents the score of user m for item i under context k, denoted as y_mik. |Y| denotes the number of non-zero elements in Y, and Y_mk denotes the vector of scores of user m for all items under context k.
The idea of the matrix factorization model is to approximate the original interaction matrix with low-dimensional matrices. This application uses tensor decomposition to model the user-item-context interaction information, storing the latent semantic features in three matrices U ∈ ℝ^{M×D}, V ∈ ℝ^{N×D}, and C ∈ ℝ^{K×D}. This application uses U_m to denote the D-dimensional latent semantic vector of user m and, correspondingly, V_i and C_k to denote the D-dimensional latent semantic vectors of item i and context k.

The CP decomposition algorithm is used to decompose the tensor, and the score of user m for item i under context k is modeled as:

f_mik = Σ_{d=1}^{D} U_md·V_id·C_kd    (1)

where f_mik represents the predicted score of user m on item i under context k, U_md represents the dth element of the D-dimensional latent semantic vector of user m, V_id represents the dth element of the D-dimensional latent semantic vector of item i, and C_kd represents the dth element of the D-dimensional latent semantic vector of context k.
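As an illustration, the CP-style prediction of model (1) can be sketched in a few lines of NumPy; the matrix sizes and random values below are hypothetical, not taken from the patent:

```python
import numpy as np

# Illustrative sketch of the CP prediction in model (1); M, N, K, D are
# hypothetical example sizes.
M, N, K, D = 4, 5, 3, 2          # users, items, contexts, latent dims
rng = np.random.default_rng(0)
U = rng.normal(size=(M, D))      # user latent matrix
V = rng.normal(size=(N, D))      # item latent matrix
C = rng.normal(size=(K, D))      # context latent matrix

def predict_cp(m, i, k):
    """f_mik = sum_d U[m, d] * V[i, d] * C[k, d], as in equation (1)."""
    return float(np.sum(U[m] * V[i] * C[k]))
```

Evaluating `predict_cp` for all (m, i, k) triples reconstructs the full rank-D tensor, which is what the decomposition approximates.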
2. Bias tensor decomposition model

This application improves on model (1) by adding the global average score, the user bias, the item bias, and the context bias. The improved model is:

f_mik = μ + b_m + b_i + b_k + Σ_{d=1}^{D} U_md·V_id·C_kd    (2)

where μ represents the global average score, and b_m, b_i, and b_k represent the user bias, the item bias, and the context bias, respectively.

In this model, the observed score is decomposed into five parts: the global average score, the user bias, the item bias, the context bias, and the user-item-context interaction, so that each component explains only the part of the score correlated with it.

To prevent overfitting, an L2 norm penalty is added, giving the optimization objective:

min Σ_{y_mik∈Y′} [(y_mik − f_mik)² + λ·(b_m² + b_i² + b_k² + ‖U_m‖² + ‖V_i‖² + ‖C_k‖²)]    (3)

where y_mik represents the actual score of user m on item i under context k, f_mik represents the predicted score of user m on item i under context k, U_m represents the D-dimensional latent semantic vector of user m, the corresponding V_i and C_k represent the D-dimensional latent semantic vectors of item i and context k respectively, b_m, b_i, and b_k represent the user bias, the item bias, and the context bias respectively, and λ is the regularization parameter.
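A minimal sketch of the biased prediction (2) and the regularized objective (3), assuming small illustrative sizes and random initial values:

```python
import numpy as np

# Sketch of the biased prediction (model (2)) and the L2-regularized
# objective (3); all sizes and values are illustrative assumptions.
rng = np.random.default_rng(1)
M, N, K, D = 3, 4, 2, 2
mu = 3.5                                  # global average score
b_m = rng.normal(size=M)                  # user biases
b_i = rng.normal(size=N)                  # item biases
b_k = rng.normal(size=K)                  # context biases
U, V, C = (rng.normal(size=s) for s in [(M, D), (N, D), (K, D)])

def predict(m, i, k):
    """f_mik = mu + b_m + b_i + b_k + sum_d U_md * V_id * C_kd."""
    return mu + b_m[m] + b_i[i] + b_k[k] + float(U[m] @ (V[i] * C[k]))

def objective(ratings, lam):
    """Squared error over observed ratings plus the L2 penalty of (3).

    ratings maps (m, i, k) triples to observed scores y_mik."""
    sq = sum((y - predict(m, i, k)) ** 2 for (m, i, k), y in ratings.items())
    reg = lam * (b_m @ b_m + b_i @ b_i + b_k @ b_k
                 + np.sum(U * U) + np.sum(V * V) + np.sum(C * C))
    return sq + reg
```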
This application uses the stochastic gradient descent (SGD) method to optimize the objective function; the SGD method traverses each score in the training set and updates the parameters of the model.

The training process is shown in Figure 2.

3. Automatic context feature encoding based on regression trees

To address the problem that the number of parameters of traditional tensor models grows exponentially with the context dimension, this application proposes a regression-tree-based context feature encoding mechanism. By controlling the depth of the regression tree, it can both effectively control the context dimension and improve the accuracy of the algorithm.

The automatic context feature encoding is shown in Figure 1, where feature_i represents the context feature in training sample i. Taking into account the influence of factors such as the global average score and the user bias, the target value target_i of a regression tree training sample is the part of the user score that remains after removing the global average score, the user bias, and the item bias, namely:

y_mik ← y_mik − μ − b_m − b_i    (4)

where y_mik is the actual score of user m for item i under context k, μ is the global average score, and b_m and b_i are the user bias and the item bias, respectively.
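One way to realize this encoding, sketched here under the assumption that scikit-learn is available, is to fit a depth-limited regression tree on the (feature_i, target_i) pairs and use the leaf each sample falls into as its new, low-dimensional context id; the feature values, tree depth, and sample count below are illustrative:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Fit a depth-limited tree on (context feature, residual target) pairs,
# then use the index of the leaf each sample falls into as its new
# context id. Depth 3 bounds the number of new contexts by 2**3 = 8.
rng = np.random.default_rng(3)
features = rng.integers(0, 5, size=(200, 3))    # raw context features
targets = rng.normal(size=200)                  # y_mik - mu - b_m - b_i

tree = DecisionTreeRegressor(max_depth=3).fit(features, targets)
leaf_ids = tree.apply(features)                 # raw leaf index per sample
new_context = {leaf: idx for idx, leaf in enumerate(np.unique(leaf_ids))}
encoded = np.array([new_context[l] for l in leaf_ids])  # compact context ids
```

Controlling `max_depth` is what caps the context dimension, regardless of how many raw context categories the data contains.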
Finally, bias tensor decomposition is combined with automatic context feature encoding, giving the bias tensor decomposition method based on automatic encoding of regression tree context features. Its flow is shown in Figure 2, where α represents the learning rate and λ represents the regularization parameter. The values of the hyperparameters α and λ can be obtained through cross-validation, which specifically comprises the following:

(1) Build several combinations of the hyperparameters α and λ;

(2) Divide the data set into 10 equal parts, take one part as the test set and the remaining nine parts as the training set, and repeat ten times;

(3) Perform 10-fold cross-validation with each hyperparameter combination in turn, average and compare the recommendation results under each combination, and select the combination with the highest recommendation accuracy.
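The three steps above can be sketched as a grid search over (α, λ) with 10-fold cross-validation; `train_and_rmse` is a hypothetical stand-in for a full training-plus-evaluation run of the method:

```python
import itertools
import numpy as np

# 10-fold cross-validation over a grid of (alpha, lambda) combinations:
# train on nine folds, evaluate on the held-out fold, average the RMSE
# per combination, and keep the best one.
def cross_validate(data, alphas, lams, train_and_rmse, n_folds=10):
    folds = np.array_split(np.asarray(data, dtype=object), n_folds)
    best, best_rmse = None, float("inf")
    for alpha, lam in itertools.product(alphas, lams):
        rmses = []
        for t in range(n_folds):
            test = folds[t]
            train = [x for s in range(n_folds) if s != t for x in folds[s]]
            rmses.append(train_and_rmse(train, test, alpha, lam))
        avg = float(np.mean(rmses))
        if avg < best_rmse:
            best, best_rmse = (alpha, lam), avg
    return best, best_rmse
```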
Of course, the above description is not a limitation of the invention, and the invention is not limited to the above examples. Changes, modifications, additions, or replacements made by those skilled in the art within the essential scope of the invention shall also fall within the protection scope of the invention.

Claims (1)

  1. A bias tensor decomposition method based on automatic encoding of regression tree context features, characterized by comprising the following steps:

    Step 1. Input: b, U, V, C, λ, α;

    where b represents the bias information, U represents the user feature matrix, V represents the item feature matrix, C represents the context feature matrix, λ represents the regularization parameter, and α represents the learning rate;

    Step 2. Calculate μ, b_m, b_i and construct {(feature_1, target_1), …, (feature_n, target_n)};

    where μ represents the global average score, b_m represents the user bias, b_i represents the item bias, feature_n represents the context feature in training sample n, and target_n is the part of the user score that remains after removing the global average score, the user bias, and the item bias;

    Step 3. Train the regression tree T and construct the new context features;

    Step 4. Randomly initialize b_m, b_i, b_k, U_m, V_i, C_k;

    Step 5. When y_mik ∈ Y′, calculate the objective function

    min Σ_{y_mik∈Y′} [(y_mik − f_mik)² + λ·(b_m² + b_i² + b_k² + ‖U_m‖² + ‖V_i‖² + ‖C_k‖²)]

    where Y′ represents the non-empty part of the original score tensor Y, y_mik and f_mik represent the actual score and the predicted score of user m on item i under context k respectively, b_k represents the context bias, U_md represents the dth element of the D-dimensional latent semantic vector of user m, V_id represents the dth element of the D-dimensional latent semantic vector of item i, and C_kd represents the dth element of the D-dimensional latent semantic vector of context k;

    Step 6. Iterate each factor in the objective function according to the following formulas:

    b_m ← b_m + α·(y_mik − f_mik − λ·b_m);
    b_i ← b_i + α·(y_mik − f_mik − λ·b_i);
    b_k ← b_k + α·(y_mik − f_mik − λ·b_k);
    U_m ← U_m + α·(V_i ⊙ C_k·(y_mik − f_mik) − λ·U_m);
    V_i ← V_i + α·(U_m ⊙ C_k·(y_mik − f_mik) − λ·V_i);
    C_k ← C_k + α·(U_m ⊙ V_i·(y_mik − f_mik) − λ·C_k);

    where ⊙ denotes element-wise multiplication of vectors;

    Step 7. Optimize the objective function with the stochastic gradient descent method: traverse each score in the training set, update the parameters of the objective function in step 6, and then compute the root mean squared error to determine whether the training model has converged;

    if the difference between the root mean squared errors obtained in two successive optimizations is smaller than a set minimum value, the model is judged to have converged, and step 8 is executed;

    otherwise, if the difference between the root mean squared errors obtained in two successive optimizations is greater than or equal to the set minimum value, the model is judged not to have converged, and step 5 is executed again;

    Step 8. Output: b, U, V, C and the regression tree T;

    Step 9. End.
PCT/CN2020/082641 2019-05-20 2020-04-01 Bias tensor decomposition method based on automatic encoding of regression tree context features WO2020233245A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910416222.4 2019-05-20
CN201910416222.4A CN110209933A (zh) 2019-05-20 2019-05-20 Bias tensor decomposition method based on automatic encoding of regression tree context features

Publications (1)

Publication Number Publication Date
WO2020233245A1 true WO2020233245A1 (zh) 2020-11-26

Family

ID=67787737

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/082641 WO2020233245A1 (zh) 2020-04-01 2019-05-20 Bias tensor decomposition method based on automatic encoding of regression tree context features

Country Status (2)

Country Link
CN (1) CN110209933A (zh)
WO (1) WO2020233245A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326433A (zh) * 2021-03-26 2021-08-31 沈阳工业大学 Personalized recommendation method based on ensemble learning

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110209933A (zh) 2019-05-20 2019-09-06 山东科技大学 Bias tensor decomposition method based on automatic encoding of regression tree context features
CN113393303A (zh) 2021-06-30 2021-09-14 青岛海尔工业智能研究院有限公司 Item recommendation method, apparatus, device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982107A (zh) * 2012-11-08 2013-03-20 北京航空航天大学 Recommender system optimization method integrating user, item, and context attribute information
CN103136694A (zh) * 2013-03-20 2013-06-05 焦点科技股份有限公司 Collaborative filtering recommendation method based on search behavior awareness
CN106649657A (zh) * 2016-12-13 2017-05-10 重庆邮电大学 Tensor-decomposition-based context-aware recommender system and method for social networks
CN108521586A (zh) * 2018-03-20 2018-09-11 西北大学 IPTV program personalized recommendation method considering both temporal context and implicit feedback
CN110209933A (zh) * 2019-05-20 2019-09-06 山东科技大学 Bias tensor decomposition method based on automatic encoding of regression tree context features

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9047423B2 (en) * 2012-01-12 2015-06-02 International Business Machines Corporation Monte-Carlo planning using contextual information
US9846836B2 (en) * 2014-06-13 2017-12-19 Microsoft Technology Licensing, Llc Modeling interestingness with deep neural networks
CN105975496A (zh) * 2016-04-26 2016-09-28 清华大学 Context-aware music recommendation method and device
CN106383865B (zh) * 2016-09-05 2020-03-27 北京百度网讯科技有限公司 Artificial-intelligence-based recommendation data acquisition method and device



Also Published As

Publication number Publication date
CN110209933A (zh) 2019-09-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20809339

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20809339

Country of ref document: EP

Kind code of ref document: A1