CN117421661A - A group recommendation method based on counterfactual enhanced graph convolutional network - Google Patents
- Publication number: CN117421661A
- Application number: CN202311744970.8A
- Authority: CN (China)
- Prior art keywords: group; representation; user; level; counterfactual
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/241 — Pattern recognition: classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/217 — Pattern recognition: validation; performance evaluation; active pattern learning techniques
- G06N3/042 — Neural networks: knowledge-based neural networks; logical representations of neural networks
- G06Q50/01 — ICT specially adapted for specific business sectors: social networking
- Y02D10/00 — Climate change mitigation in ICT: energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
Description
Technical Field
The present invention relates to the technical field of group recommendation, and in particular to a group recommendation method based on a counterfactual-enhanced graph convolutional network.
Background Art
As powerful tools for addressing information overload, recommender systems are widely deployed in online information systems such as e-commerce platforms, social media sites, and news portals to help users choose among the many options they face in daily life. A recommender system collects a user's past preferences and generates suitable item recommendations. An effective recommendation scheme not only increases traffic and profit for service providers, but also helps users discover items of interest more easily.
With the popularity of social media, online group activities have become extremely common in today's social networks. However, traditional personalized recommendation algorithms are designed mainly around the preferences of individual users and struggle to meet the needs of groups. Within a group, the interests and preferences of every member may influence the final group decision. There is therefore an urgent need for a recommender system designed specifically for groups, that is, a group recommender system. Such a system accounts for the interactions and mutual influence among group members and can provide more accurate, personalized recommendations to the group.
In group recommendation, effectively aggregating the preferences of all members into a group preference is a crucial task. Traditional aggregation methods are usually based on predefined heuristic rules, including the fairness, least-misery, and maximum-satisfaction strategies. Although these data-independent static strategies can form group preferences to some extent, group decision-making is in fact a complex dynamic process that must account for the weight of each member. The development of neural attention mechanisms offers a flexible way to address this problem. More recently, sophisticated graph neural networks and novel hypercube structures, combined with techniques such as contrastive learning, have made notable progress in group recommendation and successfully addressed specific challenges and open problems.
However, current group recommendation methods are usually built on a statistical framework: they consider mainly the statistical relationships between groups and items while ignoring the underlying causal relationships, a limitation that makes the final recommendations less accurate. Counterfactual learning, a technique from causal inference, holds great promise for mining such causality in group recommendation, but existing methods perform poorly as data sparsity varies and adapt insufficiently to data of different sparsity levels.
Summary of the Invention
The present invention aims to solve at least one of the technical problems in the related art. To this end, the present invention provides a group recommendation method based on a counterfactual-enhanced graph convolutional network.
The present invention provides a group recommendation method based on a counterfactual-enhanced graph convolutional network, comprising:
S1: Preprocess the public group data to obtain a negative-example dataset, and construct a group-user hypergraph and a group-item bipartite graph;
S2: Apply hypergraph convolution to the group-user hypergraph to mine high-order information between groups and users, obtaining user-level group representations; perturb the group-user hypergraph and convolve the perturbed hypergraph to obtain counterfactual group representations;
S3: Apply graph convolution to the group-item bipartite graph and the user-level group representations to mine high-order information between groups and items, obtaining group-level group representations and group-level item representations; fuse the user-level and group-level group representations with a weighted residual gating mechanism to obtain the final group representations;
S4: Compute a counterfactual loss from the counterfactual group representations, the user-level group representations, and the initial item representations; compute a group loss from the final group representations and the group-level item representations; compute a user loss from the initial user representations and the initial item representations; train the hierarchical graph convolutional network over multiple rounds with the counterfactual, group, and user losses to obtain a group prediction model;
S5: Feed the list of candidate items into the prediction model to obtain prediction scores, and rank by these scores to obtain the recommended item list.
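Step S5 reduces to scoring candidates and ranking by score. A minimal illustrative sketch only; the dict-of-scores input and the function name are assumptions, not taken from the patent:

```python
def recommend_top_k(scores, k):
    """Rank candidate items by predicted score (descending) and return the
    top-k item ids. `scores` maps item id -> model prediction score."""
    return [item for item, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

# Example: item 2 scores highest, then item 3.
top = recommend_top_k({1: 0.2, 2: 0.9, 3: 0.5}, 2)
```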
According to the group recommendation method based on a counterfactual-enhanced graph convolutional network provided by the present invention, in step S1 the group-user hypergraph is constructed from the group-user relation data, and the group-item bipartite graph is constructed from the group-item interaction data.
According to the group recommendation method based on a counterfactual-enhanced graph convolutional network provided by the present invention, in step S1 the negative-example data include an initial representation set built from ID embedding lookup tables; the initial representation set comprises the initial group representations, initial user representations, and initial item representations.
According to the group recommendation method based on a counterfactual-enhanced graph convolutional network provided by the present invention, in step S2 the user-level group representation is expressed as:

$$e_g^{u} = \frac{1}{L} \sum_{l=1}^{L} a_g^{(l)}$$

where $e_g^{u}$ is the user-level group representation, $L$ is the total number of convolutional layers of the hypergraph convolutional network, $l$ is the layer index, and $a_g^{(l)}$ is the aggregated information of the node representations updated by the $l$-th convolutional layer.
According to the group recommendation method based on a counterfactual-enhanced graph convolutional network provided by the present invention, in step S2 the perturbation of the group-user hypergraph is computed as:

$$\tilde{H} = H \odot \mathbb{1}(M > \theta)$$

where $\tilde{H}$ is the incidence matrix of the perturbed group-user hypergraph, $H$ is the incidence matrix of the group-user hypergraph, $\mathbb{1}(\cdot)$ is the indicator function, $M$ is a trainable mask matrix, and $\theta$ is the threshold by which the indicator function decides each entry.
According to the group recommendation method based on a counterfactual-enhanced graph convolutional network provided by the present invention, the group-level group representation and the group-level item representation obtained in step S3 are expressed as:

$$e_g^{g} = \frac{1}{L} \sum_{l=1}^{L} e_g^{(l)}, \qquad e_v^{g} = \frac{1}{L} \sum_{l=1}^{L} e_v^{(l)}$$

where $e_g^{g}$ is the group-level group representation, $e_v^{g}$ is the group-level item representation, $e^{(l)}$ is the node representation obtained at the $l$-th convolutional layer, and each representation is the average of the node embeddings obtained across all convolutional layers.
According to the group recommendation method based on a counterfactual-enhanced graph convolutional network provided by the present invention, the final group representation in step S3 is expressed as:

$$e_g = \alpha\, e_g^{u} + (1 - \alpha)\, e_g^{g}, \qquad \alpha = \sigma\!\left(w_1^{\top} e_g^{u} + w_2^{\top} e_g^{g} + w_3\right)$$

where $e_g$ is the final group representation, $\alpha$ is the weight assigned to the user-level group representation, $\sigma$ is the sigmoid function, $w_1$ is the first gating coefficient, $w_2$ is the second gating coefficient, and $w_3$ is the third weight coefficient.
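For illustration only, the residual-gated fusion of the two group representations might be sketched as follows; the scalar gate, parameter shapes, and function name are assumptions rather than the patent's exact formulation:

```python
import math

def gated_fusion(g_user, g_group, w1, w2, b):
    """Residual gating sketch: a sigmoid gate alpha weighs the user-level
    representation against the group-level one (shapes are assumed:
    g_user, g_group, w1, w2 are equal-length vectors; b is a scalar)."""
    s = b + sum(a * x for a, x in zip(w1, g_user)) \
          + sum(a * x for a, x in zip(w2, g_group))
    alpha = 1.0 / (1.0 + math.exp(-s))   # weight of the user-level representation
    return [alpha * u + (1.0 - alpha) * g for u, g in zip(g_user, g_group)]
```

With zero gating parameters the gate is 0.5 and the fusion degenerates to a plain average of the two representations.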
According to the group recommendation method based on a counterfactual-enhanced graph convolutional network provided by the present invention, in step S4 the losses used for the multi-round training of the hierarchical graph convolutional network are:

$$\mathcal{L}_{cf} = \sum_{(g,v,v') \in \mathcal{O}} -\ln \sigma\!\left(\hat{y}_{gv}^{\,f} - \hat{y}_{gv}^{\,cf}\right)$$

$$\mathcal{L}_{group} = \sum_{(g,v,v') \in \mathcal{O}} -\ln \sigma\!\left(\hat{y}_{gv} - \hat{y}_{gv'}\right)$$

$$\mathcal{L}_{user} = \sum_{(u,v,v'') \in \mathcal{P}} -\ln \sigma\!\left(\hat{y}_{uv} - \hat{y}_{uv''}\right)$$

where $\mathcal{L}_{cf}$ is the counterfactual loss, $\mathcal{L}_{group}$ is the group loss, $\mathcal{L}_{user}$ is the user loss, $\mathcal{O}$ is the group-item training set and $(g,v,v')$ its data, $\mathcal{P}$ is the user-item training set and $(u,v,v'')$ its data, $v$ is the item index, $v'$ is a sampled negative example corresponding to an item the group interacted with, $v''$ is a sampled negative example corresponding to an item the user interacted with, $\hat{y}_{gv}^{\,f}$ is the factual prediction score, $\hat{y}_{gv}^{\,cf}$ is the counterfactual prediction score, $\hat{y}_{gv}$ is the prediction score from the final group representation and the group-level item representation, and $\hat{y}_{uv}$ is the prediction score from the initial user representation and the initial item representation.
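All three losses share the same pairwise BPR shape: minimize $-\ln\sigma(\text{preferred} - \text{other})$ so the preferred score rises above the other. An illustrative sketch (function names are assumptions):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bpr_loss(pairs):
    """Pairwise BPR-style loss. Each pair is (preferred_score, other_score):
    (factual, counterfactual) for the counterfactual loss, or
    (positive_item, sampled_negative) for the group and user losses."""
    return sum(-math.log(sigmoid(p - n)) for p, n in pairs)
```

A tied pair contributes ln 2; the loss shrinks as the preferred score pulls ahead.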
The group recommendation method based on a counterfactual-enhanced graph convolutional network provided by the present invention targets two shortcomings of current group recommendation models, namely insufficient causality mining and insufficient adaptation to data of different sparsity levels, in order to mine the causal relationships between group members and item recommendations, explore the rationale behind group recommendations, and improve the model's adaptability to data of different sparsity levels.
The present invention proposes CF-hGCN, a group recommendation method based on a counterfactual-enhanced hierarchical graph convolutional network, mainly applied to group recommendation tasks. By integrating graph counterfactual learning with a hypergraph convolutional network in the counterfactual hypergraph learning module, CF-hGCN can deeply mine the causal relationships between group members and item recommendations. This enables CF-hGCN to accurately identify the key members that influence a group's recommendations and provides an in-depth explanation of the rationale behind group recommendation.
CF-hGCN adopts an innovative hierarchical network structure in which the representations of groups, users, and items are computed successively by the counterfactual hypergraph learning module and the bipartite graph learning module, a design that performs well under different levels of data sparsity. The hypergraph learning module effectively captures member preferences on highly sparse data, while the bipartite graph learning module better captures group consensus on data of low to medium sparsity. A carefully designed residual gating mechanism then adaptively balances the group representations from these two levels, allowing the model to adapt to data of different sparsity levels. The model thus explicitly and more accurately characterizes the features of groups, users, and items, markedly improves performance on group recommendation tasks, and significantly enhances adaptability to data of different sparsity levels.
Additional aspects and advantages of the invention will be set forth in part in the description that follows, and in part will be obvious from the description or may be learned by practice of the invention.
Brief Description of the Drawings
To illustrate the technical solutions of the present invention or the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Figure 1 is a flow chart of the group recommendation method based on a counterfactual-enhanced graph convolutional network provided by the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some rather than all of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention. The following embodiments illustrate the present invention but are not intended to limit its scope.
In the description of the embodiments of the present invention, it should be noted that orientations or positional relationships indicated by terms such as "center", "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the referenced devices or elements must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be construed as limiting the embodiments of the present invention. Moreover, the terms "first", "second", and "third" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance.
In the description of the embodiments of the present invention, it should be noted that, unless otherwise explicitly specified and limited, the terms "connected" and "coupled" should be understood broadly: for example, a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium. A person of ordinary skill in the art can understand the specific meanings of these terms in the embodiments of the present invention according to the specific situation.
In the embodiments of the present invention, unless otherwise explicitly specified and limited, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact through an intermediate medium. Moreover, a first feature being "over", "above", or "on top of" a second feature may mean that the first feature is directly or obliquely above the second feature, or merely that the first feature is at a higher level than the second feature. A first feature being "under", "below", or "beneath" a second feature may mean that the first feature is directly or obliquely below the second feature, or merely that the first feature is at a lower level than the second feature.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example. Furthermore, the described specific features, structures, materials, or characteristics may be combined in a suitable manner in any one or more embodiments or examples. In addition, a person skilled in the art may combine the different embodiments or examples described in this specification, and the features thereof, provided they do not contradict one another.
An embodiment of the present invention is described below with reference to Figure 1.
The present invention provides a group recommendation method based on a counterfactual-enhanced graph convolutional network, comprising:
S1: Preprocess the public group data to obtain negative-example data, and construct the group-user hypergraph and the group-item bipartite graph;
In step S1, the group-user hypergraph is constructed from the group-user relation data, and the group-item bipartite graph is constructed from the group-item interaction data.
In step S1, the negative-example data include an initial representation set built from ID embedding lookup tables; the initial representation set comprises the initial group representations, initial user representations, and initial item representations.
Further, this stage first collects the group recommendation dataset, which contains user interaction history, group interaction history, and group-user relation data. The goal of this stage is to construct the group-user hypergraph from the group-user relation data and the group-item bipartite graph from the group-item interaction data; to build the embedding lookup tables for users, groups, and items; and to randomly sample, from the dataset, negative examples of items not interacted with by each user (or group) to build the training dataset.
Specifically, the training data are constructed first. Since the loss function uses pairwise BPR loss, for each user-item (or group-item) interaction pair, multiple negative examples must be randomly sampled from the items never interacted with, to form the training data required by the model of the present invention.
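The negative-sampling step for BPR training can be sketched as follows; the uniform-sampling choice and the function signature are illustrative assumptions, not from the patent:

```python
import random

def sample_negatives(interacted, num_items, k, rng=random.Random(0)):
    """For one user or group, sample k distinct items it never interacted with
    (item ids range over 0..num_items-1). These serve as BPR negatives."""
    negatives = set()
    while len(negatives) < k:
        candidate = rng.randrange(num_items)
        if candidate not in interacted:
            negatives.add(candidate)
    return sorted(negatives)
```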
Next, the group-user hypergraph and the group-item bipartite graph are constructed. To efficiently mine high-order information among groups, users, and items, the incidence matrix of the group-user hypergraph is built from the group-user relations in the dataset, where a value of 1 indicates that the group contains the user and 0 that it does not. The adjacency matrix of the group-item bipartite graph is built from the group-item interactions in the training data, where a value of 1 indicates that the group interacted with the item and 0 that it did not.
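The two binary matrices described above can be built, for illustration, as follows; the dict-of-lists dataset layout and function names are assumptions:

```python
def build_incidence(group_members, num_users, num_groups):
    """Incidence matrix of the group-user hypergraph: H[u][g] = 1 iff user u
    belongs to group g (each group is treated as one hyperedge)."""
    H = [[0] * num_groups for _ in range(num_users)]
    for g, members in group_members.items():
        for u in members:
            H[u][g] = 1
    return H

def build_adjacency(group_items, num_groups, num_items):
    """Adjacency matrix of the group-item bipartite graph: A[g][i] = 1 iff
    group g interacted with item i."""
    A = [[0] * num_items for _ in range(num_groups)]
    for g, items in group_items.items():
        for i in items:
            A[g][i] = 1
    return A
```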
Finally, the embedding lookup tables are constructed: ID embedding lookup tables are built according to the numbers of users, groups, and items and serve as their respective initial representations.
S2: Apply hypergraph convolution to the group-user hypergraph to mine the high-order information between groups and users and among multiple users, obtaining user-level group representations; perturb the group-user hypergraph and convolve the perturbed hypergraph to obtain counterfactual group representations;
Further, the user-level group representation is the member-level group representation. The goal of this stage is to construct a member-preference counterfactual hypergraph learning module that uses hypergraph convolution to extract high-order information between groups and users and obtain member-level group representations. By combining a hypergraph neural network with counterfactual learning, the module aims to mine the causal relationships between group members and item recommendations. Minimizing the counterfactual loss makes the factual prediction score, computed from the factual group representation, as much larger than the counterfactual prediction score, computed from the counterfactual group representation, as possible.
In step S2, the user-level group representation is expressed as:

$$e_g^{u} = \frac{1}{L} \sum_{l=1}^{L} a_g^{(l)}$$

where $e_g^{u}$ is the user-level group representation, $L$ is the total number of convolutional layers of the hypergraph convolutional network, $l$ is the layer index, and $a_g^{(l)}$ is the aggregated information of the node representations updated by the $l$-th convolutional layer.
In step S2, the perturbation of the group-user hypergraph is computed as:

$$\tilde{H} = H \odot \mathbb{1}(M > \theta)$$

where $\tilde{H}$ is the incidence matrix of the perturbed group-user hypergraph, $H$ is the incidence matrix of the group-user hypergraph, $\mathbb{1}(\cdot)$ is the indicator function, $M$ is a trainable mask matrix, and $\theta$ is the threshold by which the indicator function decides each entry.
Specifically, the member-level group representation is obtained first: the constructed group-user hypergraph, together with the initial representations of users and groups, is passed through hypergraph convolution to obtain the member-level group representation. To enhance expressiveness, the embeddings obtained at each layer are averaged to produce the user representation at this level; likewise, at each convolutional layer the aggregated information of the nodes updated by the previous layer during message passing is averaged to produce the group representation at this level.
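A minimal sketch of the hypergraph convolution and layer averaging described above. Degree normalization and learnable weights are omitted for brevity, so this is an illustrative simplification, not the patent's exact operator:

```python
def hypergraph_conv(H, X):
    """One simplified, unnormalized hypergraph-convolution step: user features
    are first aggregated into each hyperedge (group), then scattered back to
    that group's member users. H is the user-by-group incidence matrix."""
    num_users, num_groups = len(H), len(H[0])
    dim = len(X[0])
    # node -> hyperedge: each group sums its members' features
    edge = [[sum(H[u][g] * X[u][d] for u in range(num_users)) for d in range(dim)]
            for g in range(num_groups)]
    # hyperedge -> node: each user sums the features of its groups
    return [[sum(H[u][g] * edge[g][d] for g in range(num_groups)) for d in range(dim)]
            for u in range(num_users)]

def average_layers(layer_outputs):
    """Average the representation a node receives from each convolution layer."""
    L, dim = len(layer_outputs), len(layer_outputs[0])
    return [sum(layer[d] for layer in layer_outputs) / L for d in range(dim)]
```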
Second, the counterfactual group representation is obtained: the original hypergraph incidence matrix is perturbed through a trainable mask matrix, and the same hypergraph convolution operation as above is applied to the resulting perturbed incidence matrix to obtain the group representation in the counterfactual scenario.
Finally, the counterfactual loss is calculated and optimized: the factual and counterfactual prediction scores are computed from the different group representations in the factual and counterfactual scenarios together with the initial item representations through a three-layer multi-layer perceptron, and the counterfactual loss L_cf is then calculated.
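A sketch of the three-layer MLP scorer and a pairwise counterfactual loss that pushes the factual score above the counterfactual one (the −ln σ(·) form and the illustrative weights `Ws` are assumptions; the patent's trained parameters are not reproduced):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_score(g, i, Ws):
    # Three-layer MLP scorer over the concatenated group and item representations.
    h = np.concatenate([g, i], axis=-1)
    for W in Ws[:-1]:
        h = np.maximum(h @ W, 0.0)         # ReLU hidden layers
    return (h @ Ws[-1]).squeeze(-1)        # scalar score per (group, item) pair

def counterfactual_loss(s_fact, s_cf):
    # Pairwise loss: larger when the factual score does not exceed the counterfactual one.
    return float(np.sum(-np.log(sigmoid(s_fact - s_cf) + 1e-12)))
```

Minimizing this loss drives the factual prediction score above the counterfactual prediction score, as described.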
S3: Convolve the group-item bipartite graph and the user-level group representation to mine high-order information between groups and items, obtaining a group-level group representation and a group-level item representation; weightedly fuse the user-level group representation and the group-level group representation through a residual gating mechanism to obtain the final group representation.
Further, this stage mainly relies on a group consensus graph learning module and a residual fusion module. The goal of this stage is to construct these two modules, using graph convolution operations to extract high-order information between groups and items so as to obtain the group-level group representation, and leveraging the supervisory signals between groups and items to enhance model performance as more data becomes available. The residual fusion module then fuses the group representations at different levels into the final group representation. By minimizing the group loss, the group's prediction score for positive samples is made as large as possible relative to its prediction score for negative samples.
In step S3, the group-level group representation and the group-level item representation obtained are given by:
e_g^{group} = (1/(L+1)) · Σ_{l=0}^{L} e_g^{(l)},  e_i^{group} = (1/(L+1)) · Σ_{l=0}^{L} e_i^{(l)}

where e_g^{group} is the group-level group representation, e_i^{group} is the group-level item representation, e_g^{(l)} and e_i^{(l)} are the node representations obtained at the l-th convolutional layer, and each final representation is the average of the node embedding representations obtained over all convolutional layers.
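A LightGCN-style sketch of the propagation over the group-item bipartite graph with layer averaging (the degree normalization and function name `bipartite_layers` are assumptions, not the patent's exact rule):

```python
import numpy as np

def bipartite_layers(A, Eg, Ei, n_layers=2):
    # A : (n_groups, n_items) group-item interaction matrix
    # Eg: (n_groups, d) group embeddings (e.g. the user-level group representation)
    # Ei: (n_items, d) initial item embeddings
    dg = np.clip(A.sum(axis=1), 1, None)
    di = np.clip(A.sum(axis=0), 1, None)
    G_layers, I_layers = [Eg], [Ei]
    for _ in range(n_layers):
        # simultaneous update: groups aggregate items, items aggregate groups
        Eg, Ei = (A @ Ei) / dg[:, None], (A.T @ Eg) / di[:, None]
        G_layers.append(Eg)
        I_layers.append(Ei)
    # group-level representations: average of node embeddings over all layers
    return np.mean(G_layers, axis=0), np.mean(I_layers, axis=0)
```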
The expression for the final group representation in step S3 is:
e_g = α ⊙ e_g^{user} + (1 − α) ⊙ e_g^{group},  α = σ(W_1 e_g^{user} + W_2 e_g^{group} + W_3)

where e_g is the final group representation, α is the weight assigned to the user-level group representation, σ is the sigmoid function, W_1 is the first gating coefficient, W_2 is the second gating coefficient, and W_3 is the third weight coefficient.
Specifically, this includes: first, obtaining the group-level group representation by applying graph convolution to the constructed group-item bipartite graph together with the previously computed group representation and the initial item representations, then averaging the embeddings obtained at each layer to obtain the final embedding.
Second, the weights of the computed member-level group representation and the computed group-level group representation are determined through the gating mechanism: the member-level group representation is multiplied by its weight, the group-level group representation is multiplied by its corresponding weight, and the two results are added to obtain the final group representation.
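A minimal sketch of this gated fusion (the exact parameterization of the gate is an assumption; here it is an elementwise convex combination of the two representations):

```python
import numpy as np

def residual_gate(e_user, e_group, W1, W2, W3):
    # alpha: weight of the user-level (member-level) group representation
    alpha = 1.0 / (1.0 + np.exp(-(e_user @ W1 + e_group @ W2 + W3)))
    # convex combination: final representation lies between the two inputs elementwise
    return alpha * e_user + (1.0 - alpha) * e_group
```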
Finally, the final group representation and the group-level item representation are input into a multi-layer perceptron to compute the prediction score, and the group loss L_group is then calculated.
S4: Obtain the counterfactual loss from the counterfactual group representation, the user-level group representation, and the initial item representations; obtain the group loss from the final group representation and the group-level item representation; obtain the user loss from the initial user representations and initial item representations; and train the hierarchical graph convolutional network over multiple rounds according to the counterfactual loss, the group loss, and the user loss to obtain the group prediction model.
In step S4, the losses used for the multi-round training of the hierarchical graph convolutional network include:
L_cf = Σ_{(g,i)∈O_g} −ln σ(ŷ_f(g,i) − ŷ_cf(g,i))
L_group = Σ_{(g,i)∈O_g} −ln σ(ŷ(g,i) − ŷ(g,j))
L_user = Σ_{(u,i)∈O_u} −ln σ(ŷ(u,i) − ŷ(u,j′))

where L_cf is the counterfactual loss, L_group is the group loss, L_user is the user loss, O_g is the group-item training set, (g,i) denotes a data pair in the group-item training set, O_u is the user-item training set, (u,i) denotes a data pair in the user-item training set, i is the item index, j is a sampled negative example corresponding to an item the group has interacted with, j′ is a sampled negative example corresponding to an item the user has interacted with, ŷ_f is the factual prediction score, ŷ_cf is the counterfactual prediction score, ŷ(g,·) is the prediction score computed from the final group representation and the group-level item representation, and ŷ(u,·) is the prediction score computed from the initial user and item representations.
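The joint training objective can be sketched as a weighted sum of the three pairwise losses (the weighting hyperparameters `lam_cf` and `lam_u` are illustrative assumptions; the patent does not state how the losses are weighted):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bpr(pos, neg):
    # -ln sigma(pos - neg), summed over sampled score pairs
    return float(np.sum(-np.log(sigmoid(pos - neg) + 1e-12)))

def joint_loss(s_f, s_cf, sg_pos, sg_neg, su_pos, su_neg, lam_cf=1.0, lam_u=1.0):
    # Total objective: group loss plus weighted user and counterfactual losses.
    return bpr(sg_pos, sg_neg) + lam_u * bpr(su_pos, su_neg) + lam_cf * bpr(s_f, s_cf)
```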
S5: Input the group data to be recommended into the prediction model to obtain prediction scores, and obtain the recommended-item list by sorting on the prediction scores.
Further, after multiple rounds of joint training, the trained model is obtained. Any group and item representations requiring prediction are taken as input and the corresponding prediction scores are output; the group's prediction scores over all candidate items are sorted from highest to lowest, and the top K items are taken to obtain the final recommended-item list.
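The ranking step above amounts to a descending sort over the candidate scores followed by a top-K cut, which can be sketched as:

```python
import numpy as np

def recommend_top_k(scores, k):
    # Sort candidate items by predicted score, highest first, and keep the top k indices.
    return np.argsort(-scores, kind="stable")[:k].tolist()
```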
Further, the method also includes obtaining the initial representations of users and items. By minimizing the user loss, the user's prediction score for positive samples is made as large as possible relative to the prediction score for negative samples. Specifically, the initial user and item representations are input into a multi-layer perceptron to compute prediction scores, and the user loss is then calculated and optimized.
In some embodiments, experiments were conducted using two widely used real-world group recommendation datasets and one semi-synthetic group recommendation dataset; the basic statistical characteristics of the datasets used are shown in Table 1.
Table 1 Basic statistical characteristics of the datasets used in the embodiments of the present invention
The experiments cover two tasks: recommending items to a group of users and recommending items to an individual user, referred to as the group recommendation task and the user recommendation task, respectively. In both experiments, two commonly used evaluation metrics were adopted: HR (Hit Ratio) and NDCG (Normalized Discounted Cumulative Gain). The results of the method provided by the present invention and other methods in the field of group recommendation on each task of each dataset are shown in Table 2 and Table 3.
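For reference, the two metrics can be sketched for the common leave-one-out protocol with a single held-out target item per group or user (this single-relevant-item form is an assumption about the evaluation setup):

```python
import numpy as np

def hr_at_k(ranked, target, k):
    # HR@k: 1 if the held-out item appears in the top-k recommendation list, else 0.
    return 1.0 if target in ranked[:k] else 0.0

def ndcg_at_k(ranked, target, k):
    # NDCG@k with one relevant item: 1 / log2(rank + 2) at the hit position, else 0.
    top = list(ranked[:k])
    return 1.0 / np.log2(top.index(target) + 2) if target in top else 0.0
```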
Table 2 Comparison of the first set of experimental results between the method of the present invention and other methods in the field of group recommendation
Table 3 Comparison of the second set of experimental results between the method of the present invention and other methods in the field of group recommendation
The experimental results in Table 2 and Table 3 show that the proposed method, CF-hGCN, outperforms previous methods, achieving performance gains of varying degrees with a maximum improvement of 12.7%. These comparison results fully demonstrate that the proposed method achieves excellent results on both the group recommendation and user recommendation tasks.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311744970.8A CN117421661B (en) | 2023-12-19 | 2023-12-19 | A group recommendation method based on counterfactual enhanced graph convolutional network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117421661A true CN117421661A (en) | 2024-01-19 |
CN117421661B CN117421661B (en) | 2024-02-13 |
Family
ID=89525184
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311744970.8A Active CN117421661B (en) | 2023-12-19 | 2023-12-19 | A group recommendation method based on counterfactual enhanced graph convolutional network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117421661B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114090890A (en) * | 2021-11-23 | 2022-02-25 | 电子科技大学 | Counterfactual project recommendation method based on graph convolution network |
US20220083853A1 (en) * | 2020-09-15 | 2022-03-17 | Microsoft Technology Licensing, Llc | Recommending edges via importance aware machine learned model |
US20220253721A1 (en) * | 2021-01-30 | 2022-08-11 | Walmart Apollo, Llc | Generating recommendations using adversarial counterfactual learning and evaluation |
CN114936890A (en) * | 2022-03-31 | 2022-08-23 | 合肥工业大学 | Counter-fact fairness recommendation method based on inverse tendency weighting method |
CN116506302A (en) * | 2023-04-27 | 2023-07-28 | 河南科技大学 | A Network Alignment Method Based on Counterfactual Inference |
CN116888602A (en) * | 2020-12-17 | 2023-10-13 | 乌姆奈有限公司 | Interpretable transducer |
CN117112905A (en) * | 2023-09-01 | 2023-11-24 | 华中科技大学 | Sensitive attribute filtering fairness recommendation method and device based on bilateral adversarial learning |
Non-Patent Citations (2)
Title |
---|
YANG Mengyue; HE Hongbo; WANG Runqiang: "Personalized Article Recommendation Based on Counterfactual Learning and Confounder Modeling", Computer Systems & Applications, no. 10, 13 October 2020 (2020-10-13), pages 53-60 *
GUO Wenya; ZHANG Ying; et al.: "A Relation Aggregation Network for Referring Expression Comprehension", Journal of Computer Research and Development, vol. 60, no. 11, 7 March 2023 (2023-03-07), page 2611 *
Also Published As
Publication number | Publication date |
---|---|
CN117421661B (en) | 2024-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112905900B (en) | Collaborative filtering recommendation method based on graph convolution attention mechanism | |
CN112989064B (en) | A Recommendation Method for Aggregating Knowledge Graph Neural Networks and Adaptive Attention | |
CN112861967B (en) | Social network abnormal user detection method and device based on heterogeneous graph neural network | |
CN111127142B (en) | An item recommendation method based on generalized neural attention | |
CN110321494A (en) | Socialization recommended method based on matrix decomposition Yu internet startup disk conjunctive model | |
CN113158071A (en) | Knowledge social contact recommendation method, system and equipment based on graph neural network | |
CN115114526B (en) | A weighted graph convolutional network rating prediction recommendation method with multi-behavior enhanced information | |
CN114510646A (en) | Neural network collaborative filtering recommendation method based on federal learning | |
CN116501956A (en) | Knowledge perception multi-domain recommendation method and system based on hierarchical graph comparison learning | |
CN115270001A (en) | Privacy-preserving recommendation method and system based on cloud collaborative learning | |
CN113590976A (en) | Recommendation method of space self-adaptive graph convolution network | |
CN115470406A (en) | Graph neural network session recommendation method based on dual-channel information fusion | |
CN116340646A (en) | Recommendation method for optimizing multi-element user representation based on hypergraph motif | |
CN115757897A (en) | Intelligent culture resource recommendation method based on knowledge graph convolution network | |
CN117056597A (en) | Noise enhancement-based comparison learning graph recommendation method | |
CN117194771A (en) | Dynamic knowledge graph service recommendation method for graph model characterization learning | |
CN111814066B (en) | Dynamic social user alignment method and system based on heuristic algorithm | |
CN116542742A (en) | Heterogeneous dynamic social recommendation method based on multiple relation types | |
CN113868537B (en) | A recommendation method based on multi-action conversation graph fusion | |
CN115481215A (en) | Partner prediction method and prediction system based on temporal partner knowledge graph | |
CN117421661A (en) | A group recommendation method based on counterfactual enhanced graph convolutional network | |
CN116541613A (en) | Item recommendation method of double-message propagation mechanism based on scoring weighting | |
CN117422134A (en) | Knowledge graph recommendation method based on graph convolution neural network | |
CN116805020A (en) | Interest point recommendation method based on graph neural network and context information awareness | |
CN116955810A (en) | An optimization method for knowledge collaborative recommendation algorithm based on graph convolution network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||