CN116894122B - Cross-view contrastive learning group recommendation method based on hypergraph convolutional network
- Publication number
- CN116894122B CN116894122B CN202310823337.1A CN202310823337A CN116894122B CN 116894122 B CN116894122 B CN 116894122B CN 202310823337 A CN202310823337 A CN 202310823337A CN 116894122 B CN116894122 B CN 116894122B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
Description
Technical Field
The present invention relates to the technical field of group recommendation via cross-view contrastive learning with hypergraph convolutional networks for preference prediction, and specifically relates to a cross-view contrastive learning group recommendation method based on a hypergraph convolutional network.
Background
With the development of the Internet and the popularity of online community activities, people with similar backgrounds (such as hobbies, occupations, or ages) form fixed or temporary groups to participate in different activities according to their needs. For example, users may be divided into interest groups, such as game groups or painting groups, in order to access various activity resources. People also often get together for group activities such as ad-hoc travel groups, team dinners, or watching movies. The members may be familiar with each other, for instance living together as a family, or they may be strangers who meet by chance, such as several travelers joining the same tour group. In these scenarios, one or several suitable items must be recommended to the group to meet the group's needs. However, each group contains many users, and preferences differ across individual members. The ultimate goal of group recommendation is therefore to aggregate the differing preferences of group members and recommend suitable, satisfactory items to the group. Group recommendation not only saves time in group decision-making but also reduces unnecessary conflict among group members.
Most existing methods use heuristics or attention mechanisms to aggregate the personal preferences of group members and thereby infer the group's preference. However, these methods only model the user preferences of a single group, ignoring the complex high-order interactions within and across groups. Second, a group's final decision does not necessarily follow from its members' individual preferences, and existing methods are insufficient to model such cross-group preferences. In addition, because group-item interactions are sparse, group recommendation suffers from data sparsity. Left unaddressed, these problems reduce the accuracy of the recommendation results.
Summary of the Invention
The purpose of the present invention is to overcome the deficiencies of the prior art by proposing a cross-view contrastive learning group recommendation method based on a hypergraph convolutional network. The method recommends to a group the items with the highest predicted scores, meeting the real-world need to recommend suitable and satisfactory items to groups.
The present invention is realized through the following technical solution: a cross-view contrastive learning group recommendation method based on a hypergraph convolutional network, comprising the following steps:
Step 1. Obtain group interaction datasets from CAMRa2011 and the Mafengwo platform; the datasets contain user-item and group-item interaction histories as well as user-group membership relations.
Step 2. Let the user set in the training data be $U=\{u_1,u_2,\dots,u_h,\dots,u_M\}$, $h\in\{1,\dots,M\}$, where $u_h$ is the $h$-th user and $M$ is the number of users; the item set be $I=\{i_1,i_2,\dots,i_j,\dots,i_N\}$, $j\in\{1,\dots,N\}$, where $i_j$ is the $j$-th item and $N$ is the number of items; and the group set be $G=\{g_1,g_2,\dots,g_t,\dots,g_k\}$, $t\in\{1,\dots,k\}$, where $g_t$ is the $t$-th group and $k$ is the number of groups. The $t$-th group $g_t\in G$ consists of a set of members $G(t)=\{u_1,u_2,\dots,u_h,\dots,u_p\}$, where $u_h\in U$, $p$ is the number of members in group $g_t$, and $G(t)$ is the member set of group $g_t$.
Step 3. To capture complex, high-order group preferences, construct a hypergraph rich in edge information, extending the ordinary graph structure with hyperedges, where a hyperedge may connect any number of nodes. The hypergraph is denoted $G_m=(V_m,\varepsilon_m)$, where $V_m=U\cup I$ is a node set of $N$ unique vertices, each vertex representing a group member or an item the group has interacted with, and $\varepsilon_m$ is an edge set of $M$ hyperedges. Each hyperedge represents one group and is composed of the group's members and the items the group interacted with; formally, group $g_t$ is represented by $\varepsilon_t=\{u_1,u_2,\dots,u_h,\dots,u_p,i_1,i_2,\dots,i_j,\dots,i_q\}$, where $u_h\in U$, $i_j\in I$, and $\varepsilon_t\in\varepsilon_m$. The connectivity of the hypergraph is represented by an incidence matrix $H\in\mathbb{R}^{N\times M}$, with $H_{ve}=1$ if vertex $v$ belongs to hyperedge $e$ and $H_{ve}=0$ otherwise. Diagonal matrices $D$ and $B$ hold the vertex and hyperedge degrees, respectively, with $D_{vv}=\sum_{e}W_{ee}H_{ve}$ and $B_{ee}=\sum_{v}H_{ve}$. Each hyperedge $e\in\varepsilon$ contains two or more vertices and is assigned a positive weight $W_{ee}$; all weights form a diagonal matrix $W\in\mathbb{R}^{M\times M}$.
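As a minimal sketch of this construction (toy data and shared node indexing are illustrative assumptions, not part of the patent), the incidence matrix H and the degree matrices D and B can be built as:

```python
import numpy as np

# Toy data: 3 users (nodes 0-2) and 2 items (nodes 3-4) share one index space;
# each hyperedge lists one group's members plus the items that group interacted with.
hyperedges = [
    [0, 1, 3],     # group 0: users 0, 1 and item 3
    [1, 2, 3, 4],  # group 1: users 1, 2 and items 3, 4
]
n_nodes, n_edges = 5, len(hyperedges)

# Incidence matrix H: H[v, e] = 1 iff node v belongs to hyperedge e.
H = np.zeros((n_nodes, n_edges))
for e, members in enumerate(hyperedges):
    H[members, e] = 1.0

W = np.eye(n_edges)               # hyperedge weights, initialised to 1
D = np.diag(H @ np.diag(W))       # vertex degrees D_vv = sum_e W_ee * H_ve
B = np.diag(H.sum(axis=0))        # hyperedge degrees B_ee = sum_v H_ve

print(np.diag(D))  # per-node degree
print(np.diag(B))  # per-hyperedge size
```

Node 1 and item 3 each appear in both hyperedges, so their degree is 2, while the hyperedge degrees are simply the group sizes (members plus interacted items).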
Step 4. To capture and propagate group-level preferences from similarly connected groups with a graph convolutional network over the hypergraph's overlap graph, construct the overlap graph $G_g=(V_g,\varepsilon_g)$, where $V_g=\{e : e\in\varepsilon\}$ and $\varepsilon_g=\{(e_p,e_q) : e_p,e_q\in\varepsilon,\ |e_p\cap e_q|\ge 1\}$. Each edge of the overlap graph is assigned a weight $W_{p,q}=|e_p\cap e_q|/|e_p\cup e_q|$.
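The overlap-graph weights are Jaccard coefficients between hyperedges. A small sketch (hypothetical hyperedges, numpy only):

```python
import numpy as np

# Hypothetical hyperedges, each the node set of one group
e = [
    {0, 1, 3},
    {1, 2, 3, 4},
    {5, 6},
]
k = len(e)
W_overlap = np.zeros((k, k))
for p in range(k):
    for q in range(p + 1, k):
        inter = len(e[p] & e[q])
        if inter >= 1:                    # connect groups only if they overlap
            w = inter / len(e[p] | e[q])  # Jaccard weight |∩| / |∪|
            W_overlap[p, q] = W_overlap[q, p] = w

print(W_overlap[0, 1])  # groups 0 and 1 share 2 of 5 distinct nodes -> 0.4
print(W_overlap[0, 2])  # disjoint groups stay unconnected -> 0.0
```

Groups that share more members or items get larger weights and thus exchange more group-level preference signal during propagation.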
Step 5. Use the group-item bipartite graph to construct $G_I=(V_I,\varepsilon_I)$, where $V_I=G\cup I$ is the node set and $\varepsilon_I=\{(g_t,i_j)\mid g_t\in G,\ i_j\in I,\ R(t,j)=1\}$, with $R$ the group-item interaction matrix. The adjacency matrix is built from $R$ in the standard bipartite block form, $A_I=\begin{pmatrix}0 & R\\ R^{\top} & 0\end{pmatrix}$.
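A sketch of this bipartite adjacency (the block layout is the standard construction, assumed here since the patent's exact matrix formula is not reproduced in the text):

```python
import numpy as np

k, n = 2, 3                  # number of groups, number of items
R = np.array([[1, 0, 1],     # group-item interaction matrix, R[t, j] = 1 if
              [0, 1, 1]],    # group t interacted with item j
             dtype=float)

# Stack groups then items into one node set and place R off-diagonally.
A = np.block([[np.zeros((k, k)), R],
              [R.T, np.zeros((n, n))]])

print(A.shape)               # (k + n, k + n)
```

Because groups connect only to items (and vice versa), the diagonal blocks are zero and the matrix is symmetric, as expected for an undirected bipartite graph.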
Step 6. Obtain the member-level group preference $g_t^{M}$ by aggregating the preferences of the group's members via the hypergraph; obtain the group-level preference $g_t^{G}$ by capturing and propagating preferences from similar groups via the overlap graph; and obtain the item-level preference $g_t^{I}$ from the group's interaction history via the group-item bipartite graph. Three different gates automatically weigh the contribution of each view, and the final group representation is computed as $g_t=\alpha g_t^{M}+\beta g_t^{G}+\gamma g_t^{I}$, where the learned weights are $\alpha=\sigma(W_M^{\top}g_t^{M})$, $\beta=\sigma(W_G^{\top}g_t^{G})$, and $\gamma=\sigma(W_I^{\top}g_t^{I})$; here $W_M$, $W_I$, $W_G\in\mathbb{R}^d$ are three different trainable weight vectors and $\sigma$ is the activation function.
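The gated fusion can be sketched as follows; the scalar-gate form (sigmoid of a dot product per view) is an assumption, since the patent text describes only the weight vectors and activation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
g_m, g_g, g_i = rng.normal(size=(3, d))   # member-, group-, item-level views of one group
W_m, W_g, W_i = rng.normal(size=(3, d))   # trainable gate weight vectors (one per view)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
alpha = sigmoid(W_m @ g_m)    # scalar gate for the member-level view
beta  = sigmoid(W_g @ g_g)    # scalar gate for the group-level view
gamma = sigmoid(W_i @ g_i)    # scalar gate for the item-level view

g_final = alpha * g_m + beta * g_g + gamma * g_i
print(g_final.shape)          # same dimension d as each view
```

Each gate is a learned scalar in (0, 1), so the model can softly emphasise whichever view carries the most signal for a given group.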
Step 7. Compute the predicted score $\hat{y}(t,j)$ of group $g_t$ for item $i_j$ and sort the scores in descending order to obtain the recommendation list for the group. Randomly draw $(g_t,i_j)$ from $R$, sample a negative item for each group $g_t$, and compute the group prediction loss with a pairwise loss: $L_{group}=-\sum_{(t,j,j')\in O_G}\ln\sigma\big(\hat{y}(t,j)-\hat{y}(t,j')\big)$, where $O_G=\{(t,j,j')\mid (t,j)\in O_{G+},\ (t,j')\in O_{G-}\}$ is the group-item training set, $O_{G+}$ is the set of observed interactions, and $O_{G-}$ is the set of unobserved interactions.
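A minimal sketch of this pairwise (BPR-style) loss for one training triple; scoring by inner product is an assumed choice, since the patent's exact scoring formula is not reproduced in the text:

```python
import numpy as np

def bpr_loss(g, i_pos, i_neg):
    """Pairwise group-prediction loss -ln sigma(y_pos - y_neg) for one triple,
    with scores taken as inner products between group and item embeddings."""
    y_pos = g @ i_pos
    y_neg = g @ i_neg
    return -np.log(1.0 / (1.0 + np.exp(-(y_pos - y_neg))))

g = np.array([0.5, 1.0])          # group embedding
i_good = np.array([1.0, 1.0])     # observed (positive) item
i_bad = np.array([-1.0, 0.0])     # sampled negative item

loss_ordered = bpr_loss(g, i_good, i_bad)
loss_swapped = bpr_loss(g, i_bad, i_good)
print(loss_ordered < loss_swapped)  # ranking the observed item higher costs less
```

The loss shrinks as the margin between the observed item's score and the negative item's score grows, which is exactly the ranking behaviour the recommendation list relies on.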
Step 8. Model the cross-view collaborative associations by constructing a cross-view contrastive loss: from the three group preference representations obtained above, compute the contrastive loss $L_{con}$. Combine the group recommendation loss and the contrastive loss for joint training, minimizing the objective $L=L_{group}+\lambda L_{con}$ to learn the model parameters, where $\lambda$ is a hyperparameter controlling the contrastive loss.
Further, in step 3, a member-level preference network is constructed and a hypergraph convolution is performed to encode the high-order relations between users and items. The user-item aggregation process is $M^{(l+1)}=D^{-1}HWB^{-1}H^{\top}M^{(l)}\Theta^{(l)}$, where $D$, $B$, and $W$ are the node-degree, hyperedge-degree, and weight matrices, respectively. The weight matrix $W$ is initialised as the identity so that all hyperedges carry equal weight, and $\Theta^{(l)}$ is the learnable parameter matrix between two convolutional layers. The hypergraph convolution can be viewed as two-stage information aggregation, "node - hyperedge - node": first $E^{(l)}=B^{-1}H^{\top}M^{(l)}$ aggregates node embeddings into each hyperedge, then $M^{(l+1)}=D^{-1}HWE^{(l)}\Theta^{(l)}$ gathers the hyperedge messages back to the nodes.
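The two-stage "node - hyperedge - node" view is algebraically identical to the one-shot propagation rule, which the following sketch verifies on random toy embeddings (identity layer weights assumed for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, d = 5, 2, 3
H = np.array([[1, 0], [1, 1], [0, 1], [1, 1], [0, 1]], dtype=float)  # incidence
W = np.eye(m)                               # equal hyperedge weights (identity init)
D_inv = np.diag(1.0 / (H @ np.diag(W)))     # inverse node-degree matrix
B_inv = np.diag(1.0 / H.sum(axis=0))        # inverse hyperedge-degree matrix
M = rng.normal(size=(n, d))                 # node (user/item) embeddings
Theta = np.eye(d)                           # layer parameters, identity here

# Stage 1: node -> hyperedge (average member embeddings into each hyperedge)
E_edge = B_inv @ H.T @ M
# Stage 2: hyperedge -> node (gather hyperedge messages back to the nodes)
M_next = D_inv @ H @ W @ E_edge @ Theta

# Equals the one-shot formula M^(l+1) = D^-1 H W B^-1 H^T M Θ by associativity
print(np.allclose(M_next, D_inv @ H @ W @ B_inv @ H.T @ M @ Theta))
```

Splitting the rule into two stages makes clear that each hyperedge first summarises its group, and each node then averages over the groups it belongs to.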
Further, in step 3, an attention mechanism learns each member's weight within the group. The weight $\alpha(h,j)$ is the influence score of group member $u_h$ when the group decides on item $i_j$; it is obtained by computing $o(h,j)=\mathbf{h}^{\top}\mathrm{ReLU}\big(W_u[u_h;u'_h]+W_j[i_j;i'_j]+b\big)$ and applying softmax normalisation, $\alpha(h,j)=\exp\big(o(h,j)\big)\big/\sum_{h'\in G(t)}\exp\big(o(h',j)\big)$.
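A sketch of these attention weights over one group's members (all feature values and the concatenated inputs are hypothetical placeholders):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    z = np.exp(x - x.max())   # shift for numerical stability
    return z / z.sum()

rng = np.random.default_rng(2)
d = 4
members = rng.normal(size=(3, 2 * d))   # concatenated features [u_h; u'_h], 3 members
item = rng.normal(size=2 * d)           # concatenated features [i_j; i'_j]
h = rng.normal(size=d)                  # attention projection vector
W_u = rng.normal(size=(d, 2 * d))
W_j = rng.normal(size=(d, 2 * d))
b = rng.normal(size=d)

# o(h, j) = h^T ReLU(W_u [u_h; u'_h] + W_j [i_j; i'_j] + b), softmaxed over members
o = np.array([h @ relu(W_u @ u + W_j @ item + b) for u in members])
alpha = softmax(o)
print(alpha)                            # one influence score per group member
```

The scores sum to 1 across the group, so each member's embedding contributes in proportion to its learned influence on the decision about item i_j.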
Further, in step 4, the group embeddings $G\in\mathbb{R}^{k\times d}$ are input to the graph convolutional network with $G^{(0)}=G$, and the group-level graph convolution is performed as $G^{(l+1)}=\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}G^{(l)}$, where $\hat{A}=A+I$, $I$ is the identity matrix, $A_{p,q}=W_{p,q}$, and $\hat{D}$ is the diagonal degree matrix of the adjacency matrix, $\hat{D}_{pp}=\sum_{q}\hat{A}_{p,q}$.

The group embeddings obtained at each layer are averaged to give the final group-level embedding, $G^{*}=\frac{1}{L+1}\sum_{l=0}^{L}G^{(l)}$; the group-level representation of each group $g_t$ is therefore $g_t^{G}=G^{*}_t$.
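The group-level propagation and layer averaging can be sketched as below; symmetric normalisation with self-loops is assumed, since the patent's exact convolution formula is not reproduced in the text:

```python
import numpy as np

rng = np.random.default_rng(3)
k, d, L = 3, 4, 2
A = np.array([[0.0, 0.4, 0.0],
              [0.4, 0.0, 0.2],
              [0.0, 0.2, 0.0]])          # Jaccard-weighted overlap-graph adjacency
A_hat = A + np.eye(k)                    # add self-loops
d_hat = A_hat.sum(axis=1)
norm = np.diag(1.0 / np.sqrt(d_hat))
P = norm @ A_hat @ norm                  # D^-1/2 (A + I) D^-1/2

G = rng.normal(size=(k, d))              # initial group embeddings G^(0)
layers = [G]
for _ in range(L):
    layers.append(P @ layers[-1])        # propagate preferences between similar groups

G_final = np.mean(layers, axis=0)        # average the per-layer embeddings
print(G_final.shape)
```

Averaging across layers mixes each group's own embedding with preference signal smoothed in from its overlapping neighbours.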
Further, in step 5, the group embeddings $G\in\mathbb{R}^{k\times d}$ and item embeddings $I\in\mathbb{R}^{n\times d}$ are fed into the graph convolutional network with $E^{(0)}=E$, where $E=[G;I]$ is the concatenation of the two embeddings, and the item-level graph convolution is performed: $E^{(l+1)}=D_I^{-1/2}A_I D_I^{-1/2}E^{(l)}$, with $A_I$ the bipartite adjacency matrix and $D_I$ its degree matrix.

The final group representation is obtained by averaging the representations learned at the different layers, $E^{*}=\frac{1}{L+1}\sum_{l=0}^{L}E^{(l)}$, from which the item-level representation $g_t^{I}$ of each group $g_t$ is read off.
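This item-level propagation can be sketched with the bipartite adjacency from step 5 (symmetric normalisation is again an assumption; the interaction matrix is a toy placeholder):

```python
import numpy as np

rng = np.random.default_rng(4)
k, n, d, L = 2, 3, 4, 2
R = np.array([[1, 0, 1],
              [0, 1, 1]], dtype=float)   # group-item interactions
A = np.block([[np.zeros((k, k)), R],
              [R.T, np.zeros((n, n))]])  # bipartite adjacency
deg = A.sum(axis=1)
norm = np.diag(1.0 / np.sqrt(deg))
P = norm @ A @ norm

E = rng.normal(size=(k + n, d))          # E = [G; I], groups stacked over items
layers = [E]
for _ in range(L):
    layers.append(P @ layers[-1])        # exchange group-item collaboration signal
E_final = np.mean(layers, axis=0)

g_item_level = E_final[:k]               # item-level group representations g_t^I
print(g_item_level.shape)
```

After propagation, each group's rows aggregate signal from the items it interacted with (and, at deeper layers, from other groups sharing those items), which is the collaborative signal the item-level view is designed to capture.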
Further, in step 8, contrastive learning is applied across the views: for a node in one view, the embedding of the same node learned in the other view is treated as a positive pair, while the embeddings of all other nodes in either view are treated as negative pairs. That is, positive samples have one source, whereas negative samples have two sources: intra-view nodes and inter-view nodes.
Further, in step 8, for the positive and negative samples defined above, the contrastive loss between the member-level and group-level preference views is

$L_{MG}=-\sum_{t}\log\dfrac{\exp\big(\theta(g_t^{M},g_t^{G})\big)}{\sum_{t'}\exp\big(\theta(g_t^{M},g_{t'}^{G})\big)+\sum_{t'\neq t}\exp\big(\theta(g_t^{M},g_{t'}^{M})\big)}$,

where the function $\theta(\cdot,\cdot)$ learns a score between two input vectors, assigning higher scores to positive pairs than to negative pairs; concretely, $\theta(a,b)=\cos\big(h(a),h(b)\big)$, where $h(\cdot)$ is a nonlinear projection used to improve representation quality, implemented mainly as a two-layer perceptron. The contrastive loss $L_{MI}$ between the member-level and item-level preference views and the loss $L_{GI}$ between the group-level and item-level preference views are defined analogously.
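An InfoNCE-style sketch of one such cross-view loss follows. The temperature value, the random projection weights, and the restriction to inter-view negatives only are simplifying assumptions of this sketch, not the patent's exact formulation:

```python
import numpy as np

def project(x, W1, W2):
    """Two-layer perceptron projection head h(.) with ReLU."""
    return W2 @ np.maximum(W1 @ x, 0.0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(5)
k, d, tau = 4, 8, 0.2
view_m = rng.normal(size=(k, d))                    # member-level group embeddings
view_g = view_m + 0.05 * rng.normal(size=(k, d))    # correlated group-level view
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))

def info_nce(t):
    # Positive: same group t in the other view; negatives: other groups there.
    scores = np.array([cosine(project(view_m[t], W1, W2),
                              project(view_g[s], W1, W2)) / tau
                       for s in range(k)])
    return -np.log(np.exp(scores[t]) / np.exp(scores).sum())

loss = np.mean([info_nce(t) for t in range(k)])
print(loss > 0)
```

Minimising this loss pulls the two views of the same group together while pushing apart the views of different groups, which is what refines the group representations under sparse supervision.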
Further, in step 8, since any two views are symmetric, $L_{GM}$, $L_{IM}$, and $L_{IG}$ are computed in the same way as $L_{MG}$, $L_{MI}$, and $L_{GI}$. The final contrastive loss between the member-level and group-level preference network views is $L_{con1}=L_{MG}+L_{GM}$, and the losses between the other two view pairs are computed likewise, giving $L_{con2}$ and $L_{con3}$. The contrastive losses of the three view pairs are then averaged to obtain the final contrastive loss $L_{con}=\frac{1}{3}(L_{con1}+L_{con2}+L_{con3})$.
The present invention further proposes an electronic device comprising a memory and a processor, the memory storing a computer program; when the processor executes the computer program, the steps of the above cross-view contrastive learning group recommendation method based on a hypergraph convolutional network are implemented.
The present invention further proposes a computer-readable storage medium for storing computer instructions; when the computer instructions are executed by a processor, the steps of the above cross-view contrastive learning group recommendation method based on a hypergraph convolutional network are implemented.
The invention has the following beneficial effects:
The present invention proposes a cross-view contrastive learning hypergraph convolutional network model for group recommendation, abbreviated C²-HGR. It mines group preferences for items by constructing multiple views, enabling accurate rating prediction.
The present invention designs a multi-view learning framework at different granularity levels, comprising a member-level preference network represented by a hypergraph, a group-level preference network represented by an overlap graph, and an item-level preference network represented by a bipartite graph. By effectively fusing the three, user-item and group-item collaborative information as well as group similarity are extracted, thereby strengthening the group preference representation.
The present invention designs a new hypergraph convolutional network to obtain member-level aggregation, and uses the overlap graph derived from the hypergraph to obtain group-level preferences. Compared with existing aggregation methods, the method of the present invention exhibits superior performance. Furthermore, to integrate the group preference representations obtained from the multiple views, the present invention designs an effective gating component that weighs each view's contribution to the overall model.
The present invention proposes a self-supervised multi-view contrastive learning method to enhance the group representations and address the data sparsity problem. The method couples seamlessly with the layered design of the graph convolutional networks. By unifying the recommendation task and the contrastive learning task, recommendation performance is significantly improved, and the invention is well suited to group recommendation.
Brief Description of the Drawings
To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
Figure 1 is an overall schematic diagram of the cross-view contrastive learning group recommendation method based on a hypergraph convolutional network according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on these embodiments, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
With reference to Figure 1, the present invention proposes a cross-view contrastive learning group recommendation method based on a hypergraph convolutional network, comprising the following steps:
步骤1、在CAMRa2011和马蜂窝平台获取群组交互数据集,其中数据集包含用户对物品、群组对物品的交互历史以及用户-群组组成关系;Step 1. Obtain the group interaction data set from CAMRa2011 and Mafengwo platform. The data set contains the interaction history of users to items, groups to items, and user-group composition relationships;
步骤2、训练集中的用户集合为U,U={u1,u2,…,uh,…,uM},h∈{1,…,M},其中uh为第h个用户,M为用户的数量;商品集合为I,I={i1,i2,…,ij,…,in},j∈{1,…,N},其中ij为第j个商品,N为商品的数量;群组集合为G,G={g1,g2,...,gt,...,gk},t∈{1,...,k},其中gt为第t个群组,k为群组的数量;其中,第t组gt∈G由一组群组成员组成,用G(t)={u1,u2,...,uh,...,up}表示,其中uh∈U,p是群组gt包含群组成员的数量,G(t)是群组gt成员的集合;Step 2. The user set in the training set is U, U={u 1 ,u 2 ,…,u h ,…,u M }, h∈{1,…,M}, where u h is the h-th user, M is the number of users; the product set is I, I={i 1 ,i 2 ,…,i j ,…,i n }, j∈{1,…,N}, where i j is the j-th product, N is the number of commodities; the group set is G, G={g 1 , g 2 ,..., g t ,..., g k }, t∈{1,..., k}, where g t is the t-th group, k is the number of groups; among them, the t-th group g t ∈G consists of a group of group members, and G(t)={u 1 , u 2 ,...,u h ,..., u p } represents, where u h ∈U, p is the number of group members contained in group g t , and G(t) is the set of members of group g t ;
步骤3、为了捕捉复杂且高阶的群组偏好,拟构造具有丰富边信息的超图,通过连接两个以上节点的超边来扩展图结构;其中,超边可以连接任何数量的节点;超图表示为Gm=(Vm,εm),其中,Vm=U∪I是包含N个唯一顶点的节点集,每个节点表示群体成员或群组交互的项目,εm是包含M个超边的边集,每条超边表示一个群组,它是由群组中的成员和群组交互的项目组成;形式上,用εt={u1,u2,…uh…,up,i1,i2,…,ij,…,iq}来表示群组gt;其中,uh∈U,ij∈I,并且εt∈εm;超图的连通性用关联矩阵来表示;对于每一个顶点和超边,使用对角矩阵D和B分别表示顶点和超边的度,其中每个超边e∈ε包含两个或多个顶点,并被赋予正权重Wee,所有的权重形成对角矩阵W∈RM×M;Step 3. In order to capture complex and high-order group preferences, a hypergraph with rich edge information is planned to be constructed, and the graph structure is expanded by hyperedges connecting more than two nodes; among them, hyperedges can connect any number of nodes; super The graph is represented as G m =(V m , ε m ), where V m =U∪I is a node set containing N unique vertices, each node represents a group member or an item of group interaction, and ε m is a node set containing M An edge set of hyperedges, each hyperedge represents a group, which is composed of members in the group and items that the group interacts with; formally, use ε t = {u 1 , u 2 ,…u h … , u p , i 1 , i 2 ,..., i j ,..., i q } to represent the group g t ; among them, u h ∈U, i j ∈I, and ε t ∈ε m ; the connectivity of the hypergraph correlation matrix to represent; for each vertex and hyperedge, use diagonal matrices D and B to represent the degrees of the vertex and hyperedge respectively, where Each hyperedge e∈ε contains two or more vertices and is assigned a positive weight W ee , and all weights form a diagonal matrix W∈R M×M ;
步骤4、在超图的重叠图上的图卷积网络中,从连接相似的群组去捕获和传播组级的偏好,构建重叠图;其中,用Gg=(Vg,εg)表示超图的重叠图;Vg={e:e∈ε},εg={(ep,eq):ep,eq∈ε,|ep∩eq|≥1},并为重叠图中的每一条边配置一个权重Wp,q,其中Wp,q=|ep∩eq|/|ep∪eq|;Step 4. In the graph convolutional network on the overlapping graph of the hypergraph, the overlapping graph is constructed by connecting similar groups to capture and propagate group-level preferences; where G g = (V g , εg) represents the super graph. The overlapping graph of the graph; V g = {e: e∈ε}, ε g = {(e p , e q ): e p , e q ∈ε, |e p ∩e q |≥1}, and is overlapping Each edge in the graph is configured with a weight W p, q , where W p, q = |e p ∩e q |/|e p ∪e q |;
步骤5、利用群组-项目二部图来构造图GI=(VI,εI);其中VI=G∪I表示节点集,εI={(gt,ij)|gt∈G,ij∈I,R(t,j)=1};邻接矩阵 Step 5. Use the group-item bipartite graph to construct the graph G I =(V I , ε I ); where V I =G∪I represents the node set, ε I ={(g t , i j )|g t ∈G, i j ∈I, R(t, j)=1}; adjacency matrix
步骤6、通过利用超图从成员级别聚合群组内成员的偏好进而获得群组偏好通过利用重叠图从相似的群组中去捕获和传播群组的偏好/>通过利用群组-项目二部图从群组的交互历史中去捕获群组偏好/>采用三个不同的门控来自动区分不同视图的贡献,计算最终的群组表示gt:/>其中α、β和γ分别表示学习到的权重,分别由以及/>得到;其中WM、WI和WG∈Rd是三种不同的可训练权重,σ是激活函数;Step 6: Obtain group preferences by aggregating the preferences of members in the group from the member level using a hypergraph Capture and propagate group preferences from similar groups by leveraging overlay graphs/> Capture group preferences from the group’s interaction history by leveraging the group-item bipartite graph/> Three different gatings are used to automatically distinguish the contributions of different views, and the final group representation g t is calculated:/> where α, β and γ respectively represent the learned weights, respectively represented by and/> Obtain; where W M , W I and W G ∈R d are three different trainable weights, and σ is the activation function;
步骤7、计算群组gt对项目ij的预测得分将该得分降序排列得到为群组推荐的物品列表;随机从R中抽取(gt,ij)并为每一个群组gt采样负样本,使用成对损失来计算群组预测损失,具体如下:/>其中,OG={(t,j,j')|(t,j)∈OG+,(t,j')∈OG-}表示群组-项目训练数据集,OG+是观察到的交互的集合,OG-是未观察到的交互的集合;Step 7. Calculate the prediction score of group g t for item i j Arrange the scores in descending order to get a list of items recommended for the group; randomly select (g t ,i j ) from R and sample negative samples for each group g t , and use pairwise loss to calculate the group prediction loss, specifically As follows:/> Among them, O G = {(t,j,j')|(t,j)∈O G+ ,(t,j')∈O G- } represents the group-item training data set, and O G+ is the observed The set of interactions, O G- is the set of unobserved interactions;
步骤8、对跨视图协作关联进行建模,建立跨视图对比损失;利用步骤4中得到的三个群组偏好表示获得对比损失Lcon;将群组推荐损失和对比损失结合起来联合训练,最小化以下目标函数来学习模型参数:L=Lgroup+λLcon;λ是控制对比损失的超参数。Step 8. Model the cross-view collaborative association and establish a cross-view contrast loss; use the three group preference representations obtained in step 4 to obtain the contrast loss L con ; combine the group recommendation loss and the contrast loss for joint training, and the minimum The following objective function is used to learn model parameters: L=L group +λL con ; λ is a hyperparameter that controls contrast loss.
在步骤3中,构建成员级偏好网络,执行超图卷积操作来编码用户和项目之间的高阶关系;用户-商品的聚合过程为M(l+1)=D-1HWB-1HTM(l)Θ(l),其中D、B和W分别表示节点度矩阵、边度矩阵和权重矩阵;用单位矩阵初始化权值矩阵W,使得所有超边拥有相等的权重,Θ为两个卷积层之间可学习的参数矩阵;具体来说,超图卷积可以看成两个阶段的信息聚合,“节点-超边-节点”;即和 In step 3, a member-level preference network is constructed and a hypergraph convolution operation is performed to encode the high-order relationship between users and items; the user-item aggregation process is M (l+1) =D -1 HWB -1 H T M (l) Θ (l) , where D, B and W represent the node degree matrix, edge degree matrix and weight matrix respectively; initialize the weight matrix W with the identity matrix so that all hyperedges have equal weight, Θ is two A parameter matrix that can be learned between convolutional layers; specifically, hypergraph convolution can be seen as two stages of information aggregation, "node-hyperedge-node"; that is, and
在步骤3中,应用注意力机制学习成员在群组中的权重;具体来说, 其中,权重α(h,j)表示群组成员uh在群组决策项目ij时的影响力分数,通过计算o(h,j)=hTRELU(Wu[uh;u'h]+Wj[ij;i'j]+b)后进行softmax归一化得到。In step 3, apply the attention mechanism to learn the weight of members in the group; specifically, Among them, the weight α(h,j) represents the influence score of group member u h when the group decides on project i j . By calculating o(h,j)=h T RELU(Wu[u h ;u' h ] +Wj[i j ; i' j ]+b) and then softmax normalization.
在步骤4中,In step 4,
将群组嵌入G∈Rk×d输入到图卷积网络,记为G(0)=G,执行组级图卷积过程其中,/>I为单位矩阵,Ap,q=Wp,q;/>是邻接矩阵的对角度矩阵,/> Input the group embedding G∈R k×d into the graph convolution network, denoted as G (0) = G, and perform the group-level graph convolution process Among them,/> I is the identity matrix, A p,q = W p,q ;/> is the diagonal matrix of the adjacency matrix, />
对每层获得的群组嵌入进行平均,得到最终的组级的群组嵌入:因此每一群组gt的在组级下的群组表示为/> The group embeddings obtained at each layer are averaged to obtain the final group-level group embedding: Therefore, the group at the group level for each group g t is expressed as/>
在步骤5中,In step 5,
为捕捉群组-项目之间的协作信号,将群组嵌入G∈Rk×d和项目嵌入I∈Rn×d送到图卷积网络中,记作E(0)=E,其中E是两个嵌入的拼接E=[G;I];执行项目级图卷积: In order to capture the collaboration signal between groups and projects, the group embedding G∈R k×d and the project embedding I∈R n×d are sent to the graph convolution network, denoted as E (0) = E, where E is the concatenation of two embeddings E = [G; I]; perform item-level graph convolution:
The final group representation is obtained by averaging the representations learned at the different layers, from which the item-level representation of each group g_t is obtained.
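The item-level view can be sketched analogously to the group-level one, propagating the stacked embedding E = [G; I] over the bipartite group-item adjacency. The symmetric normalization mirrors the group-level convolution and is an assumption here, as is the random interaction matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
k, n, d, n_layers = 4, 6, 8, 3

R = rng.integers(0, 2, size=(k, n)).astype(float)   # group-item interactions (toy data)

# Bipartite adjacency over the stacked embedding E = [G; I].
A = np.zeros((k + n, k + n))
A[:k, k:] = R
A[k:, :k] = R.T
deg = A.sum(axis=1)
deg[deg == 0] = 1.0                                  # guard isolated nodes
norm_adj = np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)

G = rng.normal(size=(k, d))                          # group embeddings
I = rng.normal(size=(n, d))                          # item embeddings
E = np.vstack([G, I])                                # E^(0) = [G; I]
layers = [E]
for _ in range(n_layers):
    layers.append(norm_adj @ layers[-1])             # bipartite propagation

E_final = np.mean(layers, axis=0)                    # average over layers
G_item_level = E_final[:k]                           # item-level group representations
```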
In step 8, to address the sparsity of user-item and group-item interactions and to refine the user and group representations, contrastive learning is applied across the multiple views. For a node in one view, the embedding of the same node learned in another view is treated as its positive pair; all other node embeddings in the two views are treated as negative pairs. That is, positive samples have one source, while negative samples have two sources: intra-view nodes and inter-view nodes.
In step 8, given the defined positive and negative samples, a contrastive loss L_MG is computed between the member-level preference view and the group-level preference view, where the function θ(·) learns the score between two input vectors and assigns higher scores to positive pairs than to negative pairs. θ(·) is computed on the outputs of a nonlinear projection h(·), implemented as a two-layer perceptron, which is used to improve representation quality. The contrastive loss between the member-level preference view and the item-level preference view is L_MI, and the contrastive loss between the group-level preference view and the item-level preference view is L_GI.
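A minimal sketch of such a cross-view contrastive loss is given below. It assumes an InfoNCE-style formulation with temperature-scaled cosine similarity as θ(·) and a two-layer perceptron as h(·); the exact score function and temperature are assumptions, since the patent's formula images are not reproduced here.

```python
import numpy as np

def mlp_project(X, W1, b1, W2, b2):
    """Two-layer perceptron projection head h(.)."""
    return np.maximum(X @ W1 + b1, 0.0) @ W2 + b2

def cross_view_loss(Za, Zb, tau=0.2):
    """InfoNCE-style loss: same row index across views = positive pair;
    negatives come from both inter-view and intra-view nodes."""
    Za = Za / np.linalg.norm(Za, axis=1, keepdims=True)
    Zb = Zb / np.linalg.norm(Zb, axis=1, keepdims=True)
    cross = np.exp(Za @ Zb.T / tau)      # inter-view similarity scores
    intra = np.exp(Za @ Za.T / tau)      # intra-view similarity scores
    pos = np.diag(cross)
    # denominator: all inter-view pairs plus intra-view pairs excluding self
    denom = cross.sum(axis=1) + intra.sum(axis=1) - np.diag(intra)
    return float(-np.log(pos / denom).mean())

rng = np.random.default_rng(4)
k, d = 5, 8
W1, b1 = rng.normal(size=(d, d)), np.zeros(d)
W2, b2 = rng.normal(size=(d, d)), np.zeros(d)

G_member = mlp_project(rng.normal(size=(k, d)), W1, b1, W2, b2)  # member-level view
G_group = mlp_project(rng.normal(size=(k, d)), W1, b1, W2, b2)   # group-level view
loss_mg = cross_view_loss(G_member, G_group)
```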
In step 8, since any two views are symmetric, L_GM, L_IM, and L_IG are computed in the same way as L_MG, L_MI, and L_GI. The final contrastive loss between the member-level and group-level preference network views is L_con1 = (L_MG + L_GM)/2; the losses between the other two pairs of views are computed in the same way to obtain L_con2 and L_con3. The contrastive losses of the three view pairs are then averaged to obtain the final contrastive loss L_con = (L_con1 + L_con2 + L_con3)/3.
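The symmetrization and averaging of the pairwise losses amounts to simple arithmetic; the numeric values below are illustrative placeholders, not results from the method.

```python
def symmetric_loss(l_ab, l_ba):
    """Average the two directions of a view pair's contrastive loss."""
    return (l_ab + l_ba) / 2

# Pairwise symmetric losses between the three views (illustrative values).
l_con1 = symmetric_loss(0.9, 1.1)   # member-level <-> group-level
l_con2 = symmetric_loss(0.8, 1.0)   # member-level <-> item-level
l_con3 = symmetric_loss(1.2, 1.0)   # group-level <-> item-level

l_con = (l_con1 + l_con2 + l_con3) / 3   # final contrastive loss L_con
```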
The present invention proposes a cross-view contrastive learning group recommendation method based on a hypergraph convolutional network. The method designs a multi-view framework consisting of a member-level preference network view represented by a hypergraph, a group-level preference network view represented by an overlap graph, and an item-level preference network view represented by a bipartite graph. For each data view, a dedicated graph structure is applied to encode the behavioral data and generate the group representation of that view. The proposed hypergraph learning architecture learns member-level aggregation and captures high-order collaborative information; compared with existing aggregation methods, this aggregation relies on hypergraph convolution, with different group preferences passing information along the hyperedges. For the general preferences of groups, an item-level preference network and a group-level preference network are proposed, which learn group representations through multi-layer convolution operations based on group-item interaction information and group similarity (i.e., the overlap relationships between groups), respectively. On top of the multi-view convolutional networks, a gating component is further proposed to adaptively adjust the contribution of each view. Finally, to alleviate the data sparsity problem, contrastive learning is applied across the multiple views, and the model parameters are optimized by unifying the recommendation task and the contrastive learning task, thereby providing good decision results for groups.
The present invention proposes an electronic device comprising a memory and a processor, the memory storing a computer program; when the processor executes the computer program, the steps of the cross-view contrastive learning group recommendation method based on a hypergraph convolutional network are implemented.
The present invention proposes a computer-readable storage medium for storing computer instructions; when the computer instructions are executed by a processor, the steps of the cross-view contrastive learning group recommendation method based on a hypergraph convolutional network are implemented.
The memory in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), which is used as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DRRAM). It should be noted that the memory of the methods described herein is intended to include, without being limited to, these and any other suitable types of memory.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., high-density digital video discs (DVDs)), or semiconductor media (e.g., solid state discs (SSDs)).
During implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The steps of the methods disclosed in the embodiments of the present application may be directly executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, this is not described in detail here.
It should be noted that the processor in the embodiments of the present application may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method embodiments may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The above processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The cross-view contrastive learning group recommendation method based on a hypergraph convolutional network proposed by the present invention has been introduced in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and application scope based on the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310823337.1A CN116894122B (en) | 2023-07-06 | 2023-07-06 | Cross-view contrast learning group recommendation method based on hypergraph convolutional network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116894122A CN116894122A (en) | 2023-10-17 |
CN116894122B true CN116894122B (en) | 2024-02-13 |
Family
ID=88311599
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117112914B (en) * | 2023-10-23 | 2024-02-09 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | Group recommendation method based on graph convolution |
CN119417064A (en) * | 2025-01-06 | 2025-02-11 | 浙江师范大学 | Online learning cognitive abnormal student detection method and system based on multimodal representation |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113672811A (en) * | 2021-08-24 | 2021-11-19 | 广东工业大学 | Hypergraph convolution collaborative filtering recommendation method and system based on topology information embedding and computer readable storage medium |
CN115146140A (en) * | 2022-07-01 | 2022-10-04 | 中国人民解放军国防科技大学 | A group recommendation method and device based on fusion influence |
CN115357805A (en) * | 2022-08-02 | 2022-11-18 | 山东省计算中心(国家超级计算济南中心) | A group recommendation method based on internal and external perspectives |
CN115982467A (en) * | 2023-01-03 | 2023-04-18 | 华南理工大学 | Multi-interest recommendation method and device for depolarized user and storage medium |
CN116186390A (en) * | 2022-12-28 | 2023-05-30 | 北京理工大学 | Hypergraph-fused contrast learning session recommendation method |
CN116204729A (en) * | 2022-12-05 | 2023-06-02 | 重庆邮电大学 | A cross-domain group intelligent recommendation method based on hypergraph neural network |
CN116244513A (en) * | 2023-02-14 | 2023-06-09 | 烟台大学 | Random group POI recommendation method, system, equipment and storage medium |
CN116340646A (en) * | 2023-01-18 | 2023-06-27 | 云南师范大学 | Recommendation method for optimizing multi-element user representation based on hypergraph motif |
CN116383519A (en) * | 2023-04-20 | 2023-07-04 | 云南大学 | Group recommendation method based on double weighted self-attention |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11443346B2 (en) * | 2019-10-14 | 2022-09-13 | Visa International Service Association | Group item recommendations for ephemeral groups based on mutual information maximization |
Non-Patent Citations (1)
Title |
---|
Hypergraph Convolutional Network for Group Recommendation; Renqi Jia et al.; IEEE; pp. 260–269 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||