CN116894122B - Cross-view contrast learning group recommendation method based on hypergraph convolutional network - Google Patents
Cross-view contrast learning group recommendation method based on hypergraph convolutional network
- Publication number
- CN116894122B (Application CN202310823337.1A)
- Authority
- CN
- China
- Prior art keywords
- group
- view
- hypergraph
- level
- graph
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention provides a cross-view contrast learning group recommendation method based on a hypergraph convolutional network. The method designs a multi-view framework consisting of a member-level preference network view represented by a hypergraph, a group-level preference network view represented by an overlap graph, and an item-level preference network view represented by a bipartite graph. For each data view, a specific graph structure is applied to encode behavior data and generate the group representation of the corresponding view. The hypergraph learning architecture provided by the invention learns member-level aggregation and captures high-order collaborative information. Unlike existing aggregation methods, aggregation is carried out by means of hypergraph convolution, and different group preferences propagate information along the hyperedges. The method mines group preferences for items by constructing multiple views, so that score prediction can be carried out accurately.
Description
Technical Field
The invention relates to the technical field of preference prediction and group recommendation, and in particular to a cross-view contrast learning group recommendation method based on a hypergraph convolutional network.
Background
With the development of the internet and the popularization of online community activities, people with similar backgrounds (such as hobbies, professions, and ages) form fixed or temporary groups according to their needs in order to participate in different activities. For example, users may join interest groups, game groups, or drawing groups according to their interests so as to obtain various activity resources. People also often gather temporarily for group activities, such as tourists travelling together, colleagues dining as a team, or friends watching a movie. These people may be familiar with each other, such as members of a household living together, or they may be strangers who meet by chance during an activity, for example several travelers who join the same tour group. In these scenarios, one or several suitable items need to be recommended to the group to meet its needs. However, each group contains many users, and preferences differ between individual users. Thus, the ultimate goal of group recommendation is to aggregate the different preferences of the group members and recommend appropriate and satisfying items to the group. Group recommendation not only saves time in group decision-making, but also reduces unnecessary conflicts between group members.
Existing methods mostly employ heuristic methods or attention-based methods to aggregate the personal preferences of group members in order to infer group preferences. However, these approaches model only the user preferences within a single group, ignoring complex high-order interactions inside and outside the group. Second, a group's final decision does not necessarily follow from the preferences of its members alone, and existing approaches are not sufficient to model such cross-group preferences. In addition, group recommendation suffers from data sparsity because group-item interactions are sparse. If these problems are not addressed, the accuracy of the recommendation results is reduced.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a cross-view contrast learning group recommendation method based on a hypergraph convolutional network. The method makes it possible to recommend the items with the highest predicted scores to a group, which in real life corresponds to recommending a suitable and satisfactory item to the group.
The invention is realized by the following technical scheme, and provides a cross-view contrast learning group recommendation method based on a hypergraph convolutional network, which comprises the following steps:
Step 1, acquire group interaction data sets from CAMRa2011 and the Mafengwo platform, the data sets containing the interaction histories of users with items, the interaction histories of groups with items, and the user-group membership relations;
Step 2, the user set in the training set is U = {u_1, u_2, …, u_h, …, u_M}, h ∈ {1, …, M}, where u_h is the h-th user and M is the number of users; the item set is I = {i_1, i_2, …, i_j, …, i_N}, j ∈ {1, …, N}, where i_j is the j-th item and N is the number of items; the group set is G = {g_1, g_2, …, g_t, …, g_k}, t ∈ {1, …, k}, where g_t is the t-th group and k is the number of groups; the t-th group g_t ∈ G consists of a set of group members, denoted G(t) = {u_1, u_2, …, u_h, …, u_p}, where u_h ∈ U and p is the number of members contained in group g_t, so that G(t) is the member set of group g_t;
Step 3, construct a hypergraph with rich side information and extend the graph structure by connecting hyperedges that join more than two nodes, a hyperedge being able to connect an arbitrary number of nodes; the hypergraph is denoted G_m = (V_m, ε_m), where V_m = U ∪ I is the node set containing N unique vertices, each node representing a group member or an item the group has interacted with, and ε_m is the edge set containing M hyperedges, each hyperedge representing one group and consisting of the members of that group and the items the group has interacted with; formally, ε_t = {u_1, u_2, …, u_h, …, u_p, i_1, i_2, …, i_j, …, i_q} represents group g_t, where u_h ∈ U, i_j ∈ I and ε_t ∈ ε_m; the connectivity of the hypergraph is represented by an incidence matrix H, with H_ve = 1 if vertex v belongs to hyperedge e and H_ve = 0 otherwise; the degrees of the vertices and hyperedges are represented by diagonal matrices D and B respectively, where D_vv = Σ_{e∈ε} W_ee H_ve and B_ee = Σ_{v∈V} H_ve; each hyperedge e ∈ ε contains two or more vertices and is assigned a positive weight W_ee, and all weights form a diagonal matrix W ∈ R^{M×M};
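As an illustration of step 3, the following sketch (not taken from the patent text; the data layout and function name are assumptions) builds the incidence matrix H and the degree matrices D and B from per-group member and item lists:

```python
import numpy as np

def build_hypergraph(num_users, num_items, group_members, group_items):
    """One hyperedge per group, containing its members and the items it interacted with."""
    num_nodes = num_users + num_items            # vertices: users first, then items
    num_edges = len(group_members)               # one hyperedge per group
    H = np.zeros((num_nodes, num_edges))
    for t in range(num_edges):
        for u in group_members[t]:
            H[u, t] = 1.0
        for i in group_items[t]:
            H[num_users + i, t] = 1.0
    w = np.ones(num_edges)                       # hyperedge weights, identity initialisation
    W = np.diag(w)
    D = np.diag(H @ w)                           # vertex degrees D_vv = sum_e W_ee * H_ve
    B = np.diag(H.sum(axis=0))                   # hyperedge degrees B_ee = sum_v H_ve
    return H, W, D, B

# toy example: 4 users, 3 items, 2 groups
H, W, D, B = build_hypergraph(4, 3, [[0, 1], [2, 3]], [[0, 2], [1]])
```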
Step 4, in the graph convolutional network on the overlap graph of the hypergraph, group-level preferences are captured and propagated from connected similar groups, and the overlap graph is constructed; the overlap graph of the hypergraph is denoted G_g = (V_g, ε_g), with V_g = {e : e ∈ ε} and ε_g = {(e_p, e_q) : e_p, e_q ∈ ε, |e_p ∩ e_q| ≥ 1}, and each edge in the overlap graph is assigned a weight W_p,q = |e_p ∩ e_q| / |e_p ∪ e_q|;
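A minimal sketch of the overlap-graph construction in step 4, assuming each hyperedge is given as a Python set of vertex indices (this representation is an assumption, not from the patent):

```python
import numpy as np

def overlap_graph(hyperedges):
    """Weighted adjacency of the overlap graph: W_pq = |e_p ∩ e_q| / |e_p ∪ e_q|."""
    k = len(hyperedges)
    A = np.zeros((k, k))
    for p in range(k):
        for q in range(p + 1, k):
            inter = len(hyperedges[p] & hyperedges[q])
            if inter >= 1:                        # connect two groups only if they overlap
                A[p, q] = A[q, p] = inter / len(hyperedges[p] | hyperedges[q])
    return A

A = overlap_graph([{0, 1, 4}, {1, 2, 5}, {3, 6}])
```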
Step 5, construct the group-item bipartite graph G_I = (V_I, ε_I), where V_I = G ∪ I is the node set and ε_I = {(g_t, i_j) | g_t ∈ G, i_j ∈ I, R(t, j) = 1}; the adjacency matrix is built from the group-item interaction matrix R;
Step 6, use the hypergraph to aggregate the preferences of the members in a group at the member level and obtain the member-level group preference g_t^M; use the overlap graph to capture and propagate group preferences from similar groups and obtain the group-level preference g_t^G; use the group-item bipartite graph to capture the item-level group preference g_t^I from the group's interaction history; three different gates are used to automatically distinguish the contributions of the different views, and the final group representation g_t is computed as g_t = α·g_t^M + β·g_t^G + γ·g_t^I, where α, β and γ are learned weights obtained as α = σ(W_M^T g_t^M), β = σ(W_G^T g_t^G) and γ = σ(W_I^T g_t^I); W_M, W_I and W_G ∈ R^d are three different trainable weights, and σ is the activation function;
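The gating in step 6 could be realized as in the sketch below, assuming each gate is a scalar sigmoid of a learned linear projection of the corresponding view representation (one plausible reading of the description, not the patent's exact implementation):

```python
import torch
import torch.nn as nn

class ViewGating(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.w_m = nn.Linear(d, 1, bias=False)   # member-level gate W_M
        self.w_g = nn.Linear(d, 1, bias=False)   # group-level gate W_G
        self.w_i = nn.Linear(d, 1, bias=False)   # item-level gate W_I

    def forward(self, g_m, g_g, g_i):
        alpha = torch.sigmoid(self.w_m(g_m))     # contribution of the member-level view
        beta = torch.sigmoid(self.w_g(g_g))      # contribution of the group-level view
        gamma = torch.sigmoid(self.w_i(g_i))     # contribution of the item-level view
        return alpha * g_m + beta * g_g + gamma * g_i

gate = ViewGating(64)
g_t = gate(torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 64))
```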
Step 7, compute the predicted score ŷ(t, j) of group g_t for item i_j and sort the scores in descending order to obtain the item list recommended to the group; observed interactions (g_t, i_j) are drawn randomly from R, negative samples are sampled for each group g_t, and the group prediction loss is calculated using a pairwise loss: L_group = Σ_{(t,j,j')∈O_G} −ln σ(ŷ(t, j) − ŷ(t, j')), where O_G = {(t, j, j') | (t, j) ∈ O_G^+, (t, j') ∈ O_G^−} is the group-item training data set, O_G^+ is the set of observed interactions, and O_G^− is the set of unobserved interactions;
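A sketch of the pairwise loss in step 7, assuming a BPR-style formulation and an inner-product score between the group and item embeddings (the exact score function is not spelled out in the text, so both choices are assumptions):

```python
import torch
import torch.nn.functional as F

def group_bpr_loss(group_emb, item_emb, pos_items, neg_items):
    """group_emb: (B, d); item_emb: (N, d); pos_items/neg_items: (B,) item indices."""
    pos_score = (group_emb * item_emb[pos_items]).sum(-1)   # y_hat(t, j)
    neg_score = (group_emb * item_emb[neg_items]).sum(-1)   # y_hat(t, j')
    return -F.logsigmoid(pos_score - neg_score).mean()      # pairwise ranking loss

loss = group_bpr_loss(torch.randn(16, 64), torch.randn(100, 64),
                      torch.randint(0, 100, (16,)), torch.randint(0, 100, (16,)))
```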
Step 8, model the cross-view collaborative associations and establish the cross-view contrastive loss; the contrastive loss L_con is obtained using the three resulting group preference representations; the group recommendation loss and the contrastive loss are combined for joint training, and the model parameters are learned by minimizing the objective L = L_group + λ·L_con, where λ is a hyper-parameter controlling the contrastive loss.
Further, in step 3, a member-level preference network is constructed, and a hypergraph convolution operation is performed to encode the high-order relations between users and items; the user-item aggregation process is M^(l+1) = D^{-1} H W B^{-1} H^T M^(l) Θ^(l), where D, B and W denote the node degree matrix, the hyperedge degree matrix and the weight matrix respectively; the weight matrix W is initialized with the identity matrix so that all hyperedges have equal weights, and Θ^(l) is a learnable parameter matrix between two convolution layers; hypergraph convolution can be viewed as a two-stage "node-hyperedge-node" aggregation of information, in which hyperedge representations are first gathered from their incident nodes as E^(l) = B^{-1} H^T M^(l), and node representations are then updated from their incident hyperedges as M^(l+1) = D^{-1} H W E^(l) Θ^(l).
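The member-level hypergraph convolution above could be sketched as follows; the class structure, toy shapes and tensor layout are illustrative assumptions:

```python
import torch
import torch.nn as nn

class HypergraphConv(nn.Module):
    """One layer of M^(l+1) = D^-1 H W B^-1 H^T M^(l) Θ^(l)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.theta = nn.Linear(d_in, d_out, bias=False)        # learnable Θ

    def forward(self, M, H, W, D, B):
        edge = torch.diag(1.0 / torch.diag(B)) @ H.t() @ M      # node -> hyperedge gathering
        node = torch.diag(1.0 / torch.diag(D)) @ H @ W @ edge   # hyperedge -> node scattering
        return self.theta(node)

# toy shapes: 7 nodes (users + items), 2 hyperedges (groups), embedding size 16
n, m, d = 7, 2, 16
H = torch.zeros(n, m); H[[0, 1, 4], 0] = 1.0; H[[2, 3, 5, 6], 1] = 1.0
W = torch.eye(m); B = torch.diag(H.sum(0)); D = torch.diag(H @ torch.diag(W))
out = HypergraphConv(d, d)(torch.randn(n, d), H, W, D, B)
```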
Further, in step 3, an attention mechanism is applied to learn the weights of the members in a group; the weight α(h, j) denotes the influence score of group member u_h when the group decides on item i_j, and is calculated as o(h, j) = h^T ReLU(W_u [u_h; u'_h] + W_j [i_j; i'_j] + b) followed by softmax normalization.
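A sketch of this member attention, assuming the concatenations [u_h; u'_h] and [i_j; i'_j] are supplied as pre-concatenated feature vectors (their exact composition is not reproduced here):

```python
import torch
import torch.nn as nn

class MemberAttention(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.w_u = nn.Linear(2 * d, d, bias=False)   # W_u applied to [u_h; u'_h]
        self.w_i = nn.Linear(2 * d, d, bias=True)    # W_j applied to [i_j; i'_j], carries bias b
        self.h = nn.Linear(d, 1, bias=False)         # attention vector h

    def forward(self, u_pair, i_pair):
        """u_pair: (p, 2d) members of one group; i_pair: (1, 2d) candidate item."""
        o = self.h(torch.relu(self.w_u(u_pair) + self.w_i(i_pair)))   # o(h, j)
        return torch.softmax(o.squeeze(-1), dim=0)   # alpha(h, j) over the p members

att = MemberAttention(32)
weights = att(torch.randn(4, 64), torch.randn(1, 64))
```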
Further, in step 4, the group embedding G ∈ R^{k×d} is input to the graph convolutional network, denoted G^(0) = G, and the group-level graph convolution G^(l+1) = D̂^{-1/2} Â D̂^{-1/2} G^(l) is performed, where Â = A + I, I is the identity matrix, A_p,q = W_p,q, and D̂ is the diagonal degree matrix of Â with D̂_pp = Σ_q Â_p,q; the group embeddings obtained at each layer are averaged to obtain the final group-level embedding G^G = (1/(L+1)) Σ_{l=0}^{L} G^(l), so that the group-level representation of each group g_t is denoted g_t^G.
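A sketch of the group-level propagation and layer averaging described above, under the assumption of a light propagation scheme without per-layer weight matrices (the patent does not show the formula explicitly):

```python
import torch

def group_level_embeddings(G0, A, num_layers=2):
    """G0: (k, d) group embeddings; A: (k, k) Jaccard-weighted overlap adjacency."""
    A_hat = A + torch.eye(A.size(0))                   # add self-loops
    d_inv_sqrt = torch.diag(A_hat.sum(1).pow(-0.5))    # D_hat^{-1/2}
    P = d_inv_sqrt @ A_hat @ d_inv_sqrt                # normalised propagation matrix
    layers, G = [G0], G0
    for _ in range(num_layers):
        G = P @ G                                      # propagate group-level preference
        layers.append(G)
    return torch.stack(layers).mean(0)                 # average over layers: G^G

G_group = group_level_embeddings(torch.randn(5, 64), torch.rand(5, 5))
```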
Further, in step 5, the group embedding G ∈ R^{k×d} and the item embedding I ∈ R^{n×d} are fed into the graph convolutional network, denoted E^(0) = E, where E = [G; I] is the concatenation of the two embeddings; the item-level graph convolution E^(l+1) = D^{-1/2} A D^{-1/2} E^(l) is performed, where A is the adjacency matrix of the group-item bipartite graph and D is its degree matrix; the final group representation is obtained by averaging the representations learned at the different layers, E^I = (1/(L+1)) Σ_{l=0}^{L} E^(l), giving the item-level representation g_t^I of each group g_t.
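A sketch of the item-level propagation over the group-item bipartite graph, with the adjacency assembled from the interaction matrix R (the input layout is an assumption):

```python
import torch

def item_level_embeddings(G, I, R, num_layers=2):
    """G: (k, d) groups; I: (n, d) items; R: (k, n) binary group-item interactions."""
    k, n = R.shape
    A = torch.zeros(k + n, k + n)
    A[:k, k:], A[k:, :k] = R, R.t()                    # bipartite adjacency
    deg = A.sum(1).clamp(min=1.0)                      # avoid division by zero
    P = torch.diag(deg.pow(-0.5)) @ A @ torch.diag(deg.pow(-0.5))
    E = torch.cat([G, I])                              # E^(0) = [G; I]
    layers = [E]
    for _ in range(num_layers):
        E = P @ E
        layers.append(E)
    E_avg = torch.stack(layers).mean(0)                # average over layers: E^I
    return E_avg[:k], E_avg[k:]                        # item-level group and item reps

g_item, i_item = item_level_embeddings(torch.randn(5, 64), torch.randn(9, 64),
                                        torch.randint(0, 2, (5, 9)).float())
```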
Further, in step 8, contrastive learning is applied across the multiple views: for a node in one view, the embedding of the same node learned in another view is treated as a positive sample pair, while the embeddings of all other nodes in both views are treated as negative sample pairs; that is, positive samples have one source, and negative samples have two sources, namely intra-view nodes and inter-view nodes.
Further, in step 8, with the positive and negative samples defined above, the contrastive loss between the member-level preference view and the group-level preference view is L_MG = Σ_{g_t∈G} −log( exp(θ(g_t^M, g_t^G)) / ( Σ_{g_t'∈G} exp(θ(g_t^M, g_t'^G)) + Σ_{g_t'≠g_t} exp(θ(g_t^M, g_t'^M)) ) ), where the function θ(·,·) learns a score between the two input vectors and assigns higher scores to positive pairs than to negative pairs, computed as θ(x, y) = cos(h(x), h(y)), in which h(·) is a nonlinear projection used to improve representation quality, implemented mainly by a two-layer perceptron; the contrastive loss L_MI between the member-level preference view and the item-level preference view and the contrastive loss L_GI between the group-level preference view and the item-level preference view are defined analogously.
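A sketch of one cross-view contrastive term, assuming an InfoNCE-style loss with a temperature and a two-layer MLP projection h(·); the exact form used in the patent is only partially specified, so this is an approximation, not the patent's definitive implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossViewContrast(nn.Module):
    def __init__(self, d, tau=0.2):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))  # h(.)
        self.tau = tau                                      # temperature (assumed)

    def score(self, x, y):
        return F.cosine_similarity(self.proj(x).unsqueeze(1),
                                   self.proj(y).unsqueeze(0), dim=-1) / self.tau

    def forward(self, view_a, view_b):
        """view_a, view_b: (k, d) group representations from two views."""
        s_ab = torch.exp(self.score(view_a, view_b))        # inter-view scores
        s_aa = torch.exp(self.score(view_a, view_a))        # intra-view scores
        pos = s_ab.diag()                                   # same group, other view
        neg = s_ab.sum(1) + s_aa.sum(1) - s_aa.diag()       # all others, both views
        return -torch.log(pos / neg).mean()

con = CrossViewContrast(64)
l_mg = con(torch.randn(5, 64), torch.randn(5, 64))
```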
Further, in step 8, since any two views are symmetric, L_GM, L_IM and L_IG are calculated in the same way as L_MG, L_MI and L_GI; the final contrastive loss between the member-level and group-level preference network views is L_con1 = (L_MG + L_GM) / 2; the losses between the other pairs of views are calculated in the same way to obtain L_con2 and L_con3; the contrastive losses of the three view pairs are then averaged to obtain the final contrastive loss L_con = (L_con1 + L_con2 + L_con3) / 3.
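The symmetrised and averaged contrastive losses and the joint objective L = L_group + λ·L_con could then be combined as in this sketch, which reuses the CrossViewContrast module above (an assumption) and an arbitrary value for λ:

```python
def total_loss(l_group, view_m, view_g, view_i, contrast, lam=0.1):
    """contrast: a callable such as CrossViewContrast above; lam: hyper-parameter λ."""
    l_con1 = 0.5 * (contrast(view_m, view_g) + contrast(view_g, view_m))  # member-group
    l_con2 = 0.5 * (contrast(view_m, view_i) + contrast(view_i, view_m))  # member-item
    l_con3 = 0.5 * (contrast(view_g, view_i) + contrast(view_i, view_g))  # group-item
    l_con = (l_con1 + l_con2 + l_con3) / 3.0
    return l_group + lam * l_con
```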
The invention provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the cross-view contrast learning group recommendation method based on a hypergraph convolutional network when executing the computer program.
The invention provides a computer readable storage medium for storing computer instructions which when executed by a processor implement the steps of a cross-view contrast learning group recommendation method based on a hypergraph convolutional network.
The invention has the following beneficial effects:
The invention provides a cross-view contrast learning hypergraph convolutional network model for group recommendation, abbreviated C²-HGR. Group preferences for items are mined by constructing multiple views, so that score prediction can be carried out accurately.
The invention designs a multi-view learning framework with different granularity levels, comprising a member-level preference network represented by a hypergraph, a group-level preference network represented by an overlap graph, and an item-level preference network represented by a bipartite graph. Through the effective fusion of the three, user-item collaborative information and group similarity are extracted, thereby enhancing the group preference representation.
The invention designs a new hypergraph convolutional network to obtain member-level aggregation, and the overlap graph converted from the hypergraph is used to obtain group-level preferences. The method of the present invention shows performance advantages over existing aggregation methods. Furthermore, to integrate the group preference representations obtained from multiple views, the invention designs an efficient gating component to weigh the contribution of each view to the overall model.
The invention provides a self-supervised multi-view contrastive learning method to enhance the group representation and alleviate the data sparsity problem. The method is seamlessly coupled with the hierarchical design of the graph convolutional networks. By unifying the recommendation task and the contrastive learning task, the recommendation performance can be improved significantly. The invention is applicable to group recommendation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is an overall schematic diagram of a cross-view contrast learning group recommendation method based on a hypergraph convolutional network.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the invention provides a cross-view contrast learning group recommendation method based on a hypergraph convolutional network, which comprises the following steps:
Step 1, acquire group interaction data sets from CAMRa2011 and the Mafengwo platform, the data sets containing the interaction histories of users with items, the interaction histories of groups with items, and the user-group membership relations;
Step 2, the user set in the training set is U = {u_1, u_2, …, u_h, …, u_M}, h ∈ {1, …, M}, where u_h is the h-th user and M is the number of users; the item set is I = {i_1, i_2, …, i_j, …, i_N}, j ∈ {1, …, N}, where i_j is the j-th item and N is the number of items; the group set is G = {g_1, g_2, …, g_t, …, g_k}, t ∈ {1, …, k}, where g_t is the t-th group and k is the number of groups; the t-th group g_t ∈ G consists of a set of group members, denoted G(t) = {u_1, u_2, …, u_h, …, u_p}, where u_h ∈ U and p is the number of members contained in group g_t, so that G(t) is the member set of group g_t;
Step 3, in order to capture complex and high-order group preferences, a hypergraph with rich side information is constructed, extending the graph structure by connecting hyperedges that join more than two nodes, a hyperedge being able to connect an arbitrary number of nodes; the hypergraph is denoted G_m = (V_m, ε_m), where V_m = U ∪ I is the node set containing N unique vertices, each node representing a group member or an item the group has interacted with, and ε_m is the edge set containing M hyperedges, each hyperedge representing one group and consisting of the members of that group and the items the group has interacted with; formally, ε_t = {u_1, u_2, …, u_h, …, u_p, i_1, i_2, …, i_j, …, i_q} represents group g_t, where u_h ∈ U, i_j ∈ I and ε_t ∈ ε_m; the connectivity of the hypergraph is represented by an incidence matrix H, with H_ve = 1 if vertex v belongs to hyperedge e and H_ve = 0 otherwise; the degrees of the vertices and hyperedges are represented by diagonal matrices D and B respectively, where D_vv = Σ_{e∈ε} W_ee H_ve and B_ee = Σ_{v∈V} H_ve; each hyperedge e ∈ ε contains two or more vertices and is assigned a positive weight W_ee, and all weights form a diagonal matrix W ∈ R^{M×M};
Step 4, in the graph convolutional network on the overlap graph of the hypergraph, group-level preferences are captured and propagated from connected similar groups, and the overlap graph is constructed; the overlap graph of the hypergraph is denoted G_g = (V_g, ε_g), with V_g = {e : e ∈ ε} and ε_g = {(e_p, e_q) : e_p, e_q ∈ ε, |e_p ∩ e_q| ≥ 1}, and each edge in the overlap graph is assigned a weight W_p,q = |e_p ∩ e_q| / |e_p ∪ e_q|;
Step 5, construct the group-item bipartite graph G_I = (V_I, ε_I), where V_I = G ∪ I is the node set and ε_I = {(g_t, i_j) | g_t ∈ G, i_j ∈ I, R(t, j) = 1}; the adjacency matrix is built from the group-item interaction matrix R;
Step 6, use the hypergraph to aggregate the preferences of the members in a group at the member level and obtain the member-level group preference g_t^M; use the overlap graph to capture and propagate group preferences from similar groups and obtain the group-level preference g_t^G; use the group-item bipartite graph to capture the item-level group preference g_t^I from the group's interaction history; three different gates are used to automatically distinguish the contributions of the different views, and the final group representation g_t is computed as g_t = α·g_t^M + β·g_t^G + γ·g_t^I, where α, β and γ are learned weights obtained as α = σ(W_M^T g_t^M), β = σ(W_G^T g_t^G) and γ = σ(W_I^T g_t^I); W_M, W_I and W_G ∈ R^d are three different trainable weights, and σ is the activation function;
Step 7, compute the predicted score ŷ(t, j) of group g_t for item i_j and sort the scores in descending order to obtain the item list recommended to the group; observed interactions (g_t, i_j) are drawn randomly from R, negative samples are sampled for each group g_t, and the group prediction loss is calculated using a pairwise loss: L_group = Σ_{(t,j,j')∈O_G} −ln σ(ŷ(t, j) − ŷ(t, j')), where O_G = {(t, j, j') | (t, j) ∈ O_G^+, (t, j') ∈ O_G^−} is the group-item training data set, O_G^+ is the set of observed interactions, and O_G^− is the set of unobserved interactions;
Step 8, model the cross-view collaborative associations and establish the cross-view contrastive loss; the contrastive loss L_con is obtained using the three group preference representations obtained above; the group recommendation loss and the contrastive loss are combined for joint training, and the model parameters are learned by minimizing the objective L = L_group + λ·L_con, where λ is a hyper-parameter controlling the contrastive loss.
In step 3, a member-level preference network is constructed, and a hypergraph convolution operation is performed to encode the high-order relations between users and items; the user-item aggregation process is M^(l+1) = D^{-1} H W B^{-1} H^T M^(l) Θ^(l), where D, B and W denote the node degree matrix, the hyperedge degree matrix and the weight matrix respectively; the weight matrix W is initialized with the identity matrix so that all hyperedges have equal weights, and Θ^(l) is a learnable parameter matrix between two convolution layers; specifically, hypergraph convolution can be viewed as a two-stage "node-hyperedge-node" aggregation of information, in which hyperedge representations are first gathered from their incident nodes as E^(l) = B^{-1} H^T M^(l), and node representations are then updated from their incident hyperedges as M^(l+1) = D^{-1} H W E^(l) Θ^(l).
In step 3, an attention mechanism is applied to learn the weights of the members in a group; specifically, the weight α(h, j) denotes the influence score of group member u_h when the group decides on item i_j, and is calculated as o(h, j) = h^T ReLU(W_u [u_h; u'_h] + W_j [i_j; i'_j] + b) followed by softmax normalization.
In step 4, the group embedding G ∈ R^{k×d} is input to the graph convolutional network, denoted G^(0) = G, and the group-level graph convolution G^(l+1) = D̂^{-1/2} Â D̂^{-1/2} G^(l) is performed, where Â = A + I, I is the identity matrix, A_p,q = W_p,q, and D̂ is the diagonal degree matrix of Â with D̂_pp = Σ_q Â_p,q; the group embeddings obtained at each layer are averaged to obtain the final group-level embedding G^G = (1/(L+1)) Σ_{l=0}^{L} G^(l), so that the group-level representation of each group g_t is denoted g_t^G.
In step 5, in order to capture the collaborative signals between groups and items, the group embedding G ∈ R^{k×d} and the item embedding I ∈ R^{n×d} are fed into the graph convolutional network, denoted E^(0) = E, where E = [G; I] is the concatenation of the two embeddings; the item-level graph convolution E^(l+1) = D^{-1/2} A D^{-1/2} E^(l) is performed, where A is the adjacency matrix of the group-item bipartite graph and D is its degree matrix; the final group representation is obtained by averaging the representations learned at the different layers, E^I = (1/(L+1)) Σ_{l=0}^{L} E^(l), giving the item-level representation g_t^I of each group g_t.
In step 8, in order to solve the problem of sparse user-item and group-item interactions and refine the user and group representations, contrastive learning is applied across the multiple views: for a node in one view, the embedding of the same node learned in another view is treated as a positive sample pair, while the embeddings of all other nodes in both views are treated as negative sample pairs; that is, positive samples have one source, and negative samples have two sources, namely intra-view nodes and inter-view nodes.
In step 8, with the positive and negative samples defined above, the contrastive loss between the member-level preference view and the group-level preference view is L_MG = Σ_{g_t∈G} −log( exp(θ(g_t^M, g_t^G)) / ( Σ_{g_t'∈G} exp(θ(g_t^M, g_t'^G)) + Σ_{g_t'≠g_t} exp(θ(g_t^M, g_t'^M)) ) ), where the function θ(·,·) learns a score between the two input vectors and assigns higher scores to positive pairs than to negative pairs, computed as θ(x, y) = cos(h(x), h(y)), in which h(·) is a nonlinear projection used to improve representation quality, implemented mainly by a two-layer perceptron; the contrastive loss L_MI between the member-level preference view and the item-level preference view and the contrastive loss L_GI between the group-level preference view and the item-level preference view are defined analogously.
In step 8, since any two views are symmetric, L_GM, L_IM and L_IG are calculated in the same way as L_MG, L_MI and L_GI; the final contrastive loss between the member-level and group-level preference network views is L_con1 = (L_MG + L_GM) / 2; the losses between the other pairs of views are calculated in the same way to obtain L_con2 and L_con3; the contrastive losses of the three view pairs are then averaged to obtain the final contrastive loss L_con = (L_con1 + L_con2 + L_con3) / 3.
The invention provides a cross-view contrast learning group recommendation method based on a hypergraph convolutional network, which designs a multi-view framework, namely a member-level preference network view represented by a hypergraph, a group-level preference network view represented by an overlap graph, and an item-level preference network view represented by a bipartite graph. For each data view, a specific graph structure is applied to encode behavior data and generate the group representation of the corresponding view. The hypergraph learning architecture learns member-level aggregation and captures high-order collaborative information. Unlike existing aggregation methods, aggregation is carried out by means of hypergraph convolution, and different group preferences propagate information along the hyperedges. For the general preferences of groups, an item-level preference network and a group-level preference network are proposed; both learn the group representation through multi-layer convolution operations, based on the group-item interaction information and the group similarity (i.e. the overlapping relations between groups) respectively. On top of the multi-view convolutional network, a gating component is further proposed to adaptively adjust the contribution of each view. Finally, to alleviate the data sparsity problem, a contrastive learning method is applied across the multiple views. Model parameters are optimized by unifying the recommendation task and the contrastive learning task, so that good decision results are provided for the group.
The invention provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the cross-view contrast learning group recommendation method based on a hypergraph convolutional network when executing the computer program.
The invention provides a computer readable storage medium for storing computer instructions which when executed by a processor implement the steps of a cross-view contrast learning group recommendation method based on a hypergraph convolutional network.
The memory in embodiments of the present application may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. The volatile memory may be random access memory (random access memory, RAM) which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous DRAM (SLDRAM), and direct memory bus RAM (DRRAM). It should be noted that the memory of the methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a high-density digital video disc (digital video disc, DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method. To avoid repetition, a detailed description is not provided herein.
It should be noted that the processor in the embodiments of the present application may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method embodiments may be implemented by integrated logic circuits of hardware in a processor or instructions in software form. The processor may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The disclosed methods, steps, and logic blocks in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in hardware, in a decoded processor, or in a combination of hardware and software modules in a decoded processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method.
The above describes a cross-view comparison learning group recommendation method based on a hypergraph convolutional network, and specific examples are applied to describe the principle and implementation of the present invention, and the description of the above examples is only used for helping to understand the method and core ideas of the present invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present invention, the present description should not be construed as limiting the present invention in view of the above.
Claims (10)
1. A cross-view contrast learning group recommendation method based on a hypergraph convolutional network, characterized by comprising the following steps:
Step 1, acquire group interaction data sets from CAMRa2011 and the Mafengwo platform, the data sets containing the interaction histories of users with items, the interaction histories of groups with items, and the user-group membership relations;
Step 2, the user set in the training set is U = {u_1, u_2, …, u_h, …, u_M}, h ∈ {1, …, M}, where u_h is the h-th user and M is the number of users; the item set is I = {i_1, i_2, …, i_j, …, i_N}, j ∈ {1, …, N}, where i_j is the j-th item and N is the number of items; the group set is G = {g_1, g_2, …, g_t, …, g_k}, t ∈ {1, …, k}, where g_t is the t-th group and k is the number of groups; the t-th group g_t ∈ G consists of a set of group members, denoted G(t) = {u_1, u_2, …, u_h, …, u_p}, where u_h ∈ U and p is the number of members contained in group g_t, so that G(t) is the member set of group g_t;
Step 3, construct a hypergraph with rich side information and extend the graph structure by connecting hyperedges that join more than two nodes, a hyperedge being able to connect an arbitrary number of nodes; the hypergraph is denoted G_m = (V_m, ε_m), where V_m = U ∪ I is the node set containing N unique vertices, each node representing a group member or an item the group has interacted with, and ε_m is the edge set containing M hyperedges, each hyperedge representing one group and consisting of the members of that group and the items the group has interacted with; formally, ε_t = {u_1, u_2, …, u_h, …, u_p, i_1, i_2, …, i_j, …, i_q} represents group g_t, where u_h ∈ U, i_j ∈ I and ε_t ∈ ε_m; the connectivity of the hypergraph is represented by an incidence matrix H, with H_ve = 1 if vertex v belongs to hyperedge e and H_ve = 0 otherwise; the degrees of the vertices and hyperedges are represented by diagonal matrices D and B respectively, where D_vv = Σ_{e∈ε} W_ee H_ve and B_ee = Σ_{v∈V} H_ve; each hyperedge e ∈ ε contains two or more vertices and is assigned a positive weight W_ee, and all weights form a diagonal matrix W ∈ R^{M×M};
Step 4, in the graph convolutional network on the overlap graph of the hypergraph, group-level preferences are captured and propagated from connected similar groups, and the overlap graph is constructed; the overlap graph of the hypergraph is denoted G_g = (V_g, ε_g), with V_g = {e : e ∈ ε} and ε_g = {(e_p, e_q) : e_p, e_q ∈ ε, |e_p ∩ e_q| ≥ 1}, and each edge in the overlap graph is assigned a weight W_p,q = |e_p ∩ e_q| / |e_p ∪ e_q|;
Step 5, construct the group-item bipartite graph G_I = (V_I, ε_I), where V_I = G ∪ I is the node set and ε_I = {(g_t, i_j) | g_t ∈ G, i_j ∈ I, R(t, j) = 1}; the adjacency matrix is built from the group-item interaction matrix R;
Step 6, use the hypergraph to aggregate the preferences of the members in a group at the member level and obtain the member-level group preference g_t^M; use the overlap graph to capture and propagate group preferences from similar groups and obtain the group-level preference g_t^G; use the group-item bipartite graph to capture the item-level group preference g_t^I from the group's interaction history; three different gates are used to automatically distinguish the contributions of the different views, and the final group representation g_t is computed as g_t = α·g_t^M + β·g_t^G + γ·g_t^I, where α, β and γ are learned weights obtained as α = σ(W_M^T g_t^M), β = σ(W_G^T g_t^G) and γ = σ(W_I^T g_t^I); W_M, W_I and W_G ∈ R^d are three different trainable weights, and σ is the activation function;
Step 7, compute the predicted score ŷ(t, j) of group g_t for item i_j and sort the scores in descending order to obtain the item list recommended to the group; observed interactions (g_t, i_j) are drawn randomly from R, negative samples are sampled for each group g_t, and the group prediction loss is calculated using a pairwise loss: L_group = Σ_{(t,j,j')∈O_G} −ln σ(ŷ(t, j) − ŷ(t, j')), where O_G = {(t, j, j') | (t, j) ∈ O_G^+, (t, j') ∈ O_G^−} is the group-item training data set, O_G^+ is the set of observed interactions, and O_G^− is the set of unobserved interactions;
Step 8, model the cross-view collaborative associations and establish the cross-view contrastive loss; the contrastive loss L_con is obtained using the three resulting group preference representations; the group recommendation loss and the contrastive loss are combined for joint training, and the model parameters are learned by minimizing the objective L = L_group + λ·L_con, where λ is a hyper-parameter controlling the contrastive loss.
2. The method according to claim 1, characterized in that: in step 3, a member-level preference network is constructed, and a hypergraph convolution operation is performed to encode the high-order relations between users and items; the user-item aggregation process is M^(l+1) = D^{-1} H W B^{-1} H^T M^(l) Θ^(l), where D, B and W denote the node degree matrix, the hyperedge degree matrix and the weight matrix respectively; the weight matrix W is initialized with the identity matrix so that all hyperedges have equal weights, and Θ^(l) is a learnable parameter matrix between two convolution layers; hypergraph convolution can be viewed as a two-stage "node-hyperedge-node" aggregation of information, in which hyperedge representations are first gathered from their incident nodes as E^(l) = B^{-1} H^T M^(l), and node representations are then updated from their incident hyperedges as M^(l+1) = D^{-1} H W E^(l) Θ^(l).
3. The method according to claim 2, characterized in that: in step 3, an attention mechanism is applied to learn the weights of the members in a group; the weight α(h, j) denotes the influence score of group member u_h when the group decides on item i_j, and is calculated as o(h, j) = h^T ReLU(W_u [u_h; u'_h] + W_j [i_j; i'_j] + b) followed by softmax normalization.
4. The method according to claim 3, characterized in that: in step 4, the group embedding G ∈ R^{k×d} is input to the graph convolutional network, denoted G^(0) = G, and the group-level graph convolution G^(l+1) = D̂^{-1/2} Â D̂^{-1/2} G^(l) is performed, where Â = A + I, I is the identity matrix, A_p,q = W_p,q, and D̂ is the diagonal degree matrix of Â with D̂_pp = Σ_q Â_p,q; the group embeddings obtained at each layer are averaged to obtain the final group-level embedding G^G = (1/(L+1)) Σ_{l=0}^{L} G^(l), so that the group-level representation of each group g_t is denoted g_t^G.
5. The method according to claim 4, characterized in that: in step 5, the group embedding G ∈ R^{k×d} and the item embedding I ∈ R^{n×d} are fed into the graph convolutional network, denoted E^(0) = E, where E = [G; I] is the concatenation of the two embeddings; the item-level graph convolution E^(l+1) = D^{-1/2} A D^{-1/2} E^(l) is performed, where A is the adjacency matrix of the group-item bipartite graph and D is its degree matrix; the final group representation is obtained by averaging the representations learned at the different layers, E^I = (1/(L+1)) Σ_{l=0}^{L} E^(l), giving the item-level representation g_t^I of each group g_t.
6. The method according to claim 1, characterized in that: in step 8, contrastive learning is applied across the multiple views: for a node in one view, the embedding of the same node learned in another view is treated as a positive sample pair, while the embeddings of all other nodes in both views are treated as negative sample pairs; that is, positive samples have one source, and negative samples have two sources, namely intra-view nodes and inter-view nodes.
7. The method according to claim 6, characterized in that: in step 8, with the positive and negative samples defined above, the contrastive loss between the member-level preference view and the group-level preference view is L_MG = Σ_{g_t∈G} −log( exp(θ(g_t^M, g_t^G)) / ( Σ_{g_t'∈G} exp(θ(g_t^M, g_t'^G)) + Σ_{g_t'≠g_t} exp(θ(g_t^M, g_t'^M)) ) ), where the function θ(·,·) learns a score between the two input vectors and assigns higher scores to positive pairs than to negative pairs, computed as θ(x, y) = cos(h(x), h(y)), in which h(·) is a nonlinear projection used to improve representation quality, implemented mainly by a two-layer perceptron; the contrastive loss L_MI between the member-level preference view and the item-level preference view and the contrastive loss L_GI between the group-level preference view and the item-level preference view are defined analogously.
8. The method according to claim 7, characterized in that: in step 8, since any two views are symmetric, L_GM, L_IM and L_IG are calculated in the same way as L_MG, L_MI and L_GI; the final contrastive loss between the member-level and group-level preference network views is L_con1 = (L_MG + L_GM) / 2; the losses between the other pairs of views are calculated in the same way to obtain L_con2 and L_con3; the contrastive losses of the three view pairs are then averaged to obtain the final contrastive loss L_con = (L_con1 + L_con2 + L_con3) / 3.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1-8 when the computer program is executed.
10. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310823337.1A CN116894122B (en) | 2023-07-06 | 2023-07-06 | Cross-view contrast learning group recommendation method based on hypergraph convolutional network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310823337.1A CN116894122B (en) | 2023-07-06 | 2023-07-06 | Cross-view contrast learning group recommendation method based on hypergraph convolutional network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116894122A CN116894122A (en) | 2023-10-17 |
CN116894122B true CN116894122B (en) | 2024-02-13 |
Family
ID=88311599
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310823337.1A Active CN116894122B (en) | 2023-07-06 | 2023-07-06 | Cross-view contrast learning group recommendation method based on hypergraph convolutional network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116894122B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117112914B (en) * | 2023-10-23 | 2024-02-09 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | Group recommendation method based on graph convolution |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113672811A (en) * | 2021-08-24 | 2021-11-19 | 广东工业大学 | Hypergraph convolution collaborative filtering recommendation method and system based on topology information embedding and computer readable storage medium |
CN115146140A (en) * | 2022-07-01 | 2022-10-04 | 中国人民解放军国防科技大学 | Group recommendation method and device based on fusion influence |
CN115357805A (en) * | 2022-08-02 | 2022-11-18 | 山东省计算中心(国家超级计算济南中心) | Group recommendation method based on internal and external visual angles |
CN115982467A (en) * | 2023-01-03 | 2023-04-18 | 华南理工大学 | Multi-interest recommendation method and device for depolarized user and storage medium |
CN116186390A (en) * | 2022-12-28 | 2023-05-30 | 北京理工大学 | Hypergraph-fused contrast learning session recommendation method |
CN116204729A (en) * | 2022-12-05 | 2023-06-02 | 重庆邮电大学 | Cross-domain group intelligent recommendation method based on hypergraph neural network |
CN116244513A (en) * | 2023-02-14 | 2023-06-09 | 烟台大学 | Random group POI recommendation method, system, equipment and storage medium |
CN116340646A (en) * | 2023-01-18 | 2023-06-27 | 云南师范大学 | Recommendation method for optimizing multi-element user representation based on hypergraph motif |
CN116383519A (en) * | 2023-04-20 | 2023-07-04 | 云南大学 | Group recommendation method based on double weighted self-attention |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11443346B2 (en) * | 2019-10-14 | 2022-09-13 | Visa International Service Association | Group item recommendations for ephemeral groups based on mutual information maximization |
- 2023-07-06: Application CN202310823337.1A filed; patent CN116894122B granted and active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113672811A (en) * | 2021-08-24 | 2021-11-19 | 广东工业大学 | Hypergraph convolution collaborative filtering recommendation method and system based on topology information embedding and computer readable storage medium |
CN115146140A (en) * | 2022-07-01 | 2022-10-04 | 中国人民解放军国防科技大学 | Group recommendation method and device based on fusion influence |
CN115357805A (en) * | 2022-08-02 | 2022-11-18 | 山东省计算中心(国家超级计算济南中心) | Group recommendation method based on internal and external visual angles |
CN116204729A (en) * | 2022-12-05 | 2023-06-02 | 重庆邮电大学 | Cross-domain group intelligent recommendation method based on hypergraph neural network |
CN116186390A (en) * | 2022-12-28 | 2023-05-30 | 北京理工大学 | Hypergraph-fused contrast learning session recommendation method |
CN115982467A (en) * | 2023-01-03 | 2023-04-18 | 华南理工大学 | Multi-interest recommendation method and device for depolarized user and storage medium |
CN116340646A (en) * | 2023-01-18 | 2023-06-27 | 云南师范大学 | Recommendation method for optimizing multi-element user representation based on hypergraph motif |
CN116244513A (en) * | 2023-02-14 | 2023-06-09 | 烟台大学 | Random group POI recommendation method, system, equipment and storage medium |
CN116383519A (en) * | 2023-04-20 | 2023-07-04 | 云南大学 | Group recommendation method based on double weighted self-attention |
Non-Patent Citations (1)
Title |
---|
Hypergraph Convolutional Network for Group Recommendation; Renqi Jia et al.; IEEE; pp. 260-269 *
Also Published As
Publication number | Publication date |
---|---|
CN116894122A (en) | 2023-10-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11705112B2 (en) | Adversarial, learning framework for persona-based dialogue modeling | |
TW201915790A (en) | Generating document for a point of interest | |
CN116894122B (en) | Cross-view contrast learning group recommendation method based on hypergraph convolutional network | |
CN112925977A (en) | Recommendation method based on self-supervision graph representation learning | |
CN116244513B (en) | Random group POI recommendation method, system, equipment and storage medium | |
CN114020999A (en) | Community structure detection method and system for movie social network | |
Ning et al. | Conditional generative adversarial networks based on the principle of homologycontinuity for face aging | |
CN116542720B (en) | Time enhancement information sequence recommendation method and system based on graph convolution network | |
CN107346333A (en) | A kind of online social networks friend recommendation method and system based on link prediction | |
CN117390267A (en) | Knowledge graph-based personalized multitask enhanced recommendation model | |
Huang et al. | On the improvement of reinforcement active learning with the involvement of cross entropy to address one-shot learning problem | |
CN113656709A (en) | Interpretable interest point recommendation method fusing knowledge graph and time sequence characteristics | |
CN116595479A (en) | Community discovery method, system, equipment and medium based on graph double self-encoder | |
Shen et al. | UniSKGRep: A unified representation learning framework of social network and knowledge graph | |
Cai et al. | The Analysis of Sharing Economy on New Business Model Based on BP Neural Network | |
CN116306834A (en) | Link prediction method based on global path perception graph neural network model | |
CN117076763A (en) | Hypergraph learning-based session recommendation method and device, electronic equipment and medium | |
CN117171447A (en) | Online interest group recommendation method based on self-attention and contrast learning | |
Le et al. | Enhancing Anchor Link Prediction in Information Networks through Integrated Embedding Techniques | |
CN112364258B (en) | Recommendation method and system based on map, storage medium and electronic equipment | |
CN111143700A (en) | Activity recommendation method and device, server and computer storage medium | |
Gao et al. | A two-stage classifier switchable aluminum electrolysis fault diagnosis method | |
CN115115901A (en) | Method and device for acquiring cross-domain learning model | |
Yinggang et al. | Social Recommendation System Based on Multi-agent Deep Reinforcement Learning | |
CN112765488A (en) | Recommendation method, system and equipment fusing social network and knowledge graph |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |