CN116797312A - Group recommendation method integrating lightweight graph convolution network and attention mechanism - Google Patents

Group recommendation method integrating lightweight graph convolution network and attention mechanism

Info

Publication number
CN116797312A
CN116797312A (application CN202310522930.2A)
Authority
CN
China
Prior art keywords
group
user
commodity
graph
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310522930.2A
Other languages
Chinese (zh)
Inventor
曹佳
葛云玮
黄思雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Forestry University
Original Assignee
Beijing Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Forestry University
Priority to CN202310522930.2A
Publication of CN116797312A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0631Item recommendations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • Biophysics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)

Abstract

A group recommendation method integrating a lightweight graph convolution network and an attention mechanism. First, the personal preferences of users and the fixed preferences of groups are mined by adapting the lightweight graph convolution algorithm from individual recommendation to learn the high-order features of the user-commodity and group-commodity bipartite graphs, respectively. Because the influence of each member in the group differs, so does each member's contribution to a decision; the result of the user-commodity graph convolution is therefore passed through an attention mechanism that learns member contributions and dynamically adjusts the preference weight of each user in the group, yielding a member-based group recommendation result, while the convolved representation of the group-commodity bipartite graph serves as an item-based group recommendation result. Then, considering the influence of the group topic on the recommendation result, a topic factor is added and the final group recommendation score of each item is calculated. Finally, the parameters are optimized through training and a final recommendation list is generated.

Description

Group recommendation method integrating lightweight graph convolution network and attention mechanism
Technical Field
The invention relates to the technical field of group recommendation, in particular to a group recommendation method integrating a lightweight graph convolution network and an attention mechanism.
Background
The recommendation system can effectively filter data and realize accurate recommendation, and has become a powerful tool for alleviating the information overload problem. With the increasing frequency of group activities, and especially the continuous development of online shopping, recommending products to shopping groups has become an urgent requirement of e-commerce platforms and shoppers. However, most current recommendation research targets individual users and cannot make good recommendations for groups consisting of multiple people, so the research and development of recommendation systems oriented to group users is urgent.
The object of a group recommendation algorithm is a group consisting of multiple people. Current group recommendation research mainly covers two aspects: recommendation methods and preference fusion strategies. Regarding recommendation methods, although graph convolution network methods in individual recommendation have matured, group recommendation involves three types of subjects (groups, individuals and items) and differs greatly from individual recommendation, so individual recommendation methods are difficult to migrate directly. Regarding preference fusion, predefined static fusion strategies are generally adopted. These static strategies can meet the requirements of group preference fusion to some extent; however, they have great limitations when dealing with dynamically changing and complex relationships among group users. In addition, a group recommendation result reflects not only the preferences of the users in the group but also the group topic; current algorithms seldom consider group topics, which biases the final recommendation.
Disclosure of Invention
The invention provides a lightweight graph convolution network model applicable to groups, which can effectively handle preference differences within a group and adds a group topic factor.
The technical scheme adopted for overcoming the technical problems is as follows:
A group recommendation method integrating a lightweight graph convolution network and an attention mechanism comprises the following steps:
a) Acquire the interaction records of N users and K commodities, denoting u_n as the n-th user node, n ∈ {1, 2, …, N}, and v_k as the k-th commodity node, k ∈ {1, 2, …, K}, and construct a user-commodity bipartite graph from these interaction records;
b) Acquire the interaction records of M groups and K commodities, denoting g_m as the m-th group node, m ∈ {1, 2, …, M}, and v_k as the k-th commodity node, k ∈ {1, 2, …, K}, and construct a group-commodity bipartite graph from these interaction records;
c) Acquire the user member tables of the M groups, denoting g_{m-n} as the n-th user of the m-th group, m ∈ {1, 2, …, M}, n ∈ {1, 2, …, N}, and construct a group-user bipartite graph from these membership records;
d) Input the user-commodity bipartite graph into the lightweight graph convolution network (LightGCN) model to capture the higher-order relations between users and commodities, obtaining, through an L_1-layer graph convolution operation, the user node embedding e_u^(k) and commodity node embedding e_v^(k) of each layer;
e) Input the group-commodity bipartite graph into the lightweight graph convolution network (LightGCN) model to capture the higher-order relations between groups and commodities, obtaining, through an L_2-layer graph convolution operation, the group node embedding e_g^(k) and commodity node embedding e_v^(k) of each layer;
f) Through the formulas e_u = (1/(L_1+1)) Σ_{k=0}^{L_1} e_u^(k) and e_{v(u)} = (1/(L_1+1)) Σ_{k=0}^{L_1} e_v^(k), average-pool all convolution layers to obtain the node representations e_u and e_{v(u)} of users and commodities under member influence;
g) Through the formulas e_{g1} = (1/(L_2+1)) Σ_{k=0}^{L_2} e_g^(k) and e_{v(g)} = (1/(L_2+1)) Σ_{k=0}^{L_2} e_v^(k), average all convolution layers to obtain the node representations e_{g1} and e_{v(g)} of groups and commodities under item influence;
h) Learn the attention weight b_{uv} of each user in the group based on the group-commodity bipartite graph and the group-user bipartite graph. Specifically, for each record g_m–v_k in the group-commodity bipartite graph, obtain all members u_g ∈ g_m of g_m from the group-user bipartite graph, and obtain the attention weight b_{uv} of the record using a multi-layer perceptron consisting of two neural network layers;
i) Calculate the normalized attention coefficient β_{uv};
j) Through the formula e_{g2} = Σ_{u∈g} β_{uv} · e_u, sum the weighted preferences of all members in the group to obtain the node representation e_{g2} of the group under member influence;
k) Through the formula e_g = γ·e_{g1} + (1−γ)·e_{g2}, calculate the final node representation of the group, where γ represents the topic factor of the group. The closer γ is to 0, the less the group decision is affected by the group activity topic and the more it depends on the decisions of the members in the group; the closer γ is to 1, the higher the influence of the group activity topic and the smaller the influence of the members' decisions;
l) Through the formula ŷ_{gv} = e_g^⊤ e_v, obtain the preference score ŷ_{gv} of group g for item v, completing the establishment of the model;
m) Sort the predicted scores ŷ_{gv} from high to low and select the top X commodities to recommend to the group.
Further, step d) comprises the steps of:
d-1) The lightweight graph convolution network LightGCN calculates the embedded representation of the (k+1)-th layer of the user by the formula e_u^(k+1) = Σ_{v∈N_u} (1 / (√|N_u|·√|N_v|)) · e_v^(k), where k ∈ [0, L_1] indicates the convolution layer index, N_u = {v | n_uv = 1} is the set of all items v that user u interacted with, and N_v = {u | n_uv = 1} is the set of all users u that interacted with commodity v.
d-2) The lightweight graph convolution network LightGCN calculates the embedded representation of the (k+1)-th layer of the commodity by the formula e_v^(k+1) = Σ_{u∈N_v} (1 / (√|N_v|·√|N_u|)) · e_u^(k), with k, N_u and N_v defined as in d-1).
Preferably, L_1 in step d) takes the value 3.
Further, step e) comprises the steps of:
e-1) The lightweight graph convolution network LightGCN calculates the embedded representation of the (k+1)-th layer of the group by the formula e_g^(k+1) = Σ_{v∈M_g} (1 / (√|M_g|·√|M_v|)) · e_v^(k), where k ∈ [0, L_2] indicates the convolution layer index, M_v = {g | n_gv = 1} is the set of all groups g that interacted with item v, and M_g = {v | n_gv = 1} is the set of all items v that group g interacted with.
e-2) The lightweight graph convolution network LightGCN calculates the embedded representation of the (k+1)-th layer of the commodity by the formula e_v^(k+1) = Σ_{g∈M_v} (1 / (√|M_v|·√|M_g|)) · e_g^(k), with k, M_v and M_g defined as in e-1).
Preferably, L_2 in step e) takes the value 3.
Further, in step h) the attention weight b_{uv} is calculated by the formula b_{uv} = W_2 · ReLU(W_1 · (e_u ‖ e_{v_j}) + c_1) + c_2, where e_u is the user embedding of member u ∈ g after the user-item graph convolution, e_{v_j} is the item embedding of candidate item v_j after the user-item graph convolution, W_1 and c_1 are respectively the weight matrix and bias vector of the first layer of the attention neural network, and W_2 and c_2 are the weight matrix and bias vector of the second layer, which maps the hidden-layer information to the attention weight b_{uv}. The sign ‖ represents the concatenation of two vectors.
Further, in step i) the normalized coefficient β_{uv} is calculated by the formula β_{uv} = softmax(b_{uv}) = exp(b_{uv}) / Σ_{u′∈g} exp(b_{u′v}), where softmax is the softmax activation function and b_{uv} is the attention weight obtained in step h).
Preferably, in step k), 0 ≤ γ ≤ 1.
Preferably, in step m) X has a value of 10.
Further, the method also comprises the following steps:
n-1) Calculate the loss function by the formula L_gv = Σ_{(g,i,j)∈Γ} −ln σ(ŷ_{gi} − ŷ_{gj}) + λ‖Θ‖², where Γ = {(g, i, j) | (g, i) ∈ Γ⁺, (g, j) ∈ Γ⁻}, Γ⁺ represents the set of positive samples in the group-item training set, Γ⁻ represents the set of negative samples, σ is the nonlinear sigmoid function, Θ represents all trainable parameters in the user model and the group model, and λ is the coefficient of the L2 regularization term used to reduce overfitting of the model; λ takes the value 0.0001;
n-2) Train the model of step l) with the loss function, optimizing parameters with the RMSProp optimizer during training and optimizing the loss function with a mini-batch method.
The beneficial effects of the invention are as follows: the model studies the decision processes of individual users and group users and establishes a scientific and effective model. The group-commodity and user-commodity interaction behaviors are convolved with two lightweight graph convolution networks respectively. On the one hand, a high-order representation of user-commodity interactions is learned, and dynamic preference fusion under member influence is then performed through an attention mechanism. On the other hand, a high-order representation of group-commodity interactions is learned to represent commodity recommendation under item influence. Meanwhile, the influence of the group topic on the recommendation result is considered: a topic factor is set, and the user of the recommendation system can manually adjust its size according to the relevance of the actual group topic to achieve a more accurate recommendation result. Finally, parameters are optimized through gradient descent and a mini-batch method during training to realize the final group recommendation prediction. Compared with other group recommendation models, the recommendation accuracy of this model is higher, and the model has very strong practicability.
Drawings
FIG. 1 is a flow chart of the method of the present invention. Dotted lines divide the figure, from top to bottom, into the fusion prediction layer, the graph convolution layer and the embedding layer.
Detailed Description
The invention is further described with reference to fig. 1.
A group recommendation method integrating a lightweight graph convolution network and an attention mechanism specifically comprises the following steps:
a) Acquire the interaction records of N users and K commodities, denoting u_n as the n-th user node, n ∈ {1, 2, …, N}, and v_k as the k-th commodity node, k ∈ {1, 2, …, K}, and construct a user-commodity bipartite graph from these interaction records.
b) Acquire the interaction records of M groups and K commodities, denoting g_m as the m-th group node, m ∈ {1, 2, …, M}, and v_k as the k-th commodity node, k ∈ {1, 2, …, K}, and construct a group-commodity bipartite graph from these interaction records.
c) Acquire the user member tables of the M groups, denoting g_{m-n} as the n-th user of the m-th group, m ∈ {1, 2, …, M}, n ∈ {1, 2, …, N}, and construct a group-user bipartite graph from these membership records.
d) Input the user-commodity bipartite graph into the lightweight graph convolution network (LightGCN) model to capture the higher-order relations between users and commodities, obtaining, through an L_1-layer graph convolution operation, the user node embedding e_u^(k) and commodity node embedding e_v^(k) of each layer.
e) Input the group-commodity bipartite graph into the lightweight graph convolution network (LightGCN) model to capture the higher-order relations between groups and commodities, obtaining, through an L_2-layer graph convolution operation, the group node embedding e_g^(k) and commodity node embedding e_v^(k) of each layer.
f) Through the formulas e_u = (1/(L_1+1)) Σ_{k=0}^{L_1} e_u^(k) and e_{v(u)} = (1/(L_1+1)) Σ_{k=0}^{L_1} e_v^(k), average-pool all convolution layers to obtain the node representations e_u and e_{v(u)} of users and commodities under member influence.
g) Through the formulas e_{g1} = (1/(L_2+1)) Σ_{k=0}^{L_2} e_g^(k) and e_{v(g)} = (1/(L_2+1)) Σ_{k=0}^{L_2} e_v^(k), average all convolution layers to obtain the node representations e_{g1} and e_{v(g)} of groups and commodities under item influence.
h) Learn the attention weight b_{uv} of each user in the group based on the group-commodity bipartite graph and the group-user bipartite graph. Specifically, for each record g_m–v_k in the group-commodity bipartite graph, obtain all members u_g ∈ g_m of g_m from the group-user bipartite graph, and obtain the attention weight b_{uv} of the record using a multi-layer perceptron consisting of two neural network layers.
i) Calculate the normalized attention coefficient β_{uv}.
j) Through the formula e_{g2} = Σ_{u∈g} β_{uv} · e_u, sum the weighted preferences of all members in the group to obtain the node representation e_{g2} of the group under member influence.
k) Through the formula e_g = γ·e_{g1} + (1−γ)·e_{g2}, calculate the final node representation of the group, where γ represents the topic factor of the group. The closer γ is to 0, the less the group decision is affected by the group activity topic and the more it depends on the decisions of the members in the group; the closer γ is to 1, the higher the influence of the group activity topic on the group decision and the smaller the influence of the members' decisions.
l) Through the formula ŷ_{gv} = e_g^⊤ e_v, obtain the preference score ŷ_{gv} of group g for item v, completing the establishment of the model.
m) Sort the predicted scores ŷ_{gv} from high to low and select the top X commodities to recommend to the group.
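Steps k) through m) above (topic-factor fusion, scoring, and top-X selection) can be sketched in a few lines. This is a minimal NumPy illustration; the inner-product score as well as all names and toy values are assumptions rather than the patent's exact implementation:

```python
import numpy as np

def recommend(e_g1, e_g2, item_emb, gamma=0.5, top_x=10):
    """Fuse the item-influenced and member-influenced group embeddings
    with topic factor gamma, score every item by inner product, and
    return the indices of the top_x highest-scoring items."""
    e_g = gamma * e_g1 + (1.0 - gamma) * e_g2  # step k): e_g = g*e_g1 + (1-g)*e_g2
    scores = item_emb @ e_g                     # step l): y_hat_gv = e_g . e_v
    return np.argsort(-scores)[:top_x]          # step m): sort high to low

# toy example with 4 items in a 2-d embedding space
e_g1 = np.array([1.0, 0.0])
e_g2 = np.array([0.0, 1.0])
items = np.array([[1.0, 1.0], [2.0, 0.5], [0.0, 3.0], [-1.0, -1.0]])
top = recommend(e_g1, e_g2, items, gamma=0.5, top_x=2)
```

Setting gamma to 1 here would rank purely by the item-influenced embedding e_g1, while gamma = 0 ranks purely by the member-fused embedding e_g2.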
Example 1:
step d) comprises the steps of:
d-1) The lightweight graph convolution network LightGCN calculates the embedded representation of the (k+1)-th layer of the user by the formula e_u^(k+1) = Σ_{v∈N_u} (1 / (√|N_u|·√|N_v|)) · e_v^(k), where k ∈ [0, L_1] indicates the convolution layer index, N_u = {v | n_uv = 1} is the set of all items v that user u interacted with, and N_v = {u | n_uv = 1} is the set of all users u that interacted with commodity v.
d-2) The lightweight graph convolution network LightGCN calculates the embedded representation of the (k+1)-th layer of the commodity by the formula e_v^(k+1) = Σ_{u∈N_v} (1 / (√|N_v|·√|N_u|)) · e_u^(k), with k, N_u and N_v defined as in d-1).
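The layer-wise LightGCN propagation of Example 1, followed by the layer averaging of steps f) and g), can be sketched as a dense NumPy computation. The interaction matrix, the toy embeddings, and the assumption that every node has at least one interaction are illustrative, not from the patent:

```python
import numpy as np

def lightgcn_layer(R, e_user, e_item):
    """One LightGCN propagation layer on a user-item bipartite graph.
    R[u, v] = 1 if user u interacted with item v (n_uv = 1); every node
    is assumed to have at least one interaction so degrees are nonzero."""
    deg_u = R.sum(axis=1)                       # |N_u|, interactions per user
    deg_v = R.sum(axis=0)                       # |N_v|, interactions per item
    norm = R / np.sqrt(np.outer(deg_u, deg_v))  # 1 / (sqrt|N_u| * sqrt|N_v|)
    # aggregate neighbors: users from items, items from users
    return norm @ e_item, norm.T @ e_user

# toy graph: user 0 likes both items, user 1 likes only item 1
R = np.array([[1.0, 1.0],
              [0.0, 1.0]])
eu = np.array([[1.0, 0.0], [0.0, 1.0]])   # layer-0 user embeddings
ev = np.array([[1.0, 1.0], [2.0, 0.0]])   # layer-0 item embeddings
user_layers, item_layers = [eu], [ev]
for _ in range(3):                         # L1 = 3 convolution layers
    eu, ev = lightgcn_layer(R, user_layers[-1], item_layers[-1])
    user_layers.append(eu)
    item_layers.append(ev)
# steps f)/g): average-pool the L1 + 1 layer embeddings
e_u = np.mean(user_layers, axis=0)
```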
Example 2:
L_1 in step d) takes the value 3.
Example 3:
step e) comprises the steps of:
e-1) The lightweight graph convolution network LightGCN calculates the embedded representation of the (k+1)-th layer of the group by the formula e_g^(k+1) = Σ_{v∈M_g} (1 / (√|M_g|·√|M_v|)) · e_v^(k), where k ∈ [0, L_2] indicates the convolution layer index, M_v = {g | n_gv = 1} is the set of all groups g that interacted with item v, and M_g = {v | n_gv = 1} is the set of all items v that group g interacted with.
e-2) The lightweight graph convolution network LightGCN calculates the embedded representation of the (k+1)-th layer of the commodity by the formula e_v^(k+1) = Σ_{g∈M_v} (1 / (√|M_v|·√|M_g|)) · e_g^(k), with k, M_v and M_g defined as in e-1).
Example 4:
L_2 in step e) takes the value 3.
Example 5:
In step h), the attention weight b_{uv} is calculated by the formula b_{uv} = W_2 · ReLU(W_1 · (e_u ‖ e_{v_j}) + c_1) + c_2, where e_u is the user embedding of member u ∈ g after the user-item graph convolution, e_{v_j} is the item embedding of candidate item v_j after the user-item graph convolution, W_1 and c_1 are respectively the weight matrix and bias vector of the first layer of the attention neural network, and W_2 and c_2 are the weight matrix and bias vector of the second layer, which maps the hidden-layer information to the attention weight b_{uv}. The sign ‖ represents the concatenation of two vectors.
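A minimal sketch of the two-layer attention perceptron of Example 5. The ReLU activation, the random weights, and the dimensions are illustrative assumptions (the patent does not specify them):

```python
import numpy as np

def attention_weight(e_u, e_v, W1, c1, W2, c2):
    """Two-layer perceptron mapping the concatenated user and candidate-item
    embeddings to a scalar attention weight b_uv."""
    x = np.concatenate([e_u, e_v])           # e_u || e_v_j
    hidden = np.maximum(0.0, W1 @ x + c1)    # first layer, ReLU assumed
    return (W2 @ hidden + c2).item()         # second layer -> scalar b_uv

rng = np.random.default_rng(0)
d, h = 4, 8                                  # embedding / hidden sizes (assumed)
W1, c1 = rng.normal(size=(h, 2 * d)), np.zeros(h)
W2, c2 = rng.normal(size=(1, h)), np.zeros(1)
b_uv = attention_weight(rng.normal(size=d), rng.normal(size=d), W1, c1, W2, c2)
```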
Example 6:
In step i), the normalized coefficient β_{uv} is calculated by the formula β_{uv} = softmax(b_{uv}) = exp(b_{uv}) / Σ_{u′∈g} exp(b_{u′v}), where softmax is the softmax activation function and b_{uv} is the attention weight obtained in step h).
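Steps i) and j) — softmax normalization over a group's members followed by the weighted sum — can be sketched as follows. This is a minimal NumPy illustration; the max-subtraction for numerical stability is an added implementation detail, not from the patent:

```python
import numpy as np

def member_fusion(b, member_emb):
    """Normalize the raw attention weights b over the group's members
    (step i) and form the member-influenced group embedding e_g2 (step j)."""
    beta = np.exp(b - b.max())      # subtract max for numerical stability
    beta = beta / beta.sum()        # softmax: beta_uv
    e_g2 = beta @ member_emb        # sum over members of beta_uv * e_u
    return beta, e_g2

b = np.array([0.0, np.log(3.0)])                 # raw weights for two members
members = np.array([[1.0, 0.0], [0.0, 1.0]])     # member embeddings e_u
beta, e_g2 = member_fusion(b, members)
```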
Example 7:
In step k), 0 ≤ γ ≤ 1.
Example 8:
in step m), X takes a value of 10.
Example 9:
the method also comprises the following steps:
n-1) Calculate the loss function by the formula L_gv = Σ_{(g,i,j)∈Γ} −ln σ(ŷ_{gi} − ŷ_{gj}) + λ‖Θ‖², where Γ = {(g, i, j) | (g, i) ∈ Γ⁺, (g, j) ∈ Γ⁻}, Γ⁺ represents the set of positive samples in the group-item training set, Γ⁻ represents the set of negative samples, σ is the nonlinear sigmoid function, Θ represents all trainable parameters in the user model and the group model, and λ is the coefficient of the L2 regularization term used to reduce overfitting of the model; λ takes the value 0.0001.
n-2) Train the model of step l) with the loss function, optimizing parameters with the RMSProp optimizer during training and optimizing the loss function with a mini-batch method.
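The loss of n-1) is the standard BPR pairwise objective. A minimal NumPy sketch of one loss evaluation; treating Θ as a single flat parameter vector is an illustrative assumption:

```python
import numpy as np

def bpr_loss(pos_scores, neg_scores, params, lam=0.0001):
    """BPR loss: -ln sigmoid(y_gi - y_gj) summed over (g, i, j) triples,
    plus L2 regularization lam * ||Theta||^2."""
    diff = pos_scores - neg_scores
    sigmoid = 1.0 / (1.0 + np.exp(-diff))
    return -np.log(sigmoid).sum() + lam * np.sum(params ** 2)

pos = np.array([2.0, 1.0])   # predicted scores y_hat for positive items i
neg = np.array([0.0, 1.0])   # predicted scores y_hat for sampled negatives j
loss = bpr_loss(pos, neg, params=np.zeros(3))
```

In training, this scalar would be minimized over mini-batches of sampled (g, i, j) triples with an optimizer such as RMSProp, as n-2) describes.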
Finally, it should be noted that the foregoing description is only a preferred embodiment of the present invention and the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or substitute equivalents for some of the technical features. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A group recommendation method fusing a lightweight graph convolution network and an attention mechanism, comprising the following steps:
a) Acquire the interaction records of N users and K commodities, denoting u_n as the n-th user node, n ∈ {1, 2, …, N}, and v_k as the k-th commodity node, k ∈ {1, 2, …, K}, and construct a user-commodity bipartite graph from these interaction records;
b) Acquire the interaction records of M groups and K commodities, denoting g_m as the m-th group node, m ∈ {1, 2, …, M}, and v_k as the k-th commodity node, k ∈ {1, 2, …, K}, and construct a group-commodity bipartite graph from these interaction records;
c) Acquire the user member tables of the M groups, denoting g_{m-n} as the n-th user of the m-th group, m ∈ {1, 2, …, M}, n ∈ {1, 2, …, N}, and construct a group-user bipartite graph from these membership records;
d) Input the user-commodity bipartite graph into the lightweight graph convolution network (LightGCN) model to capture the higher-order relations between users and commodities, obtaining, through an L_1-layer graph convolution operation, the user node embedding e_u^(k) and commodity node embedding e_v^(k) of each layer;
e) Input the group-commodity bipartite graph into the lightweight graph convolution network (LightGCN) model to capture the higher-order relations between groups and commodities, obtaining, through an L_2-layer graph convolution operation, the group node embedding e_g^(k) and commodity node embedding e_v^(k) of each layer;
f) Through the formulas e_u = (1/(L_1+1)) Σ_{k=0}^{L_1} e_u^(k) and e_{v(u)} = (1/(L_1+1)) Σ_{k=0}^{L_1} e_v^(k), average-pool all convolution layers to obtain the node representations e_u and e_{v(u)} of users and commodities under member influence;
g) Through the formulas e_{g1} = (1/(L_2+1)) Σ_{k=0}^{L_2} e_g^(k) and e_{v(g)} = (1/(L_2+1)) Σ_{k=0}^{L_2} e_v^(k), average all convolution layers to obtain the node representations e_{g1} and e_{v(g)} of groups and commodities under item influence;
h) Learn the attention weight b_{uv} of each user in the group based on the group-commodity bipartite graph and the group-user bipartite graph, wherein, for each record g_m–v_k in the group-commodity bipartite graph, all members u_g ∈ g_m of g_m are obtained from the group-user bipartite graph, and the attention weight b_{uv} of the record is obtained using a multi-layer perceptron consisting of two neural network layers;
i) Calculate the normalized attention coefficient β_{uv};
j) Through the formula e_{g2} = Σ_{u∈g} β_{uv} · e_u, sum the weighted preferences of all members in the group to obtain the node representation e_{g2} of the group under member influence;
k) Through the formula e_g = γ·e_{g1} + (1−γ)·e_{g2}, calculate the final node representation of the group, where γ represents the topic factor of the group; the closer γ is to 0, the less the group decision is affected by the group activity topic and the more it depends on the decisions of the members in the group; the closer γ is to 1, the higher the influence of the group activity topic and the smaller the influence of the members' decisions;
l) Through the formula ŷ_{gv} = e_g^⊤ e_v, obtain the preference score ŷ_{gv} of group g for item v, completing the establishment of the model;
m) Sort the predicted scores ŷ_{gv} from high to low and select the top X commodities to recommend to the group.
2. The method of claim 1, wherein step d) comprises the steps of:
d-1) The lightweight graph convolution network LightGCN calculates the embedded representation of the (k+1)-th layer of the user by the formula e_u^(k+1) = Σ_{v∈N_u} (1 / (√|N_u|·√|N_v|)) · e_v^(k), where k ∈ [0, L_1] indicates the convolution layer index, N_u = {v | n_uv = 1} is the set of all items v that user u interacted with, and N_v = {u | n_uv = 1} is the set of all users u that interacted with commodity v.
d-2) The lightweight graph convolution network LightGCN calculates the embedded representation of the (k+1)-th layer of the commodity by the formula e_v^(k+1) = Σ_{u∈N_v} (1 / (√|N_v|·√|N_u|)) · e_u^(k), with k, N_u and N_v defined as in d-1).
3. The method of group recommendation fusing a lightweight graph convolution network and attention mechanisms as recited in claim 1, wherein L_1 in step d) takes the value 3.
4. The method of claim 1, wherein step e) comprises the steps of:
e-1) The lightweight graph convolution network LightGCN calculates the embedded representation of the (k+1)-th layer of the group by the formula e_g^(k+1) = Σ_{v∈M_g} (1 / (√|M_g|·√|M_v|)) · e_v^(k), where k ∈ [0, L_2] indicates the convolution layer index, M_v = {g | n_gv = 1} is the set of all groups g that interacted with item v, and M_g = {v | n_gv = 1} is the set of all items v that group g interacted with.
e-2) The lightweight graph convolution network LightGCN calculates the embedded representation of the (k+1)-th layer of the commodity by the formula e_v^(k+1) = Σ_{g∈M_v} (1 / (√|M_v|·√|M_g|)) · e_g^(k), with k, M_v and M_g defined as in e-1).
5. The method of claim 1, wherein L_2 in step e) takes the value 3.
6. The method of claim 1, wherein in step h) the attention weight b_{uv} is calculated by the formula b_{uv} = W_2 · ReLU(W_1 · (e_u ‖ e_{v_j}) + c_1) + c_2, where e_u is the user embedding of member u ∈ g after the user-item graph convolution, e_{v_j} is the item embedding of candidate item v_j after the user-item graph convolution, W_1 and c_1 are respectively the weight matrix and bias vector of the first layer of the attention neural network, and W_2 and c_2 are the weight matrix and bias vector of the second layer, which maps the hidden-layer information to the attention weight b_{uv}; the sign ‖ represents the concatenation of two vectors.
7. The method of claim 1, wherein in step i) the normalized coefficient β_{uv} is calculated by the formula β_{uv} = softmax(b_{uv}) = exp(b_{uv}) / Σ_{u′∈g} exp(b_{u′v}), where softmax is the softmax activation function and b_{uv} is the attention weight obtained in step h).
8. The group recommendation method fusing a lightweight graph convolution network and an attention mechanism of claim 1, wherein in step k), 0 ≤ γ ≤ 1.
9. The method of claim 1, wherein X in step m) is 10.
10. The method of group recommendation fusing a lightweight graph convolution network and attention mechanisms of claim 1, further comprising the following steps:
n-1) Calculate the loss function by the formula L_gv = Σ_{(g,i,j)∈Γ} −ln σ(ŷ_{gi} − ŷ_{gj}) + λ‖Θ‖², where Γ = {(g, i, j) | (g, i) ∈ Γ⁺, (g, j) ∈ Γ⁻}, Γ⁺ represents the set of positive samples in the group-item training set, Γ⁻ represents the set of negative samples, σ is the nonlinear sigmoid function, Θ represents all trainable parameters in the user model and the group model, and λ is the coefficient of the L2 regularization term used to reduce overfitting of the model; λ takes the value 0.0001;
n-2) Train the model of step l) with the loss function, optimizing parameters with the RMSProp optimizer during training and optimizing the loss function with a mini-batch method.
CN202310522930.2A 2023-05-10 2023-05-10 Group recommendation method integrating lightweight graph convolution network and attention mechanism Pending CN116797312A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310522930.2A CN116797312A (en) 2023-05-10 2023-05-10 Group recommendation method integrating lightweight graph convolution network and attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310522930.2A CN116797312A (en) 2023-05-10 2023-05-10 Group recommendation method integrating lightweight graph convolution network and attention mechanism

Publications (1)

Publication Number Publication Date
CN116797312A true CN116797312A (en) 2023-09-22

Family

ID=88046027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310522930.2A Pending CN116797312A (en) 2023-05-10 2023-05-10 Group recommendation method integrating lightweight graph convolution network and attention mechanism

Country Status (1)

Country Link
CN (1) CN116797312A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117112914A (en) * 2023-10-23 2023-11-24 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Group recommendation method based on graph convolution
CN117112914B (en) * 2023-10-23 2024-02-09 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Group recommendation method based on graph convolution

Similar Documents

Publication Publication Date Title
CN110503531B (en) Dynamic social scene recommendation method based on time sequence perception
CN110717098B (en) Meta-path-based context-aware user modeling method and sequence recommendation method
CN110362738B (en) Deep learning-based individual recommendation method combining trust and influence
CN109785062B (en) Hybrid neural network recommendation system based on collaborative filtering model
CN108921604B (en) Advertisement click rate prediction method based on cost-sensitive classifier integration
CN109783738B (en) Multi-similarity-based hybrid collaborative filtering recommendation method for double-pole-limit learning machine
CN112115377B (en) Graph neural network link prediction recommendation method based on social relationship
CN110084670B (en) Shelf commodity combination recommendation method based on LDA-MLP
CN109062962B (en) Weather information fused gated cyclic neural network interest point recommendation method
CN111881342A (en) Recommendation method based on graph twin network
CN112287166B (en) Movie recommendation method and system based on improved deep belief network
CN112800344B (en) Deep neural network-based movie recommendation method
CN116797312A (en) Group recommendation method integrating lightweight graph convolution network and attention mechanism
CN114491263A (en) Recommendation model training method and device, and recommendation method and device
CN115357805A (en) Group recommendation method based on internal and external visual angles
CN114298783A (en) Commodity recommendation method and system based on matrix decomposition and fusion of user social information
Nam et al. Predicting airline passenger volume
CN115829683A (en) Power integration commodity recommendation method and system based on inverse reward learning optimization
CN109800424A (en) It is a kind of based on improving matrix decomposition and the recommended method across channel convolutional neural networks
CN112559905B (en) Conversation recommendation method based on dual-mode attention mechanism and social similarity
CN113850317A (en) Multi-type neighbor aggregation graph convolution recommendation method and system
CN112784177A (en) Spatial distance adaptive next interest point recommendation method
George et al. The Influence of Activation Functions in Deep Learning Models Using Transfer Learning for Facial Age Prediction
Wang et al. A Career Recommendation Method for College Students Based on Occupational Values
CN112989202B (en) Personalized recommendation method and system based on dynamic network embedding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination