CN117851687B - Multi-behavior graph contrast learning recommendation method and system integrating self-attention mechanism - Google Patents


Info

Publication number: CN117851687B (application CN202410242618.2A)
Authority: CN (China)
Prior art keywords: behavior, embedding, representation, user
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN117851687A
Inventors: 钱忠胜, 黄恒, 饶雨贤, 何玉水
Current assignee: Jiangxi University of Finance and Economics
Original assignee: Jiangxi University of Finance and Economics
Application filed by Jiangxi University of Finance and Economics
Priority/filing date: 2024-03-04 (CN202410242618.2A)
Publication of CN117851687A: 2024-04-09; grant publication of CN117851687B: 2024-05-14


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/042Knowledge-based neural networks; Logical representations of neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/098Distributed learning, e.g. federated learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a multi-behavior graph contrast learning recommendation method and system integrating a self-attention mechanism. The method combines behavior embeddings into node embeddings and captures the dependency between different nodes and each behavior through a self-attention mechanism, so as to obtain higher-quality embedded representations; it takes the behaviors of the same user as positive example pairs and the behaviors of different users as negative example pairs to establish multi-behavior graph contrast learning; it performs complementary-embedding graph contrast learning on the positive-interest and negative-interest node embedding representations; finally, it takes the recommendation task as the main task, computes its loss with a non-sampling strategy, takes multi-behavior graph contrast learning and complementary-embedding graph contrast learning as auxiliary tasks, jointly optimizes the total loss function, and outputs the final recommendation result after training is completed. By modeling the relationship between the auxiliary behaviors and the target behavior, the invention better captures the behavioral differences and commonalities of different users, and combining them with the recommendation task can greatly improve model performance.

Description

Multi-behavior graph contrast learning recommendation method and system integrating self-attention mechanism
Technical Field
The invention relates to the technical field of recommendation, in particular to a multi-behavior graph contrast learning recommendation method and system integrating a self-attention mechanism.
Background
A recommendation system uses a user's historical behavior information to provide personalized recommendations. It is widely used in various online platforms, such as e-commerce, social media, and music and video streaming services. The core goal of a recommendation system is to alleviate the information-overload problem and help users quickly and accurately find the content or goods they are interested in.
In practical application scenarios (e.g., e-commerce), there are usually multiple types of behavioral data describing user-item interactions. Multi-behavior recommendation has therefore become a popular direction in the recommendation field: supplementing the target behavior (purchase) with auxiliary behaviors (e.g., clicking, adding to the shopping cart) enriches the modeling of user preferences and enables finer-grained recommendations.
Traditional multi-behavior recommendation models rely on matrix factorization to capture latent user preferences across multiple behaviors. However, matrix factorization has a limited ability to exploit high-order information, which restricts the recommendation quality. In recent years, because graph neural networks capture high-order neighborhood information effectively, researchers have begun to apply them widely to multi-behavior recommendation. Nevertheless, current multi-behavior recommendation methods based on graph neural networks have the following shortcomings:
1. The behavioral correlations between different nodes differ, and the various behaviors of users are interwoven in the system, so the recommendation system faces complex and diverse challenges; it is difficult to effectively capture the dependency between different nodes (i.e., users/items) and each behavior.
2. Data sets are usually sparse, and in multi-behavior recommendation the number of target-behavior interactions is often far smaller than the number of auxiliary-behavior interactions, so the target behavior needs to be modeled with the complementary help of the auxiliary behaviors. However, most current methods fail to fully exploit the relationship between the auxiliary behaviors and the target behavior, so the sparsity problem of the target behavior is not effectively alleviated.
3. In the real world, users have complex interests that differ from one another: although an item typically interacts with many users, it does not appeal to every user equally, and the same interaction does not imply that those users share the same interests. In particular, it is difficult to uncover the explicit interest points hidden in the interaction records simply by learning over the interactions.
Disclosure of Invention
In view of the above, the present invention is directed to a multi-behavior graph contrast learning recommendation method and system that incorporate a self-attention mechanism, so as to solve the above technical problems.
The invention provides a multi-behavior graph contrast learning recommendation method integrating a self-attention mechanism, which comprises the following steps:
Step 1, based on a multi-behavior interaction graph, perform graph convolution by combining node embeddings with behavior embeddings, and introduce a self-attention mechanism to model the connection between different nodes and each behavior so as to enhance the node embedding representations; after embedding propagation, obtain the final representations of each behavior, each user and each item;
Step 2, according to the final representations, construct positive example pairs from the target behavior and the auxiliary behaviors of the same user, construct negative example pairs from the target behavior and the auxiliary behaviors of different users, and establish multi-behavior graph contrast learning with the positive and negative example pairs so as to model the relation between the target behavior and the auxiliary behaviors;
Step 3, based on the multi-behavior interaction graph, obtain negative-interest node embedding representations by complementary embedding, take the final representations as positive-interest node embedding representations, and perform complementary-embedding graph contrast learning on the positive-interest and negative-interest node embedding representations, so as to alleviate the implicit bias in interactions and optimize the node embedding representations;
Step 4, perform recommendation prediction on the final representations and train with a multi-task learning method: take the recommendation task as the main task, compute its loss with a non-sampling strategy, take multi-behavior graph contrast learning and complementary-embedding graph contrast learning as auxiliary tasks, jointly optimize the total loss function, and output the final recommendation result after training is completed.
The invention combines behavior embeddings into node embeddings and captures the dependency between different nodes and each behavior through a self-attention mechanism, so as to obtain higher-quality embedded representations. In addition, graph contrast learning is adopted to model the relation between the target behavior and the auxiliary behaviors, which better alleviates the sparsity problem of the target behavior. The positive-interest embeddings are contrasted with the negative-interest embeddings obtained through complementary embedding to alleviate the implicit bias in interactions.
The invention also provides a multi-behavior graph contrast learning recommendation system integrating a self-attention mechanism, wherein the system applies the above multi-behavior graph contrast learning recommendation method integrating a self-attention mechanism, and the system comprises:
The multi-behavior-aware graph convolution network module, used for:
based on the multi-behavior interaction graph, performing graph convolution by combining node embeddings with behavior embeddings, and introducing a self-attention mechanism to model the connection between different nodes and each behavior so as to enhance the node embedding representations, obtaining the final representations of each behavior, each user and each item after embedding propagation;
The multi-behavior graph contrast learning module, used for:
according to the final representations, constructing positive example pairs from the target behavior and the auxiliary behaviors of the same user, constructing negative example pairs from the target behavior and the auxiliary behaviors of different users, and establishing multi-behavior graph contrast learning with the positive and negative example pairs so as to model the relation between the target behavior and the auxiliary behaviors;
The complementary-embedding graph contrast learning module, used for:
based on the multi-behavior interaction graph, obtaining negative-interest node embedding representations by complementary embedding, taking the final representations as positive-interest node embedding representations, and performing complementary-embedding graph contrast learning on the positive-interest and negative-interest node embedding representations so as to alleviate the implicit bias in interactions and optimize the node embedding representations;
The model prediction and recommendation module, used for:
performing recommendation prediction on the final representations and training with a multi-task learning method: taking the recommendation task as the main task, computing its loss with a non-sampling strategy, taking multi-behavior graph contrast learning and complementary-embedding graph contrast learning as auxiliary tasks, jointly optimizing the total loss function, and outputting the final recommendation result after training is completed.
Compared with the prior art, the invention has the following beneficial effects:
1) The invention incorporates a self-attention mechanism. Behavior embeddings are combined into node embeddings to perform high-order graph convolution and capture the high-order relations between nodes and behaviors, while the self-attention mechanism further improves the embedded representations of the model so as to model the relation between different nodes and behaviors.
2) The invention adopts multi-behavior graph contrast learning. The relation between the target behavior and the auxiliary behaviors is modeled by constructing the target behavior and the auxiliary behaviors of the same user as positive example pairs and those of different users as negative example pairs, which better alleviates the sparsity problem of the target behavior.
3) The invention constructs complementary-embedding graph contrast learning. The positive-interest embeddings are contrasted with the negative-interest embeddings obtained through complementary embedding to optimize the node embedding representations, thereby reducing the implicit bias in interactions.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a flow chart of the multi-behavior graph contrast learning recommendation method integrating a self-attention mechanism proposed by the present invention;
FIG. 2 is the general framework of the multi-behavior graph contrast learning recommendation system integrating a self-attention mechanism proposed by the present invention;
Fig. 3 is a schematic structural diagram of the multi-behavior graph contrast learning recommendation system integrating a self-attention mechanism.
In the drawings, the marked symbols denote, respectively, a user, an item, the non-sampling strategy, the embedding combination operation, and a linear weighting.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
These and other aspects of embodiments of the invention will be apparent from and elucidated with reference to the description and drawings described hereinafter. In the description and drawings, particular implementations of embodiments of the invention are disclosed in detail as being indicative of some of the ways in which the principles of embodiments of the invention may be employed, but it is understood that the scope of the embodiments of the invention is not limited correspondingly.
Referring to fig. 1 and 2, the present embodiment provides a multi-behavior graph contrast learning recommendation method integrating a self-attention mechanism, which includes the following steps:
Step 1, based on a multi-behavior interaction graph, perform graph convolution by combining node embeddings with behavior embeddings, and introduce a self-attention mechanism to model the connection between different nodes and each behavior so as to enhance the node embedding representations; after embedding propagation, obtain the final representations of each behavior, each user and each item;
The specific scheme of the step is as follows:
In the user-item bipartite graph, nodes represent users or items and edges represent the different types of user-item interaction behavior; the nodes and edges are projected into an embedding space and initialized;
Embedding propagation is performed on the node embeddings of items and users under the same behavior; while the node embeddings are propagated, the behavior embedding is projected into the same embedding space as the nodes and is propagated along with them, so as to obtain the node embeddings of items and users under different behaviors and the embedded representations of the different behaviors at different layers. The embedding propagation of user and item embeddings follows the relations:

$$e_{u,k}^{(l+1)} = \mathrm{LeakyReLU}\Big(\sum_{i \in N_u^k} \frac{1}{\sqrt{|N_u^k|\,|N_i^k|}}\, W^{(l)}\, \phi\big(e_{i,k}^{(l)},\, e_k^{(l)}\big)\Big), \qquad e_{i,k}^{(l+1)} = \mathrm{LeakyReLU}\Big(\sum_{u \in N_i^k} \frac{1}{\sqrt{|N_u^k|\,|N_i^k|}}\, W^{(l)}\, \phi\big(e_{u,k}^{(l)},\, e_k^{(l)}\big)\Big)$$

where $e_{u,k}^{(l+1)}$ and $e_{i,k}^{(l+1)}$ denote the $(l+1)$-th layer embeddings of user $u$ and item $i$ under the specific behavior $k$; $e_{u,k}^{(l)}$, $e_{i,k}^{(l)}$ and $e_k^{(l)}$ denote the $l$-th layer embeddings of user $u$, item $i$ and behavior $k$; $W^{(l)}$ is the layer-specific parameter matrix; $\mathrm{LeakyReLU}$ is the activation function; $1/\sqrt{|N_u^k||N_i^k|}$ is the symmetric normalization term; $N_u^k$ is the set of items user $u$ has interacted with and $N_i^k$ the set of users who have interacted with item $i$ under behavior $k$; and $\phi(\cdot,\cdot)$ is the combination function that integrates the behavior embedding into message passing. Integrating the behavior into message passing and aggregating the neighbors of each specific relation separately takes the various behavior types into account and better mines the latent preferences of users. The combination function is defined as:

$$\phi\big(e_{i,k}^{(l)},\, e_k^{(l)}\big) = e_{i,k}^{(l)} \odot e_k^{(l)}$$

where $\odot$ denotes element-wise multiplication.
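For illustration only, the following is a minimal PyTorch sketch of one propagation layer for a single behavior, assuming a dense, symmetrically normalized user-item matrix per behavior; the function and variable names are hypothetical, and a practical implementation would typically use sparse matrices.

```python
import torch
import torch.nn.functional as F

def propagate_one_layer(adj_norm_k, user_emb_k, item_emb_k, behavior_emb_k, weight):
    """One behavior-aware propagation layer for behavior k (illustrative sketch).

    adj_norm_k     : (n_users, n_items) normalized interaction matrix for behavior k
    user_emb_k     : (n_users, d) layer-l user embeddings under behavior k
    item_emb_k     : (n_items, d) layer-l item embeddings under behavior k
    behavior_emb_k : (d,) layer-l embedding of behavior k
    weight         : (d, d) layer-specific transform W^(l)
    """
    # phi(e, e_k): fuse the behavior embedding into each message by element-wise product
    item_msg = item_emb_k * behavior_emb_k                       # (n_items, d)
    user_msg = user_emb_k * behavior_emb_k                       # (n_users, d)
    # aggregate normalized neighbors, apply the layer transform, then LeakyReLU
    new_user = F.leaky_relu(adj_norm_k @ item_msg @ weight)      # (n_users, d)
    new_item = F.leaky_relu(adj_norm_k.t() @ user_msg @ weight)  # (n_items, d)
    return new_user, new_item
```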
The embedding propagation of the behavior embedding follows the relation:

$$e_k^{(l+1)} = W_k^{(l)}\, e_k^{(l)}$$

where $W_k^{(l)}$ denotes the $l$-th layer parameter matrix corresponding to behavior $k$;
According to the node embeddings of items and users under different behaviors, a self-attention mechanism is used to model the dependency between each node and each behavior at different layers, yielding the corresponding weight coefficients:

$$\alpha_{u,k}^{(l)} = h_k^{\top}\,\tanh\!\big(W_k\, \tilde{e}_u^{(l)}\big), \qquad \alpha_{i,k}^{(l)} = h_k^{\top}\,\tanh\!\big(W_k\, \tilde{e}_i^{(l)}\big)$$

where $W_k$ and $h_k$ are behavior-specific parameters, $W_k \in \mathbb{R}^{d' \times |R|d}$ and $h_k \in \mathbb{R}^{d'}$, with $d$ the embedding dimension and $d'$ the attention output dimension; $\alpha_{u,k}^{(l)}$ and $\alpha_{i,k}^{(l)}$ are the weight coefficients of user $u$ and item $i$ under behavior $k$; $\tanh$ is the hyperbolic tangent function; and $\tilde{e}_u^{(l)}$ and $\tilde{e}_i^{(l)}$ are the $l$-th layer embeddings of user $u$ and item $i$ under all behaviors, concatenated before enhancement;
The weight coefficients are then used to enhance the node embeddings, yielding the enhanced node embeddings:

$$\hat{e}_{u,k}^{(l)} = \alpha_{u,k}^{(l)}\, e_{u,k}^{(l)}, \qquad \hat{e}_{i,k}^{(l)} = \alpha_{i,k}^{(l)}\, e_{i,k}^{(l)}$$

where $\hat{e}_{u,k}^{(l)}$ and $\hat{e}_{i,k}^{(l)}$ denote the $l$-th layer embeddings of user $u$ and item $i$ under behavior $k$ after enhancement by the self-attention mechanism;
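As an illustration of the enhancement step, the sketch below computes the attention weight of every node for one behavior from its concatenated multi-behavior embedding and scales the behavior-specific embedding accordingly; names and shapes are assumptions, not the patent's notation.

```python
import torch

def attention_enhance(concat_emb, node_emb_k, W_k, h_k):
    """Self-attention enhancement for one behavior k (illustrative sketch).

    concat_emb : (n_nodes, R*d) layer-l embeddings of each node concatenated over all R behaviors
    node_emb_k : (n_nodes, d)   layer-l embeddings of each node under behavior k
    W_k        : (d_att, R*d)   behavior-specific attention matrix
    h_k        : (d_att,)       behavior-specific attention vector
    """
    alpha = torch.tanh(concat_emb @ W_k.t()) @ h_k   # (n_nodes,) weight coefficient per node
    return alpha.unsqueeze(-1) * node_emb_k          # enhanced behavior-specific embedding
```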
The enhanced node embeddings of items and users are combined over the different behaviors to obtain the item and user embedded representations at each layer:

$$e_u^{(l)} = \sum_{k \in R} \hat{e}_{u,k}^{(l)}, \qquad e_i^{(l)} = \sum_{k \in R} \hat{e}_{i,k}^{(l)}$$

where $e_u^{(l)}$ and $e_i^{(l)}$ denote the $l$-th layer embedded representations of user $u$ and item $i$, and $R$ denotes the set of behaviors;
The behavior, item and user embedded representations of the different layers are then combined to obtain the final representations of each behavior, user and item:

$$e_u = \sum_{l=0}^{L} e_u^{(l)}, \qquad e_i = \sum_{l=0}^{L} e_i^{(l)}, \qquad e_k = \sum_{l=0}^{L} e_k^{(l)}$$

where $e_u$, $e_i$ and $e_k$ denote the final representations of user $u$, item $i$ and behavior $k$, and $L$ is the number of propagation layers. After $L$ layers of propagation, the nodes in the graph aggregate information from their higher-order neighbors.
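The two summations above amount to a simple reduction over behaviors and layers, e.g. (a sketch under the assumption that all enhanced layer embeddings are stacked into one tensor):

```python
import torch

def combine_final(enhanced_emb):
    """Combine enhanced embeddings into final node representations (illustrative sketch).

    enhanced_emb : (L + 1, R, n_nodes, d) attention-enhanced embeddings of every node,
                   for every layer l = 0..L and every behavior k = 1..R
    """
    per_layer = enhanced_emb.sum(dim=1)   # sum over behaviors -> (L + 1, n_nodes, d)
    return per_layer.sum(dim=0)           # sum over layers    -> (n_nodes, d)
```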
Step 2, according to the final representations, construct positive example pairs from the target behavior and the auxiliary behaviors of the same user, construct negative example pairs from the target behavior and the auxiliary behaviors of different users, and establish multi-behavior graph contrast learning with the positive and negative example pairs so as to model the relation between the target behavior and the auxiliary behaviors and thereby alleviate the sparsity problem of the target behavior;
The specific scheme of the step is as follows:
On the user side, the target behavior and each type of auxiliary behavior of the same user are regarded as a positive example pair, and the target behavior and the auxiliary behaviors of different users are regarded as negative example pairs, forming the user-side multi-behavior graph contrast learning loss function (for each auxiliary behavior $k$):

$$\mathcal{L}_{MB}^{user} = \sum_{u \in U} -\log \frac{\exp\!\big(s(e_u^{t},\, e_u^{k})/\tau\big)}{\sum_{u' \in U} \exp\!\big(s(e_u^{t},\, e_{u'}^{k})/\tau\big)}$$

where $t$ denotes the target behavior, $k$ an auxiliary behavior, $s(\cdot,\cdot)$ the cosine similarity, $\tau$ the temperature hyper-parameter, $e_u^{t}$ and $e_u^{k}$ the embedded representations of user $u$ under the target behavior $t$ and the auxiliary behavior $k$, $e_{u'}^{k}$ the embedded representation of a different user $u'$ under the auxiliary behavior $k$, and $U$ the set of users;
On the item side, the target behavior and each type of auxiliary behavior of the same item are regarded as a positive example pair, and the target behavior and the auxiliary behaviors of different items are regarded as negative example pairs, forming the item-side multi-behavior graph contrast learning loss function:

$$\mathcal{L}_{MB}^{item} = \sum_{i \in I} -\log \frac{\exp\!\big(s(e_i^{t},\, e_i^{k})/\tau\big)}{\sum_{i' \in I} \exp\!\big(s(e_i^{t},\, e_{i'}^{k})/\tau\big)}$$

where $e_i^{t}$ and $e_i^{k}$ denote the embedded representations of the target item $i$ under the target behavior $t$ and the auxiliary behavior $k$, $e_{i'}^{k}$ the embedded representation of a different item $i'$ under the auxiliary behavior $k$, and $I$ the set of items.
The final multi-behavior graph contrast learning loss function is obtained by combining the user-side and item-side losses:

$$\mathcal{L}_{MB} = \beta\, \mathcal{L}_{MB}^{user} + (1-\beta)\, \mathcal{L}_{MB}^{item}$$

where $\beta$ is a hyper-parameter controlling the relative strength of the two loss functions and $\mathcal{L}_{MB}$ is the final multi-behavior graph contrast learning loss.
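For one auxiliary behavior, the user-side loss above is an InfoNCE objective over the batch; a minimal sketch (function and variable names hypothetical) is:

```python
import torch
import torch.nn.functional as F

def multi_behavior_cl_loss(emb_target, emb_aux, tau=0.2):
    """User-side multi-behavior contrast loss for one auxiliary behavior (illustrative sketch).

    emb_target : (n_users, d) final embeddings under the target behavior
    emb_aux    : (n_users, d) final embeddings under one auxiliary behavior
    Row u of both matrices belongs to the same user, so the diagonal of the similarity
    matrix holds the positive pairs and the off-diagonal entries serve as negatives.
    """
    z_t = F.normalize(emb_target, dim=-1)
    z_a = F.normalize(emb_aux, dim=-1)
    sim = z_t @ z_a.t() / tau                           # temperature-scaled cosine similarities
    pos = torch.diag(sim)                               # same-user (positive) pairs
    return -(pos - torch.logsumexp(sim, dim=-1)).mean()
```

The item-side loss can be computed with the same function applied to the item embedding matrices.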
Step 3, based on the multi-behavior interaction graph, obtain negative-interest node embedding representations by complementary embedding, take the final representations as positive-interest node embedding representations, and perform complementary-embedding graph contrast learning on the positive-interest and negative-interest node embedding representations, so as to alleviate the implicit bias in interactions and optimize the node embedding representations;
The specific scheme of the step is as follows:
The per-dimension maximum and minimum values of the item and user embeddings before normalization are obtained:

$$m_{d}^{\max,U} = \max_{u \in U} e_{u,d}^{(0)}, \quad m_{d}^{\min,U} = \min_{u \in U} e_{u,d}^{(0)}, \qquad m_{d}^{\max,I} = \max_{i \in I} e_{i,d}^{(0)}, \quad m_{d}^{\min,I} = \min_{i \in I} e_{i,d}^{(0)}$$

where $m_{d}^{\max,U}$ and $m_{d}^{\min,U}$ denote the maximum and minimum of the $d$-th dimension of the initial user embeddings $e_u^{(0)}$, $m_{d}^{\max,I}$ and $m_{d}^{\min,I}$ the maximum and minimum of the $d$-th dimension of the initial item embeddings $e_i^{(0)}$, $d \in \{1,\dots,D\}$, and $\max(\cdot)$ and $\min(\cdot)$ the embedding maximum and minimum functions;
The values $m^{\max,U}$, $m^{\min,U}$, $m^{\max,I}$ and $m^{\min,I}$ are extended to the $D$-dimensional space, and the normalized embeddings of user $u$ and item $i$ are obtained:

$$\tilde{e}_u = \frac{e_u^{(0)} - m^{\min,U}}{m^{\max,U} - m^{\min,U}}, \qquad \tilde{e}_i = \frac{e_i^{(0)} - m^{\min,I}}{m^{\max,I} - m^{\min,I}}$$

where $\tilde{e}_u$ and $\tilde{e}_i$ denote the normalized embeddings of user $u$ and item $i$, the division is element-wise, and every dimension of the normalized embeddings lies in the interval [0, 1];
Complementary embeddings representing the negative interests of user $u$ and item $i$ are then obtained from $\tilde{e}_u$ and $\tilde{e}_i$:

$$\bar{e}_u = \mathbf{1}_D - \tilde{e}_u, \qquad \bar{e}_i = \mathbf{1}_D - \tilde{e}_i$$

where $\bar{e}_u$ and $\bar{e}_i$ denote the complementary embeddings of user $u$ and item $i$, and $\mathbf{1}_D$ is the $D$-dimensional all-ones vector, $\bar{e}_u, \bar{e}_i \in \mathbb{R}^{D}$;
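The complementary (negative-interest) embeddings can be produced with a per-dimension min-max normalization followed by a complement, for example (a sketch; the small epsilon guarding division by zero is an implementation assumption):

```python
import torch

def complementary_embedding(init_emb, eps=1e-12):
    """Build negative-interest embeddings from initial embeddings (illustrative sketch).

    init_emb : (n_nodes, D) initial embeddings; each dimension is min-max normalized
               to [0, 1] and then subtracted from the all-ones vector.
    """
    d_min = init_emb.min(dim=0).values                 # per-dimension minimum
    d_max = init_emb.max(dim=0).values                 # per-dimension maximum
    normed = (init_emb - d_min) / (d_max - d_min + eps)
    return 1.0 - normed                                # complement = negative interest
```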
On the user side, the complementary embedding of the user is used as the negative-interest node embedding representation and the final representation of the user as the positive-interest node embedding representation; complementary-embedding graph contrast learning is performed on the two to form the user-side complementary-embedding contrast learning loss $\mathcal{L}_{CE}^{user}$, which decreases as the cosine similarity $s(e_u, \bar{e}_u)$ between the final embedding $e_u$ of user $u$ and its complementary embedding $\bar{e}_u$ decreases;
On the item side, the complementary embedding of the item is used as the negative-interest node embedding representation and the final representation of the item as the positive-interest node embedding representation; complementary-embedding graph contrast learning is performed on the two to form the item-side complementary-embedding contrast learning loss $\mathcal{L}_{CE}^{item}$, defined analogously on the final embedding $e_i$ of item $i$ and its complementary embedding $\bar{e}_i$;
The final complementary-embedding graph contrast learning loss function is obtained by combining the user-side and item-side losses:

$$\mathcal{L}_{CE} = \beta'\, \mathcal{L}_{CE}^{user} + (1-\beta')\, \mathcal{L}_{CE}^{item}$$

where $\beta'$ is a hyper-parameter controlling the relative strength of the two loss functions and $\mathcal{L}_{CE}$ is the final complementary-embedding graph contrast learning loss.
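A minimal sketch of such a complementary-embedding contrast term is given below; it only assumes that the objective penalizes a high similarity between the positive-interest and negative-interest embedding of the same node, and is not the patent's exact closed form:

```python
import torch
import torch.nn.functional as F

def complementary_cl_loss(pos_emb, neg_emb):
    """Complementary-embedding contrast loss (illustrative sketch, assumed form).

    pos_emb : (n_nodes, d) final (positive-interest) embeddings
    neg_emb : (n_nodes, d) complementary (negative-interest) embeddings
    """
    sim = F.cosine_similarity(pos_emb, neg_emb, dim=-1)  # (n_nodes,)
    return F.softplus(sim).mean()                        # smaller when the two are pushed apart
```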
Step 4, perform recommendation prediction on the final representations and train with a multi-task learning method: take the recommendation task as the main task, compute its loss with a non-sampling strategy, take multi-behavior graph contrast learning and complementary-embedding graph contrast learning as auxiliary tasks, jointly optimize the total loss function, and output the final recommendation result after training is completed.
The specific scheme of the step is as follows:
The final representations of a user and an item under the same behavior are fed into a single prediction layer, yielding the interaction probability of the user and the item under each behavior:

$$\hat{y}_{u,i}^{k} = e_u^{\top}\,\mathrm{diag}(e_k)\,e_i$$

where $\hat{y}_{u,i}^{k}$ denotes the probability that user $u$ interacts with item $i$ under behavior $k$, and $\mathrm{diag}(e_k)$ denotes the diagonal matrix with the final embedded representation $e_k$ of behavior $k$ as its diagonal elements;
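Because $\mathrm{diag}(e_k)$ is diagonal, the prediction reduces to a three-way element-wise product summed over dimensions; a sketch for scoring all user-item pairs at once:

```python
import torch

def predict(user_emb, item_emb, behavior_emb):
    """Prediction layer (illustrative sketch): y[u, i] = e_u^T diag(e_k) e_i.

    user_emb : (n_users, d), item_emb : (n_items, d), behavior_emb : (d,)
    """
    return (user_emb * behavior_emb) @ item_emb.t()   # (n_users, n_items) scores under behavior k
```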
Using the interaction probabilities of users and items under the different behaviors, the loss function is constructed and optimized with a non-sampling strategy:

$$\mathcal{L}_k = \sum_{u \in B}\sum_{i \in N_u^k}\Big[(c_i^{+}-c_i^{-})\,(\hat{y}_{u,i}^{k})^2 - 2\,c_i^{+}\,\hat{y}_{u,i}^{k}\Big] + \sum_{d=1}^{D}\sum_{d'=1}^{D}\Big[\big(e_{k,d}\,e_{k,d'}\big)\Big(\sum_{u \in B} e_{u,d}\,e_{u,d'}\Big)\Big(\sum_{i \in V} c_i^{-}\,e_{i,d}\,e_{i,d'}\Big)\Big]$$

where $\mathcal{L}_k$ denotes the non-sampling loss of behavior $k$, $B$ a batch of users, $V$ the entire item set, $N_u^k$ the set of items user $u$ has interacted with under behavior $k$, $c_i^{+}$ the weight of positive examples and $c_i^{-}$ the weight of negative examples, $e_{k,d}$ and $e_{k,d'}$ the $d$-th and $d'$-th elements of the embedding vector of behavior $k$, $e_{u,d}$ and $e_{u,d'}$ the $d$-th and $d'$-th elements of the embedding vector of user $u$, and $e_{i,d}$ and $e_{i,d'}$ the $d$-th and $d'$-th elements of the embedding vector of item $i$;
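The second term of the loss is what makes the whole-data (non-sampling) strategy tractable: it is computed in O(D^2) over embedding dimensions instead of enumerating every unobserved item. A sketch under the assumption of uniform positive and negative weights:

```python
import torch

def non_sampling_loss(user_emb, item_emb, behavior_emb, pos_items, c_pos=1.0, c_neg=0.1):
    """Non-sampling (whole-data) loss for one behavior (illustrative sketch).

    user_emb     : (B, d) embeddings of a batch of users
    item_emb     : (n_items, d) embeddings of all items
    behavior_emb : (d,) final embedding of the behavior
    pos_items    : list of LongTensors, observed item indices of each user in the batch
    c_pos, c_neg : uniform weights of positive and negative instances (assumption)
    """
    # all-pair term, decomposed over embedding dimensions (O(d^2))
    u_cov = user_emb.t() @ user_emb                   # (d, d)
    i_cov = item_emb.t() @ item_emb                   # (d, d)
    k_cov = torch.outer(behavior_emb, behavior_emb)   # (d, d)
    loss = c_neg * (u_cov * i_cov * k_cov).sum()
    # correction on the observed (positive) interactions only
    for u, items in enumerate(pos_items):
        y = (user_emb[u] * behavior_emb) @ item_emb[items].t()        # scores of observed items
        loss = loss + ((c_pos - c_neg) * y.pow(2) - 2.0 * c_pos * y).sum()
    return loss
```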
The non-sampling loss is combined with the final multi-behavior graph contrast learning loss and the final complementary-embedding graph contrast learning loss to construct the total loss for multi-task joint optimization, and model performance is optimized by minimizing the total loss:

$$\mathcal{L} = \sum_{k \in R} \lambda_k\, \mathcal{L}_k + \mu_1\, \mathcal{L}_{MB} + \mu_2\, \mathcal{L}_{CE} + \gamma\, \|\Theta\|_2^2$$

where $\mathcal{L}$ denotes the total loss function, $\lambda_k$ a hyper-parameter controlling the influence of behavior $k$ on the joint training, $\mu_1$ the weight of the multi-behavior graph contrast learning loss, $\mu_2$ the weight of the complementary-embedding graph contrast learning loss, $\gamma$ the regularization coefficient, $\Theta$ the trainable parameters of the model, and $\|\cdot\|_2^2$ the $L_2$ regularization.
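The joint objective is then a weighted sum of the per-behavior non-sampling losses, the two contrastive losses, and an L2 penalty, e.g. (sketch; all weights are hyper-parameters):

```python
def total_loss(rec_losses, lambdas, l_mb, l_ce, mu1, mu2, gamma, params):
    """Multi-task joint objective (illustrative sketch).

    rec_losses : list of per-behavior non-sampling losses
    lambdas    : list of per-behavior weights
    l_mb, l_ce : multi-behavior and complementary-embedding contrast losses
    mu1, mu2   : their weights; gamma : L2 regularization coefficient
    params     : iterable of trainable tensors
    """
    rec = sum(lam * l for lam, l in zip(lambdas, rec_losses))
    l2 = sum(p.pow(2).sum() for p in params)
    return rec + mu1 * l_mb + mu2 * l_ce + gamma * l2
```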
Referring to fig. 3, the present embodiment further provides a multi-behavior graph contrast learning recommendation system integrating a self-attention mechanism, wherein the system applies the multi-behavior graph contrast learning recommendation method integrating a self-attention mechanism described above, and the system comprises:
The multi-behavior-aware graph convolution network module, used for:
based on the multi-behavior interaction graph, performing graph convolution by combining node embeddings with behavior embeddings, and introducing a self-attention mechanism to model the connection between different nodes and each behavior so as to enhance the node embedding representations, obtaining the final representations of each behavior, each user and each item after embedding propagation;
The multi-behavior graph contrast learning module, used for:
according to the final representations, constructing positive example pairs from the target behavior and the auxiliary behaviors of the same user, constructing negative example pairs from the target behavior and the auxiliary behaviors of different users, and establishing multi-behavior graph contrast learning with the positive and negative example pairs so as to model the relation between the target behavior and the auxiliary behaviors;
The complementary-embedding graph contrast learning module, used for:
based on the multi-behavior interaction graph, obtaining negative-interest node embedding representations by complementary embedding, taking the final representations as positive-interest node embedding representations, and performing complementary-embedding graph contrast learning on the positive-interest and negative-interest node embedding representations so as to alleviate the implicit bias in interactions and optimize the node embedding representations;
The model prediction and recommendation module, used for:
performing recommendation prediction on the final representations and training with a multi-task learning method: taking the recommendation task as the main task, computing its loss with a non-sampling strategy, taking multi-behavior graph contrast learning and complementary-embedding graph contrast learning as auxiliary tasks, jointly optimizing the total loss function, and outputting the final recommendation result after training is completed.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing examples illustrate only a few embodiments of the invention and are described in detail herein without thereby limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (2)

1. A multi-behavior graph contrast learning recommendation method integrating a self-attention mechanism, characterized by comprising the following steps:
Step 1, based on a multi-behavior interaction graph, perform graph convolution by combining node embeddings with behavior embeddings, and introduce a self-attention mechanism to model the connection between different nodes and each behavior so as to enhance the node embedding representations; after embedding propagation, obtain the final representations of each behavior, each user and each item;
Step 2, according to the final representations, construct positive example pairs from the target behavior and the auxiliary behaviors of the same user, construct negative example pairs from the target behavior and the auxiliary behaviors of different users, and establish multi-behavior graph contrast learning with the positive and negative example pairs so as to model the relation between the target behavior and the auxiliary behaviors;
Step 3, based on the multi-behavior interaction graph, obtain negative-interest node embedding representations by complementary embedding, take the final representations as positive-interest node embedding representations, and perform complementary-embedding graph contrast learning on the positive-interest and negative-interest node embedding representations, so as to alleviate the implicit bias in interactions and optimize the node embedding representations;
Step 4, perform recommendation prediction on the final representations and train with a multi-task learning method: take the recommendation task as the main task, compute its loss with a non-sampling strategy, take multi-behavior graph contrast learning and complementary-embedding graph contrast learning as auxiliary tasks, jointly optimize the total loss function, and output the final recommendation result after training is completed;
In step 1, based on the multi-behavior interaction graph, performing graph convolution by combining node embeddings with behavior embeddings, introducing a self-attention mechanism to model the connection between different nodes and each behavior so as to enhance the node embedding representations, and obtaining the final representations of each behavior, each user and each item after embedding propagation specifically comprises:
in the user-item bipartite graph, nodes represent users or items and edges represent the different types of user-item interaction behavior; the nodes and edges are projected into an embedding space and initialized;
embedding propagation is performed on the node embeddings of items and users under the same behavior; while the node embeddings are propagated, the behavior embedding is projected into the same embedding space as the nodes and is propagated along with them, so as to obtain the node embeddings of items and users under different behaviors and the embedded representations of the different behaviors at different layers; the embedding propagation of user and item embeddings follows the relations:

$$e_{u,k}^{(l+1)} = \mathrm{LeakyReLU}\Big(\sum_{i \in N_u^k} \frac{1}{\sqrt{|N_u^k|\,|N_i^k|}}\, W^{(l)}\, \phi\big(e_{i,k}^{(l)},\, e_k^{(l)}\big)\Big), \qquad e_{i,k}^{(l+1)} = \mathrm{LeakyReLU}\Big(\sum_{u \in N_i^k} \frac{1}{\sqrt{|N_u^k|\,|N_i^k|}}\, W^{(l)}\, \phi\big(e_{u,k}^{(l)},\, e_k^{(l)}\big)\Big)$$

where $e_{u,k}^{(l+1)}$ and $e_{i,k}^{(l+1)}$ denote the $(l+1)$-th layer embeddings of user $u$ and item $i$ under the specific behavior $k$; $e_{u,k}^{(l)}$, $e_{i,k}^{(l)}$ and $e_k^{(l)}$ denote the $l$-th layer embeddings of user $u$, item $i$ and behavior $k$; $W^{(l)}$ is the layer-specific parameter matrix; $\mathrm{LeakyReLU}$ is the activation function; $1/\sqrt{|N_u^k||N_i^k|}$ is the symmetric normalization term; $N_u^k$ is the set of items user $u$ has interacted with and $N_i^k$ the set of users who have interacted with item $i$; and $\phi(\cdot,\cdot)$ is the combination function that integrates the behavior embedding into message passing;
the embedding propagation of the behavior embedding follows the relation:

$$e_k^{(l+1)} = W_k^{(l)}\, e_k^{(l)}$$

where $W_k^{(l)}$ denotes the $l$-th layer parameter matrix corresponding to behavior $k$;
according to the node embeddings of items and users under different behaviors, a self-attention mechanism is used to model the dependency between each node and each behavior at different layers, yielding the corresponding weight coefficients:

$$\alpha_{u,k}^{(l)} = h_k^{\top}\,\tanh\!\big(W_k\, \tilde{e}_u^{(l)}\big), \qquad \alpha_{i,k}^{(l)} = h_k^{\top}\,\tanh\!\big(W_k\, \tilde{e}_i^{(l)}\big)$$

where $W_k$ and $h_k$ are behavior-specific parameters, $W_k \in \mathbb{R}^{d' \times |R|d}$ and $h_k \in \mathbb{R}^{d'}$, with $d$ the embedding dimension and $d'$ the attention output dimension; $\alpha_{u,k}^{(l)}$ and $\alpha_{i,k}^{(l)}$ are the weight coefficients of user $u$ and item $i$ under behavior $k$; $\tanh$ is the hyperbolic tangent function; and $\tilde{e}_u^{(l)}$ and $\tilde{e}_i^{(l)}$ are the $l$-th layer embeddings of user $u$ and item $i$ under all behaviors, concatenated before enhancement;
the weight coefficients are then used to enhance the node embeddings, yielding the enhanced node embeddings:

$$\hat{e}_{u,k}^{(l)} = \alpha_{u,k}^{(l)}\, e_{u,k}^{(l)}, \qquad \hat{e}_{i,k}^{(l)} = \alpha_{i,k}^{(l)}\, e_{i,k}^{(l)}$$

where $\hat{e}_{u,k}^{(l)}$ and $\hat{e}_{i,k}^{(l)}$ denote the $l$-th layer embeddings of user $u$ and item $i$ under behavior $k$ after enhancement by the self-attention mechanism;
the enhanced node embeddings of items and users are combined over the different behaviors to obtain the item and user embedded representations at each layer:

$$e_u^{(l)} = \sum_{k \in R} \hat{e}_{u,k}^{(l)}, \qquad e_i^{(l)} = \sum_{k \in R} \hat{e}_{i,k}^{(l)}$$

where $e_u^{(l)}$ and $e_i^{(l)}$ denote the $l$-th layer embedded representations of user $u$ and item $i$, and $R$ denotes the set of behaviors;
the behavior, item and user embedded representations of the different layers are then combined to obtain the final representations of each behavior, user and item:

$$e_u = \sum_{l=0}^{L} e_u^{(l)}, \qquad e_i = \sum_{l=0}^{L} e_i^{(l)}, \qquad e_k = \sum_{l=0}^{L} e_k^{(l)}$$

where $e_u$, $e_i$ and $e_k$ denote the final representations of user $u$, item $i$ and behavior $k$, and $L$ denotes the number of propagation layers;
the combination function that integrates the behavior embedding into message passing is defined as:

$$\phi\big(e_{i,k}^{(l)},\, e_k^{(l)}\big) = e_{i,k}^{(l)} \odot e_k^{(l)}$$

where $\odot$ denotes element-wise multiplication;
In step 2, according to the final representations, constructing positive example pairs from the target behavior and the auxiliary behaviors of the same user, constructing negative example pairs from the target behavior and the auxiliary behaviors of different users, and establishing multi-behavior graph contrast learning with the positive and negative example pairs so as to model the relation between the target behavior and the auxiliary behaviors specifically comprises:
on the user side, the target behavior and each type of auxiliary behavior of the same user are regarded as a positive example pair, and the target behavior and the auxiliary behaviors of different users are regarded as negative example pairs, forming the user-side multi-behavior graph contrast learning loss function:

$$\mathcal{L}_{MB}^{user} = \sum_{u \in U} -\log \frac{\exp\!\big(s(e_u^{t},\, e_u^{k})/\tau\big)}{\sum_{u' \in U} \exp\!\big(s(e_u^{t},\, e_{u'}^{k})/\tau\big)}$$

where $t$ denotes the target behavior, $k$ an auxiliary behavior, $s(\cdot,\cdot)$ the cosine similarity, $\tau$ the temperature hyper-parameter, $e_u^{t}$ and $e_u^{k}$ the embedded representations of user $u$ under the target behavior $t$ and the auxiliary behavior $k$, $e_{u'}^{k}$ the embedded representation of a different user $u'$ under the auxiliary behavior $k$, and $U$ the set of users;
on the item side, the target behavior and each type of auxiliary behavior of the same item are regarded as a positive example pair, and the target behavior and the auxiliary behaviors of different items are regarded as negative example pairs, forming the item-side multi-behavior graph contrast learning loss function:

$$\mathcal{L}_{MB}^{item} = \sum_{i \in I} -\log \frac{\exp\!\big(s(e_i^{t},\, e_i^{k})/\tau\big)}{\sum_{i' \in I} \exp\!\big(s(e_i^{t},\, e_{i'}^{k})/\tau\big)}$$

where $e_i^{t}$ and $e_i^{k}$ denote the embedded representations of the target item $i$ under the target behavior $t$ and the auxiliary behavior $k$, $e_{i'}^{k}$ the embedded representation of a different item $i'$ under the auxiliary behavior $k$, and $I$ the set of items;
the final multi-behavior graph contrast learning loss function is obtained by combining the user-side and item-side losses:

$$\mathcal{L}_{MB} = \beta\, \mathcal{L}_{MB}^{user} + (1-\beta)\, \mathcal{L}_{MB}^{item}$$

where $\beta$ is a hyper-parameter controlling the relative strength of the two loss functions and $\mathcal{L}_{MB}$ is the final multi-behavior graph contrast learning loss;
In step 3, based on the multi-behavior interaction graph, obtaining negative-interest node embedding representations by complementary embedding, taking the final representations as positive-interest node embedding representations, and performing complementary-embedding graph contrast learning on the positive-interest and negative-interest node embedding representations so as to alleviate the implicit bias in interactions and optimize the node embedding representations specifically comprises:
the per-dimension maximum and minimum values of the item and user embeddings before normalization are obtained:

$$m_{d}^{\max,U} = \max_{u \in U} e_{u,d}^{(0)}, \quad m_{d}^{\min,U} = \min_{u \in U} e_{u,d}^{(0)}, \qquad m_{d}^{\max,I} = \max_{i \in I} e_{i,d}^{(0)}, \quad m_{d}^{\min,I} = \min_{i \in I} e_{i,d}^{(0)}$$

where $m_{d}^{\max,U}$ and $m_{d}^{\min,U}$ denote the maximum and minimum of the $d$-th dimension of the initial user embeddings $e_u^{(0)}$, $m_{d}^{\max,I}$ and $m_{d}^{\min,I}$ the maximum and minimum of the $d$-th dimension of the initial item embeddings $e_i^{(0)}$, $d \in \{1,\dots,D\}$, and $\max(\cdot)$ and $\min(\cdot)$ the embedding maximum and minimum functions;
the values $m^{\max,U}$, $m^{\min,U}$, $m^{\max,I}$ and $m^{\min,I}$ are extended to the $D$-dimensional space, and the normalized embeddings of user $u$ and item $i$ are obtained:

$$\tilde{e}_u = \frac{e_u^{(0)} - m^{\min,U}}{m^{\max,U} - m^{\min,U}}, \qquad \tilde{e}_i = \frac{e_i^{(0)} - m^{\min,I}}{m^{\max,I} - m^{\min,I}}$$

where $\tilde{e}_u$ and $\tilde{e}_i$ denote the normalized embeddings of user $u$ and item $i$, the division is element-wise, and every dimension of the normalized embeddings lies in the interval [0, 1];
complementary embeddings representing the negative interests of user $u$ and item $i$ are then obtained from $\tilde{e}_u$ and $\tilde{e}_i$:

$$\bar{e}_u = \mathbf{1}_D - \tilde{e}_u, \qquad \bar{e}_i = \mathbf{1}_D - \tilde{e}_i$$

where $\bar{e}_u$ and $\bar{e}_i$ denote the complementary embeddings of user $u$ and item $i$, and $\mathbf{1}_D$ is the $D$-dimensional all-ones vector, $\bar{e}_u, \bar{e}_i \in \mathbb{R}^{D}$;
on the user side, the complementary embedding of the user is used as the negative-interest node embedding representation and the final representation of the user as the positive-interest node embedding representation; complementary-embedding graph contrast learning is performed on the two to form the user-side complementary-embedding contrast learning loss $\mathcal{L}_{CE}^{user}$, which decreases as the cosine similarity $s(e_u, \bar{e}_u)$ between the final embedding $e_u$ of user $u$ and its complementary embedding $\bar{e}_u$ decreases;
on the item side, the complementary embedding of the item is used as the negative-interest node embedding representation and the final representation of the item as the positive-interest node embedding representation; complementary-embedding graph contrast learning is performed on the two to form the item-side complementary-embedding contrast learning loss $\mathcal{L}_{CE}^{item}$, defined analogously on the final embedding $e_i$ of item $i$ and its complementary embedding $\bar{e}_i$;
the final complementary-embedding graph contrast learning loss function is obtained by combining the user-side and item-side losses:

$$\mathcal{L}_{CE} = \beta'\, \mathcal{L}_{CE}^{user} + (1-\beta')\, \mathcal{L}_{CE}^{item}$$

where $\beta'$ is a hyper-parameter controlling the relative strength of the two loss functions and $\mathcal{L}_{CE}$ is the final complementary-embedding graph contrast learning loss;
In step 4, performing recommendation prediction on the final representations, training with a multi-task learning method, taking the recommendation task as the main task, computing its loss with a non-sampling strategy, taking multi-behavior graph contrast learning and complementary-embedding graph contrast learning as auxiliary tasks, and jointly optimizing the total loss function specifically comprises:
the final representations of a user and an item under the same behavior are fed into a single prediction layer, yielding the interaction probability of the user and the item under each behavior:

$$\hat{y}_{u,i}^{k} = e_u^{\top}\,\mathrm{diag}(e_k)\,e_i$$

where $\hat{y}_{u,i}^{k}$ denotes the probability that user $u$ interacts with item $i$ under behavior $k$, and $\mathrm{diag}(e_k)$ denotes the diagonal matrix with the final embedded representation $e_k$ of behavior $k$ as its diagonal elements;
using the interaction probabilities of users and items under the different behaviors, the loss function is constructed and optimized with a non-sampling strategy:

$$\mathcal{L}_k = \sum_{u \in B}\sum_{i \in N_u^k}\Big[(c_i^{+}-c_i^{-})\,(\hat{y}_{u,i}^{k})^2 - 2\,c_i^{+}\,\hat{y}_{u,i}^{k}\Big] + \sum_{d=1}^{D}\sum_{d'=1}^{D}\Big[\big(e_{k,d}\,e_{k,d'}\big)\Big(\sum_{u \in B} e_{u,d}\,e_{u,d'}\Big)\Big(\sum_{i \in V} c_i^{-}\,e_{i,d}\,e_{i,d'}\Big)\Big]$$

where $\mathcal{L}_k$ denotes the non-sampling loss of behavior $k$, $B$ a batch of users, $V$ the entire item set, $N_u^k$ the set of items user $u$ has interacted with under behavior $k$, $c_i^{+}$ the weight of positive examples and $c_i^{-}$ the weight of negative examples, $e_{k,d}$ and $e_{k,d'}$ the $d$-th and $d'$-th elements of the embedding vector of behavior $k$, $e_{u,d}$ and $e_{u,d'}$ the $d$-th and $d'$-th elements of the embedding vector of user $u$, and $e_{i,d}$ and $e_{i,d'}$ the $d$-th and $d'$-th elements of the embedding vector of item $i$;
the non-sampling loss is combined with the final multi-behavior graph contrast learning loss and the final complementary-embedding graph contrast learning loss to construct the total loss for multi-task joint optimization, and model performance is optimized by minimizing the total loss:

$$\mathcal{L} = \sum_{k \in R} \lambda_k\, \mathcal{L}_k + \mu_1\, \mathcal{L}_{MB} + \mu_2\, \mathcal{L}_{CE} + \gamma\, \|\Theta\|_2^2$$

where $\mathcal{L}$ denotes the total loss function, $\lambda_k$ a hyper-parameter controlling the influence of behavior $k$ on the joint training, $\mu_1$ the weight of the multi-behavior graph contrast learning loss, $\mu_2$ the weight of the complementary-embedding graph contrast learning loss, $\gamma$ the regularization coefficient, $\Theta$ the trainable parameters of the model, and $\|\cdot\|_2^2$ the $L_2$ regularization.
2. A multi-behavior graph contrast learning recommendation system integrating a self-attention mechanism, characterized in that the system applies the multi-behavior graph contrast learning recommendation method integrating a self-attention mechanism as claimed in claim 1, the system comprising:
The multi-behavior-aware graph convolution network module, used for:
based on the multi-behavior interaction graph, performing graph convolution by combining node embeddings with behavior embeddings, and introducing a self-attention mechanism to model the connection between different nodes and each behavior so as to enhance the node embedding representations, obtaining the final representations of each behavior, each user and each item after embedding propagation;
The multi-behavior graph contrast learning module, used for:
according to the final representations, constructing positive example pairs from the target behavior and the auxiliary behaviors of the same user, constructing negative example pairs from the target behavior and the auxiliary behaviors of different users, and establishing multi-behavior graph contrast learning with the positive and negative example pairs so as to model the relation between the target behavior and the auxiliary behaviors;
The complementary-embedding graph contrast learning module, used for:
based on the multi-behavior interaction graph, obtaining negative-interest node embedding representations by complementary embedding, taking the final representations as positive-interest node embedding representations, and performing complementary-embedding graph contrast learning on the positive-interest and negative-interest node embedding representations so as to alleviate the implicit bias in interactions and optimize the node embedding representations;
The model prediction and recommendation module, used for:
performing recommendation prediction on the final representations and training with a multi-task learning method: taking the recommendation task as the main task, computing its loss with a non-sampling strategy, taking multi-behavior graph contrast learning and complementary-embedding graph contrast learning as auxiliary tasks, jointly optimizing the total loss function, and outputting the final recommendation result after training is completed.
CN202410242618.2A 2024-03-04 2024-03-04 Multi-behavior graph contrast learning recommendation method and system integrating self-attention mechanism Active CN117851687B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410242618.2A CN117851687B (en) 2024-03-04 2024-03-04 Multi-behavior graph contrast learning recommendation method and system integrating self-attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410242618.2A CN117851687B (en) 2024-03-04 2024-03-04 Multi-behavior graph contrast learning recommendation method and system integrating self-attention mechanism

Publications (2)

Publication Number Publication Date
CN117851687A CN117851687A (en) 2024-04-09
CN117851687B true CN117851687B (en) 2024-05-14

Family

ID=90529467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410242618.2A Active CN117851687B (en) 2024-03-04 2024-03-04 Multi-behavior graph contrast learning recommendation method and system integrating self-attention mechanism

Country Status (1)

Country Link
CN (1) CN117851687B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230034559A1 (en) * 2021-07-18 2023-02-02 Sunstella Technology Corporation Automated prediction of clinical trial outcome
US20230401390A1 (en) * 2022-06-13 2023-12-14 Huaneng Lancang River Hydropower Inc. Automatic concrete dam defect image description generation method based on graph attention network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115357804A (en) * 2022-07-19 2022-11-18 大连民族大学 Two-stage collaborative filtering multi-behavior recommendation method based on graph convolution network
CN116071128A (en) * 2023-01-19 2023-05-05 辽宁工程技术大学 Multitask recommendation method based on multi-behavioral feature extraction and self-supervision learning
CN115982480A (en) * 2023-02-13 2023-04-18 山东师范大学 Sequence recommendation method and system based on cooperative attention network and comparative learning
CN116108687A (en) * 2023-03-03 2023-05-12 桂林电子科技大学 Sequence recommendation method utilizing multi-attribute multi-behavior information
CN117171448A (en) * 2023-08-11 2023-12-05 哈尔滨工业大学 Multi-behavior socialization recommendation method and system based on graph neural network
CN116932923A (en) * 2023-09-19 2023-10-24 江西财经大学 Project recommendation method combining behavior characteristics and triangular collaboration metrics
CN117131282A (en) * 2023-10-26 2023-11-28 江西财经大学 Multi-view graph contrast learning recommendation method and system integrating layer attention mechanism

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
嵌入式数控系统构件研究 (Research on components of embedded numerical control systems); 索林生; ; 煤炭技术 (Coal Technology); 2012-08-10 (No. 08); full text *
索林生; 吴娟. 嵌入式数控系统构件研究 (Research on components of embedded numerical control systems). 煤炭技术 (Coal Technology). 2012, (No. 08), full text. *

Also Published As

Publication number Publication date
CN117851687A (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN112905900B (en) Collaborative filtering recommendation method based on graph convolution attention mechanism
Zou et al. Pseudo Dyna-Q: A reinforcement learning framework for interactive recommendation
CN111127142B (en) Article recommendation method based on generalized nerve attention
Darban et al. GHRS: Graph-based hybrid recommendation system with application to movie recommendation
Hu et al. Movie collaborative filtering with multiplex implicit feedbacks
CN112256980A (en) Dynamic graph attention network-based multi-relation collaborative filtering recommendation
CN113240086B (en) Complex network link prediction method and system
CN117131282B (en) Multi-view graph contrast learning recommendation method and system integrating layer attention mechanism
CN111831895A (en) Network public opinion early warning method based on LSTM model
CN112364242A (en) Graph convolution recommendation system for context-aware type
CN116071128A (en) Multitask recommendation method based on multi-behavioral feature extraction and self-supervision learning
CN113590976A (en) Recommendation method of space self-adaptive graph convolution network
CN113761359A (en) Data packet recommendation method and device, electronic equipment and storage medium
CN115795022A (en) Recommendation method, system, equipment and storage medium based on knowledge graph
CN116738047A (en) Session recommendation method based on multi-layer aggregation enhanced contrast learning
CN114741597A (en) Knowledge-enhanced attention-force-diagram-based neural network next item recommendation method
CN117171448B (en) Multi-behavior socialization recommendation method and system based on graph neural network
CN117851687B (en) Multi-behavior graph contrast learning recommendation method and system integrating self-attention mechanism
Li et al. CDRNP: Cross-Domain Recommendation to Cold-Start Users via Neural Process
CN116452293A (en) Deep learning recommendation method and system integrating audience characteristics of articles
CN114756768B (en) Data processing method, device, equipment, readable storage medium and program product
CN116383515A (en) Social recommendation method and system based on lightweight graph convolution network
CN115221410A (en) Recommendation method and system based on desmooth graph convolution neural network
CN114817758A (en) Recommendation system method based on NSGC-GRU integrated model
Wilson et al. A recommendation model based on deep feature representation and multi-head self-attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant