CN115358809A - Multi-intention recommendation method and device based on graph contrastive learning - Google Patents

Multi-intention recommendation method and device based on graph contrastive learning

Info

Publication number
CN115358809A
CN115358809A
Authority
CN
China
Prior art keywords
graph
intention
learning
commodity
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210847446.2A
Other languages
Chinese (zh)
Inventor
罗荣华
陈梦如
许勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202210847446.2A priority Critical patent/CN115358809A/en
Publication of CN115358809A publication Critical patent/CN115358809A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0631 Item recommendations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9536 Search customisation based on social or collaborative filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Data Mining & Analysis (AREA)
  • Accounting & Taxation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Biophysics (AREA)
  • Development Economics (AREA)
  • Tourism & Hospitality (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Human Resources & Organizations (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a multi-intention recommendation method and device based on graph contrastive learning. The method comprises the following steps: acquiring a data set; obtaining graph structure data from the data set; constructing an augmented contrastive view of the original bipartite graph in a parameterized manner; dividing the initialized user and commodity features into K feature blocks; decoupling each implicit factor into its own representation for independent learning, and outputting K intention features under the two views, one set from the original graph and one from the augmented graph; carrying out adaptive contrastive learning on the features under each latent intention; concatenating the K feature blocks to form the user and commodity feature vectors and performing a mutual-information-maximization task; and finally applying the supervised recommendation prediction task to the concatenated feature vectors and training it jointly with the two unsupervised tasks. The invention replaces ordinary contrastive learning over the contrastive views with adaptive contrastive learning, learns more diverse semantic feature information, enhances the interpretability and robustness of the model, and can be widely applied to the technical field of machine learning.

Description

Multi-intention recommendation method and device based on graph contrastive learning
Technical Field
The invention relates to the technical field of machine learning, in particular to a multi-intention recommendation method and device based on graph contrastive learning.
Background
In recent years, with the rapid development of Internet technology, massive amounts of data have been generated on service platforms such as e-commerce, social networking and video software. Faced with so much data, people may be overwhelmed, but for machine learning models, which are data-driven, massive data is the core "fuel" of learning. Recommendation systems, which have benefited greatly from this development, analyze the attributes and latent implicit relations in a user's historical purchase interactions, extract features, mine the patterns hidden behind the data and the user's interests, and recommend commodities the user may be interested in.
Scholars have proposed various recommendation algorithms based on GNNs (graph neural networks), and these can achieve high recommendation performance. However, they may lack interpretability and robustness against noise, which makes it difficult for users to trust them. Most GNN models learn node features by aggregating neighbor information: the information of neighbor nodes is treated as a perceptual whole, the hidden factors that determine whether each edge's message is transmitted are ignored, and the user's real intention in purchasing a commodity is not considered. On the other hand, available labels in massive real-world data are scarce, and various noise factors also exist.
Disclosure of Invention
In order to solve, at least to some extent, at least one of the technical problems in the prior art, the present invention provides a multi-intention recommendation method and device based on graph contrastive learning.
The technical scheme adopted by the invention is as follows:
a multi-intention recommendation method based on graph contrastive learning comprises the following steps:
collecting a data set with user social relations, commodity attribute relations and user-commodity interaction relations;
storing the user social relations, commodity attribute relations and user-commodity interaction relations in a sparse graph structure, to obtain graph structure data usable by a graph convolutional neural network model;
based on the original social relations and commodity attribute relations, introducing the user's commodity-purchasing behavior to form a new social relation graph and a new commodity attribute graph;
constructing a corresponding contrastive view based on the K decoupled latent-factor intention representations (K is the number of hidden factors to be decoupled), and generating a parameterized augmented UI (user-item) graph through a learnable drop operation (a data augmentation that deletes edges);
learning the decoupled features of the recommendation model: establishing K GCN (graph convolutional network) message-passing channels that encode features separately, each GCN channel simultaneously learning two groups of features on the original user-commodity interaction bipartite graph and on the augmented user-commodity bipartite graph;
introducing K intention prototype vectors, and learning the distribution of each node's multiple intention features on the UI graph;
independently performing, for the two groups of intention features decoupled from the two contrastive views according to the K latent hidden factors, K rounds of personalized contrastive learning under the different hidden factors;
concatenating the K intention features to serve as the predicted user and commodity features, and introducing an unsupervised learning task based on mutual-information maximization that uses the graph structure information of the social relation graph and the commodity attribute graph;
jointly learning the recommendation task, the multi-intention personalized contrastive learning task and the graph-structure mutual-information maximization task;
and performing score prediction on the finally learned user and commodity embedding vectors to obtain a recommended commodity sequence.
Further, the multi-intention recommendation method further comprises a step of preprocessing the acquired data set:
filtering out invalid users according to the model's preset conditions, and keeping the valid users and the corresponding commodity nodes;
and dividing the data set: for each user, randomly selecting one interaction for the validation set and one for the test set, and taking the remaining interactions as the training set.
Further, the step of introducing the user's commodity-purchasing behavior based on the original social relations and commodity attribute relations to form a new social relation graph and a new commodity attribute graph comprises:
in order to add auxiliary information to the interactive-behavior recommendation prediction, the social relations and attribute relations are further processed according to the settings required by the model, specifically as follows: the social relations and commodity attribute relations in the data set are combined with the user's commodity-purchasing behavior information. If the number of identical commodities purchased by two friends is larger than a preset threshold, it is judged that the connection may arise from a similar purchasing intention, and the social relation graph is rebuilt accordingly; if two commodities of the same category are purchased by several of the same users, it is judged that the two commodities may be linked because they reflect a similar purchasing intention, and the commodity attribute relation graph is rebuilt accordingly; the supervision signal generated by this fine-grained graph structure information is then used to further refine the multi-factor feature vectors learned by the model.
Further, constructing a corresponding contrastive view based on the K decoupled latent-factor intention representations, and generating a parameterized augmented UI graph in a learnable adaptive drop manner, comprises:
under each intention factor, computing in a parameterized manner the probability ω_{k,ui} that each edge on the interaction graph corresponding to the k-th latent factor is deleted:

ω_{k,ui} = MLP(Concat[u_k, v_k])

wherein MLP is a multi-layer perceptron and Concat denotes concatenating two feature vectors; u_k and v_k are, respectively, the user node feature and the commodity node feature of a UI interaction edge on the graph corresponding to the k-th hidden factor;
to optimize the learning of the graph structure and learn in an end-to-end manner, a reparameterization trick is employed, expressed as:

p = σ((log ε - log(1 - ε) + ω)/τ)

wherein ε follows a uniform distribution on (0, 1); τ > 0 is a temperature coefficient that adjusts the concentration of the distribution; σ(·) is the activation function;
after the probability of each edge is computed, the edges whose probability is smaller than the preset threshold are deleted and the other edges are kept, yielding the augmented graph G′_{k,ui} after the drop;
the node representations (u_k, v_k) and the augmented graph G′_{k,ui} are input into a parameter-sharing GCN encoder, and after L layers of message-passing aggregation, the final multi-intention user and commodity features E′_{ku}, E′_{ki} are obtained.
Further, the learning of the decoupled features of the recommendation model comprises:
dividing the user and commodity features into K feature blocks, i.e. u = (u_1, u_2, ..., u_K), v = (v_1, v_2, ..., v_K), u_k, v_k ∈ R^{d/K}, with a one-to-one correspondence between feature blocks and intentions, and between each intention block of a user and each intention block of a commodity (u_k, v_k); wherein R^{d/K} is the real space of dimension d/K;
based on the graph convolutional message-passing model GCN_k for each intention factor, inputting the user and commodity node features (u_k, v_k) and the bipartite graph G_{ui}, and after L layers of message-passing aggregation, obtaining the final multi-intention user and commodity features E_{ku}, E_{ki}.
Further, the introducing of the K intention prototype vectors {c_1, c_2, ..., c_K} comprises:
after the K feature blocks of a node aggregate neighbor or higher-order information on their respective augmented graphs, the feature representation under each latent intention factor is obtained, and the degree of agreement between the K feature representations E′_{kj} learned by the node and the preset intention category prototype vectors is computed:

P_k = exp(cos(E′_{kj}, c_k)) / Σ_{k′=1..K} exp(cos(E′_{k′j}, c_{k′}))

wherein cos(·,·) is the cosine similarity, which evaluates the similarity of two vectors; E′_{kj} is the k-th decoupled feature representation of node j on the graph; c_k is the k-th intention prototype vector introduced by the model; exp(·) is the exponential function; and P_k denotes the normalized distribution over the K intention categories obtained after each node's features pass through multiple layers of GCN message-passing aggregation.
Further, the independently performing, for the two groups of intention features decoupled from the two contrastive views according to the K latent hidden factors, K rounds of personalized contrastive learning under different hidden factors, comprises:
independently computing the common contrastive loss function under each intention feature: taking (E_kj, E′_kj) as the positive pair, and the other sample points in the minibatch as negatives, where j denotes a user node or a commodity node;
the positive-pair score is

s⁺_kj = exp(cos(E_kj, E′_kj)/τ)

and the total negative score is

s⁻_kj = Σ_{j′≠j} exp(cos(E_kj, E′_kj′)/τ)

giving the contrastive loss function in the feature space of a specific intention:

L_kj = -log( s⁺_kj / (s⁺_kj + s⁻_kj) )

After the contrastive losses corresponding to the K hidden factors are computed, the weight coefficients used when fusing them into a unified loss are computed. That is, the feature vector information corresponding to the different hidden factors contained in each node contributes differently to the final contrastive learning loss; this is related to the normalized probability distribution of the node's K decoupled factor representations, i.e. the probability that the node carries a certain intention. Whether the representation vector of a hidden factor can be contrasted accurately is also related to the task: the node's augmented sub-graph structure may be inaccurate, and forcibly maximizing the agreement of the feature information of the two views may produce a suboptimal effect. Therefore a weight coefficient w_{kj}, computed from the node's intention probability distributions under the two views, is defined to measure the rationality of this contrastive learning from a probabilistic perspective, and serves as the weight before the accumulation of the independent contrastive losses under the K intention categories of each node. The final contrastive loss of each node is

L_j = Σ_k w_{kj} · L_kj

and the total loss function is

L_cl = (α/m) Σ_u L_u + (β/n) Σ_i L_i

i.e. the contrastive losses of the user side and the commodity side; wherein α and β are the balance coefficients of the user-side and commodity-side contrastive loss functions; m is the number of users and n is the number of commodities.
Further, the introducing of the unsupervised learning task based on mutual-information maximization using the graph structure information of the social relation graph and the commodity attribute graph comprises:
using the progressive graph structure information from the node, to the node-centered subgraph, to the global graph, generating a hierarchical mutual-information-maximization learning paradigm, mining the graph structure information at a finer granularity, and further optimizing the learning of node features.
Further, the joint learning of the recommendation task, the multi-intention personalized contrastive learning task and the graph-structure mutual-information maximization task comprises:
representing the final user and commodity features as E_u = {E_1u, E_2u, ..., E_Ku} and E_i = {E_1i, E_2i, ..., E_Ki}, obtaining the supervised loss L_bpr of the recommendation task with the BPR loss function, and adding the unsupervised losses to form the final model loss:

L = L_bpr + μ·L_cl + λ·L_MI

wherein μ and λ are hyperparameters serving as balance factors for the unsupervised task losses, and L_MI is the graph mutual-information maximization loss function;
and updating the parameters of the model by gradient descent until the loss function reaches a preset threshold.
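A minimal sketch of this joint objective as a weighted sum; the function and parameter names (μ as `mu`, λ as `lam`) are illustrative, not from the patent:

```python
def joint_loss(l_bpr, l_contrast, l_mi, mu=0.1, lam=0.1):
    """Combine the supervised BPR loss with the two unsupervised losses,
    scaled by hyperparameter balance factors (default values are assumptions)."""
    return l_bpr + mu * l_contrast + lam * l_mi
```

In practice the three terms would be tensors produced by the model and the sum would be backpropagated; here plain floats suffice to show the combination.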
The other technical scheme adopted by the invention is as follows:
a multi-intent recommendation device based on graph contrast learning, comprising:
at least one processor;
at least one memory for storing at least one program;
when executed by the at least one processor, cause the at least one processor to implement the method described above.
The invention has the beneficial effects that: according to the method, the contrast learning of the two contrast views is adjusted to be fine-grained self-adaptive contrast learning based on different decoupling factors of each vector, more diversified semantic feature information is learned, and the interpretability and the robustness of the model are enhanced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings of the embodiments of the present invention or of the related prior art are described below. It should be understood that the drawings in the following description are only intended to describe some embodiments of the technical solutions of the present invention conveniently and clearly, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart illustrating steps of a method for multi-intent recommendation based on graph-based contrast learning according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for multi-intent recommendation based on graph-based contrast learning according to an embodiment of the present invention;
FIG. 3 is a block diagram of a recommendation model in an embodiment of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention. For the step numbers in the following embodiments, they are set for convenience of illustration only, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, front, rear, left, right, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and does not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more, "a plurality of" means two or more, and terms such as greater than, less than and exceeding are understood as excluding the stated number, while terms such as above, below and within are understood as including the stated number. If "first" and "second" are used, they are only for distinguishing technical features, and are not to be understood as indicating or implying relative importance, implicitly indicating the number of technical features indicated, or implicitly indicating the precedence of the technical features indicated.
In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.
Contrastive learning is carried out as an auxiliary task: positive and negative samples are constructed, and feature representations are learned by pulling similar positive samples together and pushing dissimilar negative samples apart. The classical data augmentation used for this is random, which can produce suboptimal effects and lacks interpretability: because the objective maximizes the consistency of different views, randomness can cause irrelevant information to be learned by mistake. The present invention therefore adopts a learnable data augmentation that learns whether to delete each edge, converting the original interaction bipartite graph into the corresponding view graph. The fine-grained hierarchical graph structure of the auxiliary relation graphs is used for an unsupervised mutual-information-maximization objective that further optimizes node feature learning.
Example one
As shown in fig. 1, this embodiment provides a multi-intention recommendation method based on graph contrastive learning. The method decouples the multiple interaction intentions of users and commodities at a fine granularity and learns the hidden factors behind the decision graph structure, so that the decomposed node intention representations are interpretable. It also introduces contrastive learning, a powerful feature-learning method, and performs contrastive learning on the intention features under each specific hidden-factor semantic in a personalized manner, at a finer granularity and from multiple aspects, realizing a precise adaptive refined contrastive learning paradigm, better learning interpretable and robust intention feature representations of users and commodities, and improving the performance of the recommendation model. The method specifically comprises the following steps:
s101, collecting a data set with user social relations, commodity attribute relations and user commodity interaction relations.
As an optional embodiment, after the data set is obtained, the method further includes steps A1-A2 of preprocessing the acquired data set:
A1, filtering out invalid users according to the model's preset conditions, and keeping the valid users and the corresponding commodity nodes;
A2, dividing the data set: for each user, randomly selecting one interaction for the validation set and one for the test set, and taking the remaining interactions as the training set.
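A minimal sketch of this leave-one-out style split; the function and variable names are illustrative, not from the patent:

```python
import random

def leave_one_out_split(interactions, seed=0):
    """For each user, hold out one interaction for validation and one for
    test; the remaining interactions form the training set."""
    rng = random.Random(seed)
    train, valid, test = {}, {}, {}
    for user, items in interactions.items():
        items = list(items)
        if len(items) < 3:
            # too few interactions to hold anything out; keep all for training
            train[user] = items
            continue
        rng.shuffle(items)
        valid[user] = items[0]
        test[user] = items[1]
        train[user] = items[2:]
    return train, valid, test
```

The shuffle makes the held-out interactions random per user, as the text describes; a fixed seed keeps the split reproducible.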
S102, storing the user social relations, commodity attribute relations and user-commodity interaction relations in a sparse graph structure, to obtain graph structure data usable by a graph convolutional neural network model.
S103, introducing the user's commodity-purchasing behavior based on the original social relations and commodity attribute relations, to form a new social relation graph and a new commodity attribute graph.
Forming a new social relation graph and commodity attribute graph: in order to add auxiliary information to the interactive-behavior recommendation prediction, the social relations and attribute relations are further processed according to the settings required by the model, specifically as follows: the social relations and commodity attribute relations in the data set are combined with the user's commodity-purchasing behavior information. If the number of identical commodities purchased by two friends is larger than a certain threshold, the two friends are considered to be possibly connected due to a similar purchasing intention, and the social relation graph is rebuilt; if two commodities of the same category are purchased by several of the same users, the two commodities are considered to be possibly linked because they reflect a similar purchasing intention, and the commodity attribute relation graph is rebuilt; the supervision signal generated by this fine-grained graph structure information can then be used to further refine the multi-factor feature vectors learned by the model.
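The social-edge rule above can be sketched as follows; the function names and data layout are assumptions for illustration, not from the patent:

```python
def rebuild_social_graph(friend_pairs, purchases, threshold=2):
    """Keep a friend edge only when the two users share strictly more than
    `threshold` purchased commodities, as the rebuilding rule describes."""
    kept = []
    for u, v in friend_pairs:
        common = set(purchases.get(u, ())) & set(purchases.get(v, ()))
        if len(common) > threshold:
            kept.append((u, v))
    return kept
```

The commodity attribute graph would be rebuilt symmetrically, counting common purchasers of two same-category commodities instead of common purchases of two friends.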
S104, constructing a corresponding contrastive view based on the K decoupled latent-factor intention representations, and generating a parameterized augmented UI graph through a learnable adaptive drop operation.
Under each intention factor, the probability ω_{k,ui} that each edge on the interaction graph corresponding to the k-th latent factor is deleted is computed in a parameterized manner:

ω_{k,ui} = MLP(Concat[u_k, v_k])

where MLP is a multi-layer perceptron and Concat denotes concatenating two feature vectors.
To optimize the learning of the graph structure more efficiently and in an end-to-end manner, a reparameterization trick is employed, expressed as:

p = σ((log ε - log(1 - ε) + ω)/τ)

where ε follows a uniform distribution on (0, 1); τ > 0 is a temperature coefficient that adjusts the concentration of the distribution; σ(·) is the activation function.
After the probability of each edge is computed, the edges with probability less than 0.5 are deleted and the other edges are kept, yielding the augmented graph G′_{k,ui} after one drop.
The node representations (u_k, v_k) and the augmented graph G′_{k,ui} are input into a parameter-sharing GCN encoder, and after L layers of message-passing aggregation, the final multi-intention user and commodity features E′_{ku}, E′_{ki} are obtained.
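The reparameterized edge drop can be sketched numerically as follows; this is a minimal numpy sketch of the sampling formula, with the learned logits ω supplied as an input array rather than produced by an MLP, and all names are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_edge_probs(omega, tau=0.5, rng=None):
    """Reparameterized edge-keep probabilities:
    p = sigma((log eps - log(1 - eps) + omega) / tau), eps ~ U(0, 1),
    so the sampling stays differentiable with respect to the logits omega."""
    rng = np.random.default_rng(0) if rng is None else rng
    omega = np.asarray(omega, dtype=float)
    eps = rng.uniform(1e-6, 1.0 - 1e-6, size=omega.shape)
    return sigmoid((np.log(eps) - np.log(1.0 - eps) + omega) / tau)

def drop_edges(edges, omega, tau=0.5, threshold=0.5):
    """Delete edges whose sampled probability falls below `threshold`,
    yielding the augmented view of the interaction graph."""
    probs = sample_edge_probs(omega, tau)
    return [e for e, p in zip(edges, probs) if p >= threshold]
```

Edges with strongly positive logits are almost always kept and strongly negative ones almost always dropped, while logits near zero are dropped stochastically, which is what makes the augmentation learnable.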
S105, learning the decoupled features of the recommendation model: establishing K GCN (graph convolutional network) message-passing channels that encode features separately, each GCN channel simultaneously learning two groups of features on the original user-commodity interaction bipartite graph and on the augmented user-commodity bipartite graph.
The user and commodity features are divided into K feature blocks, i.e. u = (u_1, u_2, ..., u_K), v = (v_1, v_2, ..., v_K), u_k, v_k ∈ R^{d/K}, with a one-to-one correspondence between feature blocks and intentions, and between each intention block of a user and each intention block of a commodity (u_k, v_k). Based on the graph convolutional message-passing model GCN_k for each intention factor, the user and commodity node features (u_k, v_k) and the bipartite graph G_{ui} are input, and after L layers of message-passing aggregation, the final multi-intention user and commodity features E_{ku}, E_{ki} are obtained.
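The block division can be sketched as a simple split of the embedding; the function name is illustrative:

```python
import numpy as np

def split_into_intents(embedding, k):
    """Divide a d-dimensional embedding into K equal feature blocks
    u = (u_1, ..., u_K), each in R^{d/K}, so that every block can be
    encoded by its own GCN message-passing channel."""
    embedding = np.asarray(embedding, dtype=float)
    d = embedding.shape[-1]
    if d % k != 0:
        raise ValueError("embedding dimension must be divisible by K")
    return np.split(embedding, k, axis=-1)
```

Concatenating the blocks back together recovers the original embedding, which is exactly what the later splicing step relies on.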
S106, introducing K intention prototype vectors, and learning the distribution of each node's multiple intention features on the UI graph.
After the K feature blocks of a node aggregate neighbor or higher-order information on their respective augmented graphs, the feature representation under each latent intention factor is obtained, and the degree of agreement between the K feature representations E′_{kj} learned by the node and the preset intention category prototype vectors is computed:

P_k = exp(cos(E′_{kj}, c_k)) / Σ_{k′=1..K} exp(cos(E′_{k′j}, c_{k′}))

where cos(·,·) is the cosine similarity, which evaluates the similarity of two vectors, and exp(·) is the exponential function. The formula gives the normalized distribution over the K intention categories obtained after each node's features pass through multiple layers of GCN message-passing aggregation.
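The distribution can be sketched as a softmax over cosine similarities; names are illustrative:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def intent_distribution(block_features, prototypes):
    """Softmax over cosine similarities between a node's K decoupled
    features and the K intention prototype vectors, giving the normalized
    distribution over intention categories described in the text."""
    sims = np.array([cosine(e, c) for e, c in zip(block_features, prototypes)])
    exp = np.exp(sims - sims.max())  # numerically stable softmax
    return exp / exp.sum()
```

A block whose feature aligns with its prototype receives a higher probability, so the distribution says how strongly the node expresses each intention.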
And S107, two groups of intention characteristics decoupled from the two comparison views according to the K potential hidden factors are respectively and independently subjected to K times of personalized comparison learning under different hidden factors.
First, the standard contrastive loss under each intention feature is computed independently: the positive pair is (E_kj, E'_kj), where j denotes a user or commodity node, and all other sample points within the mini-batch serve as negatives. The positive score is

s⁺_kj = exp(sim(E_kj, E'_kj)/τ),

the total score of all negatives is

s⁻_kj = Σ_{j'≠j} exp(sim(E_kj, E'_kj')/τ),

and the contrastive loss in the feature space of a specific intention is

L_kj = -log(s⁺_kj / (s⁺_kj + s⁻_kj)).

Then the weight coefficients are computed for fusing the contrastive losses of the K latent factors into the loss of the unified feature: the feature vectors of the different latent factors contained in each node contribute to the final contrastive-learning loss to different degrees. The contribution is related to the normalized probability distribution p_kj over the K latent-factor representations decoupled and learned by a node, i.e., the probability that the node carries a given intention. Whether the representation of a latent factor can be accurately contrasted also depends on the task: the node's enhanced sub-graph structure may be inaccurate, and forcibly maximizing agreement of the feature information from the two views may produce a sub-optimal result. Therefore

r_kj = s⁺_kj / (s⁺_kj + s⁻_kj)

measures, from a probabilistic perspective, how reasonable this contrastive learning is, and

w_kj = p_kj · r_kj

serves as the weight coefficient before the accumulation of the independent contrastive losses over the K intention categories of each node. The final contrastive loss of each node is

L_j = Σ_{k=1..K} w_kj · L_kj,

and the total loss

L_cl = Σ_u L_u + Σ_i L_i,

i.e., the sum of the contrastive losses on the user and commodity sides.
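The weighted accumulation of per-intent contrastive losses might be sketched as follows; raw similarity scores and per-intent weights are taken as given, and all names and the temperature default are illustrative assumptions:

```python
import math

def info_nce(pos_sim, neg_sims, tau=0.2):
    # -log( exp(pos/tau) / (exp(pos/tau) + sum_j exp(neg_j/tau)) )
    pos = math.exp(pos_sim / tau)
    neg = sum(math.exp(s / tau) for s in neg_sims)
    return -math.log(pos / (pos + neg))

def node_contrast_loss(pos_sims, neg_sims_per_intent, weights, tau=0.2):
    # weighted sum of the K independent per-intent InfoNCE losses of one node
    return sum(w * info_nce(p, negs, tau)
               for w, p, negs in zip(weights, pos_sims, neg_sims_per_intent))
```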
And S108, splicing and combining the K intention characteristics to serve as predicted user and commodity characteristics, and introducing an unsupervised learning task based on mutual information maximization by utilizing the graph structure information of the social relationship graph and the commodity attribute graph.
A hierarchical mutual-information-maximization learning paradigm is generated from the progressive graph-structure information running from nodes, to the sub-graphs centered on them, to the global graph, mining graph-structure information at finer granularity and further optimizing the learning of node features.
And S109, performing joint learning on the recommended task, a multi-intention-based personalized comparative learning task and a maximization task based on mutual information of a graph structure.
The final user and commodity features are: E_u = {E_1u, E_2u, ..., E_Ku} and E_i = {E_1i, E_2i, ..., E_Ki}. The supervised loss L_bpr of the recommendation task is obtained with the BPRLoss loss function and added to the unsupervised losses to give the final model loss:

L = L_bpr + μ·L_cl + λ·L_mim

where μ and λ serve as balance factors for the unsupervised task losses, L_cl being the contrastive loss and L_mim the mutual-information-maximization loss.
And updating the parameters of the model by using a gradient descent method until the loss function reaches a preset threshold value.
And S110, scoring and predicting the user and commodity embedded vectors finally learned by the model to obtain a recommended commodity sequence.
Example two
As shown in fig. 2, the present embodiment provides a recommendation method for a graph convolution neural network based on vector representations corresponding to multiple implicit factors for decoupling a user and a commodity, including the following steps:
S201, acquiring and processing the data set: a data set containing the user-commodity interaction relations, user social relations and item attribute relations is acquired from the service-provider platform and then preprocessed to obtain the required data set.
After the selected data set is acquired, the method further comprises the step of preprocessing the data set, including:
Invalid users are filtered out by keeping only users with at least three commodity interactions, retaining the valid users and the nodes of the commodities they interacted with. The data set is then split: for each user, one interaction is randomly selected for the validation set and one for the test set, and the remaining interactions form the training set. Finally, for judging the prediction results, negative sampling is performed on the validation and test sets.
S202, the interaction relations in the acquired data set are stored as graph-structured data in sparse-matrix form; to add auxiliary information to the interaction-based recommendation prediction, the social relations and attribute relations are further processed according to the model's requirements. To inject the social relations in the data set into the commodity-purchase behavior information: if the number of identical commodities purchased by two friends exceeds a threshold, the two friends are considered to be connected by some similar purchase intention, and the social relation graph is rebuilt accordingly; if two commodities of the same category are purchased by several of the same users, the two commodities are considered to be linked because they contain some similar purchase intention, and the commodity attribute relation graph is likewise rebuilt. The additionally generated supervision signal further refines the multi-factor feature vectors learned by the model.
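The co-purchase filtering of social edges can be sketched as follows; the threshold value, function name and dictionary representation are illustrative assumptions (the commodity-attribute graph would be rebuilt symmetrically):

```python
def rebuild_social_edges(friend_pairs, purchases, threshold=3):
    # keep a friendship edge only when the two users' purchase sets
    # overlap on more than `threshold` commodities
    kept = []
    for u, w in friend_pairs:
        overlap = purchases.get(u, set()) & purchases.get(w, set())
        if len(overlap) > threshold:
            kept.append((u, w))
    return kept
```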
And S203, constructing enhanced views corresponding to the K potential factors.
According to the latent-factor representation vectors of the user and the commodity under each intention, the node features at the two ends of each interaction edge on the original user-commodity bipartite graph are concatenated to obtain the edge feature, which is fed into an MLP network to produce a single value; the re-parameterization trick then yields the edge's probability, edges with probability below the threshold are dropped, and the enhanced views of the different latent intention subspaces are obtained.
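The learnable edge drop with the re-parameterization trick ρ = σ((log ε − log(1−ε) + ω)/τ) given in claim 4 might be sketched like this; the pure-Python sampling and function names are illustrative assumptions:

```python
import math
import random

def edge_keep_probability(omega, tau=0.5, rng=random):
    # rho = sigmoid((log eps - log(1 - eps) + omega) / tau), eps ~ U(0, 1)
    eps = min(max(rng.random(), 1e-9), 1.0 - 1e-9)  # guard against log(0)
    z = (math.log(eps) - math.log(1.0 - eps) + omega) / tau
    return 1.0 / (1.0 + math.exp(-z))

def drop_edges(edges, omegas, threshold=0.5, tau=0.5, rng=random):
    # keep an edge only if its sampled probability reaches the threshold
    return [e for e, w in zip(edges, omegas)
            if edge_keep_probability(w, tau, rng) >= threshold]
```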
And S204, coding the characteristics of the nodes in the K intention subspaces.
And the K intention vector representations decoupled from each node aggregate and update neighbor information over K GCN message-passing channels to obtain the corresponding latent-factor feature representations; node feature learning on the enhanced view performs the same operation to obtain the other group of K latent intention representation vectors.
S205, self-adaptive independent comparison learning of K intention subspaces.
And introducing prototype vectors of K intention categories, calculating the normalized probability distribution of K decoupled intention representations of the updated node belonging to a certain intention category, and indicating whether the node learns the intention of a certain hidden factor category.
And contrastive learning is carried out separately on the intention representation vectors learned in the K latent-factor spaces. The ratio of the positive-example score to the total scores gives the probability that the node should participate in the contrastive-loss task; multiplying this by the probability that the learned intention feature belongs to the latent-factor category gives the weight coefficient before the accumulated contrastive losses of the K intention subspaces, enabling finer-grained, personalized contrastive learning.
S206, the preprocessed supervision signals of the user-user social relations (UU) and the commodity-commodity (II) attribute relations are introduced. The decoupled intention features are concatenated into user and commodity features, on which hierarchical mutual-information maximization is performed: on the UU graph and the II graph respectively, mutual information is maximized between each node's features and the aggregated features of the sub-graph centered on it, and between those sub-graph features and the global graph representation. The structural information of nodes, sub-graphs and the global graph thus adds extra supervision signals for better learning of node features.
And S207, joint training of a recommended task and an unsupervised task.
And the K intention feature segments are concatenated into the user and commodity features; the scores of positive and negative samples are computed by inner product, the recommendation-task loss is computed with BPRLoss and combined with the contrastive-loss task and the mutual-information-maximization task, and model training and optimization proceed by gradient descent to obtain the parameters with the best recommendation performance.
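The BPR loss mentioned above can be sketched as follows; this pure-Python version stands in for a tensor implementation, and the names are illustrative:

```python
import math

def bpr_loss(pos_scores, neg_scores):
    # Bayesian Personalised Ranking: -mean( log sigmoid(pos - neg) )
    def log_sigmoid(t):
        return -math.log(1.0 + math.exp(-t))
    n = len(pos_scores)
    return -sum(log_sigmoid(p - q) for p, q in zip(pos_scores, neg_scores)) / n
```

Loss shrinks as positive items are scored above their sampled negatives.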
S208, recommendation prediction: and (4) carrying out scoring prediction on the embedded vectors of the user and the commodity finally learned by the model to obtain a recommended commodity sequence.
EXAMPLE III
As shown in fig. 2 and fig. 3, the present embodiment provides a multi-intention recommendation model based on graph contrastive learning. First, the acquired data set is split into training, validation and test sets and preprocessed, and the relation graphs of commodity interactions, user social ties and commodity attributes are constructed; the K intention feature blocks of users and commodities are decoupled; an MLP parameter network learns the enhancement that yields the other contrast view; feature learning proceeds on K GCN message-encoding channels, with adaptive contrastive learning providing personalized learning of the feature vectors of the K intention factors; the K intention feature blocks are concatenated into full features, and the hierarchical graph-structure information of the auxiliary UU and II relation graphs drives a finer-grained unsupervised mutual-information-maximization task that strengthens feature learning, yielding high-quality node representations for the final recommendation prediction. The method specifically comprises the following steps:
s301, constructing a data set, acquiring the data set with social relations, interaction records and commodity category information under each Internet platform, and further filtering, dividing and carrying out random negative sampling.
After the data set is acquired, since the invention does not address the cold-start problem, users with fewer than three interactions are removed to ensure data quality. The interaction data of each user are then split: one interaction record is randomly selected for the validation set and one for the test set, and the rest form the training set. To verify and test the model's recommendation predictions, random negative sampling is performed on the validation and test sets at a positive-to-negative ratio of 1:99.
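The 1:99 evaluation-list construction can be sketched as follows; function name and data layout are illustrative assumptions:

```python
import random

def build_eval_list(positive_item, all_items, interacted, n_neg=99, rng=None):
    # one held-out positive plus n_neg sampled items the user never interacted with
    rng = rng or random.Random(0)
    pool = [i for i in all_items if i not in interacted]
    return [positive_item] + rng.sample(pool, n_neg)
```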
S302, data preparation and auxiliary relation graph construction: to introduce the implicitly existing social relations and commodity attribute relations into the users' purchase-behavior information so that they effectively assist the model's main objects of study, the graphs are rebuilt according to whether similar intentions exist. If the number of overlapping commodities purchased by two users in a friend relation exceeds a set threshold, the two users are considered connected by some similar intention and an edge is established; likewise, if the number of users who purchased two commodities of the same category exceeds the set threshold, the two commodities are considered linked because they contain similar user intentions. The processed social-relation, commodity-attribute and interaction data are stored as graph structures. Construction of the contrast view: the edge features are fed into a parameterized MLP transformation network that outputs each edge's importance probability, and edges whose probability is below the set threshold are dropped, yielding the enhanced views corresponding to the different intentions.
S303, learning of node K-segment intention feature vector
An input intention feature vector x_i is non-linearly mapped into the intention subspace by the parameter matrix W_k specific to that intention's GCN channel, and normalized:

e_{i,k} = L2_norm(σ(W_k x_i + b_k))

wherein L2_norm divides each component by the vector's L2 norm; W_k and b_k are the weight parameter and bias of the mapping; σ is the activation function.
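A minimal sketch of this mapping with plain lists standing in for tensors; the sigmoid activation and all names are illustrative assumptions:

```python
import math

def l2_normalize(vec):
    # divide each component by the vector's L2 norm
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm > 0 else vec

def map_to_intent_subspace(x, w_rows, b):
    # e = L2_norm(sigma(W x + b)) for one intent-specific channel;
    # W is given as a list of rows, b as a list of biases
    def sigmoid(t):
        return 1.0 / (1.0 + math.exp(-t))
    pre = [sum(wi * xi for wi, xi in zip(row, x)) + bi
           for row, bi in zip(w_rows, b)]
    return l2_normalize([sigmoid(t) for t in pre])
```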
And the intention features of users and commodities corresponding to each latent factor undergo K independent rounds of neighbor-information propagation and aggregation on the original bipartite graph and on the enhanced graph, yielding two groups of K latent intention feature representations.
And S304, self-adaptive comparison learning.
Under the K different intention subspaces, the InfoNCE loss formula is applied: from the probability distribution of the K original intention semantics contained in the node's decoupled intention representations, and from the ratio of positive to negative scores of the two views' decoupled features under a given intention, the probability that each decoupled feature is suitable for contrastive learning is computed, giving the sum of the contrastive losses of each node over the K latent factors.
And S305, unsupervised learning with maximized graph mutual information.
And the K decoupled intention features are concatenated to obtain the complete user and commodity features. Following the hierarchical graph structure designed by the invention, comprising node features, features of the sub-graph centered on each node, and global graph features, the structural information among the graph's substructures is learned at finer granularity. Two progressive optimization objectives, maximizing mutual information between node representations and sub-graph features and between sub-graph features and the global graph representation, further strengthen the learning of node features.
And S306, recommending and predicting.
And finally, the model outputs the comprehensive user and commodity embedding vectors. The supervised loss of the recommendation task is obtained with the BPRLoss loss function and added to the unsupervised losses to give the final model loss; gradient descent computes the gradient of the loss with respect to each parameter, back-propagates it through the network, and continually updates the model parameters. In the testing stage, performance is evaluated on the prepared test data using the user and commodity representation vectors finally computed by the model: a score sequence is obtained from the similarity of each user-commodity embedding pair and sorted in descending order, the highest-scoring commodities being those the user prefers most strongly. The hit rate is given by whether the positive commodity appears among the top 10 scored commodities; if it is hit, its position within the top 10 is used to further compute the prediction accuracy, and an early-stopping algorithm decides whether to run the next training round. Once model training completes with the best recommendation prediction performance, accurate predictions can be made and a list of commodities the user may like is recommended.
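The top-10 hit check described above can be sketched as follows; the function name is illustrative:

```python
def hit_at_k(ranked_items, positive_item, k=10):
    # returns (hit, 1-based position within the top-k); position is None on a miss
    top = ranked_items[:k]
    if positive_item in top:
        return True, top.index(positive_item) + 1
    return False, None
```

The 1-based position would feed a rank-aware accuracy metric such as NDCG.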
In summary, compared with the prior art, the method of the embodiment has the following advantages and beneficial effects:
(1) In the invention, the characteristics of the user and the commodity are decoupled, the decisive potential factors behind the interactive behavior are distinguished, and K GCN message transmission channels are independently coded under K hidden factor subspaces.
(2) In the invention, data enhancement is not traditional random dropping; instead, an MLP transformation is applied to the decoupled user and commodity features of each latent factor to obtain the importance of each interaction edge under that intention's semantics, from which the contrast view is constructed.
(3) The invention does not use the UU and II relation data in the data set directly; instead, interaction behavior information is introduced on top of the original relations to filter the edges, forming new UU and II relation graphs that contain interaction behavior information. Finally, mutual information is maximized between fine-grained graph structures, rather than in the traditional, information-lossy way between the node level and the global graph.
(4) Conventional contrastive learning over multi-intention features contrasts the K intentions pairwise, effectively treating the K decoupled features as sharing similar semantic information. The invention instead performs graph contrastive learning independently within each intention subspace, which benefits further learning of the GCN-based decoupled representations.
(5) The invention introduces multi-intention decomposition and latent-factor prototype intention vectors. From a probabilistic viewpoint, the normalized distribution of latent-factor semantics in each node's decoupled features, together with the normalized distribution of the contrastive-loss values, determines the coefficient before the accumulation of the contrastive losses in each latent-factor feature space, giving the final complete loss value.
The embodiment also provides a multi-intention recommendation device based on graph comparison learning, which comprises:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method of fig. 1.
The multi-intent recommendation device based on graph comparison learning according to the embodiment of the invention can execute the multi-intent recommendation method based on graph comparison learning provided by the method embodiment of the invention, can execute any combination of the implementation steps of the method embodiment, and has corresponding functions and beneficial effects of the method.
Embodiments of the present application also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read by a processor of a computer device from a computer-readable storage medium, and executed by the processor to cause the computer device to perform the method illustrated in fig. 1.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the described functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in a separate physical device or software module. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer, given the nature, function, and internal relationship of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is defined by the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Further, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the foregoing description of the specification, reference to the description of "one embodiment/example," "another embodiment/example," or "certain embodiments/examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A multi-intention recommendation method based on graph contrast learning is characterized by comprising the following steps:
collecting a data set with user social relations, commodity attribute relations and user commodity interaction relations;
storing data of user social relations, commodity attribute relations and user commodity interaction relations by adopting a sparse graph structure form to obtain graph structure data which can be used in a graph convolution neural network model;
introducing the behavior of purchasing commodities of the user based on the original social relationship and the commodity attribute relationship to form a new social relationship graph and a commodity attribute graph;
constructing a corresponding contrast view based on K decoupled potential factor intention representations, and generating a parameterized and enhanced UI diagram in a learnable drop mode;
learning decoupling characteristics of a recommendation model, establishing K GCN message transmission channels, respectively coding the characteristics, and simultaneously learning two groups of characteristics of each GCN channel in an original user-commodity interaction bipartite graph and an enhanced user-commodity bipartite graph;
introducing K intention prototype vectors, and learning the distribution of a plurality of intention characteristics of each node on the UI diagram;
two groups of intention characteristics decoupled from the two comparison views according to K potential hidden factors are respectively and independently subjected to K times of personalized comparison learning under different hidden factors;
splicing and combining the K intention characteristics to serve as predicted user and commodity characteristics, and introducing an unsupervised learning task based on mutual information maximization by utilizing the graph structure information of the social relationship graph and the commodity attribute graph;
performing joint learning on the recommended task, a multi-intention-based personalized comparative learning task and a maximization task based on mutual information of a graph structure;
and (4) performing scoring prediction on the embedded vectors of the user and the commodity finally learned by the model to obtain a recommended commodity sequence.
2. The multi-intent recommendation method based on graph contrast learning according to claim 1, further comprising the step of preprocessing the acquired data set:
filtering invalid users according to preset conditions of the model, and reserving the valid users and corresponding commodity nodes;
and dividing the data set, randomly selecting an interaction from the verification set and the test set of each user, and taking the remaining interaction items as a training set.
3. The multi-intention recommendation method based on graph contrast learning according to claim 1, wherein the behavior of purchasing commodities by a user is introduced based on the original social relationship and commodity attribute relationship to form a new social relationship graph and commodity attribute graph, and the method comprises the following steps:
in order to add auxiliary information into the interactive behavior recommendation prediction, the social relationship and the attribute relationship are further processed according to the settings required by the model, which is specifically as follows: and injecting the social relationship and the commodity attribute relationship in the data set into the commodity purchasing behavior information of the user. If the number of the same commodities purchased between the two friends is larger than a preset threshold value, judging that the two friends are likely to be connected due to a certain similar purchase intention, and reconstructing a social relationship graph; if two commodities belonging to the same category are purchased by a plurality of same users, judging that the two commodities are possibly linked because of containing a certain similar purchasing intention, and reforming a commodity attribute relation graph; and further learning the feature vector which is learned by the model and comprises a plurality of factors by utilizing a supervision signal generated by the graph structure information of the fine-grained hierarchy.
4. The multi-intention recommendation method based on graph contrast learning according to claim 1, wherein the method for generating the parameterization enhanced UI diagram by a learnable drop manner based on constructing the corresponding contrast view based on the K decoupled potential factor intention characterizations comprises:
in each intention-factor scenario, the probability ω_{k_ui} of whether each edge on the interaction relation graph corresponding to the K latent factors is deleted is computed in a parameterized manner:

ω_{k_ui} = MLP(Concat[u_k, v_k])

wherein MLP is a multi-layer perceptron and Concat denotes concatenating two feature vectors together; u_k and v_k are respectively the user node feature and the commodity node feature of a UI interaction edge on the graph corresponding to the k-th latent factor;
to optimize the learning of the graph structure in an end-to-end manner, the re-parameterization trick is employed, expressed as:

ρ = σ((log ε − log(1 − ε) + ω_{k_ui})/τ)

wherein ε follows the uniform distribution on (0, 1); τ > 0 is the temperature coefficient; σ(·) is the activation function;
the probability of each edge is calculated, edges whose probability is smaller than a preset threshold are deleted, and the remaining edges are retained, yielding the enhanced graph G′_{k,ui} after the drop;

the node representations (u_k, v_k) and the enhanced graph G′_{k,ui} are input into a parameter-shared GCN encoder, and the final user and commodity features E′_{ku}, E′_{ki} under multiple intentions are obtained through L layers of message-passing aggregation and accumulation.
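The parameterized edge-drop with the reparameterization trick can be sketched as follows (a hypothetical NumPy illustration: the MLP weight shapes, seed, and function names are assumptions, not the patent's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_keep_prob(u_k, v_k, W1, W2, tau=0.5):
    """For each UI edge under latent factor k, score the concatenated node
    features with a tiny MLP (weights W1, W2) to get the logit omega, then
    apply the Gumbel-style reparameterization so sampling stays differentiable.
    Returns a keep-probability rho in (0, 1) per edge."""
    x = np.concatenate([u_k, v_k], axis=-1)           # (edges, 2d)
    h = np.maximum(x @ W1, 0.0)                       # ReLU hidden layer
    omega = (h @ W2).squeeze(-1)                      # logit per edge
    eps = rng.uniform(1e-6, 1 - 1e-6, omega.shape)    # eps ~ Uniform(0, 1)
    logit = (np.log(eps) - np.log(1 - eps) + omega) / tau
    return 1.0 / (1.0 + np.exp(-logit))               # sigma(.) = sigmoid
```

Edges whose ρ falls below a preset threshold would then be dropped to form the enhanced graph G′_{k,ui}.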
5. The multi-intention recommendation method based on graph contrastive learning according to claim 1, wherein the learning of the decoupled features of the recommendation model comprises:

the user features and the commodity features are each divided into K feature blocks, namely u = (u_1, u_2, …, u_K) and v = (v_1, v_2, …, v_K), with u_k, v_k ∈ R^{d/K}; the feature blocks correspond one-to-one with the intentions, and each intention block of a user corresponds one-to-one with the matching intention block of a commodity as the pair (u_k, v_k); wherein R^{d/K} is the real vector space of dimension d/K;

for the graph-convolutional message-passing model GCN_k of each intention factor k, the user and commodity node features (u_k, v_k) and the bipartite interaction graph G_ui are input, and the final user and commodity features E_{ku}, E_{ki} under multiple intentions are obtained through L layers of message-passing aggregation and accumulation.
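The feature chunking above can be sketched as follows (a hypothetical helper; the function name is an assumption):

```python
import numpy as np

def split_into_intents(emb, K):
    """Split a d-dimensional embedding into K equal blocks u = (u_1, ..., u_K),
    one block per latent intent; requires d to be divisible by K."""
    d = emb.shape[-1]
    assert d % K == 0, "embedding dimension must be divisible by K"
    return np.split(emb, K, axis=-1)   # list of K arrays of shape (..., d/K)
```

Each block would then be propagated by its own GCN_k over the bipartite graph.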
6. The multi-intention recommendation method based on graph contrastive learning according to claim 1, wherein the introduction of the K intention prototype vectors c_1, c_2, …, c_K comprises:
the K feature blocks of a node aggregate neighbor information or high-order information on their respective enhanced graphs to obtain the feature representation under each latent intention factor, and the degree of matching between the K decoupled feature representations E′_{kj} learned for the node and the preset intention-category prototype vectors is calculated:

P_k = exp(cos(E′_{kj}, c_k)) / Σ_{k′=1}^{K} exp(cos(E′_{k′j}, c_{k′}))

wherein cos(·, ·) is the cosine similarity, which evaluates the similarity of two vectors; E′_{kj} is the k-th decoupled feature representation of node j on the graph; c_k is the k-th intention prototype vector introduced by the model; exp(·) is the exponential function; and P_k denotes the normalized distribution over the K intention categories obtained for each node after multi-layer GCN message-passing aggregation.
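The softmax over prototype similarities can be sketched as follows (an illustrative NumPy version; names are assumptions):

```python
import numpy as np

def intent_distribution(E_j, prototypes):
    """Given a node's K decoupled representations E_j (shape (K, d)) and the K
    intent prototype vectors (shape (K, d)), return the normalized distribution
    P over the K intention categories via a softmax of cosine similarities."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(E_j[k], prototypes[k]) for k in range(len(prototypes))])
    e = np.exp(sims - sims.max())    # numerically stable softmax
    return e / e.sum()
```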
7. The multi-intention recommendation method based on graph contrastive learning according to claim 1, wherein the two groups of intention features, obtained by decoupling the two contrastive views according to the K latent factors, are independently subjected to K rounds of personalized contrastive learning under the different latent factors, comprising:

the common contrastive loss function under each intention feature is computed independently, taking (E_{kj}, E′_{kj}) as the positive pair and the other sample points within the minibatch as negative samples, wherein j denotes a user node or a commodity node;

positive-pair score:

s⁺_{kj} = exp(sim(E_{kj}, E′_{kj}) / τ)

sum of all negative-sample scores:

s⁻_{kj} = Σ_{j′≠j} exp(sim(E_{kj}, E′_{kj′}) / τ)

contrastive loss function in the feature space of the specific intention k:

L_{kj} = −log( s⁺_{kj} / (s⁺_{kj} + s⁻_{kj}) )
the contrastive losses corresponding to the K latent factors are calculated, and a weight coefficient is applied before they are fused into the unified loss: the feature-vector information corresponding to the different latent factors contained in each node contributes to the final contrastive-learning loss to different degrees. This contribution is related to the normalized probability distribution over the K decoupled latent-factor representations learned by the node, that is, to the probability that the node carries a given intention vector; it is also related to whether the representation corresponding to a latent factor can support the contrastive-learning task accurately, since the node's enhanced sub-graph structure may be inaccurate, and forcibly maximizing the agreement of the two views' feature information would then produce a suboptimal effect; therefore the weight

w_{kj} = P_{kj}

is defined, measuring the rationality of this contrastive comparison from a probabilistic perspective, and serves as the weight coefficient before the accumulation of the K independent per-intention contrastive losses of each node; the final contrastive loss of each node is then

L_j = Σ_{k=1}^{K} w_{kj} · L_{kj}

and the total loss function

L_cl = (α/m) Σ_{u=1}^{m} L_u + (β/n) Σ_{i=1}^{n} L_i

is the contrastive loss over the user side and the commodity side; wherein α and β are the balance coefficients of the contrastive loss functions of the user side and the commodity side, m is the number of users, and n is the number of commodities.
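The per-intent InfoNCE loss and its probability-weighted fusion can be sketched as follows (an illustrative NumPy version under the reconstruction above; function names are assumptions):

```python
import numpy as np

def info_nce(anchor, positives, tau=0.2):
    """For one intent k: anchor and positives are the two views' embeddings,
    shape (n, d), row j forming the positive pair; the other rows of the
    minibatch serve as negatives. Returns one loss per node."""
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    n = anchor.shape[0]
    losses = []
    for j in range(n):
        pos = np.exp(sim(anchor[j], positives[j]) / tau)
        neg = sum(np.exp(sim(anchor[j], positives[m]) / tau)
                  for m in range(n) if m != j)
        losses.append(-np.log(pos / (pos + neg)))
    return np.array(losses)

def node_loss(per_intent_losses, P):
    """Fuse the K per-intent losses of one node, weighted by its normalized
    intent distribution P (the w_kj = P_kj weighting)."""
    return float(np.dot(P, per_intent_losses))
```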
8. The multi-intention recommendation method based on graph contrastive learning according to claim 1, wherein the unsupervised learning task based on mutual-information maximization, introduced by using the graph structure information of the social relationship graph and the commodity attribute graph, comprises:

generating a hierarchical, mutual-information-maximizing learning paradigm from the progressive graph structure information running from each node, to the sub-graph centered on that node, to the global graph, thereby mining the graph structure information at a finer granularity and further optimizing the learning of the node features.
9. The multi-intention recommendation method based on graph contrastive learning according to claim 1, wherein the joint learning of the recommendation task, the multi-intention personalized contrastive-learning task, and the graph-structure mutual-information maximization task comprises:

the final user and commodity representations are E_u = {E_{1u}, E_{2u}, …, E_{Ku}} and E_i = {E_{1i}, E_{2i}, …, E_{Ki}}; the supervised loss L_bpr of the recommendation task is obtained with the BPR loss function and added to the unsupervised losses to form the final model loss:

L = L_bpr + μ · L_cl + λ · L_MI

wherein μ and λ are hyperparameters used as the balance factors of the unsupervised task losses; L_MI is the graph mutual-information maximization loss function; and the parameters of the model are updated by gradient descent until the loss function reaches a preset threshold.
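A minimal sketch of the BPR loss and the joint objective (the second balance symbol λ was lost in the source, so `lam` is an assumption, as are the function names):

```python
import numpy as np

def bpr_loss(user_emb, pos_emb, neg_emb):
    """Bayesian Personalized Ranking loss: for each (user, purchased item,
    unpurchased item) triple, push the positive dot-product score above the
    negative one. All arrays have shape (batch, d)."""
    pos_scores = np.sum(user_emb * pos_emb, axis=-1)
    neg_scores = np.sum(user_emb * neg_emb, axis=-1)
    sigmoid = 1.0 / (1.0 + np.exp(-(pos_scores - neg_scores)))
    return -np.mean(np.log(sigmoid))

def total_loss(l_bpr, l_cl, l_mi, mu=0.1, lam=0.1):
    """Joint objective: supervised BPR loss plus the weighted contrastive and
    mutual-information losses (mu, lam are hyperparameter balance factors)."""
    return l_bpr + mu * l_cl + lam * l_mi
```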
10. A multi-intention recommendation device based on graph contrastive learning, characterized by comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method of any one of claims 1-9.
CN202210847446.2A 2022-07-19 2022-07-19 Multi-intention recommendation method and device based on graph comparison learning Pending CN115358809A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210847446.2A CN115358809A (en) 2022-07-19 2022-07-19 Multi-intention recommendation method and device based on graph comparison learning


Publications (1)

Publication Number Publication Date
CN115358809A true CN115358809A (en) 2022-11-18

Family

ID=84031070


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116611896A (en) * 2023-07-19 2023-08-18 山东省人工智能研究院 Multi-modal recommendation method based on attribute-driven decoupling characterization learning
CN116628347A (en) * 2023-07-20 2023-08-22 山东省人工智能研究院 Comparison learning recommendation method based on guided graph structure enhancement
CN116738035A (en) * 2023-02-02 2023-09-12 量子数科科技有限公司 Recommendation rearrangement method based on window sliding



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination