CN111476223A - Method and device for evaluating interaction event - Google Patents

Method and device for evaluating interaction event

Info

Publication number
CN111476223A
CN111476223A
Authority
CN
China
Prior art keywords
node
vector
layer
nodes
implicit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010588751.5A
Other languages
Chinese (zh)
Other versions
CN111476223B (en)
Inventor
刘旭钦
文剑烽
常晓夫
宋乐
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010588751.5A priority Critical patent/CN111476223B/en
Publication of CN111476223A publication Critical patent/CN111476223A/en
Application granted granted Critical
Publication of CN111476223B publication Critical patent/CN111476223B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present specification provide a method and a device for evaluating an interaction event. The method first obtains a dynamic interaction graph reflecting the association relationships of interaction events, and then takes two target nodes to be analyzed as root nodes, respectively, to determine two corresponding subgraphs in the dynamic interaction graph. The two subgraphs are input into a neural network model to obtain an interaction characterization vector for the interaction of the two target nodes. In the neural network model, each processing layer obtains implicit vectors of the nodes according to the input features of the nodes in the two subgraphs and the graph structures of the subgraphs; a fusion layer determines weight estimates for the nodes from the implicit vectors using a compression-transformation mechanism, updates the input features of the nodes accordingly, and transmits them to the next processing layer; the interaction characterization vector is finally obtained through an output layer. An interaction event involving the two target nodes may then be evaluated based on the interaction characterization vector.

Description

Method and device for evaluating interaction event
Technical Field
One or more embodiments of the present specification relate to the field of machine learning, and more particularly, to a method and apparatus for processing and evaluating interaction events using machine learning.
Background
In many scenarios, user interaction events need to be analyzed and processed. Interaction events are among the basic constituent elements of Internet activity: a click action while a user browses a page can be regarded as an interaction event between the user and a content block of the page, a purchase on an e-commerce platform can be regarded as an interaction event between the user and a commodity, and an inter-account transfer is an interaction event between two users. A user's series of interaction events carries fine-grained characteristics such as the user's habits and preferences, as well as characteristics of the interaction objects, and is therefore an important feature source for machine learning models. Consequently, in many scenarios it is desirable to characterize and model interaction participants, as well as interaction events, based on interaction history.
However, an interaction event involves two interacting parties, and the state of each party may itself change dynamically, so it is very difficult to accurately characterize the interacting parties while comprehensively considering their many-sided features. Improved solutions for analyzing and processing interaction events more effectively are therefore desired.
Disclosure of Invention
One or more embodiments of the present specification describe a method and an apparatus for processing interaction events, in which a sequence of interaction events is represented by a dynamic interaction graph. For the two target nodes involved in an event to be evaluated, their interaction is characterized based on the subgraphs of the two nodes in the dynamic interaction graph, together with the correlations and importance degrees among the nodes in those subgraphs, so that the event involving the two target nodes can be evaluated and analyzed more accurately.
According to a first aspect, there is provided a computer-implemented method of evaluating an interaction event, the method comprising:
acquiring a dynamic interaction graph for reflecting an association relation of interaction events, wherein the dynamic interaction graph comprises a plurality of pairs of nodes, each pair of nodes represents two objects in one interaction event, and any node points to two nodes corresponding to the last interaction event in which the object represented by the node participates through a connecting edge;
respectively taking a first target node and a second target node to be analyzed as current root nodes, and determining subgraphs which start from the current root nodes and reach a preset range through connecting edges in the dynamic interaction graph as a first subgraph and a second subgraph;
inputting the first subgraph and the second subgraph into a neural network model for graph processing to obtain an interaction characterization vector, wherein the neural network model comprises L processing layers and an output layer stacked in sequence, at least one of the L processing layers has a corresponding fusion layer, and the graph processing comprises the following steps:
in each of the L processing layers, obtaining a current-layer implicit vector for each node according to the current-layer input features of the nodes included in the first subgraph and the second subgraph and the pointing relationships of the connecting edges between the nodes;
in the fusion layer, performing a first compression process on the current-layer implicit vectors obtained by the corresponding processing layer to obtain a compressed representation for each node, and performing a first transformation process on the compressed representations to obtain a weight estimate for each node; taking the combination of each node's current-layer implicit vector and its corresponding weight estimate as that node's next-layer input feature;
in the output layer, fusing the L first implicit vectors corresponding to the first target node and the L second implicit vectors corresponding to the second target node, obtained respectively from the L processing layers, to obtain the interaction characterization vector;
and evaluating a first event of interaction between the first target node and the second target node according to the interaction characterization vector.
In one embodiment, the subgraph within a predetermined range reached via connecting edges includes: nodes reachable via at most a preset number K of connecting edges; and/or nodes reachable via connecting edges whose interaction time falls within a preset time range.
According to one embodiment, the L processing layers include a first processing layer at the bottom, where the present-layer input features of the respective nodes include node attribute features of the respective nodes.
Further, each node comprises a user node and/or an item node, and the node attribute characteristics of the user node comprise at least one of the following: age, occupation, education level, region, registration duration, population label; the node attribute characteristics of the item node include at least one of: item category, time to shelve, number of reviews, sales volume.
According to an embodiment, each processing layer is a time-sequence-based network processing layer, and is configured to sequentially iterate and process each node according to the local-layer input features of each node included in the first subgraph and the second subgraph and the directional relationship of the connection edge between each node, so as to obtain the local-layer implicit vector of each node.
In an example of the foregoing embodiment, the time-series-based network processing layer is an LSTM layer, and the LSTM layer is configured to sequentially take each node as the current node according to the pointing order of the connecting edges between nodes, determine an implicit vector and an intermediate vector for the current node according to the current node's current-layer input features, and use the implicit vector of the current node as its current-layer implicit vector.
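The sequential processing described above can be sketched as follows. This is a minimal illustration only: the function name, the toy cell update, and the summation of parent states are assumptions made for illustration rather than the actual LSTM equations of the model; the sketch merely shows how nodes can be visited in the order implied by the connecting edges, so that a node's implicit and intermediate vectors are computed after those of the two nodes it points to.

```python
import numpy as np

def layer_forward(edges, node_features, d=8, seed=0):
    """One time-sequence-based processing layer (toy sketch).

    edges: node -> pair of associated nodes it points to, or None.
    node_features: node -> current-layer input feature vector of length d.
    Returns per-node implicit vectors and intermediate vectors.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(d, 3 * d)) / np.sqrt(3 * d)  # placeholder parameters
    hidden, cell = {}, {}

    def process(node):
        if node in hidden:
            return
        parents = edges.get(node) or ()
        for p in parents:
            process(p)  # associated nodes first, following edge direction
        h_prev = sum((hidden[p] for p in parents), np.zeros(d))
        c_prev = sum((cell[p] for p in parents), np.zeros(d))
        x = node_features[node]
        z = W @ np.concatenate([x, h_prev, c_prev])
        cell[node] = np.tanh(z)             # toy stand-in for the intermediate vector
        hidden[node] = np.tanh(z + c_prev)  # toy stand-in for the implicit vector

    for node in node_features:
        process(node)
    return hidden, cell
```

A real implementation would replace the two `tanh` updates with gated LSTM equations; the traversal order, however, is exactly the point the claim describes.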
According to one embodiment, the first compression process includes: computing the average of the vector elements of any one of the current-layer implicit vectors as the compressed representation of the node corresponding to that implicit vector.
According to another embodiment, the first compression process includes: compressing any one of the current-layer implicit vectors into a lower-dimensional vector as the corresponding compressed representation.
In one embodiment, the respective compressions are represented as respective compressed values, the respective compressed values constituting a compressed vector; correspondingly, the first transformation process comprises the following steps:
performing first linear transformation on the compressed vector by using a first transformation matrix to obtain a first transformation vector, wherein the dimensionality of the first transformation vector is smaller than that of the compressed vector;
performing first nonlinear transformation on the first transformation vector to obtain a second transformation vector;
performing second linear transformation on the second transformation vector by using a second transformation matrix to obtain a third transformation vector, wherein the dimensionality of the third transformation vector is equal to that of the compressed vector;
and performing second nonlinear transformation on the third transformation vector to obtain a weight vector, wherein each element in the weight vector corresponds to the weight estimation of each node.
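As a concrete illustration, the compression and the four-step transformation above can be sketched as follows. The reduced dimension, the ReLU and sigmoid nonlinearities, and the randomly initialized transformation matrices are all assumptions for illustration; in the actual model the two transformation matrices would be learned parameters.

```python
import numpy as np

def squeeze_excitation_weights(hidden, d_reduce=4, seed=0):
    """Sketch of the compression-transformation (squeeze-and-excitation) weighting.

    hidden: (n_nodes, d) matrix of current-layer implicit vectors.
    Returns one weight estimate per node, each in (0, 1).
    """
    n, d = hidden.shape
    rng = np.random.default_rng(seed)

    # First compression: mean over each vector's elements -> one value per node,
    # forming the compressed vector of length n.
    compressed = hidden.mean(axis=1)             # shape (n,)

    # First linear transformation: reduce dimension (d_reduce < n).
    W1 = rng.normal(size=(d_reduce, n))          # placeholder for a learned matrix
    z = W1 @ compressed                          # shape (d_reduce,)

    # First nonlinear transformation (ReLU assumed here).
    z = np.maximum(z, 0.0)

    # Second linear transformation: restore dimension n.
    W2 = rng.normal(size=(n, d_reduce))          # placeholder for a learned matrix
    z = W2 @ z                                   # shape (n,)

    # Second nonlinear transformation (sigmoid assumed) -> weight vector,
    # one weight estimate per node.
    return 1.0 / (1.0 + np.exp(-z))
```

Multiplying each node's implicit vector by its weight then yields the next-layer input feature described above.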
According to one embodiment, the obtaining of the interactive characterization vector by the output layer specifically includes:
respectively carrying out second compression processing on 2L implicit vectors formed by the L first implicit vectors and L second implicit vectors to obtain corresponding 2L compressed representations;
performing a second transformation on the 2L compressed representations to obtain corresponding 2L weight factors;
and performing weighted combination on the 2L implicit vectors by utilizing the 2L weight factors, and obtaining the interactive characterization vector based on a weighted combination result.
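The output-layer fusion of the 2L implicit vectors can be sketched as follows. For brevity this sketch replaces the learned second transformation with a softmax over the per-vector means; that substitution is an illustrative stand-in, not the model's actual transformation.

```python
import numpy as np

def fuse_output(first_hidden, second_hidden):
    """first_hidden, second_hidden: lists of L implicit vectors (each of shape (d,))
    for the two target nodes, one per processing layer.
    Returns a single interaction characterization vector of shape (d,)."""
    vecs = np.stack(first_hidden + second_hidden)   # (2L, d)
    compressed = vecs.mean(axis=1)                  # second compression -> (2L,)
    w = np.exp(compressed - compressed.max())       # stand-in second transformation
    w = w / w.sum()                                 # 2L weight factors
    return w @ vecs                                 # weighted combination -> (d,)
```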
In one embodiment, the second compression process is the same as the first compression process, and the second transformation process is the same as the first transformation process.
According to one embodiment, the first event is a hypothetical event, and the evaluating the first event of the interaction between the first target node and the second target node comprises evaluating an occurrence probability of the first event.
According to another embodiment, the first event is an occurred event, and the evaluating the first event for interaction by the first target node and the second target node comprises evaluating an event category of the first event.
According to a second aspect, there is provided an apparatus for evaluating an interaction event, the apparatus comprising:
the interactive graph obtaining unit is configured to obtain a dynamic interactive graph used for reflecting an interactive event incidence relation, the dynamic interactive graph comprises a plurality of pairs of nodes, each pair of nodes represents two objects in one interactive event, and any node points to two nodes corresponding to the last interactive event in which the object represented by the node participates through a connecting edge;
the subgraph determining unit is configured to respectively use a first target node and a second target node to be analyzed as a current root node, and determine subgraphs which start from the current root node and reach a preset range through a connecting edge in the dynamic interaction graph as a first subgraph and a second subgraph;
a processing unit, configured to input the first subgraph and the second subgraph into a neural network model for graph processing to obtain an interaction characterization vector, wherein the neural network model comprises L processing layers and an output layer stacked in sequence, and at least one of the L processing layers has a corresponding fusion layer, wherein:
each processing layer is used for obtaining the current-layer implicit vector of each node according to the current-layer input features of the nodes contained in the first subgraph and the second subgraph and the pointing relationships of the connecting edges between the nodes;
the fusion layer is used for respectively carrying out first compression processing on each layer of hidden vectors obtained by the corresponding processing layer to obtain each compressed representation corresponding to each node, and carrying out first transformation processing on each compressed representation to obtain weight estimation corresponding to each node; taking the combination result of the current-layer implicit vector of each node and the corresponding weight estimation as the next-layer input characteristic of each node;
the output layer is used for fusing the L first implicit vectors corresponding to the first target node and the L second implicit vectors corresponding to the second target node, obtained respectively from the L processing layers, to obtain the interaction characterization vector;
and the evaluation unit is configured to evaluate a first event of interaction between the first target node and the second target node according to the interaction characterization vector.
According to a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
According to a fourth aspect, there is provided a computing device comprising a memory and a processor, wherein the memory has stored therein executable code, and wherein the processor, when executing the executable code, implements the method of the first aspect.
According to the method and device provided by the embodiments of the present specification, a dynamic interaction graph is constructed to reflect the time-sequence relationships among interaction events and the mutual influence between interaction objects. For the two target nodes to be analyzed, two subgraphs rooted at the two target nodes are obtained from the dynamic interaction graph and input into the neural network model. While processing the two subgraphs, the neural network model obtains the importance of each node in them using a compression-transformation mechanism, and determines the interaction characterization vector of the two target nodes with this importance taken into account, for use in evaluating the interaction event in which the two target nodes participate. Because node importance is considered, the interaction characterization vector better reflects the different contribution degrees of different historical nodes, has stronger characterization capability, and is more conducive to accurately analyzing and evaluating the interaction between the target nodes.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 illustrates an implementation scenario diagram according to one embodiment;
FIG. 2 illustrates a flow diagram of a method of evaluating an interaction event, according to one embodiment;
FIG. 3 illustrates a dynamic interaction sequence and a dynamic interaction diagram constructed therefrom, in accordance with one embodiment;
FIG. 4 shows an example of a subgraph in one embodiment;
FIG. 5 illustrates a schematic structural diagram of a neural network model according to one embodiment;
FIG. 6 shows a working schematic of an LSTM processing layer;
FIG. 7 illustrates the structure of an LSTM processing layer, according to one embodiment;
FIG. 8 shows a processing flow of the fusion layer corresponding to the l-th processing layer in one embodiment;
FIG. 9 is a process diagram of a neural network model, according to a specific embodiment;
fig. 10 shows a schematic block diagram of an apparatus for evaluating an interaction event according to an embodiment.
Detailed Description
The scheme provided by the specification is described below with reference to the accompanying drawings.
As previously mentioned, it is desirable to be able to characterize and model the participants of an interaction event, as well as the interaction event itself, based on the history of the interaction.
In one approach, a static interaction relationship network graph is constructed based on historical interaction events, and individual interaction objects are analyzed based on this graph. Specifically, the participants of historical events may be taken as nodes, and connecting edges established between nodes having an interaction relationship, forming the interaction network graph. For example, in one example, a user-commodity bipartite graph may be built with users and commodities as nodes; if a user purchases a commodity, a connecting edge is constructed between that user and that commodity. In another example, a user transfer relationship graph may be constructed in which each node represents a user, and a connecting edge exists between any two users between whom a transfer has occurred.
However, although the bipartite graph and the transfer relationship graph in the above examples can show the interaction relationships between objects, they contain no timing information about the interaction events. Graph embeddings computed directly on such an interaction relationship network graph therefore yield feature vectors that do not express the influence of the timing of interaction events on the nodes. Moreover, such static graphs scale poorly and are difficult to update flexibly when interaction events and nodes are newly added.
In another scheme, for each interactive object in the interactive event to be analyzed, a behavior sequence of the object is constructed, and based on the behavior sequence, the feature expression of the object is extracted, so as to construct the feature expression of the event. However, such a behavior sequence merely characterizes the behavior of the object to be analyzed itself, whereas an interaction event is an event involving multiple parties, and influences are indirectly transmitted between the participants through the interaction event. Thus, such an approach does not express the impact between the participating objects in the interaction event.
Taking the above factors into consideration, according to one or more embodiments of the present specification, a dynamically changing sequence of interaction events is constructed into a dynamic interaction graph, in which each interaction object involved in each interaction event corresponds to a node. For the two target nodes involved in the interaction event to be analyzed, two subgraphs related to the target nodes are obtained from the dynamic interaction graph and input into the neural network model. The neural network model obtains an interaction vector expression for the two target nodes based on the two subgraphs and on the correlations and importance among the nodes in the subgraphs, and the interaction event to be analyzed is evaluated according to this expression.
Fig. 1 shows a schematic illustration of an implementation scenario according to an embodiment. As shown in FIG. 1, multiple interaction events occurring in sequence may be organized chronologically into a dynamic interaction sequence ⟨E1, E2, …, EN⟩, where each element Ei represents an interaction event and may be represented as an interaction feature set Ei = (ai, bi, ti), in which ai and bi are the two interacting objects of event Ei and ti is the interaction time.
According to the embodiments of the present specification, a dynamic interaction graph 10 is constructed based on this dynamic interaction sequence and is used to reflect the association relationships of the interaction events. In the interaction graph 10, each interaction object ai, bi of each interaction event is represented by a node, and connecting edges are established between events containing the same object. The structure of the dynamic interaction graph 10 is described in more detail later.
Two target nodes related to a certain interactive event to be analyzed can be determined in the dynamic interactive graph, and two corresponding subgraphs, namely a first subgraph and a second subgraph, are obtained by respectively taking the two target nodes as current root nodes. Generally, a subgraph includes a range of nodes that can be reached through a connecting edge, starting from a current root node. The subgraph reflects the impact on the current root node caused by other objects in the historical interaction events that are directly or indirectly associated with the object represented by the current root node.
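A minimal sketch of the subgraph determination, under the assumption that the graph is stored as a mapping from each node to the pair of associated nodes it points to (or None for nodes with no predecessor event):

```python
def extract_subgraph(edges, root, max_depth):
    """Collect the nodes reachable from `root` within `max_depth`
    connecting edges, following the edge directions (backward in time)."""
    seen = {root}
    frontier = [root]
    for _ in range(max_depth):
        nxt = []
        for node in frontier:
            for child in edges.get(node) or ():
                if child not in seen:
                    seen.add(child)
                    nxt.append(child)
        frontier = nxt
    return seen
```

The time-window variant mentioned earlier would additionally check each child's interaction time against a cutoff before adding it.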
The first sub-graph and the second sub-graph are then input into a pre-trained neural network model. The neural network model not only carries out feature representation on the interaction of the two target nodes according to the nodes and the connection relation contained in the first subgraph and the second subgraph respectively, but also according to the importance of each node in the two subgraphs. More specifically, the neural network model is a multilayer neural network and comprises a plurality of processing layers and at least one fusion layer, wherein the processing layers respectively obtain implicit vectors of all nodes in a first subgraph and a second subgraph according to input features and connection relations of the nodes; the fusion layer obtains the weight estimation of each node by using a compression-transformation mechanism (Squeeze and Excitation) according to the implicit vector of each node, and transmits the combined result of the weight estimation and the implicit vector to the next processing layer as the node input characteristic. And finally, the neural network model comprehensively obtains interactive characterization vectors of interaction of the two target nodes according to the implicit vectors obtained by each processing layer. Based on the interaction characterization vector thus obtained, the interaction event to be analyzed can be expressed and analyzed, and the occurrence probability of the interaction event, or the event category, can be evaluated.
Specific implementations of the above concepts are described below.
FIG. 2 illustrates a flow diagram of a method of evaluating an interaction event, according to one embodiment. It is to be appreciated that the method can be performed by any apparatus, device, platform, cluster of devices having computing and processing capabilities. The steps of the method for processing an interactivity event as shown in fig. 2 are described below with reference to specific embodiments.
First, in step 21, a dynamic interaction graph reflecting an interaction event correlation is obtained.
In general, a dynamic interaction graph may be constructed based on a sequence of interaction events, as previously described, to reflect the association relationships of the interaction events. A dynamic interaction sequence, e.g. expressed as ⟨E1, E2, …, EN⟩, may comprise a plurality of interaction events arranged in chronological order, where each interaction event Ei can be represented as an interaction feature set Ei = (ai, bi, ti), in which ai and bi are the two interacting objects of event Ei and ti is the interaction time.
For example, in an e-commerce platform, an interaction event may be a user's purchasing behavior, where two objects may be a certain user and a certain good. In another example, the interaction event may be a click action of a user on a page tile, where two objects may be a certain user and a certain page tile. In yet another example, the interaction event may be a transaction event, such as a transfer of money from one user to another user, when the two objects are two users. In other business scenarios, an interaction event may also be other interaction behavior that occurs between two objects.
In one embodiment, the interaction feature set of each interaction event may further include an event feature or behavior feature f, so that each interaction feature set can be represented as Ei = (ai, bi, ti, f). Specifically, the event feature or behavior feature f may include contextual information about the occurrence of the interaction event, certain attribute features of the interaction behavior, and so on.
For example, in the case that the interaction event is a user click event, the event feature f may include a type of a terminal used by the user for clicking, a browser type, an app version, and the like; in the case where the interactive event is a transaction event, the event characteristics f may include, for example, a transaction type (commodity purchase transaction, transfer transaction, etc.), a transaction amount, a transaction channel, and the like.
For the dynamic interaction sequence described above, a dynamic interaction graph may be constructed. Specifically, a pair of nodes (two nodes) is used to represent the two objects involved in one interaction event, and each object in each interaction event of the dynamic interaction sequence is represented by its own node. Thus, one node corresponds to one object in one interaction event, but the same physical object may correspond to multiple nodes. For example, if user U1 purchased commodity M1 at time t1 and commodity M2 at time t2, there are two interaction event feature sets (U1, M1, t1) and (U1, M2, t2), and two nodes U1(t1) and U1(t2) are created for user U1 from these two interaction events, respectively. A node in the dynamic interaction graph can therefore be regarded as corresponding to the state of an interaction object in one interaction event.
For each node in the dynamic interaction graph, connecting edges are constructed as follows: for any node i, assume it corresponds to interaction event i (with interaction time t). In the dynamic interaction sequence, tracing back from interaction event i, i.e. toward times earlier than t, the first interaction event j (with interaction time t⁻, where t⁻ is earlier than t) that also contains the object represented by node i is determined as the last interaction event in which that object participated. A connecting edge is then established pointing from node i to both nodes of that last interaction event j. The two pointed-to nodes are also referred to as the associated nodes of node i.
The following description is made in conjunction with specific examples. FIG. 3 illustrates a dynamic interaction sequence and a dynamic interaction diagram constructed therefrom, according to one embodiment. In particular, the left side of FIG. 3 shows a dynamic interaction sequence organized in time order, wherein an exemplary illustration is given at t respectively1,t2,…,t6Interaction event E occurring at a moment1,E2,…,E6Each interaction event contains two interaction objects involved in the interaction and the interaction time (the event feature is omitted for clarity of illustration). The right side of fig. 3 shows a dynamic interaction diagram constructed according to the dynamic interaction sequence on the left side, wherein two interaction objects in each interaction event are respectively taken as nodes. Node u (t) is shown below6) For example, the construction of the connecting edge is described.
As shown, node u(t6) represents the interaction object David in interaction event E6. Tracing back from interaction event E6, the first interaction event found that also includes the interaction object David is E4; that is, E4 is the last interaction event in which David participated. Correspondingly, the two nodes u(t4) and w(t4), which correspond to the two interaction objects of E4, are the two associated nodes of node u(t6). Connecting edges are therefore established from node u(t6) to the two nodes u(t4) and w(t4) corresponding to E4. Similarly, continuing to trace back from u(t4) (corresponding to interaction event E4), the last interaction event in which object u, namely David, participated is E2, so connecting edges are established from u(t4) to the two nodes corresponding to E2; tracing back from w(t4), the last interaction event in which object w participated is E3, so connecting edges are established from w(t4) to the two nodes corresponding to E3. In this manner, connecting edges are constructed between nodes, thereby forming the dynamic interaction graph of FIG. 3.
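The edge-construction rule described above can be sketched in code. The following is a minimal illustrative sketch (not taken from the patent; the `Event` structure and node keys `(object, time)` are assumptions for illustration): each event contributes two nodes, and each node points to the two nodes of the last earlier event in which the same object participated, or to nothing if no such event exists.

```python
from collections import namedtuple

# Hypothetical event record: two interaction objects and an interaction time.
Event = namedtuple("Event", ["obj_a", "obj_b", "t"])

def build_dynamic_interaction_graph(events):
    """events: list of Event sorted by time.
    Returns {node: (assoc_node_1, assoc_node_2) or None},
    where a node is identified by the pair (object, event_time)."""
    last_event = {}   # object -> most recent earlier event containing it
    edges = {}
    for ev in events:
        # First build edges for both new nodes of this event...
        for obj in (ev.obj_a, ev.obj_b):
            node = (obj, ev.t)
            prev = last_event.get(obj)
            if prev is None:
                edges[node] = None  # no earlier event: leaf node
            else:
                # edge points to BOTH nodes of the object's last event
                edges[node] = ((prev.obj_a, prev.t), (prev.obj_b, prev.t))
        # ...then record this event as the latest one for both objects.
        for obj in (ev.obj_a, ev.obj_b):
            last_event[obj] = ev
    return edges

# Mirroring the FIG. 3 example: David participates in events at t2, t4, t6.
events = [Event("David", "r", 2), Event("w", "x", 3),
          Event("David", "w", 4), Event("David", "Lucy", 6)]
g = build_dynamic_interaction_graph(events)
# node ("David", 6) points to both nodes of the event at t4,
# the last event in which David participated
```

This also shows why the graph is easy to update dynamically: a new event only appends two nodes and consults `last_event`, without touching existing edges.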
The above describes the manner and process of constructing a dynamic interaction graph based on a dynamic interaction sequence. For the method of processing interaction events shown in FIG. 2, the construction of the dynamic interaction graph may be performed either in advance or on the fly. Accordingly, in one embodiment, at step 21 the dynamic interaction graph is constructed on the fly from the dynamic interaction sequence, in the manner described above. In another embodiment, the dynamic interaction graph is constructed in advance based on the dynamic interaction sequence, and at step 21 the already-formed dynamic interaction graph is read or received.
It can be understood that a dynamic interaction graph constructed in the above manner is highly extensible and can easily be updated dynamically as new interaction events occur. When a new interaction event occurs, the two objects involved in it are added to the existing dynamic interaction graph as two new nodes. For each newly added node, it is determined whether associated nodes exist; if so, connecting edges pointing from the newly added node to its two associated nodes are added, thereby forming the updated dynamic interaction graph.
In step 21, a dynamic interaction graph reflecting the association relationship of the interaction events is obtained. Then, in step 22, the first target node and the second target node to be analyzed are respectively used as the current root node, and the corresponding first sub-graph and the second sub-graph are determined in the dynamic interaction graph.
Specifically, after the two target nodes to be analyzed, namely the first target node and the second target node involved in the interaction event to be analyzed, are determined, each is taken in turn as the current root node. Starting from the current root node in the dynamic interaction graph, the nodes within a predetermined range reachable via connecting edges form the corresponding subgraph, thereby yielding the first subgraph and the second subgraph respectively.
In one embodiment, the nodes within the predetermined range may be the nodes reachable via at most a preset number K of connecting edges. The number K is a preset hyper-parameter that can be chosen according to the business scenario. It will be appreciated that K represents the number of steps of historical interaction events traced back from the root node: the larger K is, the longer the history of interaction information that is considered.
In another embodiment, the nodes within the predetermined range may instead be the nodes whose interaction time falls within a predetermined time range. For example, tracing back a duration T (e.g., one day) from the interaction time of the root node, the subgraph contains the nodes that are reachable via connecting edges and whose interaction time falls within that duration.
In yet another embodiment, the predetermined range takes into account both the number of connecting edges and the time range. In other words, the nodes within the predetermined range are nodes that are reachable via at most a preset number K of connecting edges and whose interaction time falls within the predetermined time range.
The description continues with the foregoing example. FIG. 4 shows an example of a subgraph in one embodiment. In the example of FIG. 4, assume that u(t6) is the first target node. Taking node u(t6) as the root node, its corresponding first subgraph is determined, assuming that the subgraph consists of the nodes reachable via at most a preset number K = 2 of connecting edges. Starting from the current root node u(t6), traversal is performed along the direction of the connecting edges, and the nodes reachable via 2 connecting edges are shown as the dashed region in the figure. The nodes and connection relationships within that region constitute the subgraph corresponding to node u(t6), i.e., the first subgraph.
Similarly, if another node v(t6) is the second target node, node v(t6) can be taken as the root node and the traversal performed again to obtain the second subgraph.
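The subgraph extraction just described is a bounded backward traversal. The following minimal sketch (illustrative, under the assumption that the graph is stored as a mapping from each node to its pair of associated nodes, as in the construction sketch above) collects all nodes reachable via at most K connecting edges:

```python
def extract_subgraph(edges, root, k_hops):
    """Collect nodes reachable from `root` via at most k_hops connecting edges.
    edges: {node: (assoc1, assoc2) or None}."""
    subgraph = {root}
    frontier = [root]
    for _ in range(k_hops):          # one breadth-first step per hop
        nxt = []
        for node in frontier:
            assoc = edges.get(node)
            if assoc:
                for a in assoc:
                    if a not in subgraph:
                        subgraph.add(a)
                        nxt.append(a)
        frontier = nxt
    return subgraph

# Toy graph in the spirit of FIG. 4, with nodes keyed as (object, time):
edges = {("u", 6): (("u", 4), ("w", 4)),
         ("u", 4): (("u", 2), ("r", 2)),
         ("w", 4): (("w", 3), ("x", 3))}
sub = extract_subgraph(edges, ("u", 6), 2)   # K = 2 hops from the root u(t6)
```

A time-range constraint (the second embodiment above) would simply add a filter on the node's time component before admitting it to `subgraph`.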
Hereinafter, for clarity and simplicity of description, the first target node is denoted as u (t), and the second target node is denoted as v (t).
Thus, for the first target node u (t) and the second target node v (t) involved by the interaction event to be analyzed, the corresponding first subgraph and second subgraph are obtained respectively. Next, in step 23, the first sub-graph and the second sub-graph are input into the neural network model for graph processing, and an interaction characterization vector for interaction between the first target node and the second target node is obtained from the output of the neural network model.
It will be appreciated that a subgraph contains the nodes involved in the historical interaction events traced back from the root node. Characterizing the root node as a vector based on its subgraph thus means characterizing it according to the historical interaction events directly or indirectly associated with it. The inventors realized that, when the two target nodes are each vector-characterized from their respective subgraphs in order to analyze an event in which they participate together, their respective historical interaction events may differ in importance and should be given different degrees of attention and weight.
For example, referring to FIG. 4, when node u(t6) is the first target node and v(t6) is the second target node, and an event involving both u(t6) and v(t6) is to be predicted, both the first subgraph and the second subgraph contain the common node Lucy. Historical events related to such a common node have greater reference value for the event in which u(t6) and v(t6) jointly participate, and are expected to receive more attention.
Based on the above considerations, in step 23 the neural network model not only computes implicit vector representations from the nodes and connection relationships contained in the first and second subgraphs, but also evaluates a weight for each node through a fusion layer and passes the weight-fused implicit vectors to the next processing layer, so as to better characterize the interaction event.
The following describes the specific structure and processing logic of the neural network model described above.
FIG. 5 illustrates a schematic diagram of a neural network model according to one embodiment. As shown in FIG. 5, the neural network model includes L processing layers, denoted processing layers 1, 2, …, L, at least some of which have corresponding fusion layers. FIG. 5 illustrates an example in which processing layer 1 has fusion layer 1 and processing layer 2 has fusion layer 2.
Each processing layer reads the graph structures of the first subgraph and the second subgraph. Based on the graph structure thus read, each processing layer, for example an arbitrary l-th processing layer, processes the l-layer input feature x_i^(l) of each node i in the first and second subgraphs, together with the direction relationships of the connecting edges between the nodes, to obtain the l-layer implicit vector h_i^(l) of each node. If the l-th processing layer has a corresponding fusion layer, i.e., an l-th fusion layer, that fusion layer evaluates the l-layer implicit vectors h_i^(l) of the nodes to obtain a weight estimate w_i^(l) corresponding to each node, combines each weight estimate with the corresponding implicit vector, and passes the result to the next processing layer, i.e., the (l+1)-th processing layer, as the (l+1)-layer input feature x_i^(l+1) of that layer.
Specifically, the l-th fusion layer obtains from the l-th processing layer the l-layer implicit vector h_i^(l) of each node i. Each implicit vector h_i^(l) is first subjected to compression processing (squeeze) to obtain a compressed representation s_i corresponding to each node; then each compressed representation is subjected to weight transformation (excitation) to obtain the weight estimate w_i^(l) corresponding to each node. Finally, the implicit vector h_i^(l) corresponding to each node i is combined with the weight estimate w_i^(l) of that node to serve as the (l+1)-layer input feature x_i^(l+1) of node i.
In this way, the fusion layer determines an importance estimate for each node from the implicit vector produced by the corresponding processing layer, updates the node's input feature according to this weight estimate, and feeds it to the next processing layer, so that the different attention given to different nodes according to their importance is processed and propagated between adjacent processing layers.
Accordingly, the (l+1)-th processing layer takes the obtained (l+1)-layer input features x_i^(l+1) of the nodes in the two subgraphs as its layer input features and continues to process the first and second subgraphs, obtaining the (l+1)-layer implicit vector of each node. If the (l+1)-th processing layer has no corresponding fusion layer, the obtained (l+1)-layer implicit vectors are directly output to the (l+2)-th processing layer as the node input features of the next layer; if the (l+1)-th processing layer has a corresponding (l+1)-th fusion layer, that fusion layer likewise fuses the node weight estimates with the (l+1)-layer implicit vectors and outputs the result to the (l+2)-th processing layer as the node input features of the next layer, and so on up to the last processing layer.
Among the L processing layers, the first (bottom-most) processing layer and the last (L-th) processing layer require special treatment.
As previously described, each node in the dynamic interaction graph may represent various interaction objects, such as users, items, page blocks, and so forth. Accordingly, when the node is a user node representing a user, the node attribute characteristics may include attribute characteristics of the user, such as at least one of: age, occupation, education level, region, population label at registration; when the node is an item node representing an item, the node attribute characteristics may include attribute characteristics of the item, such as at least one of: item category, time on shelf, number of reviews, sales volume, etc. When the node represents other objects, the inherent attribute feature of the object can be correspondingly obtained as the node attribute feature, and further used as the local layer input feature of the node in the first processing layer.
The L-th processing layer, being the last processing layer, need not have a fusion layer and is instead connected directly to the output layer.
In the output layer, the L first implicit vectors corresponding to the first target node u(t) and the L second implicit vectors corresponding to the second target node v(t), obtained respectively by the L processing layers, are fused to obtain the final interaction characterization vector, which characterizes the interaction event between the first target node u(t) and the second target node v(t).
The implementation and computational logic of the above layers are described in detail below.
As described above, each processing layer is configured to obtain the implicit vector of each node from the layer input features of the nodes contained in the first/second subgraph and from the direction relationships of the connecting edges between the nodes in the subgraph. The processing layer can be implemented in a variety of ways.
In one embodiment, each processing layer performs graph embedding processing on the first/second subgraph by using an existing graph embedding algorithm, and in the processing process, according to the input features of the current layer of each node and a graph structure (namely, a node connection relation), the current layer implicit vector of each first/second node is obtained.
In another embodiment, each processing layer is a time-sequence-based network processing layer, for example a recurrent neural network (RNN) processing layer, a long short-term memory (LSTM) processing layer, and the like.
Specifically, when an LSTM processing layer is used to process the first subgraph, the nodes may be taken in turn as the current node according to the direction relationships of the connecting edges between the nodes in the first subgraph. For each current node, the implicit vector and intermediate vector of the current node are determined from the layer input feature of the current node and the intermediate vectors and implicit vectors of the two nodes pointed to by the current node, and the implicit vector of the current node serves as its layer implicit vector.
FIG. 6 shows a working diagram of the LSTM processing layer. According to the connection relationships of the dynamic interaction graph, a node in the first subgraph points via connecting edges to the two nodes of the last interaction event in which the object represented by that node participated, denoted node j1 and node j2. As shown in FIG. 6, at time T the LSTM processing layer processes node j1 and node j2 to obtain their respective representation vectors, each comprising an intermediate vector c and an implicit vector h. At the next time T+, the LSTM processing layer obtains the representation vector H_z(t) of node z(t) from the layer input feature X_z(t) of node z(t) and the previously obtained representation vectors of j1 and j2. It is understood that the representation vector of node z(t) can in turn be used at a subsequent time to obtain the representation vector of a node pointing to z(t), thereby implementing iterative processing.
FIG. 7 illustrates the structure of the LSTM processing layer. In the example of FIG. 7, consistent with FIG. 6, the current node is denoted z(t), and node j1 and node j2 are the two nodes to which the current node points via connecting edges, referred to for simplicity as the first associated node and the second associated node. X_z(t) denotes the layer input feature of the current node; c_j1 and h_j1 denote the intermediate vector and implicit vector of the first associated node j1; c_j2 and h_j2 denote the intermediate vector and implicit vector of the second associated node j2. The above X_z(t), c_j1 and h_j1, c_j2 and h_j2 serve as inputs. In one embodiment, the input data further comprises a time difference Δ, which represents the difference between the occurrence time of the event of the current node z(t) and the occurrence time of the event of the two pointed-to nodes j1 and j2.
FIG. 7 specifically shows the computational logic for obtaining the intermediate vector and implicit vector of the current node z(t) from the above inputs. Specifically, as shown in FIG. 7, the layer input feature X_z(t) of the current node, the implicit vector h_j1 of the first associated node j1, the implicit vector h_j2 of the second associated node j2, and the optional time difference Δ are input into a first transformation function and a second transformation function, which share the same algorithm but have different parameters, to obtain a first transformation vector and a second transformation vector respectively. The first transformation vector and the second transformation vector are then combined with the intermediate vector c_j1 of the first associated node j1 and the intermediate vector c_j2 of the second associated node j2 respectively, and a combination vector is obtained based on the result of that operation.

In addition, the layer input feature X_z(t) of the current node, the implicit vector h_j1 of the first associated node j1, the implicit vector h_j2 of the second associated node j2, and the optional time difference Δ are input into a third transformation function and a fourth transformation function to obtain a third transformation vector r_z(t) and a fourth transformation vector O_z(t) respectively.

Then, the intermediate vector c_z(t) of the current node z(t) is determined based on the combination vector and the third transformation vector r_z(t), and the implicit vector h_z(t) of node z(t) is determined based on the intermediate vector c_z(t) thus obtained and the fourth transformation vector O_z(t).
The specific form of each transformation function can be set according to the requirement, and the parameters are determined through the training of the neural network model.
Thus, according to the structure and algorithm shown in FIG. 7, the LSTM processing layer determines the intermediate vector c_z(t) and implicit vector h_z(t) of node z(t) from the layer input feature X_z(t) of the current node z(t) and the respective intermediate vectors and implicit vectors of the two associated nodes j1 and j2 to which it points.
When the first subgraph is input into the LSTM processing layer shown in FIGS. 6 and 7 above, the LSTM processing layer iteratively processes each node in turn according to the direction relationships between the nodes in the first subgraph, so that the intermediate vector and implicit vector of every node in the first subgraph can be obtained: the intermediate vectors and implicit vectors of the bottom-most nodes are obtained first, and the iteration then proceeds upward layer by layer. For example, when node u(t4) serves as the current node, the intermediate vector and implicit vector of u(t4) are obtained from the layer input feature of u(t4) and the respective intermediate vectors and implicit vectors of the two associated nodes u(t2) and r(t2) to which it points. Similarly, after the intermediate vector and implicit vector of node w(t4) are obtained, the first target node u(t6) serves as the current node, and the implicit vector of u(t6) is obtained from the layer input feature of u(t6) and the respective intermediate vectors and implicit vectors of the two associated nodes u(t4) and w(t4). For the second subgraph, the LSTM processing layer performs the same processing to obtain the intermediate vector and implicit vector of each node in the second subgraph.
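The per-node update of FIG. 7 can be sketched as follows. This is a minimal illustrative sketch only: the patent leaves the exact form of the four transformation functions to training, so the gate forms, the way the combination vector and r_z(t) produce c_z(t), and all parameter shapes below are assumptions, not the patented formulas.

```python
import numpy as np

D = 4  # assumed vector dimension for illustration
rng = np.random.default_rng(0)
# Four transformation functions, same input, different parameters.
W = {k: rng.normal(0, 0.1, (D, 3 * D + 1)) for k in ("f1", "f2", "r", "o")}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def node_update(x, h_j1, h_j2, c_j1, c_j2, delta):
    """One FIG.-7-style step: inputs are the current node's layer input
    feature x, the implicit/intermediate vectors of its two associated
    nodes, and the time difference delta."""
    z = np.concatenate([x, h_j1, h_j2, [delta]])  # shared input to all four functions
    f1 = sigmoid(W["f1"] @ z)          # first transformation vector
    f2 = sigmoid(W["f2"] @ z)          # second (same algorithm, different params)
    combined = f1 * c_j1 + f2 * c_j2   # combine with the two intermediate vectors
    r = np.tanh(W["r"] @ z)            # third transformation vector r_z(t)
    o = sigmoid(W["o"] @ z)            # fourth transformation vector O_z(t)
    c = combined + r                   # intermediate vector c_z(t) (assumed combination)
    h = o * np.tanh(c)                 # implicit vector h_z(t)
    return h, c

# Leaf nodes with no associated nodes can use zero vectors, after which the
# iteration proceeds upward: u(t4) from u(t2), r(t2); then u(t6) from u(t4), w(t4).
h, c = node_update(np.zeros(D), np.zeros(D), np.zeros(D),
                   np.zeros(D), np.zeros(D), 0.0)
```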
Thus, through implementations such as the LSTM processing layer, each processing layer obtains the layer implicit vector of every node in the first subgraph and the second subgraph.
The following describes the processing of the l-th fusion layer corresponding to the l-th processing layer.

FIG. 8 shows the processing of the l-th fusion layer in one embodiment. As shown in FIG. 8, first, at step 81, the l-th fusion layer performs a first compression process on each layer implicit vector obtained by the corresponding processing layer, i.e., the l-th processing layer, to obtain the compressed representation corresponding to each node. At step 82, a first transformation process is performed on each compressed representation to obtain the weight estimate corresponding to each node. Then, at step 83, the layer implicit vector of each node is combined with its corresponding weight estimate, and each combination result is input to the (l+1)-th processing layer as the next-layer input feature of the node.
Specifically, the l-th fusion layer may obtain from the l-th processing layer the l-layer implicit vector of each node in the first subgraph and the second subgraph. In one embodiment, the union U of the nodes of the first and second subgraphs is taken; assume that U contains N nodes. At step 81, the l-layer implicit vectors of these N nodes are obtained and each is subjected to the first compression process. The following takes an arbitrary node a in the union U as an example, and denotes the l-layer implicit vector of node a obtained by the l-th processing layer as h_a^(l).

In one embodiment, the layer implicit vector h_a^(l) of any node a may be subjected to dimensionality reduction, compressing it into a lower-dimensional vector s_a as its compressed representation. The dimensionality reduction may be implemented in a number of ways, for example using a dimensionality-reduction transformation matrix: assuming the layer implicit vector h_a^(l) has dimension D, a transformation matrix C of dimension k × D (where k < D) is used to transform h_a^(l), yielding a reduced vector of dimension k.
In another embodiment, the layer implicit vector h_a^(l) of any node a may be pooled to obtain a pooled value as its compressed representation. The pooling may be average pooling, i.e., taking the mean of the elements of h_a^(l); or it may be max pooling, i.e., taking the maximum element of the vector.

In one specific example, the first compression process is applied to the layer implicit vector h_a^(l) of node a by the following equation (1):

s_a = F_sq(h_a^(l)) = (1/D) Σ_{d=1}^{D} h_a^(l)[d]    (1)

where F_sq is the processing function of the first compression process, h_a^(l)[d] is the d-th element of the layer implicit vector h_a^(l) of node a, and D is the dimension of h_a^(l). Equation (1) reflects obtaining the compressed representation s_a of node a by average pooling, in which case the compressed representation is a scalar compressed value.
Thereafter, in step 82, a first transformation process is performed on each compressed representation corresponding to each node to obtain a weight estimate corresponding to each node.
In one embodiment, the compressed representation s_a corresponding to each node is a scalar compressed value, so the compressed representations of the N nodes form a compressed vector S. A transformation process including nonlinear transformations is applied to this compressed vector to obtain a weight vector A representing the weight estimates of the N nodes.

In a specific example, the first transformation process is performed using the following equation (2):

A = σ(W2 · δ(W1 · S))    (2)

where W1 and W2 are two transformation matrices, and δ and σ are two nonlinear transformation functions.
According to equation (2), the first transformation matrix W1 is first used to apply a first linear transformation to the compressed vector S, obtaining a first transformation vector. The first transformation matrix W1 is an (N/r) × N matrix, where r is a preset compression coefficient, a hyper-parameter; the compressed vector S is an N-dimensional vector, so the resulting first transformation vector has dimension N/r, smaller than the dimension N of the compressed vector.

Then, the first nonlinear transformation δ is applied to the obtained first transformation vector, yielding a second transformation vector whose dimension is still N/r. The transformation function of the first nonlinear transformation δ may be, for example, tanh or the ReLU function.

Next, the second transformation matrix W2 is used to apply a second linear transformation to the second transformation vector, obtaining a third transformation vector. The second transformation matrix W2 is an N × (N/r) matrix, so the dimension of the third transformation vector is restored to N.

Finally, the second nonlinear transformation σ is applied to the third transformation vector. The transformation function of σ may be, for example, the sigmoid function, whose result lies in (0, 1). The second nonlinear transformation yields the N-dimensional weight vector A, whose N elements correspond to the weight estimates of the N nodes respectively. Accordingly, the element w_a of the weight vector corresponding to node a represents the weight estimate of node a.
In another embodiment, the compressed representation s_a corresponding to each node is a reduced-dimension vector, so the compressed representations of the N nodes form a reduced-dimension matrix S. An operation similar to equation (2) can be applied to this matrix to obtain a weight-estimate matrix A, in which the weight estimate w_a of node a corresponds to a row or column vector of A.
Having obtained the weight estimate of each node, at step 83 the layer implicit vector of each node is combined with its corresponding weight estimate, and the combination result is input to the (l+1)-th processing layer as the next-layer input feature of the node. Specifically, in one embodiment, the combination is performed by the following equation (3):

x_a^(l+1) = w_a · h_a^(l)    (3)

where x_a^(l+1) is the (l+1)-layer input feature of node a, h_a^(l) is the l-layer implicit vector of node a, and w_a is the weight estimate of node a at layer l. When w_a is a scalar weight value, the multiplication in equation (3) is ordinary scalar-vector multiplication; when w_a is a vector, the multiplication in equation (3) may be element-wise multiplication.
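The fusion-layer steps 81-83 (equations (1) to (3)) can be sketched end to end. The sketch below is illustrative only: matrix values are random placeholders standing in for trained parameters, tanh is chosen for δ and sigmoid for σ, and N, D, r are arbitrary.

```python
import numpy as np

N, D, r = 6, 8, 2                      # N nodes, dimension D, compression coefficient r
rng = np.random.default_rng(42)
H = rng.normal(size=(N, D))            # l-layer implicit vectors h_a^(l) of the N nodes

# Step 81, eq. (1): squeeze each implicit vector by average pooling -> N scalars.
S = H.mean(axis=1)

# Step 82, eq. (2): excitation A = sigma(W2 * delta(W1 * S)).
W1 = rng.normal(size=(N // r, N))      # (N/r) x N: reduces dimension to N/r
W2 = rng.normal(size=(N, N // r))      # N x (N/r): restores dimension N
A = 1.0 / (1.0 + np.exp(-(W2 @ np.tanh(W1 @ S))))  # sigmoid -> weights in (0, 1)

# Step 83, eq. (3): scalar weight times implicit vector gives the
# (l+1)-layer input feature of each node.
X_next = H * A[:, None]
```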
Thus, the (l+1)-th processing layer continues to process the first subgraph and the second subgraph based on the (l+1)-layer input features x_i^(l+1) of the nodes and the connection relationships between them, and the above steps iterate up to the final, L-th processing layer. Because no processing layer follows it, the L-th processing layer need not have a corresponding fusion layer. As shown in FIG. 5, each processing layer is connected to the output layer and passes the layer implicit vectors it obtains to the output layer.
It can be understood that each of the L processing layers, through the successive iterative processing described above, obtains its own layer implicit vector for each of the N nodes in the node union U formed by the first and second subgraphs. These N nodes naturally include the first target node u(t), the root node of the first subgraph, and the second target node v(t), the root node of the second subgraph. The output layer can therefore obtain, from the L sets of layer implicit vectors produced by the L processing layers, the L first implicit vectors h_u^(1), h_u^(2), …, h_u^(L) corresponding to the first target node u(t) and the L second implicit vectors h_v^(1), h_v^(2), …, h_v^(L) corresponding to the second target node v(t), and fuse these L first implicit vectors and L second implicit vectors to obtain the interaction characterization vector.
In one embodiment, the output layer applies a pooling operation or weighted combination to the L first implicit vectors to obtain a first target feature vector corresponding to the first target node, similarly applies a pooling operation or weighted combination to the L second implicit vectors to obtain a second target feature vector corresponding to the second target node, and then concatenates or otherwise combines the first and second target feature vectors to form the interaction characterization vector.
In another embodiment, following an idea similar to that of the fusion layer, the output layer fuses the L first implicit vectors and the L second implicit vectors using a compression-and-transformation approach.
Specifically, in an embodiment, the output layer performs second compression processing on 2L implicit vectors formed by the L first implicit vectors and L second implicit vectors, respectively, to obtain corresponding 2L compressed representations.
As mentioned above, the first compression process used by the fusion layer can be implemented in various ways, and the second compression process used by the output layer can also be implemented in various ways, and the implementation manner can be the same as or different from that of the first compression process.
In one specific example, consider the 2L implicit vectors h_u^1, h_u^2, …, h_u^L, h_v^1, h_v^2, …, h_v^L formed by the L first implicit vectors and the L second implicit vectors. Any one of these implicit vectors can be compressed by the following equation (4):

c_h = (1/d) * Σ_{i=1}^{d} h_i    (4)

where h is any implicit vector in the set P formed by the 2L implicit vectors, d is its dimension, h_i is its i-th element, and c_h is the compressed representation of the implicit vector h. It can be seen that the compression in equation (4) is the same as that in equation (1): the compressed value is obtained by average pooling over the elements of the implicit vector.
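The average-pooling compression of equation (4) can be sketched in a few lines; the vector values and the helper name `compress` are illustrative assumptions.

```python
import numpy as np

def compress(h):
    """Equation (4): average-pool the elements of an implicit vector
    into a single compressed value c_h."""
    return float(np.mean(h))

# Set P of 2L implicit vectors (values are illustrative)
P = [np.array([1.0, 2.0, 3.0]), np.array([4.0, 4.0, 4.0])]
S = np.array([compress(h) for h in P])   # compressed vector, one value per h
```

`S` here is the compressed vector of 2L compressed representations that the second transformation process operates on.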
Through the second compression process, the 2L compressed representations corresponding to the 2L implicit vectors are obtained. The output layer then performs a second transformation process on the 2L compressed representations to obtain the corresponding 2L weight factors.
In a specific example, the second transformation process is performed using the following equation (5):

Q = σ(W2 · δ(W1 · S))    (5)

where S is the compressed vector formed by the 2L compressed representations, and Q is a weight factor vector whose 2L elements are the weight factors corresponding to the 2L implicit vectors. The transformation matrices W1 and W2 may be the same as or different from those in equation (2), and likewise the nonlinear transformation functions δ and σ. In this example, the second transformation process is performed in a manner similar to the first transformation process, resulting in 2L weight factors.
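Equation (5) can be sketched as a squeeze-excitation-style transform. The choice of ReLU and sigmoid for the nonlinearities δ and σ, the dimensions, and the random weights are illustrative assumptions; the specification leaves the concrete functions open.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def transform(S, W1, W2):
    """Equation (5): Q = sigma(W2 @ delta(W1 @ S)).

    S  : compressed vector of the 2L compressed representations, shape (2L,)
    W1 : (r, 2L) dimension-reducing matrix, r < 2L
    W2 : (2L, r) dimension-restoring matrix
    Returns Q, one weight factor per implicit vector, each in (0, 1).
    """
    return sigmoid(W2 @ relu(W1 @ S))

rng = np.random.default_rng(0)
two_L, r = 6, 2                        # illustrative dimensions
W1 = rng.normal(size=(r, two_L))
W2 = rng.normal(size=(two_L, r))
Q = transform(rng.normal(size=two_L), W1, W2)
```

The sigmoid keeps every weight factor strictly between 0 and 1, so each implicit vector is softly re-scaled rather than hard-selected.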
Then, the 2L implicit vectors may be weighted and combined using the 2L weight factors, and the interaction characterization vector is obtained based on the weighted combination result.
Specifically, in one example, the interaction characterization vector M is determined by the following equation (6):

M = (1/N) * Σ_{h∈P} q_h · h    (6)

where h is any implicit vector in the set P and q_h is the weight factor corresponding to that implicit vector. That is, the 2L implicit vectors are weighted and summed using the 2L weight factors, and the sum is averaged over the number N of nodes in the node union of the two subgraphs, yielding the interaction characterization vector.
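Equation (6) can be worked through numerically as follows; the values of P, Q, and N are illustrative.

```python
import numpy as np

def interaction_vector(P, Q, N):
    """Equation (6): weighted-sum the 2L implicit vectors with their
    weight factors, then average over the number N of nodes in the
    union of the two subgraphs."""
    return sum(q * h for q, h in zip(Q, P)) / N

P = [np.array([2.0, 0.0]), np.array([0.0, 2.0])]  # 2L = 2 implicit vectors
Q = [0.5, 1.0]                                    # their weight factors
M = interaction_vector(P, Q, N=2)
```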
In other examples, equation (6) may be modified into other variants; for example, the weighted combination result may be averaged over the number of layers L, or used directly, as the interaction characterization vector.
Thus, in any of the above manners, the output layer obtains the interaction characterization vector characterizing the interaction event of the first target node u(t) and the second target node v(t).
FIG. 9 is a process diagram of a neural network model according to one embodiment. Specifically, as shown in FIG. 9, in the processing layer l=1, the first subgraph and the second subgraph are processed according to the layer input features (node attribute features) of each node in the two subgraphs and the node connection relations, obtaining the layer-1 implicit vector of each node. The fusion layer corresponding to processing layer 1 acquires the layer-1 implicit vectors of the nodes, including the layer-1 implicit vector h_a^1 of an arbitrary node a, and compresses the layer-1 implicit vector of each node into its corresponding compressed representation, e.g., the compressed representation c_a corresponding to node a. Next, the compressed representation of each node is transformed to obtain its weight estimate; for example, the weight estimate corresponding to node a is s_a. The layer-1 implicit vector of each node is then combined with its corresponding weight estimate, and the combination serves as that node's input feature for the next processing layer (l=2). For example, the combination s_a · h_a^1 of the layer-1 implicit vector h_a^1 of node a with its weight estimate s_a serves as the layer-2 input feature x_a^2 of node a. Although FIG. 9 shows only the fusion layer corresponding to the first processing layer, it is understood that other processing layers may optionally be provided with similar fusion layers.
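The layer-1 fusion step of FIG. 9 — compress each node's implicit vector, transform the compressed vector into per-node weight estimates, and scale each implicit vector by its weight to form the next layer's input features — can be sketched end to end. Shapes, the ReLU/sigmoid nonlinearities, and the random weights are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fusion_layer(H, W1, W2):
    """One fusion-layer pass over the N nodes (shapes illustrative).

    H : (N, d) array of layer-l implicit vectors, one row per node.
    Returns the next layer's input features: each node's implicit
    vector scaled by that node's weight estimate.
    """
    S = H.mean(axis=1)                         # first compression: one value per node
    s = sigmoid(W2 @ np.maximum(W1 @ S, 0.0))  # first transformation: weight estimates
    return s[:, None] * H                      # combination fed to processing layer l+1

rng = np.random.default_rng(1)
N, d, r = 5, 4, 2                              # r < N for the reduction step
H = rng.normal(size=(N, d))
X_next = fusion_layer(H, rng.normal(size=(r, N)), rng.normal(size=(N, r)))
```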
Then, the output layer acquires the L implicit vectors corresponding to the first target node u(t) and the L implicit vectors corresponding to the second target node v(t) from the L processing layers; these 2L implicit vectors form a set P. Following a similar idea, the output layer compresses each implicit vector in the set P to obtain the corresponding compressed representations, which form the compressed vector S; it transforms the compressed vector S to obtain the set Q of weight factors corresponding to the implicit vectors; and finally it performs a weighted combination of the implicit vectors in the set P using the weight factors in the set Q to obtain the final interaction characterization vector M.
In the overall processing of the neural network model, the plurality of processing layers each obtain the implicit vector of each node based on that layer's input features and the node connection relations, while the fusion layer determines the weight estimate of each node from the implicit vectors produced by the corresponding processing layer and passes the combination of implicit vector and weight estimate to the next processing layer as the basis for determining its implicit vectors. The importance of each node in the two subgraphs is thereby fully taken into account when characterizing the interaction between the two target nodes, so that the interaction characterization vector better reflects the different contributions of different nodes and has stronger characterization capability.
Returning to fig. 2, after the interaction characterization vector M of the first target node u (t) and the second target node v (t) is obtained through the neural network model, the first event of interaction between the first target node and the second target node may be evaluated based on the interaction characterization vector.
In one embodiment, the first event may be a hypothetical interaction event that has not occurred. Accordingly, the evaluating the first event in step 24 may include evaluating an occurrence probability of the first event, that is, evaluating a possibility of interaction between the first target node and the second target node.
In another embodiment, the first event is a current interaction event that has occurred. Accordingly, evaluating the first event in step 24 may include evaluating the event category of the current interaction event. For example, in one example, user A initiates a transaction with object B, resulting in a current interaction event. Upon receiving such a transaction request (e.g., when user A requests payment), user A and object B may respectively be taken as the first and second target nodes, and the current interaction event may be analyzed based on the interaction characterization vector of the two target nodes, so as to determine the event category of the transaction event, such as whether it is a suspected fraudulent transaction, its risk level, and so on.
The neural network model may be trained based on evaluation requirements for the event.
In one embodiment, the interaction characterization vector output by the neural network model is to be used to evaluate the probability of interaction between the two target nodes. In such a case, the neural network model may be trained by collecting node pairs with interactions from the historical interaction event sequence as positive samples, and node pairs without interactions as negative samples.
In another embodiment, the interaction characterization vector output by the neural network model is to be used to evaluate the event category of events in which the two target nodes participate. In such a case, two nodes in a sample event of known event category among the historical interaction events may be taken as sample nodes. Specifically, the two subgraphs corresponding to the two sample nodes are input into the neural network model to obtain the corresponding interaction characterization vector, from which the event category of the sample event is predicted, yielding a category prediction result. The prediction loss is then determined from the category prediction result and the true event category of the sample event, and the parameters of the neural network model are adjusted with the goal of reducing the prediction loss, thereby training the model.
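The training objective described above can be sketched with a stand-in classifier operating on the interaction characterization vector. In the real scheme the prediction loss would also be back-propagated into the graph processing layers; the logistic model, learning rate, and dimensions here are assumptions for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(w, M, label, lr=0.1):
    """One gradient step on a logistic head over an interaction
    characterization vector M.

    label: 1 for a positive sample (interaction / known category),
           0 for a negative sample.
    """
    p = sigmoid(w @ M)           # predicted probability
    grad = (p - label) * M       # gradient of the cross-entropy loss w.r.t. w
    return w - lr * grad         # adjust parameters to reduce the loss

rng = np.random.default_rng(2)
w = np.zeros(8)                  # stand-in model parameters
M = rng.normal(size=8)           # stand-in interaction characterization vector
for _ in range(50):
    w = train_step(w, M, label=1)
p_final = sigmoid(w @ M)         # probability after training on a positive sample
```

After repeated steps on a positive sample the predicted probability rises above its untrained value of 0.5, mirroring how reducing the prediction loss tunes the model.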
In summary, in the solution of the embodiments of this specification, a dynamic interaction graph is constructed to reflect the time-sequence relationships of the interaction events and the interactions between interaction objects. For two target nodes to be analyzed, two subgraphs rooted at the two target nodes are respectively obtained from the dynamic interaction graph and input into the neural network model. While performing graph processing on the two subgraphs, the neural network model obtains the importance of each node in the two subgraphs using a compression-transformation mechanism, and determines the interaction characterization vector of the two target nodes with these importances taken into account, for use in evaluating interaction events in which the two target nodes participate. Because node importance is considered, the interaction characterization vector better reflects the different contribution degrees of different historical nodes, has stronger characterization capability, and is more conducive to accurately analyzing and evaluating the interaction between the target nodes.
According to an embodiment of another aspect, an apparatus for evaluating an interaction event is provided, which may be deployed in any device, platform or cluster of devices having computing and processing capabilities. Fig. 10 shows a schematic block diagram of an apparatus for evaluating an interaction event according to an embodiment. As shown in fig. 10, the apparatus 100 includes:
an interaction graph obtaining unit 110, configured to obtain a dynamic interaction graph used for reflecting an association relationship of interaction events, where the dynamic interaction graph includes a plurality of pairs of nodes, each pair of nodes represents two objects in one interaction event, and any node points to two nodes corresponding to a previous interaction event in which the object represented by the node participates through a connecting edge;
a subgraph determining unit 120, configured to determine, in the dynamic interaction graph, subgraphs which start from a current root node and reach a predetermined range via a connecting edge as a first subgraph and a second subgraph, with a first target node and a second target node to be analyzed as the current root node respectively;
a graph processing unit 130 configured to input the first sub-graph and the second sub-graph into a neural network model 131 for graph processing, where the neural network model 131 includes L processing layers and an output layer stacked in sequence, and at least one of the L processing layers has a corresponding fusion layer, where:
each processing layer is used for respectively processing to obtain the hidden vector of each node according to the input characteristics of each layer of nodes contained in the first subgraph and the second subgraph and the directional relation of the connecting edges among the nodes;
the fusion layer is used for respectively carrying out first compression processing on each layer of hidden vectors obtained by the corresponding processing layer to obtain each compressed representation corresponding to each node, and carrying out first transformation processing on each compressed representation to obtain weight estimation corresponding to each node; taking the combination result of the current-layer implicit vector of each node and the corresponding weight estimation as the next-layer input characteristic of each node;
the output layer is used for fusing L first implicit vectors corresponding to the first target node and L second implicit vectors corresponding to the second target node, which are respectively obtained by the L processing layers, so as to obtain interactive characterization vectors;
the evaluation unit 140 is configured to evaluate a first event of interaction between the first target node and the second target node according to the interaction characterization vector.
It is to be understood that the neural network model 131 described above may be included as part of the apparatus 100 or may be deployed outside of the apparatus 100. Fig. 10 schematically illustrates a case where the neural network model 131 is included in the apparatus 100.
According to one embodiment, the subgraph determined by the subgraph determination unit 120 within the predetermined range reached via the connecting edge includes:
nodes reached via connecting edges within a preset number K; and/or
nodes reachable via connecting edges whose interaction time is within a preset time range.
In one embodiment, the L processing layers include a first processing layer at a bottom-most layer in which the present-layer input features of the respective first nodes include node attribute features of the respective first nodes.
Further, in various examples, the first node may include a user node and/or an item node, and the node attribute characteristics of the user node include at least one of: age, occupation, education level, region, registration duration, population label; the node attribute characteristics of the item node include at least one of: item category, time to shelve, number of reviews, sales volume.
According to an embodiment, each processing layer may be a time-sequence-based network processing layer, and is configured to sequentially iterate and process each node according to the local-layer input features of each node and the directional relationship of the connection edges between each node, so as to obtain the local-layer implicit vector of each node.
Further, in an embodiment, the time-sequence-based network processing layer may be an LSTM layer, configured to sequentially take each node as the current node in the order of the pointing relationships of the connecting edges between nodes, determine an implicit vector and an intermediate vector of the current node according to the current node's layer input features and the respective intermediate vectors and implicit vectors of the two nodes pointed to by the current node, and take the implicit vector of the current node as its layer implicit vector.
In an embodiment, the fusion layer is specifically configured to, for any implicit vector in the implicit vectors of the current layer, calculate an average value of each vector element of the any implicit vector, and use the average value as a compressed representation of a node corresponding to the any implicit vector.
In another embodiment, the fusion layer is specifically configured to, for any implicit vector in the implicit vectors of the current layer, compress the any implicit vector into a reduced-dimension vector of a lower dimension, which is used as a corresponding compressed representation.
According to one embodiment, the respective compressed representations are respective compressed values, the respective compressed values constituting a compressed vector; the fusion layer is particularly useful for:
performing first linear transformation on the compressed vector by using a first transformation matrix to obtain a first transformation vector, wherein the dimensionality of the first transformation vector is smaller than that of the compressed vector;
performing first nonlinear transformation on the first transformation vector to obtain a second transformation vector;
performing second linear transformation on the second transformation vector by using a second transformation matrix to obtain a third transformation vector, wherein the dimensionality of the third transformation vector is equal to that of the compressed vector;
and performing second nonlinear transformation on the third transformation vector to obtain a weight vector, wherein each element in the weight vector corresponds to the weight estimation of each node.
According to one embodiment, the output layer is specifically configured to:
respectively carrying out second compression processing on 2L implicit vectors formed by the L first implicit vectors and L second implicit vectors to obtain corresponding 2L compressed representations;
performing a second transformation on the 2L compressed representations to obtain corresponding 2L weight factors;
and performing weighted combination on the 2L implicit vectors by utilizing the 2L weight factors, and obtaining the interactive characterization vector based on a weighted combination result.
Further, in one embodiment, the second compression process is the same as the first compression process, and the second transformation process is the same as the first transformation process.
According to an embodiment, the first event is a hypothetical event, in which case the evaluation unit 140 is configured to evaluate the probability of occurrence of the first event.
According to another embodiment, the first event is an occurred event, in which case the evaluation unit 140 is configured to evaluate an event category of the first event.
According to the method and the device, for two target nodes involved by an event to be evaluated, the interaction between the two target nodes can be characterized based on the subgraphs of the two nodes in the dynamic interaction graph, so that the event involving the two target nodes can be evaluated and analyzed more accurately.
According to an embodiment of another aspect, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in connection with fig. 2.
According to an embodiment of yet another aspect, there is also provided a computing device comprising a memory and a processor, the memory having stored therein executable code, the processor, when executing the executable code, implementing the method described in connection with fig. 2.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The above-mentioned embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present invention should be included in the scope of the present invention.

Claims (28)

1. A computer-implemented method of evaluating an interaction event, the method comprising:
acquiring a dynamic interaction graph for reflecting an association relation of interaction events, wherein the dynamic interaction graph comprises a plurality of pairs of nodes, each pair of nodes represents two objects in one interaction event, and any node points to two nodes corresponding to the last interaction event in which the object represented by the node participates through a connecting edge;
respectively taking a first target node and a second target node to be analyzed as current root nodes, and determining subgraphs which start from the current root nodes and reach a preset range through connecting edges in the dynamic interaction graph as a first subgraph and a second subgraph;
inputting the first subgraph and the second subgraph into a neural network model for graph processing to obtain an interactive characterization vector, wherein the neural network model comprises L processing layers and output layers which are sequentially stacked, at least one processing layer in the L processing layers is provided with a corresponding fusion layer, and the graph processing comprises the following steps:
in each of the L processing layers, respectively processing to obtain a current-layer implicit vector of each node according to current-layer input features of each node included in the first subgraph and the second subgraph and a directional relation of a connecting edge between each node;
in the fusion layer, respectively carrying out first compression processing on each layer of hidden vectors obtained by the corresponding processing layer to obtain each compressed representation corresponding to each node, and carrying out first transformation processing on each compressed representation to obtain weight estimation corresponding to each node; taking the combination result of the current-layer implicit vector of each node and the corresponding weight estimation as the next-layer input characteristic of each node;
in the output layer, fusing L first implicit vectors corresponding to the first target node and L second implicit vectors corresponding to the second target node, which are obtained by the L processing layers respectively, to obtain the interactive representation vectors;
and evaluating a first event of interaction between the first target node and the second target node according to the interaction characterization vector.
2. The method of claim 1, wherein the subgraphs within the predetermined range reached via the connecting edge comprise:
nodes reached via connecting edges within a preset number K; and/or
nodes reachable via connecting edges whose interaction time is within a preset time range.
3. The method of claim 1 wherein said L processing layers include a first processing layer at the bottom most layer in which said present-level input features of respective nodes include node attribute features of respective nodes.
4. The method of claim 3, wherein the respective nodes comprise user nodes and/or item nodes, and the node attribute characteristics of the user nodes comprise at least one of: age, occupation, education level, region, registration duration, population label; the node attribute characteristics of the item node include at least one of: item category, time to shelve, number of reviews, sales volume.
5. The method according to claim 1, wherein each processing layer is a time-sequence-based network processing layer, and is configured to sequentially and iteratively process each node according to the local-layer input feature of each node included in the first sub-graph and the second sub-graph and the directional relationship of the connecting edge between each node, so as to obtain a local-layer implicit vector of each node.
6. The method as claimed in claim 5, wherein the time-sequence-based network processing layer is an LSTM layer, the LSTM layer being used for sequentially taking each node as a current node according to the pointing relation sequence of the connecting edges between the nodes, determining an implicit vector and an intermediate vector of the current node according to the current-layer input characteristics of the current node, and using the implicit vector of the current node as the implicit vector of the current layer.
7. The method of claim 1, wherein the first compression process comprises:
and calculating the average value of each vector element of any implicit vector in each layer of implicit vectors as the compressed representation of the node corresponding to the any implicit vector.
8. The method of claim 1, wherein the first compression process comprises:
and compressing any hidden vector in the hidden vectors of the layer into a dimensionality reduction vector with lower dimensionality as a corresponding compression representation.
9. The method of claim 1, wherein the respective compressed representations are respective compressed values, the respective compressed values constituting a compressed vector; the first transform process includes:
performing first linear transformation on the compressed vector by using a first transformation matrix to obtain a first transformation vector, wherein the dimensionality of the first transformation vector is smaller than that of the compressed vector;
performing first nonlinear transformation on the first transformation vector to obtain a second transformation vector;
performing second linear transformation on the second transformation vector by using a second transformation matrix to obtain a third transformation vector, wherein the dimensionality of the third transformation vector is equal to that of the compressed vector;
and performing second nonlinear transformation on the third transformation vector to obtain a weight vector, wherein each element in the weight vector corresponds to the weight estimation of each node.
10. The method of claim 1, wherein fusing L first implicit vectors corresponding to the first target node and L second implicit vectors corresponding to the second target node, obtained by the L processing layers respectively, to obtain an interaction characterization vector comprises:
respectively carrying out second compression processing on 2L implicit vectors formed by the L first implicit vectors and L second implicit vectors to obtain corresponding 2L compressed representations;
performing a second transformation on the 2L compressed representations to obtain corresponding 2L weight factors;
and performing weighted combination on the 2L implicit vectors by utilizing the 2L weight factors, and obtaining the interactive characterization vector based on a weighted combination result.
11. The method of claim 10, wherein the second compression process is the same as the first compression process and the second transform process is the same as the first transform process.
12. The method of claim 1, wherein the first event is a hypothetical event and evaluating the first event of the first and second nodes interacting comprises evaluating a probability of occurrence of the first event.
13. The method of claim 1, wherein the first event is an event that has occurred, and evaluating the first event of interaction by the first and second nodes comprises evaluating an event category of the first event.
14. An apparatus to evaluate an interaction event, the apparatus comprising:
the interactive graph obtaining unit is configured to obtain a dynamic interactive graph used for reflecting an interactive event incidence relation, the dynamic interactive graph comprises a plurality of pairs of nodes, each pair of nodes represents two objects in one interactive event, and any node points to two nodes corresponding to the last interactive event in which the object represented by the node participates through a connecting edge;
the subgraph determining unit is configured to respectively use a first target node and a second target node to be analyzed as a current root node, and determine subgraphs which start from the current root node and reach a preset range through a connecting edge in the dynamic interaction graph as a first subgraph and a second subgraph;
a graph processing unit configured to perform graph processing on the first sub-graph and the second sub-graph input neural network model to obtain an interactive characterization vector, where the neural network model includes L processing layers and an output layer stacked in sequence, and at least one of the L processing layers has a corresponding fusion layer, where:
each processing layer is used for respectively processing to obtain the hidden vector of each node according to the input characteristics of each layer of nodes contained in the first subgraph and the second subgraph and the directional relation of the connecting edges among the nodes;
the fusion layer is used for respectively carrying out first compression processing on each layer of hidden vectors obtained by the corresponding processing layer to obtain each compressed representation corresponding to each node, and carrying out first transformation processing on each compressed representation to obtain weight estimation corresponding to each node; taking the combination result of the current-layer implicit vector of each node and the corresponding weight estimation as the next-layer input characteristic of each node;
the output layer is configured to fuse L first implicit vectors corresponding to the first target node and L second implicit vectors corresponding to the second target node, which are obtained by the L processing layers, respectively, to obtain the interactive representation vectors;
and the evaluation unit is configured to evaluate a first event of interaction between the first target node and the second target node according to the interaction characterization vector.
15. The apparatus of claim 14, wherein the subgraph within the predetermined range reached via the connecting edge comprises:
nodes reached via connecting edges within a preset number K; and/or
nodes reachable via connecting edges whose interaction time is within a preset time range.
16. The apparatus of claim 14 wherein said L processing layers include a first processing layer at the bottom most layer in which said present-level input features of respective nodes include node attribute features of respective nodes.
17. The apparatus of claim 16, wherein the respective nodes comprise user nodes and/or item nodes, and the node attribute characteristics of the user nodes comprise at least one of: age, occupation, education level, region, registration duration, population label; the node attribute characteristics of the item node include at least one of: item category, time to shelve, number of reviews, sales volume.
18. The apparatus according to claim 14, wherein each processing layer is a time-sequence-based network processing layer, and is configured to sequentially and iteratively process each node according to the local-layer input feature of each node included in the first sub-graph and the second sub-graph and a directional relationship of a connecting edge between each node, so as to obtain a local-layer implicit vector of each node.
19. The apparatus of claim 18, wherein the timing-based network processing layer is an LSTM layer, and the LSTM layer is configured to, in order of the orientation relationship of connecting edges between nodes, take each node as a current node in turn, determine an implicit vector and an intermediate vector of the current node according to the input features of the current layer and the respective intermediate vectors and implicit vectors of the two nodes pointed to by the current node, and take the implicit vector of the current node as the implicit vector of the current layer.
20. The apparatus according to claim 14, wherein the fusion layer is specifically configured to, for any implicit vector in the implicit vectors of the present layer, calculate an average value of each vector element of the any implicit vector as a compressed representation of a node corresponding to the any implicit vector.
21. The apparatus of claim 14, wherein the fusion layer is specifically configured to, for any one of the present-layer implicit vectors, compress that implicit vector into a lower-dimensional reduced vector as the corresponding compressed representation.
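Claims 20 and 21 describe two alternative forms of the first compression processing, and both fit in a few lines. A sketch under assumptions — the projection matrix `P` is a hypothetical (e.g. learned) parameter, not something the claims specify:

```python
import numpy as np

def compress_by_mean(h):
    """Claim-20 style: the compressed representation is the mean of the
    implicit vector's elements (one scalar per node)."""
    return float(np.mean(h))

def compress_by_projection(h, P):
    """Claim-21 style: the compressed representation is a
    lower-dimensional vector obtained by projecting h with a matrix P."""
    return P @ h
```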
22. The apparatus of claim 14, wherein the respective compressed representations are respective compressed values, and the respective compressed values form a compressed vector; the fusion layer is specifically configured to:
performing a first linear transformation on the compressed vector using a first transformation matrix to obtain a first transformation vector, wherein the dimension of the first transformation vector is smaller than that of the compressed vector;
performing a first nonlinear transformation on the first transformation vector to obtain a second transformation vector;
performing a second linear transformation on the second transformation vector using a second transformation matrix to obtain a third transformation vector, wherein the dimension of the third transformation vector is equal to that of the compressed vector;
and performing a second nonlinear transformation on the third transformation vector to obtain a weight vector, wherein each element of the weight vector is the weight estimate of the corresponding node.
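The four steps of claim 22 form a bottleneck transform (structurally similar to squeeze-and-excitation attention). A sketch, with ReLU and sigmoid standing in for the two nonlinear transformations — the claim does not name the activations, so those are assumptions:

```python
import numpy as np

def estimate_node_weights(compressed, W1, W2):
    """compressed: (N,) vector of per-node compressed values.
    W1: (k, N) with k < N; W2: (N, k). Returns an (N,) weight vector
    with one weight estimate per node."""
    t1 = W1 @ compressed               # first linear transform: reduce dimension
    t2 = np.maximum(t1, 0.0)           # first nonlinear transform (ReLU assumed)
    t3 = W2 @ t2                       # second linear transform: back to N dims
    return 1.0 / (1.0 + np.exp(-t3))   # second nonlinear transform (sigmoid assumed)
```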
23. The apparatus of claim 14, wherein the output layer is specifically configured to:
performing second compression processing on each of the 2L implicit vectors formed by the L first implicit vectors and the L second implicit vectors, to obtain 2L corresponding compressed representations;
performing second transformation processing on the 2L compressed representations to obtain 2L corresponding weight factors;
and performing a weighted combination of the 2L implicit vectors using the 2L weight factors, the interaction characterization vector being obtained based on the weighted combination result.
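The output layer of claim 23 then combines the 2L per-layer implicit vectors of the two target nodes with the 2L weight factors. A weighted sum is one natural reading of "weighted combination", assumed here for illustration:

```python
import numpy as np

def interaction_vector(implicit_vectors, weights):
    """implicit_vectors: (2L, d) stack of the L first and L second
    implicit vectors; weights: (2L,) weight factors. Returns the (d,)
    weighted-combination result underlying the interaction
    characterization vector."""
    H = np.asarray(implicit_vectors, dtype=float)
    return np.asarray(weights, dtype=float) @ H   # sum_k w_k * h_k
```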
24. The apparatus of claim 23, wherein the second compression processing is the same as the first compression processing, and the second transformation processing is the same as the first transformation processing.
25. The apparatus according to claim 14, wherein the first event is a hypothetical event, and the evaluation unit is configured to evaluate the probability of occurrence of the first event.
26. The apparatus of claim 14, wherein the first event is an event that has occurred, and the evaluation unit is configured to evaluate the event category of the first event.
27. A computer-readable storage medium, on which a computer program is stored which, when executed in a computer, causes the computer to carry out the method of any one of claims 1-13.
28. A computing device comprising a memory and a processor, wherein the memory has stored therein executable code that, when executed by the processor, performs the method of any of claims 1-13.
CN202010588751.5A 2020-06-24 2020-06-24 Method and device for evaluating interaction event Active CN111476223B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010588751.5A CN111476223B (en) 2020-06-24 2020-06-24 Method and device for evaluating interaction event


Publications (2)

Publication Number Publication Date
CN111476223A true CN111476223A (en) 2020-07-31
CN111476223B CN111476223B (en) 2020-09-22

Family

ID=71765376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010588751.5A Active CN111476223B (en) 2020-06-24 2020-06-24 Method and device for evaluating interaction event

Country Status (1)

Country Link
CN (1) CN111476223B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507210A * 2020-11-18 2021-03-16 Tianjin University Interactive visualization method for event detection on attribute network
CN113538030A * 2020-10-21 2021-10-22 Tencent Technology (Shenzhen) Co., Ltd. Content pushing method and device and computer storage medium
CN114817751A * 2022-06-24 2022-07-29 Tencent Technology (Shenzhen) Co., Ltd. Data processing method, data processing device, electronic equipment, storage medium and program product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170243517A1 (en) * 2013-09-03 2017-08-24 Roger Midmore Interactive story system using four-valued logic
CN109241440A * 2018-09-29 2019-01-18 Beijing University of Technology A deep-learning-based recommendation method oriented to implicit feedback
CN110490274A * 2019-10-17 2019-11-22 Alipay (Hangzhou) Information Technology Co., Ltd. Method and device for evaluating interaction events
CN110689110A * 2019-08-28 2020-01-14 Alibaba Group Holding Ltd. Method and device for processing interaction events
CN110765260A * 2019-10-18 2020-02-07 Beijing University of Technology Information recommendation method based on convolutional neural network and joint attention mechanism
CN111242283A * 2020-01-09 2020-06-05 Alipay (Hangzhou) Information Technology Co., Ltd. Training method and device for evaluating self-encoder of interaction event


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538030A * 2020-10-21 2021-10-22 Tencent Technology (Shenzhen) Co., Ltd. Content pushing method and device and computer storage medium
CN113538030B * 2020-10-21 2024-03-26 Tencent Technology (Shenzhen) Co., Ltd. Content pushing method and device and computer storage medium
CN112507210A * 2020-11-18 2021-03-16 Tianjin University Interactive visualization method for event detection on attribute network
CN114817751A * 2022-06-24 2022-07-29 Tencent Technology (Shenzhen) Co., Ltd. Data processing method, data processing device, electronic equipment, storage medium and program product
CN114817751B * 2022-06-24 2022-09-23 Tencent Technology (Shenzhen) Co., Ltd. Data processing method, data processing apparatus, electronic device, storage medium, and program product

Also Published As

Publication number Publication date
CN111476223B (en) 2020-09-22

Similar Documents

Publication Publication Date Title
CN110598847B (en) Method and device for processing interactive sequence data
CN111210008B (en) Method and device for processing interactive data by using LSTM neural network model
CN110490274B (en) Method and device for evaluating interaction event
CN111476223B (en) Method and device for evaluating interaction event
CN112364976B (en) User preference prediction method based on session recommendation system
CN110555469B (en) Method and device for processing interactive sequence data
CN110543935B (en) Method and device for processing interactive sequence data
CN110689110B (en) Method and device for processing interaction event
CN111242283B (en) Training method and device for evaluating self-encoder of interaction event
CN111695965B Product screening method, system and equipment based on graph neural network
CN109242633A Commodity pushing method and device based on bipartite graph network
CN112580789B (en) Training graph coding network, and method and device for predicting interaction event
CN112085293B (en) Method and device for training interactive prediction model and predicting interactive object
CN112541575A (en) Method and device for training graph neural network
CN113610610B (en) Session recommendation method and system based on graph neural network and comment similarity
CN113222711A (en) Commodity information recommendation method, system and storage medium
CN110705688A (en) Neural network system, method and device for risk assessment of operation event
CN110413897A Users' interests mining method, apparatus, storage medium and computer equipment
CN110674181A (en) Information recommendation method and device, electronic equipment and computer-readable storage medium
CN111258469B (en) Method and device for processing interactive sequence data
CN115309997B (en) Commodity recommendation method and device based on multi-view self-coding features
CN116977019A (en) Merchant recommendation method and device, electronic equipment and storage medium
Wu et al. Symphony in the latent space: Provably integrating high-dimensional techniques with non-linear machine learning models
CN114861072B (en) Graph convolution network recommendation method and device based on interlayer combination mechanism
CN115564532A (en) Training method and device of sequence recommendation model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40034545

Country of ref document: HK