CN110689110A - Method and device for processing interaction event - Google Patents

Method and device for processing interaction event Download PDF

Info

Publication number
CN110689110A
CN110689110A (application CN201910803312.9A, granted as CN110689110B)
Authority
CN
China
Prior art keywords
node
vector
event
nodes
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910803312.9A
Other languages
Chinese (zh)
Other versions
CN110689110B (en)
Inventor
文剑烽
常晓夫
刘旭钦
宋乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910803312.9A priority Critical patent/CN110689110B/en
Publication of CN110689110A publication Critical patent/CN110689110A/en
Application granted granted Critical
Publication of CN110689110B publication Critical patent/CN110689110B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/04 Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Technology Law (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of this specification provide a method and a device for processing an interaction event. In the method, a dynamic interaction graph constructed from a dynamic interaction sequence is first obtained, in which each interaction object involved in each interaction event corresponds to a node. For a current interaction event to be analyzed, the participating nodes of the event and the associated nodes of those participating nodes are obtained from the dynamic interaction graph, and the subgraphs rooted at each participating node and each associated node are determined. The subgraphs of the participating nodes are then input into a first neural network model, which processes them based on the event features and connection relationships of the nodes to obtain implicit vectors for the participating nodes; the subgraphs of the associated nodes are input into a second neural network model, which processes the event category labels and connection relationships of the nodes to obtain implicit vectors for the associated nodes. The current interaction event is then expressed and analyzed based on the implicit vectors of these nodes.

Description

Method and device for processing interaction event
Technical Field
One or more embodiments of the present specification relate to the field of machine learning, and more particularly, to a method and apparatus for processing interaction events using machine learning.
Background
In many scenarios, user interaction events need to be analyzed and processed. An interaction event is one of the basic constituent elements of internet activity: for example, a click by a user while browsing a page can be regarded as an interaction event between the user and a content block of the page, a purchase in an e-commerce setting can be regarded as an interaction event between the user and a commodity, and an inter-account transfer is an interaction event between two users. A user's series of interaction events contains fine-grained characteristics such as the user's habits and preferences, as well as characteristics of the interaction objects, and these are important feature sources for machine learning models. Therefore, in many scenarios, it is desirable to characterize and model interaction participants, as well as interaction events, based on the interaction history.
However, an interaction event involves two interacting parties, and the state of each party may itself change dynamically, so it is very difficult to accurately characterize the interacting parties while comprehensively considering their many aspects. Improved schemes are therefore desired for analyzing and processing interaction events more effectively.
Disclosure of Invention
One or more embodiments of this specification describe a method and apparatus for processing an interaction event. A sequence of interaction events is represented as a dynamic interaction graph; for a current interaction event, the participating nodes and associated nodes of the event are determined from the graph, implicit feature vectors are computed for each of these nodes based on the edge directions and association relationships reflected in the graph, and the current interaction event is analyzed using those implicit vectors.
According to a first aspect, there is provided a method of processing an interaction event, the method comprising:
acquiring a dynamic interaction graph constructed according to a dynamic interaction sequence, wherein the dynamic interaction sequence comprises a plurality of interaction events arranged in time order, and each interaction event comprises two objects with interaction behavior, an event feature and an interaction time; at least some of the plurality of interaction events have an event category label; the dynamic interaction graph comprises a plurality of nodes representing each object in each event, wherein any node i points through connecting edges to two associated nodes, which are the two nodes corresponding to the last interaction event in which the object represented by node i participated;
determining, in the dynamic interaction graph, a first node and a second node corresponding to a current interaction event to be analyzed, together with four associated nodes, namely the two associated nodes pointed to by the first node and the two associated nodes pointed to by the second node;
taking the first node, the second node and the four associated nodes respectively as current root nodes, and determining the corresponding current subgraphs in the dynamic interaction graph, so as to obtain a first subgraph, a second subgraph and four associated subgraphs, wherein a current subgraph comprises the nodes reachable, within a predetermined range, from the current root node via connecting edges;
inputting the first subgraph and the second subgraph into a first neural network model to obtain a corresponding first implicit vector and second implicit vector, respectively; the first neural network model determines the implicit vector corresponding to an input subgraph according to the first input features of the nodes in the subgraph and the directional relationships of the connecting edges among them, wherein the first input feature of a node comprises the event feature of the event in which the node is located;
inputting the four associated subgraphs into a second neural network model to obtain four associated implicit vectors, respectively; the second neural network model determines the implicit vector corresponding to an input subgraph according to the second input features of the nodes in the subgraph and the directional relationships of the connecting edges among them, wherein the second input feature of a node comprises the event category label of the event in which the node is located;
and predicting the event category of the current interaction event according to the first implicit vector, the second implicit vector and the four associated implicit vectors.
According to one embodiment, obtaining the dynamic interaction graph constructed according to the dynamic interaction sequence comprises:
acquiring an existing dynamic interaction graph constructed based on an existing interaction sequence;
acquiring a newly added interaction event;
taking the first object and the second object involved in the newly added interaction event as two newly added nodes, and adding them to the existing dynamic interaction graph;
and, for each newly added node, if two associated nodes exist for it, adding connecting edges pointing from the newly added node to those two associated nodes.
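The incremental update above can be sketched in a few lines. This is a minimal Python sketch under stated assumptions: `Node`, `add_event`, and the `last_pair_of` lookup table are hypothetical names, and the graph is kept as linked node objects rather than an explicit edge list; the patent does not prescribe a data structure.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass(frozen=True)
class Node:
    obj_id: str                        # interaction object represented by this node
    event_time: float                  # interaction time of the node's event
    children: Tuple["Node", ...] = ()  # the two associated nodes, if any

def add_event(last_pair_of: Dict[str, Tuple[Node, ...]],
              obj_a: str, obj_b: str, t: float) -> Tuple[Node, ...]:
    """Append one newly added interaction event (obj_a, obj_b, t).

    last_pair_of maps each object id to the two nodes of the most recent
    event that object took part in, so the two associated nodes of each
    newly added node are found in O(1).
    """
    new_nodes = []
    for obj in (obj_a, obj_b):
        # connecting edges point to both nodes of the object's last event
        children = last_pair_of.get(obj, ())
        new_nodes.append(Node(obj, t, children))
    pair = tuple(new_nodes)
    last_pair_of[obj_a] = pair
    last_pair_of[obj_b] = pair
    return pair
```

For instance, after events (U1, M1, t1) and (U1, M2, t2), the U1 node of the second event points to both nodes of the first event, while the M2 node has no associated nodes yet.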
According to one embodiment, the nodes within the predetermined range reachable via connecting edges in the current subgraph comprise: nodes reachable within a preset number K of connecting-edge hops; and/or nodes reachable via connecting edges whose interaction time lies within a preset time range.
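Both range limits can be combined in one traversal, as in this sketch (the `Node` shape and function names are hypothetical; a real implementation might also deduplicate nodes reached along converging paths):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Node:
    event_time: float                  # interaction time of the node's event
    children: Tuple["Node", ...] = ()  # the two associated nodes, if any

def extract_subgraph(root: Node, max_hops: int = 2,
                     min_time: float = float("-inf")) -> List[Node]:
    """Collect nodes reachable from the current root node via connecting
    edges, limited to max_hops edge traversals (the preset number K) and
    to events whose interaction time is no earlier than min_time."""
    found: List[Node] = []
    def visit(node: Node, depth: int) -> None:
        if depth > max_hops or node.event_time < min_time:
            return
        found.append(node)
        for child in node.children:
            visit(child, depth + 1)
    visit(root, 0)
    return found
```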
According to various embodiments, the first input feature may further include an attribute feature of the node, and/or a time difference between a first interaction time of an interaction event in which the node is located and a second interaction time of an interaction event in which two associated nodes are located.
In a specific embodiment, the interaction event is a transaction event, and the event characteristic includes at least one of the following: transaction type, transaction amount, transaction channel; the event category label is a transaction risk level label.
In one embodiment, the first neural network model is an LSTM-based neural network including at least one LSTM layer, the LSTM layer processing the first sub-graph as follows:
each node in the first subgraph is taken in turn as a target node, and the implicit vector and intermediate vector of the target node are determined from the first input feature of the target node and the respective intermediate vectors and implicit vectors of the two associated nodes it points to, until the implicit vector of the first node is obtained.
Further, in one embodiment, the two associated nodes pointed to by the target node are a first associated node and a second associated node; the first neural network model determines the implicit and intermediate vectors of the target node as follows:
inputting the first input feature of the target node, the implicit vector of the first associated node and the implicit vector of the second associated node into a first transformation function and a second transformation function which have the same algorithm and different parameters respectively to obtain a first transformation vector and a second transformation vector respectively;
combining the first transformation vector and the second transformation vector with the intermediate vector of the first associated node and the intermediate vector of the second associated node, respectively, and obtaining a combined vector based on the operation results;
inputting the first input feature of the target node, the implicit vector of the first associated node and the implicit vector of the second associated node into a third transformation function and a fourth transformation function respectively to obtain a third transformation vector and a fourth transformation vector respectively;
determining an intermediate vector for the target node based on the combined vector and a third transformed vector;
determining an implicit vector for the target node based on the intermediate vector and a fourth transformed vector for the target node.
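The update described in these steps can be sketched as a single LSTM-style node update. This is a minimal NumPy sketch under stated assumptions: the four transform functions are taken as linear maps followed by sigmoid/tanh nonlinearities, the combination is element-wise gating of the two intermediate vectors, and the hidden size `d` is arbitrary; the patent does not fix these choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d = 8  # hidden size (illustrative)
rng = np.random.default_rng(0)
# four transform functions: same form, separate parameters (assumed linear + nonlinearity)
W = {k: rng.standard_normal((d, 3 * d)) * 0.1 for k in (1, 2, 3, 4)}
b = {k: np.zeros(d) for k in (1, 2, 3, 4)}

def node_update(x, h1, c1, h2, c2):
    """Compute the implicit vector h and intermediate vector c of a target
    node from its first input feature x and the implicit/intermediate
    vectors (h1, c1), (h2, c2) of its two associated nodes."""
    z = np.concatenate([x, h1, h2])  # shared input of all four transforms
    g1 = sigmoid(W[1] @ z + b[1])    # first transform vector
    g2 = sigmoid(W[2] @ z + b[2])    # second transform vector (same form, own parameters)
    g3 = np.tanh(W[3] @ z + b[3])    # third transform vector
    g4 = sigmoid(W[4] @ z + b[4])    # fourth transform vector
    v = g1 * c1 + g2 * c2            # combine with the two intermediate vectors
    c = v + g3                       # intermediate vector of the target node
    h = g4 * np.tanh(c)              # implicit vector of the target node
    return h, c
```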
In one embodiment, the first neural network model comprises a plurality of LSTM layers, wherein the implicit vector of the target node determined by the previous LSTM layer is input to the next LSTM layer as the first input feature of the target node.
Further, in an embodiment, the first neural network model synthesizes implicit vectors of the first node output by each of the plurality of LSTM layers to obtain the first implicit vector.
Alternatively, in another embodiment, the first neural network model takes an implicit vector of the first node output by a last LSTM layer of the plurality of LSTM layers as the first implicit vector.
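Both readings can be sketched in one helper (illustrative; the patent leaves the synthesis method open, so a simple mean stands in for the multi-layer combination here):

```python
import numpy as np

def final_implicit_vector(layer_outputs, mode="last"):
    """Derive the first implicit vector from the root-node outputs of the
    stacked LSTM layers: take the last layer's output, or synthesize all
    layers (a simple mean; a learned weighted sum is another reading)."""
    if mode == "last":
        return layer_outputs[-1]
    return np.mean(layer_outputs, axis=0)
```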
According to one embodiment, the first neural network model and the second neural network model are neural network models with the same structure and algorithm and different parameters.
In one embodiment, the event category of the current interaction event is determined as follows:
fusing the first implicit vector, the second implicit vector and the four associated implicit vectors by utilizing a fully-connected neural network to obtain an event representation vector of the current interaction event;
and determining the event category of the current interaction event according to the event representation vector by utilizing a classifier.
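The two steps above can be sketched as a small prediction head. This is illustrative only: the patent does not fix the layer sizes, activations, or classifier type, so the tanh fusion layer and softmax classifier here are assumptions.

```python
import numpy as np

def predict_event_category(node_vectors, w_fuse, b_fuse, w_cls, b_cls):
    """Fuse the first, second and four associated implicit vectors with a
    fully-connected layer into an event representation vector, then map it
    to class probabilities with a softmax classifier."""
    z = np.concatenate(node_vectors)           # six implicit vectors
    event_repr = np.tanh(w_fuse @ z + b_fuse)  # event representation vector
    logits = w_cls @ event_repr + b_cls
    exp = np.exp(logits - logits.max())        # numerically stable softmax
    return exp / exp.sum()
```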
According to one embodiment, the method further comprises:
acquiring a current category label of the current interaction event;
determining a predicted loss based at least on the determined event category and the current category label;
and jointly training the first neural network model, the second neural network model, the fully-connected neural network and the classifier according to the prediction loss.
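The prediction loss in this joint-training step can be sketched as follows. Cross-entropy is an illustrative choice (the patent only requires a loss based on the predicted category and the current category label); in training, its gradient would be backpropagated jointly through the classifier, the fully-connected fusion network, and both neural network models.

```python
import numpy as np

def prediction_loss(probs: np.ndarray, label: int) -> float:
    """Cross-entropy between the predicted class distribution and the
    current category label of the interaction event (illustrative)."""
    return float(-np.log(probs[label] + 1e-12))
```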
According to another embodiment, the aforementioned method further comprises:
acquiring a current category label of the current interaction event;
determining a predicted loss based at least on the determined event category and the current category label;
training the first neural network model and a second neural network model according to the predicted loss.
According to a second aspect, there is provided an apparatus for processing an interactivity event, the apparatus comprising:
an interaction graph acquisition unit, configured to acquire a dynamic interaction graph constructed according to a dynamic interaction sequence, wherein the dynamic interaction sequence comprises a plurality of interaction events arranged in time order, and each interaction event comprises two objects with interaction behavior, an event feature and an interaction time; at least some of the plurality of interaction events have an event category label; the dynamic interaction graph comprises a plurality of nodes representing each object in each event, wherein any node i points through connecting edges to two associated nodes, which are the two nodes corresponding to the last interaction event in which the object represented by node i participated;
a node determining unit, configured to determine, in the dynamic interaction graph, a first node and a second node corresponding to a current interaction event to be analyzed, together with four associated nodes, namely the two associated nodes pointed to by the first node and the two associated nodes pointed to by the second node;
a subgraph determining unit, configured to determine respective corresponding current subgraphs in the dynamic interaction graph by respectively taking the first node, the second node and the four associated nodes as current root nodes, so as to obtain a first subgraph, a second subgraph and four associated subgraphs respectively, wherein the current subgraph comprises nodes which start from the current root node and reach in a predetermined range through connecting edges;
the first processing unit is configured to input the first sub-graph and the second sub-graph into a first neural network model respectively to obtain a corresponding first implicit vector and a corresponding second implicit vector respectively; the first neural network determines an implicit vector corresponding to the input subgraph according to first input features of all nodes in the input subgraph and the directional relation of connecting edges among the nodes, wherein the first input features comprise event features of events where the nodes are located;
a second processing unit, configured to input the four associated subgraphs into a second neural network model to obtain four associated implicit vectors, respectively; the second neural network model determines the implicit vector corresponding to an input subgraph according to the second input features of the nodes in the subgraph and the directional relationships of the connecting edges among them, wherein the second input feature of a node comprises the event category label of the event in which the node is located;
and the prediction unit is configured to predict the event type of the current interactive event according to the first implicit vector, the second implicit vector and the four associated implicit vectors.
According to a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
According to a fourth aspect, there is provided a computing device comprising a memory and a processor, wherein the memory has stored therein executable code, and wherein the processor, when executing the executable code, implements the method of the first aspect.
According to the method and the device provided by the embodiments of this specification, a dynamic interaction graph is constructed on the basis of a dynamic interaction sequence; the graph reflects the temporal relationships of the interaction events and the influence transmitted between interaction objects through those events. For a current interaction event to be analyzed, the participating nodes of the event and their associated nodes are obtained from the dynamic interaction graph, the subgraphs rooted at each participating node and each associated node are extracted, and these subgraphs are input into neural network models to obtain implicit vector expressions of each participating node and associated node. The implicit vectors obtained in this way incorporate the event features or category labels of the historical interaction events and the influence of those events on the nodes, and can therefore comprehensively express the deep features of the nodes. Based on the implicit vectors of the participating nodes and associated nodes related to the current interaction event, the current interaction event can be expressed and analyzed more comprehensively and accurately.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1A illustrates an interaction relationship bipartite graph in one example;
FIG. 1B illustrates an interaction relationship network diagram in another example;
FIG. 2 illustrates an implementation scenario diagram according to one embodiment;
FIG. 3 illustrates a flow diagram of a method of processing an interaction event, according to one embodiment;
FIG. 4 illustrates a dynamic interaction sequence and a dynamic interaction diagram constructed therefrom, in accordance with one embodiment;
FIG. 5 illustrates a node relationship diagram relating to a current interaction event;
FIG. 6 shows an example of a subgraph in one embodiment;
FIG. 7 shows a schematic diagram of the operation of the LSTM layer in the first neural network model;
FIG. 8 illustrates the structure of the LSTM layer in the first neural network model in accordance with one embodiment;
FIG. 9 illustrates the structure of the LSTM layer in the second neural network model in accordance with one embodiment;
FIG. 10 illustrates a schematic structural diagram of the integrated model in one embodiment;
FIG. 11 shows a schematic block diagram of an apparatus for processing interactivity events according to one embodiment.
Detailed Description
The scheme provided by the specification is described below with reference to the accompanying drawings.
As previously mentioned, it is desirable to be able to characterize and model the participants of an interaction event, as well as the interaction event itself, based on the history of the interaction.
In one approach, a static interaction relationship network graph is constructed based on historical interaction events, such that individual interaction objects are analyzed based on the interaction relationship network graph. Specifically, the participants of the historical events may be used as nodes, and connection edges may be established between nodes having an interaction relationship, so as to form the interaction network graph.
Fig. 1A and 1B respectively show an interaction network diagram in a specific example. More specifically, FIG. 1A shows a bipartite graph comprising user nodes (U1-U4) and commodity nodes (V1-V3), where if a user purchases a commodity, a connecting edge is constructed between the user and the commodity. FIG. 1B shows a user transfer relationship diagram where each node represents a user and there is a connecting edge between two users who have had transfer records.
However, as can be seen, although FIGS. 1A and 1B show the interaction relationships between objects, they contain no timing information about the interaction events. If graph embedding is simply performed on such an interaction relationship network graph, the resulting feature vectors do not express the influence of the timing of the interaction events on the nodes. Moreover, such static graphs scale poorly and make it difficult to flexibly handle newly added interaction events and newly added nodes.
In another scheme, for each interactive object in the interactive event to be analyzed, a behavior sequence of the object is constructed, and based on the behavior sequence, the feature expression of the object is extracted, so as to construct the feature expression of the event. However, such a behavior sequence merely characterizes the behavior of the object to be analyzed itself, whereas an interaction event is an event involving multiple parties, and influences are indirectly transmitted between the participants through the interaction event. Thus, such an approach does not express the impact between the participating objects in the interaction event.
Taking the above factors into consideration, according to one or more embodiments of this specification, a dynamically changing sequence of interaction events is constructed into a dynamic interaction graph, in which each interaction object involved in each interaction event corresponds to a node. For the current interaction event to be analyzed, the participating nodes of the event and their associated nodes are obtained from the dynamic interaction graph, the subgraph parts rooted at each participating node and associated node are extracted and input into neural network models to obtain implicit vector expressions of each node, and the current interaction event is expressed and analyzed based on these implicit vectors.
Fig. 2 shows a schematic illustration of an implementation scenario according to an embodiment. As shown in FIG. 2, multiple interaction events occurring in sequence may be organized chronologically into a dynamic interaction sequence <E1, E2, …, EN>, where each element Ei represents an interaction event and may be represented as an interaction feature tuple Ei = (ai, bi, f, ti), in which ai and bi are the two interaction objects of event Ei, f is the interaction feature (event feature), and ti is the interaction time.
According to embodiments of this specification, a dynamic interaction graph 200 is constructed based on the dynamic interaction sequence. In graph 200, each interaction object ai, bi of each interaction event is represented by a node, and connecting edges are established between events containing the same object. The structure of the dynamic interaction graph 200 will be described in more detail later.
For a certain current interaction event to be analyzed, the participating nodes and the associated nodes in the dynamic interaction graph can be determined, and corresponding subgraphs are obtained in the dynamic interaction graph by taking each participating node and each associated node as the current root node. Generally, a subgraph includes a range of nodes that can be reached through a connecting edge, starting from a current root node. The subgraph reflects the impact on the current node by other objects in the interaction event directly or indirectly associated with the current interaction object.
Then, the subgraphs of the participating nodes and the subgraphs of the associated nodes are input into two neural network models, a first and a second neural network model, respectively. The first neural network model obtains the implicit vector of a participating node based on the event features and connection relationships of the nodes in that participating node's subgraph. The second neural network model obtains the implicit vector of an associated node based on the event category labels and connection relationships of the nodes in that associated node's subgraph. The implicit vectors so obtained capture the label information and timing information of the associated interaction events, as well as the influence between the interaction objects in each interaction event, and therefore express the deep features of each interaction object more accurately. Based on the implicit vectors of the participating nodes and of the associated nodes, the current interaction event can be expressed and analyzed, and its event category determined.
Specific implementations of the above concepts are described below.
FIG. 3 illustrates a flow diagram of a method of processing an interaction event, according to one embodiment. It is to be appreciated that the method can be performed by any apparatus, device, platform, or device cluster having computing and processing capabilities. The steps of the method shown in FIG. 3 are described below with reference to specific embodiments.
First, in step 31, a dynamic interaction graph constructed according to a dynamic interaction sequence is obtained.
As mentioned above, the dynamic interaction sequence, represented for example as <E1, E2, …, EN>, may comprise a plurality of interaction events arranged in time order, where each interaction event Ei can be represented as an interaction feature tuple Ei = (ai, bi, f, ti), in which ai and bi are the two interaction objects of event Ei, f is the event feature (interaction feature) that can describe the context in which the interaction occurred and some of its attributes, and ti is the interaction time.
In order to analyze the nature of interaction events, at least some of the interaction events in the sequence are required to have an event category label. Such labels may be produced by manual marking or by after-the-fact analysis of interaction events that have already occurred. The interaction feature set of such an interaction event may then be further denoted as Ei = (ai, bi, f, y, ti), where y is the event category label.
For example, in one embodiment, the interaction event may be a transaction event. In that case, the two interaction objects ai and bi may be a user and a commodity, or two users. The event feature f of a transaction event may include, for example, the transaction type (commodity purchase, transfer, etc.), the transaction amount, and the transaction channel. At least some of the transaction events that have occurred may also carry an event category label y indicating a transaction risk level: for example, -1 may represent a fraudulent transaction and 0 a normal transaction, or different numbers may represent different risk factors.
In another embodiment, the interaction event may be a user's click on a block of a page presenting particular content. In that case, the two interaction objects ai and bi are a certain user and a certain page block, respectively. The event feature f of the interaction event may include the type of terminal used for the click, the browser type, the app version, the position number of the page block within the page, and so on. The event category label of the interaction event may indicate whether the user's click on the page block led to a conversion of its content, for example whether the goods displayed in the block were purchased, or whether the coupon displayed in the block was claimed.
In other business scenarios, an interaction event may also be other interaction behavior occurring between two objects, such as communication behavior between users. Depending on the business scenario, the interaction events may have different event features and event category labels.
For the dynamic interaction sequence described above, a dynamic interaction graph may be constructed. Specifically, each object in each interaction event of the dynamic interaction sequence is taken as a node of the dynamic interaction graph. Thus, one node corresponds to one object in one interaction event, but the same physical object may correspond to multiple nodes. For example, if user U1 purchased commodity M1 at time t1 and purchased commodity M2 at time t2, there are two interaction event feature groups (U1, M1, t1) and (U1, M2, t2), and nodes U1(t1) and U1(t2) are created for user U1 from these two interaction events, respectively. A node in the dynamic interaction graph can therefore be considered to correspond to the state of an interaction object in one interaction event.
For each node in the dynamic interaction graph, a connecting edge is constructed as follows: for any node i, assume it corresponds to an interaction event i (with interaction time t). In the dynamic interaction sequence, tracing back from interaction event i, i.e., toward times earlier than the interaction time t, the first interaction event j (with interaction time t⁻, where t⁻ is earlier than t) that also contains the object represented by node i is determined as the last interaction event in which that object participated. Then, connecting edges are established pointing from node i to both nodes of this last interaction event j. The two pointed-to nodes are also referred to as the associated nodes of node i.
The following description is made in conjunction with a specific example. FIG. 4 illustrates a dynamic interaction sequence and a dynamic interaction graph constructed from it, according to one embodiment. Specifically, the left side of FIG. 4 shows a dynamic interaction sequence organized in time order, exemplarily illustrating interaction events E_1, E_2, …, E_6 occurring at times t_1, t_2, …, t_6, where each interaction event contains the two interaction objects involved and the interaction time (the event features are omitted for clarity of illustration). The right side of FIG. 4 shows the dynamic interaction graph constructed from the dynamic interaction sequence on the left, where the two interaction objects of each interaction event serve as nodes. Taking node u(t_6) as an example, the construction of connecting edges is described below.
As shown, node u(t_6) represents one interaction object, David, of interaction event E_6. Tracing back from interaction event E_6, the first interaction event found that also includes the interaction object David is E_4; that is, E_4 is the last interaction event in which David participated. Correspondingly, the two nodes u(t_4) and v(t_4), corresponding to the two interaction objects of E_4, are the two associated nodes of node u(t_6). Thus, connecting edges are established from node u(t_6) to the two nodes u(t_4) and v(t_4) corresponding to E_4. Similarly, continuing to trace back from u(t_4) (corresponding to interaction event E_4), the last interaction event E_2 in which object u, namely David, participated can be found, and connecting edges are established from u(t_4) to the two nodes corresponding to E_2; tracing back from v(t_4), the last interaction event E_3 in which object v participated can be found, and connecting edges are established from v(t_4) to the two nodes corresponding to E_3. In this way, connecting edges are constructed between nodes, thereby forming the dynamic interaction graph of FIG. 4.
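The backtracking rule above can be sketched in code. The following is a minimal illustration, not taken from the patent itself: event features are omitted, the object and node identifiers are hypothetical, and each new node is linked to both nodes of the same object's previous event.

```python
from collections import namedtuple

# Hypothetical simplified event: two interaction objects and a time.
Event = namedtuple("Event", ["a", "b", "t"])

def build_dynamic_graph(events):
    """Build a dynamic interaction graph from a time-ordered event sequence.

    Returns nodes as (object, time) pairs and directed edges (src, dst),
    where each node points to BOTH nodes of the object's previous event.
    """
    last_pair = {}          # object -> node-id pair of its most recent event
    nodes, edges = [], []
    for ev in events:
        pair = []
        for obj in (ev.a, ev.b):
            pair.append(len(nodes))
            nodes.append((obj, ev.t))       # e.g. ("u", 6) plays the role of u(t6)
        for obj, nid in zip((ev.a, ev.b), pair):
            for assoc in last_pair.get(obj, ()):
                edges.append((nid, assoc))  # edge points backward in time
        for obj in (ev.a, ev.b):
            last_pair[obj] = pair           # this event is now the object's latest
    return nodes, edges
```

For a simplified version of FIG. 4 containing only E_2 = (u, r), E_4 = (u, v) and E_6 = (u, w), the node standing for u(t_6) ends up pointing at both nodes of E_4, matching the figure.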
The above describes the manner and process of constructing a dynamic interaction graph based on a dynamic interaction sequence. For the method of processing interaction events shown in FIG. 3, the process of constructing the dynamic interaction graph may be performed in advance or on the fly. Accordingly, in one embodiment, in step 31 a dynamic interaction graph is constructed on the fly from the dynamic interaction sequence, in the manner described above. In another embodiment, the dynamic interaction graph may be constructed in advance based on the dynamic interaction sequence; in step 31, the formed dynamic interaction graph is then read or received.
It can be understood that the dynamic interaction graph constructed in the above manner has strong extensibility, and can be very easily updated dynamically according to the newly added interaction events. Accordingly, step 31 may also include a process of updating the dynamic interaction graph.
In one embodiment, whenever a newly added interaction event is detected, the dynamic interaction graph is updated based on it. Specifically, in this embodiment, an existing dynamic interaction graph constructed based on an existing interaction sequence may be obtained, along with the newly added interaction event. Then, the two objects involved in the newly added interaction event, namely the first object and the second object, are added into the existing dynamic interaction graph as two newly added nodes. For each newly added node, it is determined whether associated nodes exist, the definition of associated nodes being as described above. If associated nodes exist, connecting edges pointing from the newly added node to the two associated nodes are added, thereby forming an updated dynamic interaction graph.
In another embodiment, newly added interaction events may be collected at predetermined time intervals, for example every hour, and the newly added interaction events within each interval formed into a newly added interaction sequence. Alternatively, whenever a predetermined number (e.g., 100) of newly added interaction events is detected, those events are formed into a newly added interaction sequence. The dynamic interaction graph is then updated based on the newly added interaction sequence.
Specifically, in this embodiment, an existing dynamic interaction graph constructed based on an existing interaction sequence may be obtained, and the newly added interaction sequence as described above, which includes a plurality of newly added interaction events, may be obtained. And then, regarding each newly added interaction event, taking the first object and the second object as two newly added nodes, and adding the two newly added nodes into the existing dynamic interaction graph. And for each newly added node, determining whether the newly added node has an associated node, and if so, adding a connecting edge pointing to the two associated nodes from the newly added node, thereby forming an updated dynamic interaction graph.
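This incremental update can be sketched as follows, assuming events are (object_a, object_b, time) triples and the existing graph state is carried in a node list, an edge list, and a per-object pointer to its latest event; all names are illustrative, not the patent's implementation. The same backtracking rule as in the initial construction applies to each newly added event.

```python
from collections import namedtuple

Event = namedtuple("Event", ["a", "b", "t"])

def update_graph(nodes, edges, last_pair, new_events):
    """Append newly added interaction events to an existing dynamic graph.

    nodes: list of (object, time); edges: list of (src, dst) node ids;
    last_pair: object -> node-id pair of that object's most recent event.
    """
    for ev in new_events:
        pair = []
        for obj in (ev.a, ev.b):
            pair.append(len(nodes))
            nodes.append((obj, ev.t))
        for obj, nid in zip((ev.a, ev.b), pair):
            # add connecting edges only if the object has associated nodes
            for assoc in last_pair.get(obj, ()):
                edges.append((nid, assoc))
        for obj in (ev.a, ev.b):
            last_pair[obj] = pair
    return nodes, edges, last_pair

# The same function serves both per-event updates and batched updates:
nodes, edges, last_pair = [], [], {}
update_graph(nodes, edges, last_pair, [Event("u", "r", 2)])       # one event
update_graph(nodes, edges, last_pair, [Event("u", "v", 4),
                                       Event("u", "w", 6)])       # a batch
```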
In step 31, a dynamic interaction graph constructed based on the dynamic interaction sequence is obtained. Next, in step 32, in the above dynamic interaction graph, node information related to the current interaction event to be analyzed is determined.
In one embodiment, the current interactivity event is an interactivity event for which the event category is unknown and thus yet to be analyzed. For example, in one example, user A initiates a transaction with object B, resulting in a current interaction event. Upon receipt of such a transaction request (e.g., upon user a requesting payment), the current interaction event is analyzed to determine an event category for the transaction event, such as whether it is a suspected fraud transaction, a risk level for the transaction, and so forth.
In another embodiment, the current interaction event may also be an event with an event category tag, such that its category is known. The current interaction event with the label is analyzed, and the method can be used for learning and training of a neural network model. This will be further described later.
For any of the above-mentioned current interaction events, the node information involved in it can be determined in the dynamic interaction graph. Specifically, two nodes corresponding to two participating objects of the current interaction event, referred to as a first node and a second node, and four associated nodes related to the event may be determined, where the four associated nodes include two associated nodes pointed to by the first node and two associated nodes pointed to by the second node. In other words, for a current interaction event to be analyzed, two participating nodes and four associated nodes for the event are determined from the dynamic interaction graph.
FIG. 5 illustrates a node relationship diagram relating to a current interaction event. As shown in FIG. 5, the two participating objects of the current interaction event correspond to node 1 and node 2, respectively. The last interaction event in which the object represented by node 1 participated is the first event, and the participating objects of the first event correspond to node 3 and node 4. It will be appreciated that one of nodes 3 and 4 corresponds to the same physical object as node 1, at a different time and in a different event. Accordingly, in the dynamic interaction graph, node 1 points to node 3 and node 4. Similarly, the last interaction event in which the object represented by node 2 participated is the second event, which corresponds to node 5 and node 6. Thus, for the current interaction event, the relevant nodes include the two participating nodes, node 1 and node 2, and the four associated nodes, nodes 3, 4, 5 and 6.
For clarity of illustration, the description refers to the example of FIG. 4. In one example, assume event E_6 is the current interaction event to be analyzed. As can be seen in FIG. 4, event E_6 occurs at time t_6, and its interaction objects are u and w. Therefore, the two nodes corresponding to the current interaction event, namely the first node and the second node, are u(t_6) and w(t_6), respectively. The first node u(t_6) points to the two associated nodes u(t_4) and v(t_4), and the second node w(t_6) points to the two associated nodes p(t_5) and w(t_5). Thus, the nodes related to the current interaction event E_6 include the two participating nodes u(t_6) and w(t_6), and the four associated nodes u(t_4), v(t_4), p(t_5) and w(t_5).
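Assuming edges are stored as (src, dst) pairs where src points to dst, collecting the two participating nodes and four associated nodes of a current event reduces to reading the out-neighbors of its two participating nodes. A hypothetical sketch:

```python
def related_nodes(edges, first_node, second_node):
    """Return the participating nodes and their associated nodes.

    edges: (src, dst) pairs where src points to dst, as in the dynamic
    interaction graph; each participating node has 0 or 2 out-neighbors.
    """
    out = {}
    for src, dst in edges:
        out.setdefault(src, []).append(dst)
    participants = [first_node, second_node]
    associated = out.get(first_node, []) + out.get(second_node, [])
    return participants, associated
```

With u(t_6) and w(t_6) numbered 4 and 5 and their associated nodes numbered 2, 3 and 6, 7, the function returns the six nodes of the example above.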
After the above-mentioned nodes related to the current interaction event are determined, in step 33, the corresponding subgraphs of the respective nodes in the dynamic interaction graph are determined. Specifically, the first node, the second node, and the four associated nodes are respectively used as root nodes, and respective corresponding subgraphs are determined in the dynamic interaction graph, so as to obtain a first subgraph, a second subgraph, and four associated subgraphs, respectively, where for any root node, the corresponding subgraph includes nodes in a predetermined range starting from the root node and arriving via a connecting edge.
In one embodiment, the nodes within the predetermined range may be nodes reachable through at most a preset number K of connecting edges. The number K is a preset hyper-parameter and can be selected according to the service situation. It will be appreciated that the preset number K represents the number of steps of the historical interaction events traced back from the root node onwards. The larger the number K, the longer the historical interaction information is considered.
In another embodiment, the nodes in the predetermined range may also be nodes whose interaction time falls within a predetermined time range. For example, starting from the interaction time of the root node, one may trace back a duration T (e.g., one day) and include the nodes that are reachable via connecting edges and whose interaction time falls within that duration.
In yet another embodiment, the predetermined range takes into account both the number of connected sides and the time range. In other words, the nodes in the predetermined range are nodes that are reachable at most through a preset number K of connecting edges and have interaction time within a predetermined time range.
The foregoing example is continued below in conjunction with a specific illustration. FIG. 6 shows an example of a subgraph in one embodiment. In the example of FIG. 6, assume that the first node of the current interaction event E_6, i.e., u(t_6) in FIG. 4, is taken as the root node, and its corresponding first subgraph is determined, where the subgraph is assumed to consist of nodes reachable via at most a preset number K=2 of connecting edges. Then, starting from the current root node u(t_6), traversal is performed along the direction of the connecting edges, and the nodes reachable via 2 connecting edges are shown as the dashed area in the figure. The nodes and connection relationships in this area constitute the subgraph corresponding to node u(t_6), i.e., the first subgraph.
Similarly, taking the other participating node (second node) of the current interaction event, e.g., w(t_6), as the root node and traversing likewise, a second subgraph is obtained. In addition, taking the four associated nodes respectively as root nodes and traversing the dynamic interaction graph along the connecting edges, four associated subgraphs can be obtained.
Thus, for the current interaction event, a first sub-graph corresponding to the first node, a second sub-graph corresponding to the second node, and four associated sub-graphs corresponding to the four associated nodes are obtained respectively.
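The subgraph extraction can be sketched as follows, assuming nodes are stored as (object, time) pairs and edges as (src, dst) pairs. Both criteria from the text are supported: the hop limit K and an optional time window; the breadth-first traversal and data layout are illustrative assumptions.

```python
def extract_subgraph(nodes, edges, root, max_hops, time_window=None):
    """Collect nodes reachable from `root` via at most `max_hops` connecting
    edges, optionally keeping only nodes within `time_window` before the root."""
    out = {}
    for src, dst in edges:
        out.setdefault(src, []).append(dst)
    root_time = nodes[root][1]
    sub, frontier = {root}, [root]
    for _ in range(max_hops):
        nxt = []
        for nid in frontier:
            for dst in out.get(nid, ()):
                if time_window is not None and nodes[dst][1] < root_time - time_window:
                    continue                 # outside the predetermined time range
                if dst not in sub:
                    sub.add(dst)
                    nxt.append(dst)
        frontier = nxt
    return sub
```

With a simplified FIG. 4 graph, K=2 from u(t_6) reaches down to the nodes of E_2, matching the dashed area of FIG. 6; adding a time window additionally prunes nodes that are too old.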
Next, in step 34, the first sub-graph and the second sub-graph are respectively input to a first neural network model to be processed, so as to respectively obtain a corresponding first implicit vector and a corresponding second implicit vector, where the first neural network determines the implicit vector corresponding to the input sub-graph according to the node input feature X of each node in the input sub-graph and the directional relationship of the connection edges between the nodes, where the node input feature X includes the event feature of the event in which the node is located.
For the purpose of distinction, the node input features X processed in the first neural network model are also referred to as first input features. As mentioned before, the first input feature X comprises an event feature f of the event in which the node is located. For example, when the event is a transaction event, the event characteristics may include a transaction type, a transaction amount, a transaction channel, and the like.
In one embodiment, the first input features X may also include attribute features a of the node itself. For example, where a node represents a user, the node attribute characteristics may include attribute characteristics of the user, such as age, occupation, education, locale, and so forth; in the case where the nodes represent goods, the node attribute characteristics may include attribute characteristics of the goods, such as goods category, time on shelf, sales volume, and the like. And under the condition that the node represents other interactive objects, the original node attribute characteristics can be correspondingly acquired.
The processing of the first neural network model is described below.
The first neural network model may employ various neural network models capable of processing sequence information, such as an RNN neural network model, an LSTM neural network model, a Transformer neural network model, and the like.
In one embodiment, the first neural network model is a Transformer-based neural network model. Under the condition, a node sequence is formed according to the directional relation between nodes in the input subgraph, and the node sequence is input into a Transformer neural network model to obtain an implicit vector corresponding to the input subgraph.
In another embodiment, the first neural network is an LSTM-based neural network model. Under the condition, the LSTM neural network model sequentially iterates and processes each node according to the directional relation among the nodes in the input subgraph to obtain the implicit vector corresponding to the input subgraph. The specific process is described below in conjunction with an LSTM neural network.
In particular, the first neural network model may include at least one LSTM layer. When the first subgraph is input into the first neural network model, the LSTM layer therein takes each node in the first subgraph as a target node in turn, determines the implicit vector and intermediate vector of the target node according to the first input feature X of the target node and the respective intermediate vectors and implicit vectors of the two associated nodes pointed to by the target node, and iterates in this way until the implicit vector of the root node of the first subgraph, namely the first node, is obtained.
FIG. 7 shows a schematic diagram of the operation of the LSTM layer in the first neural network model. Suppose node Q points to node J_1 and node J_2. As shown in FIG. 7, at time T, the LSTM layer processes nodes J_1 and J_2 to obtain their representation vectors H1 and H2, each including an intermediate vector and an implicit vector; at the next time T+, the LSTM layer obtains the representation vector H_Q of node Q based on the first input feature X_Q of node Q and the previously obtained representation vectors H1 and H2 of J_1 and J_2. It is understood that the representation vector of node Q can in turn be used, at a subsequent time, to obtain the representation vector of a node pointing to node Q, thus implementing an iterative process.
This process is described in conjunction with the first subgraph of FIG. 6. For the node u(t_2) at the lowest level in the graph, the nodes it points to are not considered within this first subgraph; that is, u(t_2) is treated as having no pointed-to nodes. In such a case, the intermediate vectors c and implicit vectors h of the two nodes it would point to are generated by padding with a default value (e.g., 0). The LSTM layer then determines the intermediate vector c(u(t_2)) and implicit vector h(u(t_2)) of node u(t_2) based on its first input feature X(u(t_2)) and the two padded intermediate vectors c and two implicit vectors h. The same processing is performed for the lowest-level node r(t_2), obtaining the corresponding c(r(t_2)) and h(r(t_2)).
Node u(t_4) points to nodes u(t_2) and r(t_2). Therefore, the LSTM layer determines the intermediate vector c(u(t_4)) and implicit vector h(u(t_4)) of node u(t_4) based on its first input feature X(u(t_4)) and the respective intermediate and implicit vectors of the two nodes it points to, i.e., c(u(t_2)), h(u(t_2)), c(r(t_2)) and h(r(t_2)).
Iterating layer by layer in this way yields the intermediate vector and implicit vector of the root node of the first subgraph, namely the first node u(t_6).
The internal structure and algorithms of the LSTM layer to achieve the above iterative process are described below.
FIG. 8 illustrates the structure of the LSTM layer in the first neural network model according to one embodiment. In the example of FIG. 8, the target node currently being processed is denoted z(t), and X_z(t) represents the first input feature of that node.

Assume the two nodes pointed to by the target node z(t) are the first associated node j_1 and the second associated node j_2. Then c_j1 and h_j1 denote the intermediate vector and implicit vector of the first associated node j_1, and c_j2 and h_j2 denote the intermediate vector and implicit vector of the second associated node j_2, respectively.
The LSTM layer performs the following operations on the first input feature X, the intermediate vector, and the implicit vector input therein.
The first input feature X_z(t), the implicit vector h_j1 of the first associated node j_1, and the implicit vector h_j2 of the second associated node j_2 are respectively input into a first transformation function and a second transformation function having the same algorithm but different parameters, to obtain a first transformation vector f1_z(t) and a second transformation vector f2_z(t).

The first and second transformation functions may employ various operations, such as first linearly transforming the input vectors and then applying an activation function. For example, in one example, the first and second transformation functions may be calculated using the following formulas (1) and (2), respectively:

f1_z(t) = σ(W1 · [X_z(t), h_j1, h_j2] + b1)    (1)

f2_z(t) = σ(W2 · [X_z(t), h_j1, h_j2] + b2)    (2)

In formulas (1) and (2), σ is an activation function, for example a sigmoid function; W1 and W2 are linear transformation matrices, and b1 and b2 are bias parameters. It can be seen that the algorithms of formulas (1) and (2) are the same, only the parameters differ. Through the above transformation functions, the first transformation vector f1_z(t) and the second transformation vector f2_z(t) are obtained.
Of course, in other examples, similar but different transformation functions may be employed, such as selecting different activation functions, modifying the form and number of parameters in the above formula, and so forth.
Then, the first transformation vector f1_z(t) and the second transformation vector f2_z(t) are respectively combined with the intermediate vector c_j1 of the first associated node j_1 and the intermediate vector c_j2 of the second associated node j_2, and a combined vector is obtained based on the result of the operation.

Specifically, in one example, as shown in FIG. 8, the combining operation may be: bitwise multiplying the first transformation vector f1_z(t) with the intermediate vector c_j1 of the first associated node (shown by the ⊙ symbol in the figure) to obtain a vector v1; bitwise multiplying the second transformation vector f2_z(t) with the intermediate vector c_j2 of the second associated node to obtain a vector v2; and then combining the vectors v1 and v2, for example by adding and summing them, to obtain the combined vector.
In addition, the first input feature X_z(t) of the node, the implicit vector h_j1 of the first associated node j_1, and the implicit vector h_j2 of the second associated node j_2 are also respectively input into a third transformation function and a fourth transformation function, to obtain a third transformation vector r_z(t) and a fourth transformation vector o_z(t).

Specifically, in the example shown in FIG. 8, the third transformation function may first obtain vectors i_z(t) and u_z(t), and then bitwise multiply i_z(t) and u_z(t) to obtain the third transformation vector r_z(t), namely:

r_z(t) = i_z(t) ⊙ u_z(t)    (3)

where ⊙ denotes bitwise multiplication.

More specifically, i_z(t) may employ a function of similar form but different parameters to the first transformation function, for example:

i_z(t) = σ(W_i · [X_z(t), h_j1, h_j2] + b_i)    (4)

and u_z(t) may be calculated according to the following formula:

u_z(t) = tanh(W_u · [X_z(t), h_j1, h_j2] + b_u)    (5)

The fourth transformation function may adopt a function of similar form but different parameters to the first transformation function, thereby obtaining the fourth transformation vector o_z(t).
Then, based on the combined vector and the third transformation vector r_z(t), the intermediate vector c_z(t) of the target node z(t) is determined.
More specifically, in one example, the combined vector and the third transformation vector may be summed to obtain the intermediate vector c_z(t) of z(t). In other examples, other combination methods, such as weighted summation or bitwise multiplication, may also be used, with the combined result serving as the intermediate vector c_z(t) of z(t).
Furthermore, based on the intermediate vector c_z(t) of node z(t) thus obtained and the fourth transformation vector o_z(t), the implicit vector h_z(t) of node z(t) is determined.
In the specific example shown in FIG. 8, the intermediate vector c_z(t) may be subjected to a tanh function operation and then bitwise multiplied with the fourth transformation vector o_z(t), yielding the implicit vector h_z(t) of node z(t), namely:

h_z(t) = o_z(t) ⊙ tanh(c_z(t))    (6)
Thus, according to the structure and algorithm shown in FIG. 8, the LSTM layer determines the intermediate vector c_z(t) and implicit vector h_z(t) of the currently processed target node z(t) based on its first input feature X and the respective intermediate vectors and implicit vectors of the two associated nodes j_1 and j_2 pointed to by the node.
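The per-node computation of formulas (1)-(6) can be sketched numerically. This is an illustrative NumPy version, not the patent's implementation; the gate names, weight shapes, and the concatenation order [X; h_j1; h_j2] are assumptions consistent with the description above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_node_step(p, x, c_j1, h_j1, c_j2, h_j2):
    """One LSTM-layer iteration for target node z(t) with two associated nodes."""
    z = np.concatenate([x, h_j1, h_j2])     # shared input of all transformations
    f1 = sigmoid(p["W1"] @ z + p["b1"])     # (1) first transformation vector
    f2 = sigmoid(p["W2"] @ z + p["b2"])     # (2) second transformation vector
    i = sigmoid(p["Wi"] @ z + p["bi"])      # (4)
    u = np.tanh(p["Wu"] @ z + p["bu"])      # (5)
    r = i * u                               # (3) third transformation vector
    o = sigmoid(p["Wo"] @ z + p["bo"])      # fourth transformation vector
    c = f1 * c_j1 + f2 * c_j2 + r           # combined vector summed with r
    h = o * np.tanh(c)                      # (6)
    return c, h
```

For leaf nodes such as u(t_2) in FIG. 6, c_j1, h_j1, c_j2 and h_j2 would simply be zero-padded before calling the step.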
In one embodiment, in the process of iteratively processing each target node z (t) to determine the intermediate vector and the implicit vector thereof, a time difference Δ between the interaction time corresponding to the current processed target node z (t) and the interaction time corresponding to the pointed node is further introduced.
In operation, in one embodiment, the time difference Δ may also be incorporated into the first input feature X as part thereof; that is, the first input feature may also include the above-described time difference Δ. In another embodiment, the first input feature and the time difference Δ may be treated as two parallel input features. In this case, the form of the first through fourth transformation functions may be maintained, introducing only an additional parameter related to the time difference on the basis of the original form. For example, for the first and second transformation functions described above, a parameter regarding the time difference Δ may be introduced on the basis of formulas (1) and (2), resulting in the following transformation functions:

f1_z(t) = σ(W1 · [X_z(t), h_j1, h_j2] + W1_Δ · Δ + b1)    (1')

f2_z(t) = σ(W2 · [X_z(t), h_j1, h_j2] + W2_Δ · Δ + b2)    (2')
other transformation functions may be similarly modified to factor in the time difference Δ.
Through the LSTM layer shown in FIG. 8, each node in the first subgraph is iteratively processed in turn, so that the intermediate vector and implicit vector of the root node, namely the first node, can be obtained. In one embodiment, the implicit vector thus obtained may serve as the first implicit vector output by the first neural network model for the first subgraph.
According to one embodiment, to further improve the effect, the first neural network model may include a plurality of LSTM layers, wherein the implicit vector of a node determined by a previous LSTM layer is input to the next LSTM layer as the first input feature of that node. That is, each LSTM layer still iteratively processes each node, determining the implicit vector and intermediate vector of the currently processed target node i according to the first input feature of target node i and the respective intermediate vectors and implicit vectors of the two associated nodes it points to; only the bottom LSTM layer uses the original event feature (optionally together with node attribute features and/or the time difference) of target node i as the first input feature, while each subsequent LSTM layer uses the implicit vector h_i of target node i determined by the previous LSTM layer as the first input feature. In one embodiment, the plurality of LSTM layers are stacked in the manner of a residual network to form the first neural network model.
In the case of a first neural network model having multiple LSTM layers, it will be appreciated that each LSTM layer may determine an implicit vector for the first node as the root node. In one embodiment, the first neural network model synthesizes implicit vectors of the first node output by each of the plurality of LSTM layers to obtain a final implicit vector of the first node, that is, a first implicit vector. More specifically, the implicit vectors output by the LSTM layers may be weighted and combined to obtain the final first implicit vector. The weight of the weighted combination can be simply set to a weight factor corresponding to each layer, and the magnitude of the weight factor is adjusted through training. Alternatively, the weighting factor may be determined by a more complex attention (attention) mechanism.
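A minimal sketch of the weighted combination of per-layer outputs follows; the softmax normalization of the trainable weight factors is an assumption for illustration, not specified by the text.

```python
import numpy as np

def combine_layer_outputs(h_per_layer, weight_factors):
    """Fuse the root node's implicit vectors from several LSTM layers.

    h_per_layer: list of implicit vectors, one per LSTM layer;
    weight_factors: trainable scalars (or attention scores), one per layer.
    """
    w = np.exp(weight_factors) / np.exp(weight_factors).sum()  # normalize
    return sum(wi * hi for wi, hi in zip(w, h_per_layer))
```

With equal weight factors this reduces to averaging the layer outputs; training (or an attention mechanism) would shift the weights toward the more informative layers.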
In another embodiment, the first neural network model may further use an implicit vector of the first node output by the last LSTM layer of the plurality of LSTM layers as a final first implicit vector.
Thus, in various ways, the first neural network model obtains the implicit vector of the first node, i.e., the first implicit vector, based on the first subgraph with the first node as its root node. Since the first subgraph reflects the time-sequential interaction history information related to the interaction object corresponding to the first node, the first implicit vector thus obtained expresses not only the characteristics of the interaction object (e.g., David) corresponding to the first node (e.g., u(t_6) in FIGS. 4 and 6) itself, but also the influence of that interaction object in past interaction events, thereby comprehensively characterizing the interaction object.
Similar to the above process, when the second sub-graph corresponding to the second node is input into the first neural network model, the first neural network model sequentially iterates the nodes according to the first input features X (including event features) of the nodes in the second sub-graph and the directional relationship of the connection edges between the nodes, thereby determining the implicit vector corresponding to the second node or the second sub-graph, that is, the second implicit vector. Specifically, under the condition of adopting the LSTM neural network model, the LSTM layer of the first neural network model sequentially takes each node in the second sub-graph as a target node, determines an implicit vector and an intermediate vector of the target node according to a first input feature X of the target node, and respective intermediate vectors and implicit vectors of two associated nodes pointed by the target node, and sequentially iterates the above steps until an implicit vector of a root node, i.e., a second node, of the second sub-graph is obtained. The process is similar to the process for processing the first sub-graph and is not repeated.
Thus, a first implicit vector corresponding to the first node and a second implicit vector corresponding to the second node are obtained through the first neural network model based on the first sub-graph and the second sub-graph respectively.
On the other hand, in step 35, the four associated subgraphs obtained in step 33 are respectively input into a second neural network model to respectively obtain four associated implicit vectors; and the second neural network determines an implicit vector corresponding to the input subgraph according to second input features Y of each node in the input subgraph and the directional relation of connecting edges among the nodes, wherein the second input features comprise event category labels of events where the nodes are located.
It can be seen that the processing logic of the second neural network model is similar to that of the first neural network model. The difference is that when the first neural network model processes each node in a subgraph, the features it relies on (the first input features X) comprise the event features of the event in which the node is located, whereas when the second neural network model processes each node, the features it relies on (the second input features Y) comprise the event category label of the event in which the node is located. Accordingly, this requires that the events in which the nodes of the associated subgraphs are located have known event category labels.
More specifically, the second neural network model also includes an LSTM layer. When an associated subgraph is input into the second neural network model, the LSTM layer takes each node in the subgraph in turn as a target node, determines the implicit vector and intermediate vector of the target node according to the second input feature Y of the target node and the respective intermediate vectors and implicit vectors of the two associated nodes pointed to by the target node, and iterates in this way until the implicit vector of the root node of the associated subgraph is obtained.
FIG. 9 illustrates the structure of the LSTM layer in the second neural network model according to one embodiment. In this embodiment, the structure and processing logic of this LSTM layer are identical to those of the LSTM layer of the first neural network model shown in fig. 8, except that the first input features X in fig. 8 are replaced by second input features Y, where Y includes the event category label of the event in which the node is located. In operation, the event category label may first be subjected to embedding processing to obtain an embedded vector expression of the label as the second input feature Y. In one embodiment, in addition to the event category label, the second input feature Y may further include attribute features of the node itself and/or the time difference from the events in which the associated nodes are located.
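The embedding step just described can be sketched as follows. The table size, dimensions, and helper name are assumptions; in practice the embedding table would be learned together with the rest of the model.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES, EMB_DIM = 3, 4  # assumed label vocabulary and embedding sizes
label_embedding = rng.normal(size=(NUM_CLASSES, EMB_DIM))  # trainable in practice

def second_input_feature(label_id, node_attrs=None, time_delta=None):
    """Build Y: the label's embedded vector, optionally concatenated with the
    node's attribute features and the time difference to the associated events."""
    parts = [label_embedding[label_id]]
    if node_attrs is not None:
        parts.append(np.asarray(node_attrs, dtype=float))
    if time_delta is not None:
        parts.append(np.array([time_delta], dtype=float))
    return np.concatenate(parts)
```

For example, `second_input_feature(2, node_attrs=[0.5, 0.5], time_delta=3.0)` yields a 7-dimensional Y (4 embedding dimensions + 2 attributes + 1 time difference).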
The second neural network model may also include a plurality of the LSTM layers shown in fig. 9, in which case the implicit vectors output by the plurality of layers are fused to obtain the associated implicit vector corresponding to the associated subgraph.
In one embodiment, the second neural network model may have the exact same structure (e.g., number of layers) and algorithmic logic (e.g., form of transformation functions) as the first neural network model, except that its network parameters are different.
In this way, the four associated subgraphs obtained in step 33 are respectively input into the second neural network model, so that four associated implicit vectors corresponding to the four associated nodes can be respectively obtained.
Next, in step 36, the event category of the current interactive event is determined according to the first implicit vector H1 and the second implicit vector H2 obtained by the first neural network model in step 34, and the four associated implicit vectors G1, G2, G3, and G4 obtained by the second neural network model in step 35.
In one embodiment, the first and second implicit vectors H1 and H2 and the four associated implicit vectors G1-G4 are input into a calculation function with predetermined algorithmic logic, and the event category of the current interaction event is determined according to the function's result.
In another embodiment, the six implicit vectors are processed with a further neural network to analyze the current interaction event.
In one example, the first implicit vector H1, the second implicit vector H2 and the four associated implicit vectors G1-G4 may be fused using a fully-connected neural network to obtain an event representation vector of the current interaction event. Different fusion manners may be adopted in different embodiments: direct manners such as splicing (concatenating) the six implicit vectors or combining them with weights, or more complex manners, for example weighted-combining the first and second implicit vectors into a first combination vector, weighted-combining the four associated implicit vectors into a second combination vector, and splicing the two combination vectors. The specific fusion manner can be set according to the service scenario of the current interaction event.
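Two of the fusion manners mentioned above can be sketched as follows. In practice a fully-connected layer would further map the spliced result to the event representation vector; the function names and the equal-weight choice for the associated vectors are illustrative assumptions.

```python
import numpy as np

def fuse_splice(h1, h2, assoc):
    # Direct fusion: splice (concatenate) all six implicit vectors.
    return np.concatenate([h1, h2, *assoc])

def fuse_combined(h1, h2, assoc, w=0.5):
    # More complex fusion: weighted-combine the two participant vectors and
    # the four associated vectors separately, then splice the two combinations.
    first = w * h1 + (1 - w) * h2
    second = sum(assoc) / len(assoc)  # equal weights assumed here
    return np.concatenate([first, second])
```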
Thus, the event representation vector of the current interaction event is obtained through the fully-connected neural network. It can be understood that this vector comprehensively considers the implicit vectors of the two participating nodes and the four associated nodes of the current interaction event, and that the implicit vector of each node in turn reflects the time-sequence information and the event feature/label information of historical interaction events. The event representation vector therefore comprehensively reflects the influence of historical interaction events on the current nodes and the current event, and contains rich and deep feature information.
Next, an event category of the current interaction event may be predicted based on the event representation vector using a classifier.
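A minimal sketch of such a classifier, assuming a linear softmax head over the event representation vector (the patent text does not fix the classifier's concrete form; names and shapes are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(event_vec, W, b):
    """Return the predicted event category index and the per-category
    probabilities for a given event representation vector."""
    probs = softmax(W @ event_vec + b)
    return int(np.argmax(probs)), probs
```

With three categories (e.g. risk levels), `W` has shape `(3, len(event_vec))` and the returned probabilities sum to one.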
At this time, the first neural network model, the second neural network model, the fully-connected neural network and the classifier together form a comprehensive model.
FIG. 10 illustrates a schematic diagram of the structure of the integrated model in one embodiment. As shown in fig. 10, in the integrated model, the first neural network model obtains the first and second implicit vectors from the subgraphs corresponding to the two participating nodes of the current interaction event, and the second neural network model obtains the four associated implicit vectors from the subgraphs corresponding to the four associated nodes. The fully-connected neural network is connected to the first and second neural network models; it receives the first implicit vector, the second implicit vector and the four associated implicit vectors and fuses them into an event representation vector. The classifier is connected to the fully-connected neural network and predicts the event category of the current interaction event from the event representation vector.
In one embodiment, the current interaction event is an interaction event whose event category is unknown. In this case, the event category of the current interaction event can be analyzed through step 36 to determine the subsequent processing. For example, in one example, the current interaction event is a transaction initiated by user A with object B. By analyzing this transaction event, its category can be predicted, for example whether the transaction is suspected to be fraudulent, or the risk level of the transaction. Based on the predicted category, it may be decided whether to allow the transaction, that is, whether to approve the payment request of user A.
In another embodiment, the current interaction event may also be an event with an event category label. In that case, the aforementioned steps 31-36 may be part of the learning and training process of the neural network models, and the training process further includes the following steps.
A current category label for the current interaction event may be obtained. The obtaining mode and meaning of the category label are as described above, and are not repeated.
Then, a prediction loss is determined based on the event category determined at step 36 and the obtained current category label. The prediction loss may be computed using a loss function such as the L2 error or cross entropy.
The first and second neural network models may then be trained based on the prediction loss. Specifically, the parameters in the two networks, for example the parameters in each transformation function, may be adjusted by gradient descent, back propagation and the like, so as to update the models until the accuracy of the category prediction for the current interaction event reaches a certain requirement. This approach is suitable for the case where step 36 is implemented by a predetermined calculation function, or where the fully-connected neural network and the classifier have already been trained.
In another embodiment, the fully-connected neural network used to obtain the event representation vector and the classifier used to perform the classification are also to be trained. In this case, the first neural network model, the second neural network model, the fully-connected neural network and the classifier, that is, the entire integrated model shown in fig. 10, may be trained jointly based on the prediction loss determined above. Specifically, the parameters of each part of the integrated model can be adjusted and updated to reduce the prediction loss until the accuracy of the category prediction for the current interaction event reaches a certain requirement. The integrated model obtained by training can then directly perform category analysis on events to be analyzed.
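The per-step parameter update described above can be illustrated on a toy linear classifier with a cross-entropy prediction loss; in the joint setting the same gradient would also flow back into the two neural network models and the fully-connected network. All names and shapes here are assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(W, b, event_vec, label, lr=0.1):
    """One gradient-descent step on a linear softmax classifier, updating
    W and b in place and returning the cross-entropy loss for this sample."""
    probs = softmax(W @ event_vec + b)
    loss = -np.log(probs[label] + 1e-12)  # cross-entropy prediction loss
    grad = probs.copy()
    grad[label] -= 1.0                    # d loss / d logits for softmax + CE
    W -= lr * np.outer(grad, event_vec)   # back-propagate into the parameters
    b -= lr * grad
    return loss
```

Repeating the step on the same labeled event drives the loss down, mirroring the "adjust parameters until the prediction accuracy reaches a certain requirement" criterion.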
In summary, in the solution of the embodiments of the present specification, a dynamic interaction graph is constructed based on a dynamic interaction sequence; the graph reflects the time-sequence relationship of the interaction events and the interaction effects transmitted between interaction objects through those events. For a current interaction event to be analyzed, the participating nodes of the event and their associated nodes are obtained from the dynamic interaction graph, the subgraphs rooted at each participating node and each associated node are extracted, and these subgraphs are respectively input into neural network models to obtain the implicit vector expression of each participating node and each associated node. The implicit vectors obtained in this way incorporate the event features or category labels of historical interaction events and the influence of those events on the nodes, and can therefore comprehensively express the deep features of the nodes. Based on the implicit vectors of the participating nodes and associated nodes related to the current interaction event, the current interaction event can be expressed and analyzed more comprehensively and accurately.
According to an embodiment of another aspect, an apparatus for processing interactive sequence data is provided, which may be deployed in any device, platform or cluster of devices having computing and processing capabilities. FIG. 11 shows a schematic block diagram of an apparatus for processing interactivity events according to one embodiment. As shown in fig. 11, the processing device 110 includes:
an interaction graph obtaining unit 111 configured to obtain a dynamic interaction graph constructed according to a dynamic interaction sequence, wherein the dynamic interaction sequence includes a plurality of interaction events arranged in a time sequence, and each interaction event includes two objects where an interaction behavior occurs, an event feature and an interaction time; at least some of the plurality of interactivity events have an event category tag; the dynamic interaction graph comprises a plurality of nodes representing each object in each event, wherein any node i points to two associated nodes through a connecting edge, and the two associated nodes are two nodes corresponding to the last interaction event in which the object represented by the node i participates;
a node determining unit 112, configured to determine, in the dynamic interaction graph, a first node and a second node corresponding to a current interaction event to be analyzed, and four related associated nodes, where the four associated nodes include the two associated nodes pointed to by the first node and the two associated nodes pointed to by the second node;
a subgraph determining unit 113 configured to determine, by taking the first node, the second node, and the four associated nodes as current root nodes, respective corresponding current subgraphs in the dynamic interaction graph, so as to obtain a first subgraph, a second subgraph, and four associated subgraphs, respectively, where the current subgraph includes nodes in a predetermined range starting from the current root node and arriving via a connecting edge;
a first processing unit 114, configured to input the first sub-graph and the second sub-graph into a first neural network model respectively, and obtain a corresponding first implicit vector and a corresponding second implicit vector respectively; the first neural network determines an implicit vector corresponding to the input subgraph according to first input features of all nodes in the input subgraph and the directional relation of connecting edges among the nodes, wherein the first input features comprise event features of events where the nodes are located;
the second processing unit 115 is configured to input the four association subgraphs into a second neural network model respectively to obtain four association hidden vectors respectively; the second neural network determines an implicit vector corresponding to the input subgraph according to second input features of each node in the input subgraph and the directional relation of connecting edges among the nodes, wherein the second input features comprise event category labels of events where the nodes are located;
a prediction unit 116 configured to predict an event class of the current interactive event according to the first implicit vector, the second implicit vector, and four associated implicit vectors.
In one embodiment, the interaction graph obtaining unit 111 is configured to:
acquiring an existing dynamic interaction diagram constructed based on an existing interaction sequence;
acquiring a newly added interaction event;
taking a first object and a second object related to the newly added interaction event as two newly added nodes, and adding the two newly added nodes into the existing dynamic interaction graph;
and for each newly added node, if two associated nodes exist, adding a connecting edge pointing to the two associated nodes from the newly added node.
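The incremental update steps above can be sketched with a minimal dict-based structure; the class and field names (`DynamicGraph`, `last_pair`, `edges`) are illustrative assumptions, not terms from the specification.

```python
class DynamicGraph:
    def __init__(self):
        self.last_pair = {}  # object id -> node pair of its last interaction event
        self.edges = {}      # node id -> (associated node, associated node) or None
        self._next = 0       # next free node id

    def add_event(self, obj_a, obj_b):
        """Add two new nodes for a new interaction event; each new node points
        to the two nodes of the last event its object participated in, if any."""
        node_a, node_b = self._next, self._next + 1
        self._next += 2
        self.edges[node_a] = self.last_pair.get(obj_a)  # None if no prior event
        self.edges[node_b] = self.last_pair.get(obj_b)
        # the new node pair becomes the latest event for both objects
        self.last_pair[obj_a] = (node_a, node_b)
        self.last_pair[obj_b] = (node_a, node_b)
        return node_a, node_b
```

For example, after events (A, B) then (A, C), A's second node has connecting edges pointing to both nodes of the (A, B) event, while C's node has none.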
According to one embodiment, the nodes within the predetermined range reached via the connecting edge in the subgraph determined by the subgraph determination unit 113 include: nodes reached via connecting edges within a preset number K; and/or nodes which are reachable through the connecting edge and have interaction time within a preset time range.
According to one embodiment, the first input features of each node processed by the first neural network model further include attribute features of the node, and/or a time difference between a first interaction time of an interaction event in which the node is located and a second interaction time of an interaction event in which two associated nodes are located.
In a particular embodiment, the interaction event is a transaction event, and the event characteristics include at least one of: transaction type, transaction amount, transaction channel; the event category label is a transaction risk level label.
According to one embodiment, the first neural network comprises an LSTM layer for:
sequentially taking each node in the first subgraph as a target node, and determining the implicit vector and intermediate vector of the target node according to the first input feature of the target node and the respective intermediate vectors and implicit vectors of the two associated nodes pointed to by the target node, until the implicit vector of the first node is obtained.
Further, in a specific embodiment, the two associated nodes pointed to by the target node are a first associated node and a second associated node; the LSTM layer is specifically used for:
inputting the first input feature of the target node, the implicit vector of the first associated node and the implicit vector of the second associated node into a first transformation function and a second transformation function which have the same algorithm and different parameters respectively to obtain a first transformation vector and a second transformation vector respectively;
performing a combination operation on the first transformation vector and the intermediate vector of the first associated node, and on the second transformation vector and the intermediate vector of the second associated node, respectively, and obtaining a combined vector based on the operation results;
inputting the first input feature of the target node, the implicit vector of the first associated node and the implicit vector of the second associated node into a third transformation function and a fourth transformation function respectively to obtain a third transformation vector and a fourth transformation vector respectively;
determining the intermediate vector of the target node based on the combined vector and the third transformation vector;
determining the implicit vector of the target node based on the intermediate vector of the target node and the fourth transformation vector.
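The five sub-steps above can be sketched as follows. The concrete form of the four transformation functions is an assumption (affine maps followed by sigmoid or tanh, a common choice in LSTM-style cells); the specification only fixes their inputs and roles, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4  # state dimension (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_transform():
    """Affine map over [x, h1, h2]; each transformation function gets the same
    algorithm but its own parameters, as the specification requires."""
    W = rng.normal(scale=0.1, size=(D, 3 * D))
    b = np.zeros(D)
    return lambda x, h1, h2: W @ np.concatenate([x, h1, h2]) + b

T1, T2, T3, T4 = (make_transform() for _ in range(4))

def node_update(x, h1, c1, h2, c2):
    f1 = sigmoid(T1(x, h1, h2))  # first transformation vector (gate-like)
    f2 = sigmoid(T2(x, h1, h2))  # second transformation vector (same form, own params)
    v = f1 * c1 + f2 * c2        # combined vector from both associated-node states
    r3 = np.tanh(T3(x, h1, h2))  # third transformation vector
    r4 = sigmoid(T4(x, h1, h2))  # fourth transformation vector (output-like gate)
    c = v + r3                   # intermediate vector of the target node
    h = r4 * np.tanh(c)          # implicit vector of the target node
    return h, c
```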
According to one embodiment, the first neural network model comprises a plurality of LSTM layers, wherein the implicit vector of the target node determined by the previous LSTM layer is input to the next LSTM layer as the first input feature of the target node.
Further, in an embodiment, the first neural network model synthesizes implicit vectors of the first node output by each of the plurality of LSTM layers to obtain the first implicit vector.
Alternatively, in another embodiment, the first neural network model takes an implicit vector of the first node output by a last LSTM layer of the plurality of LSTM layers as the first implicit vector.
According to one implementation, the first neural network model and the second neural network model are neural network models with the same structure and algorithm and different parameters.
In one embodiment, the prediction unit 116 is configured to:
fusing the first implicit vector, the second implicit vector and the four associated implicit vectors by utilizing a fully-connected neural network to obtain an event representation vector of the current interaction event;
and determining the event category of the current interaction event according to the event representation vector by utilizing a classifier.
According to one embodiment, each neural network model is trained by the model training unit 117. In different embodiments, the model training unit 117 may be located outside or inside the apparatus 110.
In one embodiment, the model training unit 117 is configured to:
acquiring a current category label of the current interaction event;
determining a predicted loss based at least on the determined event category and the current category label;
and jointly training the first neural network model, the second neural network model, the fully-connected neural network and the classifier according to the prediction loss.
In another embodiment, the model training unit 117 is configured to:
acquiring a current category label of the current interaction event;
determining a predicted loss based at least on the determined event category and the current category label;
training the first neural network model and a second neural network model according to the predicted loss.
Through the above apparatus, the participating nodes and associated nodes of an interaction event are processed with neural network models based on the dynamic interaction graph, so that the interaction event can be comprehensively analyzed and predicted.
According to an embodiment of another aspect, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in connection with fig. 3.
According to an embodiment of yet another aspect, there is also provided a computing device comprising a memory and a processor, the memory having stored therein executable code, the processor, when executing the executable code, implementing the method described in connection with fig. 3.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The above-mentioned embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present invention should be included in the scope of the present invention.

Claims (30)

1. A computer-implemented method of processing an interaction event, the method comprising:
acquiring a dynamic interaction diagram constructed according to a dynamic interaction sequence, wherein the dynamic interaction sequence comprises a plurality of interaction events which are arranged according to a time sequence, and each interaction event comprises two objects with interaction behaviors, an event characteristic and interaction time; at least some of the plurality of interactivity events have an event category tag; the dynamic interaction graph comprises a plurality of nodes representing each object in each event, wherein any node i points to two associated nodes through a connecting edge, and the two associated nodes are two nodes corresponding to the last interaction event in which the object represented by the node i participates;
determining, in the dynamic interaction graph, a first node and a second node corresponding to a current interaction event to be analyzed, and four related associated nodes, wherein the four associated nodes comprise the two associated nodes pointed to by the first node and the two associated nodes pointed to by the second node;
respectively taking the first node, the second node and the four associated nodes as current root nodes, and determining corresponding current subgraphs in the dynamic interaction graph so as to respectively obtain a first subgraph, a second subgraph and four associated subgraphs, wherein the current subgraph comprises nodes which start from the current root node and reach a predetermined range through a connecting edge;
inputting the first subgraph and the second subgraph into a first neural network model respectively to obtain a corresponding first hidden vector and a corresponding second hidden vector respectively; the first neural network determines an implicit vector corresponding to the input subgraph according to first input features of all nodes in the input subgraph and the directional relation of connecting edges among the nodes, wherein the first input features comprise event features of events where the nodes are located;
inputting the four association subgraphs into a second neural network model respectively to obtain four association hidden vectors respectively; the second neural network determines an implicit vector corresponding to the input subgraph according to second input features of each node in the input subgraph and the directional relation of connecting edges among the nodes, wherein the second input features comprise event category labels of events where the nodes are located;
and predicting the event category of the current interaction event according to the first implicit vector, the second implicit vector and the four associated implicit vectors.
2. The method of claim 1, wherein the obtaining a dynamic interaction graph constructed according to a dynamic interaction sequence comprises:
acquiring an existing dynamic interaction diagram constructed based on an existing interaction sequence;
acquiring a newly added interaction event;
taking a first object and a second object related to the newly added interaction event as two newly added nodes, and adding the two newly added nodes into the existing dynamic interaction graph;
and for each newly added node, if two associated nodes exist, adding a connecting edge pointing to the two associated nodes from the newly added node.
3. The method of claim 1, wherein nodes within a predetermined range reached via a connecting edge comprise:
nodes reached via connecting edges within a preset number K; and/or
nodes that are reachable via connecting edges and whose interaction time is within a preset time range.
4. The method of claim 1, wherein the first input features further comprise attribute features of the node, and/or a time difference between a first interaction time of an interaction event in which the node is located and a second interaction time of an interaction event in which two associated nodes are located.
5. The method of claim 1, wherein the interaction event is a transaction event, the event characteristics comprising at least one of: transaction type, transaction amount, transaction channel; the event category label is a transaction risk level label.
6. The method of claim 1, wherein the first neural network is an LSTM-based neural network comprising at least one LSTM layer for at least:
sequentially taking each node in the first subgraph as a target node, and determining the implicit vector and intermediate vector of the target node according to the first input feature of the target node and the respective intermediate vectors and implicit vectors of the two associated nodes pointed to by the target node, until the implicit vector of the first node is obtained.
7. The method of claim 6, wherein the two associated nodes to which the target node points are a first associated node and a second associated node; the determining the implicit vector and the intermediate vector of the target node comprises:
inputting the first input feature of the target node, the implicit vector of the first associated node and the implicit vector of the second associated node into a first transformation function and a second transformation function which have the same algorithm and different parameters respectively to obtain a first transformation vector and a second transformation vector respectively;
performing a combination operation on the first transformation vector and the intermediate vector of the first associated node, and on the second transformation vector and the intermediate vector of the second associated node, respectively, and obtaining a combined vector based on the operation results;
inputting the first input feature of the target node, the implicit vector of the first associated node and the implicit vector of the second associated node into a third transformation function and a fourth transformation function respectively to obtain a third transformation vector and a fourth transformation vector respectively;
determining the intermediate vector of the target node based on the combined vector and the third transformation vector;
determining the implicit vector of the target node based on the intermediate vector of the target node and the fourth transformation vector.
8. The method of claim 6, wherein the first neural network model comprises a plurality of LSTM layers, wherein the implicit vector of the target node determined by the previous LSTM layer is input to the next LSTM layer as the first input feature of the target node.
9. The method of claim 8, wherein the first neural network model synthesizes implicit vectors of the first nodes output by each of the plurality of LSTM layers to obtain the first implicit vector.
10. The method of claim 8, wherein the first neural network model takes as the first implicit vector an implicit vector of the first node output by a last LSTM layer of the plurality of LSTM layers.
11. The method of any one of claims 1, 6-10, wherein the first and second neural network models are structurally and algorithmically identical, parametrically different neural network models.
12. The method of claim 1, wherein determining an event category for the current interaction event from the first implicit vector, the second implicit vector, and four associated implicit vectors comprises:
fusing the first implicit vector, the second implicit vector and the four associated implicit vectors by utilizing a fully-connected neural network to obtain an event representation vector of the current interaction event;
and determining the event category of the current interaction event according to the event representation vector by utilizing a classifier.
13. The method of claim 12, further comprising:
acquiring a current category label of the current interaction event;
determining a predicted loss based at least on the determined event category and the current category label;
and jointly training the first neural network model, the second neural network model, the fully-connected neural network and the classifier according to the prediction loss.
14. The method of claim 1, further comprising:
acquiring a current category label of the current interaction event;
determining a predicted loss based at least on the determined event category and the current category label;
training the first neural network model and a second neural network model according to the predicted loss.
15. An apparatus for processing an interaction event, the apparatus comprising:
the interactive map acquisition unit is configured to acquire a dynamic interactive map constructed according to a dynamic interactive sequence, wherein the dynamic interactive sequence comprises a plurality of interactive events which are arranged according to a time sequence, and each interactive event comprises two objects with interactive behaviors, an event characteristic and interactive time; at least some of the plurality of interactivity events have an event category tag; the dynamic interaction graph comprises a plurality of nodes representing each object in each event, wherein any node i points to two associated nodes through a connecting edge, and the two associated nodes are two nodes corresponding to the last interaction event in which the object represented by the node i participates;
a node determining unit, configured to determine, in the dynamic interaction graph, a first node and a second node corresponding to a current interaction event to be analyzed, and four related associated nodes, where the four associated nodes include the two associated nodes pointed to by the first node and the two associated nodes pointed to by the second node;
a subgraph determining unit, configured to determine respective corresponding current subgraphs in the dynamic interaction graph by respectively taking the first node, the second node and the four associated nodes as current root nodes, so as to obtain a first subgraph, a second subgraph and four associated subgraphs respectively, wherein the current subgraph comprises nodes which start from the current root node and reach in a predetermined range through connecting edges;
the first processing unit is configured to input the first sub-graph and the second sub-graph into a first neural network model respectively to obtain a corresponding first implicit vector and a corresponding second implicit vector respectively; the first neural network determines an implicit vector corresponding to the input subgraph according to first input features of all nodes in the input subgraph and the directional relation of connecting edges among the nodes, wherein the first input features comprise event features of events where the nodes are located;
the second processing unit is configured to input the four association subgraphs into a second neural network model respectively to obtain four association hidden vectors respectively; the second neural network determines an implicit vector corresponding to the input subgraph according to second input features of each node in the input subgraph and the directional relation of connecting edges among the nodes, wherein the second input features comprise event category labels of events where the nodes are located;
and the prediction unit is configured to predict the event type of the current interactive event according to the first implicit vector, the second implicit vector and the four associated implicit vectors.
16. The apparatus of claim 15, wherein the interaction graph acquisition unit is configured to acquire the dynamic interaction graph by:
acquiring an existing dynamic interaction graph constructed based on an existing interaction sequence;
acquiring a newly added interaction event;
taking the first object and the second object involved in the newly added interaction event as two newly added nodes, and adding the two newly added nodes to the existing dynamic interaction graph;
and, for each newly added node, if two associated nodes exist for it, adding connecting edges pointing from the newly added node to those two associated nodes.
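The incremental construction described in claim 16 can be sketched in plain Python. This is an illustrative sketch only: the `Event` record, the dict-based graph representation, and all function names are assumptions for the example, not terms from the claims.

```python
from collections import namedtuple

# Hypothetical minimal event record: two objects, an event feature vector,
# an interaction time, and an optional event category label.
Event = namedtuple("Event", ["obj_a", "obj_b", "features", "time", "label"])

def build_dynamic_graph(events):
    """Build a dynamic interaction graph from a time-ordered event sequence.

    Each event contributes two new nodes. Each new node points, via two
    connecting edges, to the pair of nodes of the previous interaction event
    its object participated in (None if the object has no earlier event).
    """
    nodes = []      # node id -> (object, event index)
    edges = {}      # node id -> (assoc_node_1, assoc_node_2) or None
    last_pair = {}  # object -> node pair of its most recent event
    for idx, ev in enumerate(events):
        ids = []
        for obj in (ev.obj_a, ev.obj_b):
            nid = len(nodes)
            nodes.append((obj, idx))
            edges[nid] = last_pair.get(obj)  # the two associated nodes, if any
            ids.append(nid)
        # remember this event's node pair for each participating object
        last_pair[ev.obj_a] = (ids[0], ids[1])
        last_pair[ev.obj_b] = (ids[1], ids[0])
    return nodes, edges
```

With this layout, a node created for object u points first to u's own previous node and then to its peer in that previous event, which matches the "two nodes of the last event the object participated in" wording.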
17. The apparatus of claim 15, wherein the nodes within a predetermined range reached via connecting edges comprise:
nodes reached via at most a preset number K of connecting edges; and/or
nodes reached via connecting edges whose interaction time is within a preset time range.
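The two range criteria of claim 17 (a hop limit K and/or a time window) can be combined in a single traversal. A minimal sketch; the function name, the `(assoc1, assoc2)` edge representation, and the `times` mapping are illustrative assumptions:

```python
def extract_subgraph(root, edges, times, k=2, t_min=None):
    """Collect nodes reachable from `root` via at most `k` connecting edges,
    optionally keeping only nodes whose interaction time is >= t_min.

    `edges` maps node -> (assoc1, assoc2) or None;
    `times` maps node -> interaction time of the node's event.
    """
    sub = set()
    frontier = [root]
    for _ in range(k + 1):          # hop 0 is the root itself
        nxt = []
        for n in frontier:
            if n in sub:
                continue            # already visited
            if t_min is not None and times[n] < t_min:
                continue            # outside the preset time range
            sub.add(n)
            if edges.get(n):
                nxt.extend(edges[n])  # follow both connecting edges
        frontier = nxt
    return sub
```

Because every edge points strictly backwards in time, the traversal always terminates; the time filter simply prunes branches earlier.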
18. The apparatus of claim 15, wherein the first input features further comprise attribute features of the node, and/or the time difference between the first interaction time of the interaction event in which the node is located and the second interaction time of the interaction events in which its two associated nodes are located.
19. The apparatus of claim 15, wherein the interaction event is a transaction event, the event features comprise at least one of: a transaction type, a transaction amount and a transaction channel; and the event category label is a transaction risk level label.
20. The apparatus of claim 15, wherein the first neural network model is an LSTM-based neural network comprising at least one LSTM layer configured to:
sequentially taking each node in the first subgraph as a target node, and determining the implicit vector and the intermediate vector of the target node according to the first input feature of the target node and the respective intermediate vectors and implicit vectors of the two associated nodes pointed to by the target node, until the implicit vector of the first node is obtained.
21. The apparatus of claim 20, wherein the two associated nodes pointed to by the target node are a first associated node and a second associated node, and the LSTM layer is specifically configured to:
inputting the first input feature of the target node, the implicit vector of the first associated node and the implicit vector of the second associated node into a first transformation function and a second transformation function, which have the same algorithm but different parameters, to obtain a first transformation vector and a second transformation vector, respectively;
combining the first transformation vector and the second transformation vector with the intermediate vector of the first associated node and the intermediate vector of the second associated node, respectively, and obtaining a combined vector based on the operation results;
inputting the first input feature of the target node, the implicit vector of the first associated node and the implicit vector of the second associated node into a third transformation function and a fourth transformation function, to obtain a third transformation vector and a fourth transformation vector, respectively;
determining the intermediate vector of the target node based on the combined vector and the third transformation vector;
and determining the implicit vector of the target node based on the intermediate vector of the target node and the fourth transformation vector.
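One plausible instantiation of the four-transformation node update of claim 21, sketched in NumPy. The claims fix only the data flow; the sigmoid gate form, the elementwise products with the two intermediate vectors, the additive combination, and the final `tanh` are assumptions in the spirit of a standard LSTM cell, not the patented parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # state dimension (illustrative)

def make_transform():
    """One transformation function: a sigmoid-gated affine map of the
    concatenated (x, h1, h2). All four transforms share this algorithm
    but carry their own parameters, as claim 21 requires."""
    W = rng.standard_normal((D, 3 * D)) * 0.1
    b = np.zeros(D)
    return lambda x, h1, h2: 1.0 / (1.0 + np.exp(-(W @ np.concatenate([x, h1, h2]) + b)))

g1, g2, g3, g4 = (make_transform() for _ in range(4))

def node_update(x, h1, c1, h2, c2):
    """Compute the target node's intermediate vector c and implicit vector h
    from its first input feature x and the states of its two associated nodes."""
    v1 = g1(x, h1, h2)            # first transformation vector
    v2 = g2(x, h1, h2)            # second (same algorithm, different parameters)
    combined = v1 * c1 + v2 * c2  # combine with the two intermediate vectors
    v3 = g3(x, h1, h2)            # third transformation vector
    v4 = g4(x, h1, h2)            # fourth transformation vector
    c = combined + v3             # intermediate vector of the target node (assumed additive)
    h = v4 * np.tanh(c)           # implicit vector, gated by the fourth transform
    return h, c
```

Iterating `node_update` over the subgraph in time order, leaves first, yields the implicit vector of the root node, as in claim 20.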
22. The apparatus of claim 20, wherein the first neural network model comprises a plurality of LSTM layers, and the implicit vector of the target node determined by a previous LSTM layer is input to the next LSTM layer as the first input feature of the target node.
23. The apparatus of claim 22, wherein the first neural network model synthesizes the implicit vectors of the first node output by each of the plurality of LSTM layers to obtain the first implicit vector.
24. The apparatus of claim 22, wherein the first neural network model takes, as the first implicit vector, the implicit vector of the first node output by the last of the plurality of LSTM layers.
25. The apparatus of any one of claims 15 and 20-24, wherein the first neural network model and the second neural network model are neural network models having the same structure and algorithm but different parameters.
26. The apparatus of claim 15, wherein the prediction unit is configured to:
fusing the first implicit vector, the second implicit vector and the four associated implicit vectors by using a fully-connected neural network, to obtain an event representation vector of the current interaction event;
and determining the event category of the current interaction event from the event representation vector by using a classifier.
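The fusion-then-classify step of claim 26 can be sketched as a single fully-connected layer over the six concatenated implicit vectors followed by a softmax. The weight names (`W_fuse`, `W_cls`), dimensions, and the `tanh` nonlinearity are illustrative assumptions; the claim does not fix the fusion network's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
D, N_CLASSES = 8, 3  # vector dimension and number of event categories (illustrative)

# fully-connected fusion layer over the six concatenated implicit vectors
W_fuse = rng.standard_normal((D, 6 * D)) * 0.1
# linear softmax classifier over the fused event representation
W_cls = rng.standard_normal((N_CLASSES, D)) * 0.1

def predict(h_first, h_second, h_assoc):
    """Fuse the first/second implicit vectors and the four associated implicit
    vectors into an event representation vector, then classify the event."""
    z = np.concatenate([h_first, h_second] + list(h_assoc))
    event_repr = np.tanh(W_fuse @ z)      # event representation vector
    logits = W_cls @ event_repr
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()            # distribution over event categories
```

The predicted event category is then `argmax` of the returned distribution.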
27. The apparatus of claim 26, comprising or connected to a model training unit configured to:
acquiring a current category label of the current interaction event;
determining a prediction loss based at least on the determined event category and the current category label;
and jointly training the first neural network model, the second neural network model, the fully-connected neural network and the classifier according to the prediction loss.
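The prediction loss of claim 27 compares the predicted category distribution against the current category label; a cross-entropy loss is one standard choice (an assumption here, since the claim only requires "a prediction loss"):

```python
import numpy as np

def cross_entropy(probs, label):
    """Prediction loss: negative log-likelihood of the current category label
    under the predicted category distribution. In joint training, this scalar
    is backpropagated through the classifier, the fully-connected fusion
    network, and the first and second neural network models together."""
    return -np.log(probs[label] + 1e-12)  # epsilon guards against log(0)
```

Lower loss means the model assigned higher probability to the labeled category.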
28. The apparatus of claim 15, comprising or connected to a model training unit configured to:
acquiring a current category label of the current interaction event;
determining a prediction loss based at least on the determined event category and the current category label;
and training the first neural network model and the second neural network model according to the prediction loss.
29. A computer-readable storage medium, on which a computer program is stored which, when executed in a computer, causes the computer to carry out the method of any one of claims 1-14.
30. A computing device comprising a memory and a processor, wherein the memory stores executable code, and the processor, when executing the executable code, implements the method of any one of claims 1-14.
CN201910803312.9A 2019-08-28 2019-08-28 Method and device for processing interaction event Active CN110689110B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910803312.9A CN110689110B (en) 2019-08-28 2019-08-28 Method and device for processing interaction event

Publications (2)

Publication Number Publication Date
CN110689110A true CN110689110A (en) 2020-01-14
CN110689110B CN110689110B (en) 2023-06-02

Family

ID=69108428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910803312.9A Active CN110689110B (en) 2019-08-28 2019-08-28 Method and device for processing interaction event

Country Status (1)

Country Link
CN (1) CN110689110B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476223A (en) * 2020-06-24 2020-07-31 支付宝(杭州)信息技术有限公司 Method and device for evaluating interaction event
CN111582873A (en) * 2020-05-07 2020-08-25 支付宝(杭州)信息技术有限公司 Method and device for evaluating interaction event, electronic equipment and storage medium
CN112085279A (en) * 2020-09-11 2020-12-15 支付宝(杭州)信息技术有限公司 Method and device for training interaction prediction model and predicting interaction event
CN112541129A (en) * 2020-12-06 2021-03-23 支付宝(杭州)信息技术有限公司 Method and device for processing interaction event

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101600167A (en) * 2008-06-06 2009-12-09 瞬联软件科技(北京)有限公司 Towards moving information self-adaptive interactive system and its implementation of using
US20120036097A1 (en) * 2010-08-05 2012-02-09 Toyota Motor Engineering & Manufacturing North America, Inc. Systems And Methods For Recognizing Events
CN103902966A (en) * 2012-12-28 2014-07-02 北京大学 Video interaction event analysis method and device base on sequence space-time cube characteristics
CN106415535A (en) * 2014-04-14 2017-02-15 微软技术许可有限责任公司 Context-sensitive search using a deep learning model
CN108764011A (en) * 2018-03-26 2018-11-06 青岛科技大学 Group recognition methods based on the modeling of graphical interactive relation
US20190057683A1 (en) * 2017-08-18 2019-02-21 Google Llc Encoder-decoder models for sequence to sequence mapping
US20190188561A1 (en) * 2017-12-15 2019-06-20 Facebook, Inc. Deep learning based distribution of content items describing events to users of an online system

Similar Documents

Publication Publication Date Title
CN110598847B (en) Method and device for processing interactive sequence data
CN110555469B (en) Method and device for processing interactive sequence data
CN111210008B (en) Method and device for processing interactive data by using LSTM neural network model
CN111814977B (en) Method and device for training event prediction model
US11521221B2 (en) Predictive modeling with entity representations computed from neural network models simultaneously trained on multiple tasks
CN110543935B (en) Method and device for processing interactive sequence data
CN110689110B (en) Method and device for processing interaction event
CN110490274B (en) Method and device for evaluating interaction event
TW202008264A (en) Method and apparatus for recommendation marketing via deep reinforcement learning
CN111737546B (en) Method and device for determining entity service attribute
CN111242283B (en) Training method and device for evaluating self-encoder of interaction event
US11250088B2 (en) Method and apparatus for processing user interaction sequence data
CN111476223B (en) Method and device for evaluating interaction event
CN112085293B (en) Method and device for training interactive prediction model and predicting interactive object
JP7162417B2 (en) Estimation device, estimation method, and estimation program
CN112149824B (en) Method and device for updating recommendation model by game theory
CN111523682B (en) Method and device for training interactive prediction model and predicting interactive object
CN113610610B (en) Session recommendation method and system based on graph neural network and comment similarity
CN112580789A (en) Training graph coding network, and method and device for predicting interaction event
CN111258469B (en) Method and device for processing interactive sequence data
CN110288444A Method and system for realizing user-associated recommendation
CN111026973B (en) Commodity interest degree prediction method and device and electronic equipment
CN113449176A (en) Recommendation method and device based on knowledge graph
CN112085279B (en) Method and device for training interactive prediction model and predicting interactive event
CN109598347A (en) For determining causal method, system and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201012

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advantageous New Technologies Co., Ltd.

Address before: Fourth Floor, P.O. Box 847, Capital Building, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

Effective date of registration: 20201012

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced New Technologies Co., Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advantageous New Technologies Co., Ltd.

GR01 Patent grant