CN110689110B - Method and device for processing interaction event - Google Patents

Method and device for processing interaction event

Info

Publication number
CN110689110B
CN110689110B (application CN201910803312.9A)
Authority
CN
China
Prior art keywords
node
vector
event
interaction
nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910803312.9A
Other languages
Chinese (zh)
Other versions
CN110689110A
Inventor
文剑烽
常晓夫
刘旭钦
宋乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd filed Critical Advanced New Technologies Co Ltd
Priority to CN201910803312.9A priority Critical patent/CN110689110B/en
Publication of CN110689110A publication Critical patent/CN110689110A/en
Application granted granted Critical
Publication of CN110689110B publication Critical patent/CN110689110B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/048 Activation functions
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04 Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Technology Law (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of this specification provide a method and device for processing interaction events. In the method, a dynamic interaction graph constructed from a dynamic interaction sequence is first obtained, in which each interaction object involved in each interaction event corresponds to a node. For a current interaction event to be analyzed, the participating nodes of the event and the associated nodes of those participating nodes are obtained from the dynamic interaction graph, and a subgraph is determined in the graph for each participating node and each associated node. The subgraphs of the participating nodes are input into a first neural network model, which processes them based on the event features and connection relations of the nodes to obtain implicit vectors of the participating nodes; the subgraphs of the associated nodes are input into a second neural network model, which processes them based on the event category labels and connection relations of the nodes to obtain implicit vectors of the associated nodes. The current interaction event is then expressed and analyzed based on the implicit vectors of these nodes.

Description

Method and device for processing interaction event
Technical Field
One or more embodiments of the present specification relate to the field of machine learning, and more particularly, to a method and apparatus for processing interaction events using machine learning.
Background
In many scenarios, user interaction events need to be analyzed and processed. An interaction event is one of the basic elements of Internet activity: a user's click while browsing a page can be regarded as an interaction event between the user and a page content block, a purchase in e-commerce can be regarded as an interaction event between the user and a commodity, and a transfer between accounts is an interaction event between two users. A user's series of interaction events contains fine-grained information such as the user's habits and preferences, as well as the characteristics of the interaction objects, and is therefore an important feature source for machine learning models. Thus, in many scenarios, it is desirable to characterize and model interaction participants, as well as the interaction events themselves, based on interaction history.
However, an interaction event involves two interacting parties, and the state of each party may change dynamically, so it is very difficult to accurately express the characteristics of the interacting parties by comprehensively considering their many-faceted features. Improved schemes are therefore desired for more effective analysis of interaction events.
Disclosure of Invention
One or more embodiments of the present specification describe a method and apparatus for processing interaction events, in which a sequence of interaction events is represented by a dynamic interaction graph; for a current interaction event, the participating nodes and associated nodes of the event are determined in the dynamic interaction graph; implicit features are computed for each participating node and associated node based on the pointing and association relations between event nodes reflected in the graph; and the current interaction event is analyzed using these implicit features.
According to a first aspect, there is provided a method of processing an interaction event, the method comprising:
acquiring a dynamic interaction graph constructed according to a dynamic interaction sequence, wherein the dynamic interaction sequence comprises a plurality of interaction events arranged in time order, and each interaction event comprises the two objects between which the interaction behavior occurs, an event feature and an interaction time; at least some of the plurality of interaction events have event category labels; the dynamic interaction graph comprises a plurality of nodes representing the objects in the events, wherein any node i points to two associated nodes through connecting edges, the two associated nodes being the two nodes corresponding to the last interaction event in which the object represented by node i participated;
determining, in the dynamic interaction graph, a first node and a second node corresponding to a current interaction event to be analyzed and the four related associated nodes, wherein the four associated nodes comprise the two associated nodes pointed to by the first node and the two associated nodes pointed to by the second node;
taking the first node, the second node and the four associated nodes respectively as current root nodes, and determining their corresponding current subgraphs in the dynamic interaction graph, so as to obtain a first subgraph, a second subgraph and four associated subgraphs, wherein a current subgraph comprises the nodes reached, starting from the current root node, via connecting edges within a preset range;
inputting the first subgraph and the second subgraph into a first neural network model respectively, to obtain a corresponding first implicit vector and second implicit vector; the first neural network model determines the implicit vector corresponding to an input subgraph according to first input features of the nodes in the input subgraph and the pointing relations of the connecting edges between the nodes, wherein the first input features comprise the event features of the events in which the nodes are located;
inputting the four associated subgraphs into a second neural network model respectively to obtain four associated implicit vectors respectively; the second neural network determines an implicit vector corresponding to the input subgraph according to second input features of all nodes in the input subgraph and the pointing relation of connecting edges between the nodes, wherein the second input features comprise event category labels of events where the nodes are located;
and predicting the event category of the current interaction event according to the first implicit vector, the second implicit vector and the four associated implicit vectors.
According to one embodiment, obtaining a dynamic interaction map constructed from a dynamic interaction sequence comprises:
acquiring an existing dynamic interaction diagram constructed based on an existing interaction sequence;
acquiring a newly added interaction event;
the first object and the second object related to the newly added interaction event are used as two newly added nodes and added into the existing dynamic interaction graph;
for each newly added node, if two associated nodes exist, a connecting edge pointing to the two associated nodes from the newly added node is added.
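The incremental update described above can be sketched as follows. The class and field names are illustrative assumptions, not taken from the patent; events are reduced to their two interaction objects for brevity.

```python
# Minimal sketch of the incremental update: each new interaction event
# adds two nodes, and each new node gains connecting edges to the two
# nodes of the last event in which its object participated, if any.
class DynamicInteractionGraph:
    def __init__(self):
        self.objects = []   # node id -> interaction object id
        self.edges = {}     # node id -> (assoc node, assoc node) or None
        self.last = {}      # object id -> node pair of its latest event

    def add_event(self, obj_a, obj_b):
        new_pair = []
        for obj in (obj_a, obj_b):
            node_id = len(self.objects)
            self.objects.append(obj)
            # Connecting edges point to the object's previous event, if any.
            self.edges[node_id] = self.last.get(obj)
            new_pair.append(node_id)
        pair = tuple(new_pair)
        self.last[obj_a] = pair
        self.last[obj_b] = pair
        return pair
```

A node that represents an object's first appearance has no connecting edges, matching the "if two associated nodes exist" condition above.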
According to one embodiment, the nodes within a predetermined range reached via connecting edges in the current subgraph include: nodes reached within a preset number K of connecting edges; and/or nodes which are reachable via connecting edges and whose interaction time is within a preset time range.
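The "preset number K" criterion can be sketched as a bounded traversal along connecting edges. The function below is an illustrative assumption built on a simple mapping from a node to its pair of associated nodes.

```python
def extract_subgraph(edges, root, max_hops):
    """Collect the nodes reachable from `root` via connecting edges
    within `max_hops` hops (the 'preset number K' criterion)."""
    reached = {root}
    frontier = [root]
    for _ in range(max_hops):
        nxt = []
        for node in frontier:
            for assoc in edges.get(node) or ():
                if assoc not in reached:
                    reached.add(assoc)
                    nxt.append(assoc)
        frontier = nxt
    return reached
```

The time-range criterion would add a per-node timestamp check inside the same loop.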
According to different embodiments, the first input feature may further include an attribute feature of a node, and/or a time difference between a first interaction time of an interaction event where the node is located and a second interaction time of an interaction event where two associated nodes are located.
In a specific embodiment, the interaction event is a transaction event, and the event features include at least one of the following: transaction type, transaction amount, transaction channel; the event category label is a transaction risk level label.
In one embodiment, the first neural network model is an LSTM-based neural network comprising at least one LSTM layer, which processes the first subgraph as follows:
sequentially taking each node in the first subgraph as a target node, and determining the implicit vector and intermediate vector of the target node from the first input features of the target node together with the intermediate vectors and implicit vectors of the two associated nodes the target node points to, until the implicit vector of the first node is obtained.
Further, in one embodiment, the two associated nodes pointed to by the target node are a first associated node and a second associated node; the first neural network model determines the implicit vector and the intermediate vector of the target node as follows:
respectively inputting the first input characteristic of the target node, the implicit vector of the first association node and the implicit vector of the second association node into a first transformation function and a second transformation function which have the same algorithm and different parameters to respectively obtain a first transformation vector and a second transformation vector;
combining the first transformation vector and the second transformation vector with the intermediate vector of the first association node and the intermediate vector of the second association node respectively, and obtaining a combined vector based on an operation result;
respectively inputting the first input characteristic of the target node, the implicit vector of the first association node and the implicit vector of the second association node into a third transformation function and a fourth transformation function to respectively obtain a third transformation vector and a fourth transformation vector;
determining an intermediate vector of the target node based on the combined vector and the third transformation vector;
an implicit vector of the target node is determined based on the intermediate vector and a fourth transformation vector of the target node.
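Read as a child-sum Tree-LSTM-style cell, the update steps above can be sketched as follows. This is a hedged reading: the exact forms of the four transformation functions are not fixed by the text, so the linear-plus-activation forms, the element-wise products, and all names below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def transform(params, x, h1, h2, act):
    """One transformation function: a linear map over the concatenated
    inputs followed by an activation; each function has its own (W, b)."""
    W, b = params
    return act(W @ np.concatenate([x, h1, h2]) + b)

def lstm_node_update(params, x, h1, c1, h2, c2):
    """Sketch of the target-node update: x is the first input feature,
    (h1, c1) and (h2, c2) are the implicit and intermediate vectors of
    the two associated nodes the target node points to."""
    p1, p2, p3, p4 = params
    v1 = transform(p1, x, h1, h2, sigmoid)   # first transformation vector
    v2 = transform(p2, x, h1, h2, sigmoid)   # second transformation vector
    combined = v1 * c1 + v2 * c2             # combined with the two intermediate vectors
    v3 = transform(p3, x, h1, h2, np.tanh)   # third transformation vector
    v4 = transform(p4, x, h1, h2, sigmoid)   # fourth transformation vector
    c = combined + v3                        # intermediate vector of the target node
    h = v4 * np.tanh(c)                      # implicit vector of the target node
    return h, c
```

The first and second transformation functions here share the same form but have separate parameters, matching the "same algorithm and different parameters" wording above.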
In one embodiment, the first neural network model includes a plurality of LSTM layers, wherein an implicit vector of the target node determined by a previous LSTM layer is input to a next LSTM layer as a first input feature of the target node.
Further, in one embodiment, the first neural network model synthesizes the implicit vectors of the first node output by each of the LSTM layers to obtain the first implicit vector.
Alternatively, in another embodiment, the first neural network model takes as the first implicit vector an implicit vector of the first node output by a last LSTM layer of the plurality of LSTM layers.
According to one embodiment, the first neural network model and the second neural network model are neural network models of the same structure and algorithm and different parameters.
In one embodiment, the event category of the current interaction event is determined as follows:
fusing the first implicit vector, the second implicit vector and the four associated implicit vectors by using a fully connected neural network to obtain an event representation vector of the current interaction event;
and determining the event category of the current interaction event from the event representation vector using a classifier.
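Under the assumption of a single fully connected fusion layer and a softmax classifier (one common choice; the text does not fix these forms), the two-step prediction can be sketched as:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_event_category(fuse_params, cls_params, vectors):
    """Fuse the first, second and four associated implicit vectors into
    an event representation vector, then classify it; weight shapes and
    activation choices here are illustrative."""
    W_f, b_f = fuse_params
    W_c, b_c = cls_params
    x = np.concatenate(vectors)             # six implicit vectors
    event_repr = np.tanh(W_f @ x + b_f)     # event representation vector
    return softmax(W_c @ event_repr + b_c)  # probability per event category
```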
According to one embodiment, the foregoing method further comprises:
acquiring a current category label of the current interaction event;
determining a predicted loss based at least on the determined event category and the current category label;
and jointly training the first neural network model, the second neural network model, the fully connected neural network and the classifier according to the prediction loss.
According to another embodiment, the foregoing method further comprises:
acquiring a current category label of the current interaction event;
determining a predicted loss based at least on the determined event category and the current category label;
and training the first neural network model and the second neural network model according to the prediction loss.
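A cross-entropy loss is one common instantiation of the "prediction loss" above (the text does not fix the loss form); minimizing it by gradient descent updates the parameters of both neural network models, and in the first variant also the fully connected network and the classifier.

```python
import math

def prediction_loss(probabilities, labels):
    """Mean cross-entropy between predicted category distributions and
    current category labels over a batch of labelled interaction events."""
    total = -sum(math.log(p[y]) for p, y in zip(probabilities, labels))
    return total / len(labels)
```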
According to a second aspect, there is provided an apparatus for processing an interaction event, the apparatus comprising:
an interaction graph acquisition unit configured to acquire a dynamic interaction graph constructed according to a dynamic interaction sequence, wherein the dynamic interaction sequence comprises a plurality of interaction events arranged in time order, and each interaction event comprises the two objects between which the interaction behavior occurs, an event feature and an interaction time; at least some of the plurality of interaction events have event category labels; the dynamic interaction graph comprises a plurality of nodes representing the objects in the events, wherein any node i points to two associated nodes through connecting edges, the two associated nodes being the two nodes corresponding to the last interaction event in which the object represented by node i participated;
a node determining unit configured to determine, in the dynamic interaction graph, a first node and a second node corresponding to a current interaction event to be analyzed and the four related associated nodes, wherein the four associated nodes comprise the two associated nodes pointed to by the first node and the two associated nodes pointed to by the second node;
a subgraph determining unit configured to take the first node, the second node and the four associated nodes respectively as current root nodes and determine their corresponding current subgraphs in the dynamic interaction graph, so as to obtain a first subgraph, a second subgraph and four associated subgraphs, wherein a current subgraph comprises the nodes reached, starting from the current root node, via connecting edges within a preset range;
a first processing unit configured to input the first subgraph and the second subgraph into a first neural network model respectively, to obtain a corresponding first implicit vector and second implicit vector; the first neural network model determines the implicit vector corresponding to an input subgraph according to first input features of the nodes in the input subgraph and the pointing relations of the connecting edges between the nodes, wherein the first input features comprise the event features of the events in which the nodes are located;
a second processing unit configured to input the four associated subgraphs into a second neural network model respectively, to obtain four associated implicit vectors; the second neural network model determines the implicit vector corresponding to an input subgraph according to second input features of the nodes in the input subgraph and the pointing relations of the connecting edges between the nodes, wherein the second input features comprise the event category labels of the events in which the nodes are located;
and a prediction unit configured to predict the event category of the current interaction event according to the first implicit vector, the second implicit vector and the four associated implicit vectors.
According to a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
According to a fourth aspect, there is provided a computing device comprising a memory and a processor, characterised in that the memory has executable code stored therein, the processor implementing the method of the first aspect when executing the executable code.
According to the method and device provided by the embodiments of this specification, a dynamic interaction graph is constructed based on a dynamic interaction sequence; the graph reflects the time-order relations between the interaction events and the influence transferred between interaction objects through each interaction event. For a current interaction event to be analyzed, the participating nodes of the event and their associated nodes are obtained from the dynamic interaction graph, the subgraphs rooted at each participating node and associated node are extracted, and these subgraphs are input into neural network models to obtain implicit vector expressions of the participating and associated nodes. Because the event features or category labels of historical interaction events, and the influence of these historical events on the nodes, are incorporated into the implicit vectors obtained in this way, the deep characteristics of each node can be expressed comprehensively. Based on the implicit vectors of the participating nodes and associated nodes related to the current interaction event, the event can be expressed and analyzed more completely and accurately.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1A illustrates a bipartite graph of interactions in one example;
FIG. 1B illustrates a graph of an interaction relationship network in another example;
FIG. 2 illustrates an implementation scenario diagram according to one embodiment;
FIG. 3 illustrates a method flow diagram for processing interactivity events, according to one embodiment;
FIG. 4 illustrates a dynamic interaction sequence and dynamic interaction graph constructed therefrom, according to one embodiment;
FIG. 5 illustrates a node relationship diagram relating to a current interaction event;
FIG. 6 illustrates an example of a subgraph in one embodiment;
FIG. 7 shows a schematic diagram of the operation of the LSTM layer in the first neural network model;
FIG. 8 illustrates the structure of an LSTM layer in a first neural network model, according to one embodiment;
FIG. 9 illustrates the structure of an LSTM layer in a second neural network model, according to one embodiment;
FIG. 10 illustrates a schematic diagram of the structure of a composite model in one embodiment;
FIG. 11 illustrates a schematic block diagram of an apparatus for processing an interactivity event, according to one embodiment.
Detailed Description
The following describes the scheme provided in the present specification with reference to the drawings.
As previously mentioned, it is desirable to be able to characterize and model the participants of the interaction event, as well as the interaction event itself, based on the interaction history.
In one approach, a static interaction relationship network graph is constructed based on historical interaction events, such that individual interaction objects are analyzed based on the interaction relationship network graph. Specifically, the participants of each historical event can be taken as nodes, and connection edges are established between the nodes with interaction relationship, so that the interaction network diagram is formed.
Fig. 1A and 1B each show an interaction relationship network graph in a specific example. More specifically, FIG. 1A shows a bipartite graph including user nodes (U1-U4) and commodity nodes (V1-V3), in which a connecting edge is constructed between a user and a commodity if the user has purchased the commodity. FIG. 1B shows a user transfer relationship graph in which each node represents a user, and a connecting edge exists between two users between whom a transfer record exists.
However, although FIGS. 1A and 1B show the interaction relations between objects, they contain no timing information about the interaction events. If graph embedding is simply performed on such an interaction relationship network graph, the resulting feature vectors do not express the influence of the interaction events' time information on the nodes. Moreover, such static graphs do not scale well, and it is difficult for them to flexibly handle newly added interaction events and newly added nodes.
In another scheme, for each interaction object in an interaction event to be analyzed, a behavior sequence of that object is constructed, the feature expression of the object is extracted from the behavior sequence, and a feature expression of the event is then built from the object features. However, such a behavior sequence characterizes only the behavior of the object to be analyzed itself, whereas interaction events are multi-party events through which influence is transferred indirectly between the participants. Such a scheme therefore does not express the influence between the participating objects of an interaction event.
In view of the above, and in accordance with one or more embodiments of the present description, a dynamically changing sequence of interactivity events is constructed into a dynamic interactivity map, wherein each interactivity object involved in each interactivity event corresponds to each node in the dynamic interactivity map. And for the current interaction event to be analyzed, obtaining a participation node of the current interaction event and an association node of the participation node from the dynamic interaction graph, obtaining each sub-graph part related to each participation node and the association node, respectively inputting the sub-graph parts into a neural network model to obtain implicit vector expression of each participation node and the association node, and expressing and analyzing the current interaction event based on the implicit vectors of the nodes.
FIG. 2 illustrates an implementation scenario diagram according to one embodiment. As shown in FIG. 2, a plurality of interaction events occurring in sequence may be organized chronologically into a dynamic interaction sequence <E_1, E_2, …, E_N>, where each element E_i represents an interaction event and may be represented as an interaction feature set E_i = (a_i, b_i, f, t_i), in which a_i and b_i are the two interaction objects of event E_i, f is an interaction feature (also called an event feature), and t_i is the interaction time.
According to an embodiment of this specification, the dynamic interaction graph 200 is constructed based on the dynamic interaction sequence. In graph 200, each interaction object a_i, b_i of each interaction event is represented by a node, and connecting edges are established between events that contain the same object. The structure of the dynamic interaction graph 200 is described in more detail later.
For a certain current interaction event to be analyzed, determining the participation node and the association node in the dynamic interaction graph, and respectively taking each participation node and each association node as the current root node in the dynamic interaction graph to obtain a corresponding sub-graph. Generally, a subgraph includes nodes within a certain range that can be reached through connecting edges, starting from the current root node. The subgraph reflects the influence of other objects in the interaction event directly or indirectly associated with the current interaction object on the current node.
Then, the subgraphs of the participating nodes and the subgraphs of the associated nodes are input into two neural network models: a first neural network model and a second neural network model. The first neural network model obtains the implicit vectors of the participating nodes based on the event features and node connection relations of the nodes in the participating nodes' subgraphs. The second neural network model obtains the implicit vectors of the associated nodes based on the event category labels and node connection relations of the nodes in the associated nodes' subgraphs. Implicit vectors obtained in this way extract the label information and timing information of the related interaction events, and the influence between the interaction objects within each interaction event, thereby expressing the deep features of each interaction object more accurately. Based on the implicit vectors of the participating nodes and associated nodes obtained in this way, the current interaction event can be expressed and analyzed, and its event category determined.
Specific implementations of the above concepts are described below.
FIG. 3 illustrates a flow diagram of a method of processing an interactivity event, according to one embodiment. It will be appreciated that the method may be performed by any apparatus, device, platform, cluster of devices, having computing, processing capabilities. Various steps in a method of processing interactive sequence data as shown in fig. 3 are described below in connection with specific embodiments.
First, in step 31, a dynamic interaction map constructed from a dynamic interaction sequence is acquired.
As mentioned above, a dynamic interaction sequence, e.g., expressed as <E_1, E_2, …, E_N>, may include a plurality of interaction events arranged in time order, where each interaction event E_i can be represented as an interaction feature set E_i = (a_i, b_i, f, t_i), in which a_i and b_i are the two interaction objects of event E_i, f is an event feature or interaction feature that can describe the context in which the interaction behavior occurred and some attribute features of the interaction behavior, and t_i is the interaction time.
In order to analyze the nature of interaction events, at least some of the interaction events in the sequence are required to have event category labels. These labels may be generated by manual annotation or by post-hoc analysis of interaction events that have already occurred. The interaction feature set of such a labelled interaction event may be further represented as E_i = (a_i, b_i, f, y, t_i), where y is the event category label.
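As a concrete (purely illustrative) data layout, an element E_i = (a_i, b_i, f, y, t_i) of the sequence might be represented as:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionEvent:
    """One element E_i = (a_i, b_i, f, y, t_i) of the dynamic interaction
    sequence; label y is None for events without an event category label.
    Field names are illustrative, not taken from the patent."""
    obj_a: str                   # first interaction object a_i
    obj_b: str                   # second interaction object b_i
    features: dict               # event/interaction feature f
    time: float                  # interaction time t_i
    label: Optional[int] = None  # event category label y, if any
```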
For example, in one embodiment, the interaction event may be a transaction event. In such a case, the two interaction objects a_i and b_i may be a user and a commodity, or two users. The event features f of a transaction event may include, for example, the transaction type (commodity purchase, transfer, etc.), the transaction amount, and the transaction channel. At least some of the transaction events that have occurred may also carry an event category label y indicating a transaction risk level, e.g., -1 for a fraudulent transaction, 0 for a normal transaction, or different numbers for different risk levels.
In another embodiment, the interaction event may be a user's click on a page block presenting particular content. In such a case, the two interaction objects a_i and b_i are a certain user and a certain page block, respectively. The event features f of the interaction event may include the type of terminal used by the user to click, the browser type, the app version, the position number of the page block in the page, and so on. The event category label of the interaction event may indicate whether the click on the page block converted the content in the block, e.g., whether the commodity displayed in the block was purchased, or whether the coupon displayed in the block was claimed.
In other business scenarios, the interaction event may also be other interactions occurring between two objects, such as communication actions between users, etc. The interaction event may have different event characteristics, and event category labels, depending on the business scenario.
For the dynamic interaction sequence described above, a dynamic interaction graph may be constructed. Specifically, each object in each interaction event in the dynamic interaction sequence is respectively used as a node of the dynamic interaction graph. As such, one node may correspond to one object in one interaction event, but the same physical object may correspond to multiple nodes. For example, if the user U1 purchases the product M1 at time t1 and purchases the product M2 at time t2, there are two feature sets (U1, M1, t 1) and (U1, M2, t 2) of the interaction event, and then the nodes U1 (t 1), U1 (t 2) are created for the user U1 according to the two interaction events, respectively. Thus, it can be considered that a node in the dynamic interaction map corresponds to the state of one interaction object in one interaction event.
For each node in the dynamic interaction graph, the connection edge is constructed as follows: for any node i, assuming that it corresponds to an interaction event i (interaction time is t), in the dynamic interaction sequence, the interaction event j, which first contains the object represented by node i as well (interaction time is t-, t-is earlier than t), is determined as the last interaction event in which the object participates, going back from interaction event i, i.e. going back in the direction earlier than interaction time t. Then, a connection edge pointing from node i to both nodes in this last interaction event j is established. These two pointed to nodes are then also referred to as the associated nodes of node i.
The following describes the specific examples. FIG. 4 illustrates a dynamic interaction sequence and dynamic interaction graph constructed therefrom, according to one embodiment. Specifically, the left side of FIG. 4 shows a time-sequentially organized dynamic interaction sequence, with exemplary illustrations at t, respectively 1 ,t 2 ,…,t 6 Time-of-day interactivity event E 1 ,E 2 ,…,E 6 Each interaction event contains two interaction objects involved in the interaction and the interaction time (event features are omitted for clarity of illustration). FIG. 4 right shows a motion constructed from a dynamic interaction sequence on the leftAnd a state interaction diagram, wherein two interaction objects in each interaction event are respectively used as nodes. The node u (t 6 ) For example, the construction of the connecting edges is described.
As shown, the node u (t 6 ) Representing interaction event E 6 Is provided. Thus, from interaction event E 6 Starting and backtracking forward, wherein the first found interaction event which also contains the interaction object David is E 4 That is, E 4 Is the last interaction event participated by David, corresponding to E 4 Corresponding two nodes u (t 4 ) And v (t) 4 ) Is node u (t 6 ) Is a node of the association of the two nodes. Thus, the slave node u (t 6 ) Pointing to E 4 Corresponding two nodes u (t 4 ) And v (t) 4 ) Is provided. Similarly, from u (t 4 ) (corresponding to interaction event E) 4 ) Continuing to trace back, the last interaction event E in which the object u, namely David, participates can be found continuously 2 Thus, a slave u (t 4 ) Pointing to E 2 Connecting edges of the two corresponding nodes; from v (t) 4 ) Backtracking forward, the last interaction event E participated by the object v can be found 3 Thus, a slave v (t 4 ) Pointing to E 3 And connecting edges of the two corresponding nodes. In this way, connecting edges are constructed between nodes, thereby forming the dynamic interaction graph of FIG. 4.
The manner and process of constructing a dynamic interaction map based on a dynamic interaction sequence is described above. For the method of processing interaction events shown in fig. 3, the process of constructing the dynamic interaction map may be performed in advance or may be performed in situ. Accordingly, in one embodiment, at step 31, a dynamic interaction map is constructed in situ from the dynamic interaction sequence. The construction is as described above. In another embodiment, the dynamic interaction graph may be built in advance based on a dynamic interaction sequence. In step 31, the formed dynamic interaction map is read or received.
It can be appreciated that the dynamic interaction graph constructed in the above manner has strong expandability, and can be very easily updated dynamically according to the newly added interaction event. Accordingly, step 31 may also include a process of updating the dynamic interaction map.
In one embodiment, each time a new interaction event is detected, the dynamic interaction map is updated based on the new interaction event. Specifically, in this embodiment, an existing dynamic interaction map constructed based on an existing interaction sequence may be acquired, and a new interaction event may be acquired. Then, two objects related to the newly added interaction event, namely a first object and a second object, are added to the existing dynamic interaction graph as two newly added nodes. And, for each newly added node, it is determined whether it has an associated node, the definition of which is as described above. If there are associated nodes, adding connecting edges pointing to the two associated nodes from the newly added node, thus forming an updated dynamic interaction graph.
In another embodiment, the newly added interaction event may be detected every predetermined time interval, for example every other hour, and the newly added interaction events within the time interval may be formed into a newly added interaction sequence. Alternatively, each time a predetermined number (e.g., 100) of newly added interaction events are detected, the predetermined number of newly added interaction events is formed into a newly added interaction sequence. The dynamic interaction map is then updated based on the newly added interaction sequence.
Specifically, in this embodiment, an existing dynamic interaction graph constructed based on an existing interaction sequence may be acquired, and a new interaction sequence including a plurality of new interaction events as described above may be acquired. Then, for each newly added interaction event, the first object and the second object are used as two newly added nodes and added into the existing dynamic interaction graph. And for each newly added node, determining whether the newly added node has an associated node, and if so, adding a connecting edge pointing to two associated nodes from the newly added node, so as to form an updated dynamic interaction graph.
In view of the above, in step 31, a dynamic interaction map constructed based on the dynamic interaction sequence is acquired. Next, in step 32, in the dynamic interaction map, node information related to the current interaction event to be analyzed is determined.
In one embodiment, the current interaction event is an interaction event for which the event category is unknown, and thus is to be analyzed. For example, in one example, user A initiates a transaction with object B, thereby generating a current interaction event. Upon receipt of such a transaction request (e.g., upon user a requesting payment), the current interaction event is analyzed to determine an event category for the transaction event, such as whether the transaction is suspected of being a fraudulent transaction, a risk level for the transaction, and so forth.
In another embodiment, the current interaction event may also be an event with an event category label, whereby its category is known. The analysis of the current interaction event with the tag can be used for learning and training of the neural network model. This will be further described later.
For any of the current interaction events described above, the node information it relates to may be determined in the dynamic interaction map. Specifically, two nodes corresponding to two participating objects of the current interaction event, which are called a first node and a second node, and four associated nodes related to the event can be determined, wherein the four associated nodes comprise two associated nodes pointed by the first node and two associated nodes pointed by the second node. In other words, for the current interaction event to be analyzed, two participating nodes of the event, and four associated nodes, are determined from the dynamic interaction graph.
FIG. 5 illustrates a node relationship diagram relating to a current interaction event. As shown in fig. 5, the two participating objects of the current interaction event correspond to node 1 and node 2, respectively. The last interaction event in which the object represented by node 1 participates is a first event, the participating objects of which correspond to node 3 and node 4. It will be appreciated that one of node 3 and node 4 is a node of the same physical object as node 1 at a different time and in a different event. Accordingly, in the dynamic interaction graph, node 1 points to node 3 and node 4. Similarly, the last interaction event in which the object represented by node 2 participated is a second event, which corresponds to nodes 5 and 6. Thus, for the current interaction event, the relevant nodes include two participating nodes, node 1 and node 2, and four associated nodes, nodes 3,4,5,6.
For clarity of illustration, the description is provided in connection with the example of fig. 4. In one example, assume event E 6 Is the current interaction event to be analyzed. As can be seen from FIG. 4, event E 6 The occurrence time of (2) is t 6 The interactive objects are u and w. Thus, the two nodes corresponding to the current interaction event, namely the first node and the second node, are u (t 6 ) And w (t) 6 ). First node u (t 6 ) Pointing to two associated nodes u (t 4 ) And v (t) 4 ) Second node w (t 6 ) Pointing to two associated nodes p (t 5 ) And w (t) 5 ). Thus, with the current interaction event E 6 The relevant node comprises two participating nodes u (t 6 ) And w (t) 6 ) And four associated nodes, namely node u (t 4 ) And v (t) 4 ),p(t 5 ) And w (t) 5 )。
After determining the above nodes related to the current interaction event, at step 33, a corresponding sub-graph of each node in the dynamic interaction graph is determined. Specifically, the first node, the second node and the four associated nodes are taken as root nodes respectively, and respective corresponding subgraphs are determined in the dynamic interaction graph, so that the first subgraph, the second subgraph and the four associated subgraphs are obtained respectively, wherein for any root node, the corresponding subgraphs comprise nodes in a preset range which are started from the root node and reached through connecting edges.
In one embodiment, the nodes within the predetermined range may be nodes reachable through a preset number K of connection edges at most. The number K is a preset super parameter and can be selected according to service conditions. It will be appreciated that the preset number K represents the number of steps of the historical interaction event back from the root node. The larger the number K, the longer the historical interaction information is considered.
In another embodiment, the nodes within the predetermined range may be nodes whose interaction time is within a predetermined time range. For example, a T-period (e.g., one day) is traced back from the interaction time of the root node, nodes that are within that period and reachable through the connecting edges.
In yet another embodiment, the predetermined range considers both the number of connection sides and the time range. In other words, the nodes within the predetermined range are nodes that are reachable through at most the preset number K of connection sides and that have the interaction time within the predetermined time range.
The above examples are continued and described in connection with specific examples. FIG. 6 illustrates an example of a sub-map in one embodiment. In the example of FIG. 6, assume that current interactivity event E 6 I.e., u (t) in fig. 4 6 ) For the root node, a first sub-graph corresponding to it is determined, and it is assumed that the sub-graph is made up of nodes arriving at most via a preset number k=2. Then, from the current root node u (t 6 ) Proceeding, the traversal is performed along the direction of the connection edge, and the nodes which can be reached through the 2 connection edges are shown as dotted line areas in the figure. The node and connection relationship in the region is node u (t 6 ) The corresponding sub-graph, i.e. the first sub-graph.
Similarly, when another participating node (second node) is involved in the current interaction event, e.g. w (t 6 ) Traversing the root node to obtain a second sub-graph. In addition, four associated nodes are used as root nodes respectively, and the four associated subgraphs can be obtained respectively by traversing along the connecting edges in the dynamic interaction graph.
Thus, for the current interaction event, a first sub-graph corresponding to the first node, a second sub-graph corresponding to the second node and four associated sub-graphs corresponding to the four associated nodes are obtained respectively.
Next, in step 34, the first sub-graph and the second sub-graph are respectively input into a first neural network model for processing, so as to obtain a corresponding first implicit vector and a corresponding second implicit vector, where the first neural network determines the implicit vector corresponding to the input sub-graph according to node input features X of each node in the input sub-graph and the pointing relationship of connecting edges between the nodes, where the node input features X include event features of an event where the node is located.
For distinction, the node input feature X processed in the first neural network model is also referred to as a first input feature. As described above, the first input feature X includes the event feature f of the event in which the node is located. For example, when the event is a transaction event, the event characteristics may include transaction type, transaction amount, transaction channel, and so forth.
In one embodiment, the first input feature X may also include an attribute feature a of the node itself. For example, in the case where a node represents a user, the node attribute features may include attribute features of the user, such as age, occupation, education level, region of locale, and so on; in the case where the node represents a commodity, the node attribute features may include attribute features of the commodity, such as commodity category, time-to-shelf, sales, and the like. Under the condition that the node represents other interactive objects, the original node attribute characteristics can be correspondingly acquired.
The processing of the first neural network model is described below.
The first neural network model may employ a variety of neural network models capable of processing sequence information, such as an RNN neural network model, an LSTM neural network model, a transducer neural network model, and so forth.
In one embodiment, the first neural network model is a transducer-based neural network model. Under the condition, according to the pointing relation among the nodes in the input subgraph, a node sequence is formed, and the node sequence is input into a transducer neural network model to obtain an implicit vector corresponding to the input subgraph.
In another embodiment, the first neural network is an LSTM-based neural network model. Under the condition, the LSTM neural network model sequentially iterates and processes each node according to the pointing relation among the nodes in the input subgraph to obtain the hidden vector corresponding to the input subgraph. The specific process is described below in connection with LSTM neural networks.
In particular, the first neural network model may include at least one LSTM layer. When the first sub-graph is input into the first neural network model, the LSTM layer takes each node in the first sub-graph as a target node in turn, and determines the hidden vector and the intermediate vector of the target node according to the respective intermediate vector and the hidden vector of two associated nodes pointed by the target node according to the first input characteristic X of the target node, so that the hidden vector of the root node, namely the first node, of the first sub-graph is obtained through iterative processing in turn.
Fig. 7 shows a schematic diagram of the operation of the LSTM layer in the first neural network model. Suppose node Q points to node J 1 And joint J 2 . As shown in FIG. 7, at time T, LSTM layers are processed to obtain nodes J 1 And joint J 2 The representation vectors H1 and H2 of (1) include intermediate vectors and implicit vectors; at the next time t+, the LSTM layer is based on the first input characteristic X of node Q Q Previously treating the obtained J 1 And J 2 Is represented by vectors H1 and H2 to obtain a representation vector H of node Q Q . It will be appreciated that the representation vector of the node Q may be used at a later time for processing to obtain a representation vector of the node pointing to the node Q, thus enabling an iterative process.
This process is described in connection with the first sub-graph of fig. 6. For the lowest level node u (t 2 ) In this first sub-graph its pointing node is not considered, i.e. u (t 2 ) There are no pointed nodes. In such a case, the intermediate vector c and the implicit vector h of each of the two nodes to which the node points are generated by padding (padding) with a default value (e.g., 0). The LSTM layer is then based on the node u (t 2 ) Is a first input feature X (u (t) 2 ) And two intermediate vectors c and two implicit vectors h generated, determining a node u (t) 2 ) Is (t) is (are) represented by the intermediate vector c (u (t 2 ) And an implicit vector h (c (t) 2 )). For the lowest level node r (t 2 ) The same process is also performed to obtain a corresponding intermediate vector c (r (t 2 ) And h (r (t) 2 ))。
For node u (t 4 ) Which points to node u (t 2 ) And r (t) 2 ). Thus, the LSTM layer is based on the node u (t 4 ) Is a first input feature X (u (t) 4 ) And two nodes u (t) 2 ) And r (t) 2 ) Respective intermediate vectors and implicit vectors, i.e. c (u (t 2 )),h(u(t 2 )),c(r(t 2 ) And h (r (t) 2 ) Determining a node u (t) 4 ) Is (t) is (are) represented by the intermediate vector c (u (t 4 ) And h (u (t) 4 ))。
Thus, the root node of the first sub-graph, namely the first node u (t) 6 ) Intermediate vectors and implicit vectors of (a).
The internal structure and algorithm of the LSTM layer in order to implement the above iterative process are described below.
Fig. 8 illustrates the structure of the LSTM layer in the first neural network model according to one embodiment. In the example of FIG. 8, the currently processed target node is denoted as z (t), where X z(t) Representing a first input characteristic of the node.
Assume that two nodes pointed to by a target node z (t) are a first associated node j 1 And a second association node j 2 Then c j1 And h j1 Respectively represent first association nodes j 1 Intermediate vector and implicit vector of (c) j2 And h j2 Respectively represent second associated nodes j 2 Intermediate vectors and implicit vectors of (a).
The LSTM layer performs the following operations on the first input feature X, the intermediate vector, and the hidden vector input thereto.
Inputting the first input feature X z(t) First association node j 1 Is (are) implicit vector h j1 And a second association node j 2 Is (are) implicit vector h j2 Respectively inputting a first transformation function and a second transformation function with the same algorithm and different parameters to respectively obtain a first transformation vector
Figure BDA0002182924600000171
And a second transformation vector->
Figure BDA0002182924600000172
The first transformation function and the second transformation function may use various operations, such as first performing linear transformation on the input vector, and then applying an operation of activating the function. For example, in one example, the first transformation function and the second transformation function may be calculated using the following equation (1) and equation (2), respectively:
Figure BDA0002182924600000173
Figure BDA0002182924600000174
in equations (1) and (2) above, σ is the activation function, e.g. the sigmoid function,
Figure BDA0002182924600000175
and->
Figure BDA0002182924600000176
Is a linear transformation matrix>
Figure BDA0002182924600000177
Is an offset parameter. It can be seen that the algorithms of equations (1) and (2) are identical, only the parameters are different. By means of the above transformation function, a first transformation vector +.>
Figure BDA0002182924600000178
And a second transformation vector->
Figure BDA0002182924600000181
Of course, in other examples, similar but different transformation functions may be employed, such as selecting different activation functions, modifying the form and number of parameters in the above formula, and so forth.
Then, the first transformation vector
Figure BDA0002182924600000182
And a second transformation vector->
Figure BDA0002182924600000183
Respectively with the first associated node j 1 Intermediate vector c of (2) j1 And a second association node j 2 Intermediate vector c of (2) j2 Performing a combination operation, and obtaining a combination based on the operation resultVector.
Specifically, in one example, as shown in fig. 8, the above-mentioned combining operation may be to combine the first transformation vector
Figure BDA0002182924600000184
Intermediate vector c with first associated node j1 Performing bit multiplication (as shown by the symbol of +.in the figure) to obtain a vector v1; second transformation vector +.>
Figure BDA0002182924600000185
Intermediate vector c with second associated node j2 The bitwise multiplication is performed to obtain a vector v2, and then the vector v1 and the vector v2 are recombined, e.g. summed together, to obtain a combined vector.
In addition, the first input characteristic X of the node z(t) First association node j 1 Is (are) implicit vector h j1 And a second association node j 2 Is (are) implicit vector h j2 Respectively inputting a third transformation function and a fourth transformation function to respectively obtain a third transformation vector r z(t) And a fourth transformation vector O z(t)
Specifically, in the example shown in fig. 8, the third transformation function may be to first find the vector i z(t) And u z(t) And then i z(t) And u z(t) Performing bit-wise multiplication to obtain a third transformation vector r z(t) The method comprises the following steps:
r z(t) =i z(t) ⊙u z(t) (3)
wherein, the ". Iy represents bit wise multiplication.
More specifically, i z(t) A function of similar form and different parameters from the first transformation function may be used, for example, as calculated according to the following formula:
Figure BDA0002182924600000186
u z(t) can be calculated according to the following formula
Figure BDA0002182924600000187
The fourth transformation function can adopt functions similar to the first transformation function and different in parameters, so that a four-transformation vector O is obtained z(t)
Then, based on the combined vector and the third transformation vector r z(t) Determining an intermediate vector c of a target node z (t) z(t)
More specifically, in one example, the combined vector and the third transformed vector may be summed to obtain an intermediate vector c of z (t) z(t) . In other examples, the combined result may be used as the intermediate vector c of z (t) by other combinations, such as weighted summation, bit-wise multiplication z(t)
Furthermore, an intermediate vector c based on the node z (t) thus obtained z(t) And a fourth transformation vector O z(t) Determining an implicit vector h of the node z (t) z(t)
In the specific example shown in FIG. 8, the intermediate vector c may be z(t) Performing tanh function operation, and then performing transformation with a fourth transformation vector O z(t) By bit multiplication as an implicit vector h for the node z (t) z(t) The method comprises the following steps:
h z (t)=o z (t)⊙tanh(c z (t)) (6)
thus, according to the structure and algorithm shown in FIG. 8, the LSTM layer is based on the first input feature X of the currently processed target node z (t), the two associated nodes j pointed to by that node 1 And j 2 Respective intermediate vectors and implicit vectors, determining the intermediate vector c of the node z (t) z(t) And implicit vector h z(t)
In one embodiment, in iteratively processing each target node z (t) to determine its intermediate vector and its implicit vector, a time difference Δ between the interaction time corresponding to the currently processed target node z (t) and the interaction time corresponding to the pointed node is further introduced.
In operation, in one embodiment, the time difference Δ may also be incorporated into the first input feature X as part thereof. That is, the first input feature may also include the above-described time difference Δ. In another embodiment, the first input feature and the above-described time difference Δ may be used as two parallel input features. In this case, the form of the first to fourth variation functions may be maintained, but the parameters concerning the time difference may be introduced on the original basis. For example, for the first and second transformation functions described above, parameters regarding the time difference Δ may be introduced on the basis of equations (1) and (2), resulting in the following transformation functions:
Figure BDA0002182924600000191
Figure BDA0002182924600000192
other transformation functions may be similarly modified to introduce a factor of the time difference delta.
Through the LSTM layer shown in fig. 8, each node in the first sub-graph is sequentially and iteratively processed, so that an intermediate vector and an implicit vector of the first node of the root node can be obtained. In one embodiment, the implicit vector thus obtained may be used as the first implicit vector for the first subgraph output by the first neural network model.
According to one embodiment, to further enhance the effect, the first neural network model may include a plurality of LSTM layers, where an implicit vector of a node determined by a previous LSTM layer is input to a next LSTM layer as a first input feature of the node. That is, each LSTM layer still iteratively processes each node, determines an implicit vector and an intermediate vector of a currently processed target node i based on a first input feature of the target node i, an intermediate vector and an implicit vector of each of two associated nodes to which the target node i points, except that the bottommost LSTM layer uses an original event feature (optionally including node attribute features and/or time differences) of the target node i as the first input feature, a subsequent LSThe TM layer adopts the implicit vector h of the target node i determined by the previous LSTM layer i As a first input feature. In one embodiment, the plurality of LSTM layers are stacked in a residual network manner to form a first neural network model.
Where the first neural network model has a plurality of LSTM layers, it is understood that each LSTM layer may determine an implicit vector of the first node as a root node. In one embodiment, the first neural network model synthesizes the implicit vectors of the first node output by each of the LSTM layers to obtain a final implicit vector of the first node, i.e., a first implicit vector. More specifically, each implicit vector output by each LSTM layer may be weighted and combined, thereby obtaining a final first implicit vector. The weights of the weighted combination can be simply set to correspond to one weight factor for each layer, and the size of the weights is adjusted through training. Alternatively, the weighting factors may be determined by a more complex attention (attention) mechanism.
In another embodiment, the first neural network model may further use an implicit vector of the first node output by the last LSTM layer of the plurality of LSTM layers as the final first implicit vector.
In this way, in various manners, the first neural network model obtains an implicit vector of the first node, that is, a first implicit vector, based on the first subgraph taking the first node as a root node. Since the information of the time-series interaction history about the interaction object corresponding to the first node is reflected in the first subgraph, the thus obtained first implicit vector expresses not only the first node (e.g., u (t in fig. 4 and 6 6 ) The characteristics of the corresponding interactive object (such as David) can also express the influence of the interactive object in the past interactive event, so that the characteristics of the interactive object are comprehensively represented.
Similar to the above process, when the second sub-graph corresponding to the second node is input into the first neural network model, the first neural network model sequentially and iteratively processes each node according to the first input feature X (including the event feature) of each node in the second sub-graph and the pointing relationship of the connecting edges between the nodes, so as to determine the implicit vector corresponding to the second node or the second sub-graph, that is, the second implicit vector. Specifically, in the case of using an LSTM neural network model, the LSTM layer of the first neural network model sequentially uses each node in the second sub-graph as a target node, determines an implicit vector and an intermediate vector of each of two associated nodes pointed by the target node according to the first input feature X of the target node, and sequentially iterates until obtaining a root node of the second sub-graph, that is, an implicit vector of the second node. The process is similar to the processing of the first sub-graph and will not be described again.
Thus, through the first neural network model, based on the first sub-graph and the second sub-graph, a first implicit vector corresponding to the first node and a second implicit vector corresponding to the second node are obtained respectively.
On the other hand, in step 35, the four associated subgraphs obtained in step 33 are respectively input into a second neural network model to respectively obtain four associated implicit vectors; the second neural network determines an implicit vector corresponding to the input subgraph according to a second input feature Y of each node in the input subgraph and the pointing relation of connecting edges between the nodes, wherein the second input feature comprises event category labels of events where the nodes are located.
It can be seen that the second neural network model is similar to the processing logic of the first neural network model. The difference is that the feature (first input feature X) on which the first neural network model is based when processing each node in the subgraph includes an event feature of the event in which the node is located, and the feature (second input feature Y) on which the second neural network model is based when processing each node includes an event category label of the event in which the node is located. Accordingly, this requires that the event at which each node in the associative subgraph is located have a known event class label.
More specifically, the second neural network model also includes an LSTM layer. When a certain associated sub-graph is input into a second neural network model, the LSTM layer takes each node in the associated sub-graph as a target node in turn, and according to the second input feature Y of the target node, the intermediate vector and the hidden vector of each of the two associated nodes pointed by the target node are determined, and the hidden vector and the intermediate vector of the target node are determined, so that iterative processing is sequentially carried out until the hidden vector of the root node of the associated sub-graph is obtained.
FIG. 9 illustrates the structure of the LSTM layer in the second neural network model, according to one embodiment. In this embodiment, the structure and processing logic of this LSTM layer are identical to those of the LSTM layer of the first neural network model shown in fig. 8, except that the first input feature X in fig. 8 is replaced by the second input feature Y, where Y includes the event category label of the event in which the node is located. In operation, the event category label may first be subjected to embedding processing, and the resulting embedded vector expression of the label is used as the second input feature Y. In one embodiment, on the basis of the event category label, the second input feature Y may further include attribute features of the node itself and/or the time difference from the event in which the associated nodes are located.
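The embedding processing of the event category label mentioned above can be sketched as follows. The vocabulary size, embedding dimension and random initialization are illustrative assumptions; in a trained model the embedding table would be a learned parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_LABELS, EMB_DIM = 3, 4   # e.g. three risk-level labels (assumed sizes)
label_embedding = rng.normal(size=(NUM_LABELS, EMB_DIM))  # learned in practice

def second_input_feature(label, attr=None):
    y = label_embedding[label]          # embedded vector expression of the label
    if attr is not None:                # optionally append the node's attribute features
        y = np.concatenate([y, attr])
    return y
```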
The second neural network model may also include a plurality of LSTM layers shown in fig. 9, where the implicit vector results of the layers are fused to obtain an associated implicit vector corresponding to the associated subgraph.
In one embodiment, the second neural network model may have exactly the same structure (e.g., number of layers) and algorithm logic (e.g., the form of the transformation functions) as the first neural network model, differing only in its network parameters.
Thus, by inputting the four associated subgraphs obtained in step 33 into the second neural network model, the four associated hidden vectors corresponding to the four associated nodes can be obtained.
Next, in step 36, an event category of the current interaction event is determined according to the first hidden vector H1 and the second hidden vector H2 obtained in the first neural network model in step 34, and the four associated hidden vectors G1, G2, G3, G4 obtained in the second neural network model in step 35.
In one embodiment, the first and second implicit vectors H1, H2 and the four associated implicit vectors G1-G4 are input into a computing function having predetermined algorithm logic, and the event category of the current interaction event is determined according to the result of that function.
In another embodiment, the 6 hidden vectors are processed using an additional neural network to analyze the current interaction event.
In one example, the first implicit vector H1, the second implicit vector H2, and the four associated implicit vectors G1-G4 may be fused using a fully connected neural network to obtain an event representation vector of the current interaction event. Different fusion modes may be used in different embodiments, for example, direct fusion of the 6 implicit vectors by splicing (concatenation) or weighted combination. More complex fusion methods may also be used, for example, weighting and combining the first and second implicit vectors to obtain a first combined vector, weighting and combining the four associated implicit vectors to obtain a second combined vector, and then splicing the first combined vector with the second combined vector. The specific fusion mode can be set according to the service scenario of the current interaction event.
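The "more complex" fusion mode described above (weighted combination within each group, then splicing) can be sketched as follows. The equal weights are an illustrative assumption; in the actual fully connected neural network the combination would be learned rather than fixed.

```python
import numpy as np

def fuse(h1, h2, gs, w_h=(0.5, 0.5), w_g=(0.25, 0.25, 0.25, 0.25)):
    """Fuse the two participating-node vectors and four associated vectors.

    h1, h2: implicit vectors of the two participating nodes;
    gs: list of the four associated implicit vectors G1-G4;
    weights are hypothetical stand-ins for learned parameters.
    """
    first = w_h[0] * h1 + w_h[1] * h2              # first combined vector
    second = sum(w * g for w, g in zip(w_g, gs))   # second combined vector
    return np.concatenate([first, second])         # spliced event representation
```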
Thus, the event representation vector of the current interaction event is obtained through the fully connected neural network. It can be understood that this event representation vector comprehensively considers the hidden vectors of the two participating nodes and the four associated nodes of the current interaction event, and that the hidden vector of each node in turn reflects the time-sequence information and the event feature/label information of the historical interaction events. The event representation vector therefore comprehensively reflects the influence of the historical interaction events on the current nodes and the current event, and contains rich and deep feature information.
The event category of the current interaction event may then be predicted based on the event representation vector using a classifier.
At this time, the first neural network model, the second neural network model, the fully connected neural network and the classifier together constitute one integrated model.
FIG. 10 illustrates a schematic diagram of the structure of the integrated model in one embodiment. As shown in fig. 10, in the integrated model, the first neural network model is used to obtain the first implicit vector and the second implicit vector from the sub-graphs corresponding to the two participating nodes of the current interaction event, and the second neural network model is used to obtain the four associated hidden vectors from the subgraphs corresponding to the four associated nodes of the current interaction event. The fully connected neural network is connected to the first and second neural network models; it obtains the first implicit vector, the second implicit vector and the four associated implicit vectors from them and fuses these into an event representation vector. The classifier is connected to the fully connected neural network and predicts the event category of the current interaction event according to the event representation vector.
In one embodiment, the current interaction event is an interaction event whose event category is unknown. Thus, through step 36, the event category of the current interaction event can be analyzed, and the subsequent processing mode can then be determined. For example, in one example, the current interaction event is a transaction initiated by user A with object B. By analyzing the transaction event, the event category may be predicted, such as whether it is a suspected cash-out fraudulent transaction, the risk level of the transaction, and so on. Based on the event category so predicted, it may be decided whether the transaction is allowed, or whether further verification of the payment is requested of user A.
In another embodiment, the current interaction event may also be an event with an event category label. In such a case, the foregoing steps 31-36 may be part of the learning and training process of the neural network models, and the training process further includes the following steps.
A current category label for a current interaction event may be obtained. The method and meaning of obtaining the category label are as described above, and are not repeated.
Then, a prediction loss is determined based on the event category determined in step 36 and the current category label obtained. The prediction loss may be obtained in the form of a loss function such as an L2 error or a cross entropy.
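The cross-entropy form of such a prediction loss can be sketched as follows, assuming the model outputs one score (logit) per candidate event category; function names are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(logits, label):
    # Negative log-probability that the model assigns to the category label.
    return -np.log(softmax(logits)[label])
```

The loss shrinks as the probability assigned to the true category label grows, which is what the gradient-based training described below minimizes.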
The first neural network model and the second neural network model described above may then be trained based on the prediction loss. Specifically, gradient descent, backpropagation and similar methods may be adopted to adjust the parameters in the first and second neural networks, for example the parameters in each transformation function, so as to update the two models until the accuracy of the category prediction for the current interaction event reaches a certain requirement. This approach is suitable for cases where step 36 is implemented by a predetermined computational function, or where the fully connected neural network and the classifier have already been trained.
In another embodiment, the fully connected neural network used to obtain the event representation vector and the classifier used to perform the classification are also to be trained. In this case, the first neural network model, the second neural network model, the fully connected neural network and the classifier may be jointly trained according to the prediction loss determined above; that is, the entire integrated model shown in fig. 10 is trained. Specifically, the parameters of each model part in the integrated model can be adjusted and updated so as to reduce the prediction loss, until the accuracy of the category prediction for the current interaction event reaches a certain requirement. The integrated model obtained through training can then directly perform category analysis on events to be analyzed.
In view of the above, in the solution of the embodiments of the present specification, a dynamic interaction graph is constructed based on a dynamic interaction sequence, where the dynamic interaction graph reflects the time-sequence relationship of the interaction events and the interaction effects transferred between interaction objects through the interaction events. For a current interaction event to be analyzed, the participating nodes of the current interaction event and the associated nodes of those participating nodes are obtained from the dynamic interaction graph, the sub-graph parts rooted at each participating node and associated node are determined, and these sub-graph parts are input into the neural network models to obtain the implicit vector expression of each participating node and associated node. The implicit vectors obtained in this way incorporate the event features or category labels of the historical interaction events as well as the influence of the historical interaction events on the nodes, so that the deep characteristics of the nodes can be comprehensively expressed. Based on the respective implicit vectors of the participating nodes and the associated nodes related to the current interaction event, the current interaction event can be expressed and analyzed more comprehensively and accurately.
According to an embodiment of another aspect, an apparatus for processing interaction sequence data is provided, which may be deployed in any device, platform or cluster of devices having computing, processing capabilities. FIG. 11 illustrates a schematic block diagram of an apparatus for processing an interactivity event, according to one embodiment. As shown in fig. 11, the processing device 110 includes:
an interaction map obtaining unit 111 configured to obtain a dynamic interaction map constructed according to a dynamic interaction sequence, wherein the dynamic interaction sequence includes a plurality of interaction events arranged in a time sequence, each interaction event including two objects where interaction behavior occurs, an event feature, and an interaction time; at least some of the plurality of interactivity events have event category labels; the dynamic interaction graph comprises a plurality of nodes representing all objects in all events, wherein any node i points to two associated nodes through a connecting edge, and the two associated nodes are two nodes corresponding to the last interaction event in which the object represented by the node i participates;
the node determining unit 112 is configured to determine, in the dynamic interaction graph, a first node and a second node corresponding to a current interaction event to be analyzed, and four related associated nodes, where the four associated nodes include two associated nodes pointed by the first node and two associated nodes pointed by the second node;
A sub-graph determining unit 113, configured to determine, in the dynamic interaction graph, respective current sub-graphs with the first node, the second node, and the four associated nodes as current root nodes, so as to obtain a first sub-graph, a second sub-graph, and four associated sub-graphs, where the current sub-graphs include nodes in a predetermined range that are reached from the current root node via a connection edge;
a first processing unit 114, configured to input the first sub-graph and the second sub-graph into a first neural network model respectively, to obtain a corresponding first implicit vector and a corresponding second implicit vector respectively; the first neural network determines an implicit vector corresponding to the input subgraph according to first input features of all nodes in the input subgraph and pointing relations of connecting edges between the nodes, wherein the first input features comprise event features of events where the nodes are located;
the second processing unit 115 is configured to input the four associated subgraphs into a second neural network model respectively to obtain four associated implicit vectors respectively; the second neural network determines an implicit vector corresponding to the input subgraph according to second input features of all nodes in the input subgraph and the pointing relation of connecting edges between the nodes, wherein the second input features comprise event category labels of events where the nodes are located;
A prediction unit 116, configured to predict an event category of the current interaction event according to the first implicit vector, the second implicit vector and the four associated implicit vectors.
In one embodiment, the interaction map acquisition unit 111 is configured to:
acquiring an existing dynamic interaction diagram constructed based on an existing interaction sequence;
acquiring a newly added interaction event;
the first object and the second object related to the newly added interaction event are used as two newly added nodes and added into the existing dynamic interaction graph;
for each newly added node, if two associated nodes exist, a connecting edge pointing to the two associated nodes from the newly added node is added.
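The graph-update steps listed above can be sketched with hypothetical data structures: a map from each object to the node pair of its last interaction event, and a map from each node to the pair of associated nodes it points to. All names are illustrative.

```python
def add_event(graph, event_id, obj_a, obj_b):
    """Add a new interaction event between obj_a and obj_b.

    graph['last']:  {object -> (node, node) of its previous event}
    graph['edges']: {node -> (node, node) it points to}
    A node is identified here by the (event, object) pair it represents.
    """
    node_a, node_b = (event_id, obj_a), (event_id, obj_b)
    for obj, node in ((obj_a, node_a), (obj_b, node_b)):
        prev = graph['last'].get(obj)      # last event this object took part in
        if prev is not None:               # connect only if associated nodes exist
            graph['edges'][node] = prev
    for obj in (obj_a, obj_b):             # both new nodes become the objects' latest
        graph['last'][obj] = (node_a, node_b)
    return graph
```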
According to one embodiment, in the subgraph determined by the subgraph determining unit 113, nodes within a predetermined range reached via the connection edge include: nodes reached via connecting edges within a preset number K; and/or nodes which are reachable via the connection edge and whose interaction time is within a preset time range.
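The predetermined-range rule can be sketched as a bounded traversal. Only the K-hop variant is shown; the time-range variant would additionally filter nodes by interaction time. `edges` is a hypothetical adjacency map.

```python
def subgraph_within_k(edges, root, k):
    """Return all nodes reachable from `root` via at most k connecting edges.

    edges: {node -> (left, right) associated nodes it points to}.
    """
    reached, frontier = {root}, [root]
    for _ in range(k):                     # expand one hop per round
        nxt = []
        for n in frontier:
            for m in edges.get(n, ()):     # leaves point to nothing
                if m not in reached:
                    reached.add(m)
                    nxt.append(m)
        frontier = nxt
    return reached
```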
According to one embodiment, the first input features of each node processed by the first neural network model further include attribute features of the node, and/or a time difference between a first interaction time of an interaction event where the node is located and a second interaction time of an interaction event where two associated nodes are located.
In a specific embodiment, the interaction event is a transaction event, and the event features include at least one of the following: transaction type, transaction amount, transaction channel; the event category label is a transaction risk level label.
According to one embodiment, the first neural network comprises an LSTM layer for:
and sequentially taking each node in the first subgraph as a target node, and determining the hidden vector and the intermediate vector of the target node according to the intermediate vector and the hidden vector of each of the two associated nodes pointed by the target node according to the first input characteristics of the target node until the hidden vector of the first node is obtained.
Further, in a specific embodiment, the two association nodes pointed by the target node are a first association node and a second association node; the LSTM layer is specifically used for:
respectively inputting the first input characteristic of the target node, the implicit vector of the first association node and the implicit vector of the second association node into a first transformation function and a second transformation function which have the same algorithm and different parameters to respectively obtain a first transformation vector and a second transformation vector;
combining the first transformation vector and the second transformation vector with the intermediate vector of the first association node and the intermediate vector of the second association node respectively, and obtaining a combined vector based on an operation result;
Respectively inputting the first input characteristic of the target node, the implicit vector of the first association node and the implicit vector of the second association node into a third transformation function and a fourth transformation function to respectively obtain a third transformation vector and a fourth transformation vector;
determining an intermediate vector of the target node based on the combined vector and a third transformation vector;
an implicit vector of the target node is determined based on the intermediate vector and a fourth transformation vector of the target node.
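One way to read the five steps above is as a two-child tree-LSTM cell: the first and second transformation functions act as per-child forget gates (same form, different parameters), the third supplies the input contribution, and the fourth acts as an output gate. The following numpy sketch is a hedged interpretation under that reading; all parameter shapes, initializations and nonlinearities are assumptions, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4                                    # feature/hidden dimension (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_transform():
    # Same algorithmic form for every transformation function, different parameters.
    W = rng.normal(scale=0.1, size=(D, 3 * D))
    b = np.zeros(D)
    return lambda x, h1, h2: W @ np.concatenate([x, h1, h2]) + b

f1, f2, g_in, g_out = (make_transform() for _ in range(4))

def cell(x, c1, h1, c2, h2):
    """x: first input feature; (c1, h1), (c2, h2): intermediate/implicit
    vectors of the first and second associated nodes."""
    v1 = sigmoid(f1(x, h1, h2))          # first transformation vector
    v2 = sigmoid(f2(x, h1, h2))          # second transformation vector
    combined = v1 * c1 + v2 * c2         # combine with the two intermediate vectors
    v3 = np.tanh(g_in(x, h1, h2))        # third transformation vector
    c = combined + v3                    # intermediate vector of the target node
    v4 = sigmoid(g_out(x, h1, h2))       # fourth transformation vector
    h = v4 * np.tanh(c)                  # implicit vector of the target node
    return c, h
```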
According to one embodiment, the first neural network model includes a plurality of LSTM layers, wherein an implicit vector of the target node determined by a previous LSTM layer is input to a next LSTM layer as a first input feature of the target node.
Further, in one embodiment, the first neural network model synthesizes the implicit vectors of the first node output by each of the LSTM layers to obtain the first implicit vector.
Alternatively, in another embodiment, the first neural network model takes as the first implicit vector an implicit vector of the first node output by a last LSTM layer of the plurality of LSTM layers.
According to one implementation, the first neural network model and the second neural network model are neural network models of the same structure and algorithm and different parameters.
In one embodiment, prediction unit 116 is configured to:
fusing the first implicit vector, the second implicit vector and the four associated implicit vectors by using a fully connected neural network to obtain an event representation vector of the current interaction event;
and determining the event category of the current interaction event according to the event representation vector by using a classifier.
According to one embodiment, each neural network model is trained by the model training unit 117. In different embodiments, the model training unit 117 may be located outside or inside the apparatus 110.
In one embodiment, model training unit 117 is configured to:
acquiring a current category label of the current interaction event;
determining a predicted loss based at least on the determined event category and the current category label;
and jointly training the first neural network model, the second neural network model, the fully connected neural network and the classifier according to the prediction loss.
In another embodiment, the model training unit 117 is configured to:
acquiring a current category label of the current interaction event;
determining a predicted loss based at least on the determined event category and the current category label;
And training the first neural network model and the second neural network model according to the prediction loss.
By means of the above apparatus, based on the dynamic interaction graph, the participating nodes and associated nodes of an interaction event are processed using the neural network models, so that the interaction event is comprehensively analyzed and predicted.
According to an embodiment of another aspect, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in connection with fig. 3.
According to an embodiment of yet another aspect, there is also provided a computing device including a memory having executable code stored therein and a processor that, when executing the executable code, implements the method described in connection with fig. 3.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The foregoing embodiments have been provided for the purpose of illustrating the general principles of the present invention in further detail, and are not to be construed as limiting the scope of the invention, but are merely intended to cover any modifications, equivalents, improvements, etc. based on the teachings of the invention.

Claims (30)

1. A computer-implemented method of processing an interaction event, the method comprising:
acquiring a dynamic interaction diagram constructed according to a dynamic interaction sequence, wherein the dynamic interaction sequence comprises a plurality of interaction events which are arranged according to a time sequence, and each interaction event comprises two objects for generating interaction behaviors, event characteristics and interaction time; at least some of the plurality of interactivity events have event category labels; the dynamic interaction graph comprises a plurality of nodes representing all objects in all events, wherein any node i points to two associated nodes through a connecting edge, and the two associated nodes are two nodes corresponding to the last interaction event in which the object represented by the node i participates;
In the dynamic interaction graph, determining a first node and a second node corresponding to a current interaction event to be analyzed and related four associated nodes, wherein the four associated nodes comprise two associated nodes pointed by the first node and two associated nodes pointed by the second node;
respectively taking the first node, the second node and the four associated nodes as current root nodes, and determining respective corresponding current subgraphs in the dynamic interaction graph so as to respectively obtain a first subgraph, a second subgraph and four associated subgraphs, wherein the current subgraphs comprise nodes which start from the current root nodes and reach a preset range through connecting edges;
respectively inputting the first sub-graph and the second sub-graph into a first neural network model to respectively obtain a corresponding first hidden vector and a corresponding second hidden vector; the first neural network determines an implicit vector corresponding to the input subgraph according to first input features of all nodes in the input subgraph and pointing relations of connecting edges between the nodes, wherein the first input features comprise event features of events where the nodes are located;
inputting the four associated subgraphs into a second neural network model respectively to obtain four associated implicit vectors respectively; the second neural network determines an implicit vector corresponding to the input subgraph according to second input features of all nodes in the input subgraph and the pointing relation of connecting edges between the nodes, wherein the second input features comprise event category labels of events where the nodes are located;
And predicting the event category of the current interaction event according to the first implicit vector, the second implicit vector and the four associated implicit vectors.
2. The method of claim 1, wherein the obtaining a dynamic interaction map constructed from a dynamic interaction sequence comprises:
acquiring an existing dynamic interaction diagram constructed based on an existing interaction sequence;
acquiring a newly added interaction event;
the first object and the second object related to the newly added interaction event are used as two newly added nodes and added into the existing dynamic interaction graph;
for each newly added node, if two associated nodes exist, a connecting edge pointing to the two associated nodes from the newly added node is added.
3. The method of claim 1, wherein nodes within a predetermined range reached via the connecting edge comprise:
nodes reached via connecting edges within a preset number K; and/or
Nodes which are accessible via the connection edge and whose interaction time is within a predetermined time range.
4. The method of claim 1, wherein the first input feature further comprises an attribute feature of a node and/or a time difference between a first interaction time of an interaction event at which the node is located and a second interaction time of an interaction event at which two associated nodes are located.
5. The method of claim 1, wherein the interaction event is a transaction event, the event characteristics comprising at least one of: transaction type, transaction amount, transaction channel; the event category label is a transaction risk level label.
6. The method of claim 1, wherein the first neural network is an LSTM-based neural network comprising at least one LSTM layer for at least:
and sequentially taking each node in the first subgraph as a target node, and determining the hidden vector and the intermediate vector of the target node according to the intermediate vector and the hidden vector of each of the two associated nodes pointed by the target node according to the first input characteristics of the target node until the hidden vector of the first node is obtained.
7. The method of claim 6, wherein the two associated nodes to which the target node points are a first associated node and a second associated node; the determining the implicit vector and the intermediate vector of the target node includes:
respectively inputting the first input characteristic of the target node, the implicit vector of the first association node and the implicit vector of the second association node into a first transformation function and a second transformation function which have the same algorithm and different parameters to respectively obtain a first transformation vector and a second transformation vector;
Combining the first transformation vector and the second transformation vector with the intermediate vector of the first association node and the intermediate vector of the second association node respectively, and obtaining a combined vector based on an operation result;
respectively inputting the first input characteristic of the target node, the implicit vector of the first association node and the implicit vector of the second association node into a third transformation function and a fourth transformation function to respectively obtain a third transformation vector and a fourth transformation vector;
determining an intermediate vector of the target node based on the combined vector and a third transformation vector;
an implicit vector of the target node is determined based on the intermediate vector and a fourth transformation vector of the target node.
8. The method of claim 6, wherein the first neural network model comprises a plurality of LSTM layers, wherein an implicit vector of the target node determined by a previous LSTM layer is input to a next LSTM layer as a first input feature of the target node.
9. The method of claim 8, wherein the first neural network model synthesizes the implicit vectors of the first node output by each of the plurality of LSTM layers to obtain the first implicit vector.
10. The method of claim 8, wherein the first neural network model takes as the first implicit vector an implicit vector of the first node output by a last LSTM layer of the plurality of LSTM layers.
11. The method of any one of claims 1 and 6-10, wherein the first and second neural network models are neural network models of identical structure and algorithm, different parameters.
12. The method of claim 1, wherein determining the event category of the current interaction event from the first implicit vector, the second implicit vector, and four associated implicit vectors comprises:
fusing the first implicit vector, the second implicit vector and the four associated implicit vectors by using a fully connected neural network to obtain an event representation vector of the current interaction event;
and determining the event category of the current interaction event according to the event representation vector by using a classifier.
13. The method of claim 12, further comprising:
acquiring a current category label of the current interaction event;
determining a predicted loss based at least on the determined event category and the current category label;
And jointly training the first neural network model, the second neural network model, the fully connected neural network and the classifier according to the prediction loss.
14. The method of claim 1, further comprising:
acquiring a current category label of the current interaction event;
determining a predicted loss based at least on the determined event category and the current category label;
and training the first neural network model and the second neural network model according to the prediction loss.
15. An apparatus for processing an interaction event, the apparatus comprising:
the interactive image acquisition unit is configured to acquire a dynamic interactive image constructed according to a dynamic interactive sequence, wherein the dynamic interactive sequence comprises a plurality of interactive events which are arranged according to a time sequence, and each interactive event comprises two objects for generating interactive behaviors, event characteristics and interactive time; at least some of the plurality of interactivity events have event category labels; the dynamic interaction graph comprises a plurality of nodes representing all objects in all events, wherein any node i points to two associated nodes through a connecting edge, and the two associated nodes are two nodes corresponding to the last interaction event in which the object represented by the node i participates;
The node determining unit is configured to determine a first node and a second node corresponding to a current interaction event to be analyzed and related four associated nodes in the dynamic interaction graph, wherein the four associated nodes comprise two associated nodes pointed by the first node and two associated nodes pointed by the second node;
the sub-graph determining unit is configured to determine respective corresponding current sub-graphs in the dynamic interaction graph by taking the first node, the second node and the four associated nodes as current root nodes respectively, so as to obtain the first sub-graph, the second sub-graph and the four associated sub-graphs respectively, wherein the current sub-graphs comprise nodes which start from the current root nodes and reach a preset range through connecting edges;
the first processing unit is configured to input the first sub-graph and the second sub-graph into a first neural network model respectively to obtain a corresponding first implicit vector and a corresponding second implicit vector respectively; the first neural network determines an implicit vector corresponding to the input subgraph according to first input features of all nodes in the input subgraph and pointing relations of connecting edges between the nodes, wherein the first input features comprise event features of events where the nodes are located;
The second processing unit is configured to input the four association subgraphs into a second neural network model respectively to obtain four association hidden vectors respectively; the second neural network determines an implicit vector corresponding to the input subgraph according to second input features of all nodes in the input subgraph and the pointing relation of connecting edges between the nodes, wherein the second input features comprise event category labels of events where the nodes are located;
and the prediction unit is configured to predict the event category of the current interaction event according to the first implicit vector, the second implicit vector and the four associated implicit vectors.
16. The apparatus of claim 15, wherein the interaction map acquisition unit is configured to:
acquiring an existing dynamic interaction diagram constructed based on an existing interaction sequence;
acquiring a newly added interaction event;
the first object and the second object related to the newly added interaction event are used as two newly added nodes and added into the existing dynamic interaction graph;
for each newly added node, if two associated nodes exist, a connecting edge pointing to the two associated nodes from the newly added node is added.
17. The apparatus of claim 15, wherein the nodes within a predetermined range reached via connecting edges comprise:
nodes reachable via at most a preset number K of connecting edges; and/or
nodes reachable via connecting edges whose interaction time is within a predetermined time range.
18. The apparatus of claim 15, wherein the first input features further comprise an attribute feature of the node, and/or a time difference between the first interaction time of the interaction event in which the node is located and the second interaction time of the interaction events in which its two associated nodes are located.
19. The apparatus of claim 15, wherein the interaction event is a transaction event, the event features comprising at least one of: transaction type, transaction amount, transaction channel; and the event category label is a transaction risk level label.
20. The apparatus of claim 15, wherein the first neural network is an LSTM-based neural network comprising at least one LSTM layer configured to:
sequentially taking each node in the first sub-graph as a target node, and determining the implicit vector and the intermediate vector of the target node according to the first input features of the target node and the respective intermediate vectors and implicit vectors of the two associated nodes pointed to by the target node, until the implicit vector of the first node is obtained.
21. The apparatus of claim 20, wherein the two associated nodes pointed to by the target node are a first associated node and a second associated node, and the LSTM layer is specifically used for:
inputting the first input features of the target node, the implicit vector of the first associated node and the implicit vector of the second associated node into a first transformation function and a second transformation function, which have the same algorithm but different parameters, to obtain a first transformation vector and a second transformation vector, respectively;
combining the first transformation vector and the second transformation vector with the intermediate vector of the first associated node and the intermediate vector of the second associated node, respectively, and obtaining a combined vector based on the result of the operation;
inputting the first input features of the target node, the implicit vector of the first associated node and the implicit vector of the second associated node into a third transformation function and a fourth transformation function to obtain a third transformation vector and a fourth transformation vector, respectively;
determining the intermediate vector of the target node based on the combined vector and the third transformation vector; and
determining the implicit vector of the target node based on the intermediate vector of the target node and the fourth transformation vector.
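One possible reading of the cell computation in claim 21, modeled on a child-sum-style tree LSTM, is sketched below. The gate roles (sigmoid gates for the first, second and fourth transformations, a tanh candidate for the third) and the additive combination are assumptions; the claim itself only fixes which vectors feed which transformation function.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TwoParentCell:
    def __init__(self, d_in, d_h, seed=0):
        rng = np.random.default_rng(seed)
        # four transformation functions: same algorithm (an affine map on the
        # concatenation [x, h1, h2]), different parameters
        self.params = [
            (rng.normal(scale=0.1, size=(d_h, d_in + 2 * d_h)), np.zeros(d_h))
            for _ in range(4)
        ]

    def _transform(self, i, x, h1, h2):
        W, b = self.params[i]
        return W @ np.concatenate([x, h1, h2]) + b

    def step(self, x, h1, c1, h2, c2):
        v1 = _sigmoid(self._transform(0, x, h1, h2))  # first transformation vector
        v2 = _sigmoid(self._transform(1, x, h1, h2))  # second transformation vector
        combined = v1 * c1 + v2 * c2  # combine with the parents' intermediate vectors
        v3 = np.tanh(self._transform(2, x, h1, h2))   # third transformation vector
        v4 = _sigmoid(self._transform(3, x, h1, h2))  # fourth transformation vector
        c = combined + v3      # intermediate vector of the target node
        h = v4 * np.tanh(c)    # implicit vector of the target node
        return h, c
```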
22. The apparatus of claim 20, wherein the first neural network model comprises a plurality of LSTM layers, wherein an implicit vector of the target node determined by a previous LSTM layer is input to a next LSTM layer as a first input feature of the target node.
23. The apparatus of claim 22, wherein the first neural network model synthesizes implicit vectors of the first node output by each of the plurality of LSTM layers to obtain the first implicit vector.
24. The apparatus of claim 22, wherein the first neural network model takes as the first implicit vector an implicit vector of the first node output by a last LSTM layer of the plurality of LSTM layers.
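Claims 22 to 24 describe stacking: each LSTM layer maps per-node input features to per-node implicit vectors, and a layer's output becomes the next layer's input features. A minimal sketch, with layers modeled as plain callables and an illustrative `synthesize` hook for the claim-23 variant:

```python
def stack_layers(layers, x0, synthesize=None):
    """Run stacked layers over per-node features x0 (a dict node -> vector)."""
    outputs, x = [], x0
    for layer in layers:
        x = layer(x)           # implicit vectors feed the next layer (claim 22)
        outputs.append(x)
    if synthesize is None:
        return outputs[-1]     # claim 24: take the last layer's output
    return synthesize(outputs) # claim 23: synthesize all layers' outputs
```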
25. The apparatus of any of claims 15 and 20-24, wherein the first neural network model and the second neural network model have the same structure and algorithm but different parameters.
26. The apparatus of claim 15, wherein the prediction unit is configured to:
fusing the first implicit vector, the second implicit vector and the four associated implicit vectors by using a fully connected neural network to obtain an event representation vector of the current interaction event; and
determining, by using a classifier, the event category of the current interaction event according to the event representation vector.
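The prediction unit of claim 26 can be sketched as one fully connected fusion layer followed by a softmax classifier. The weight shapes, the `tanh` activation and the softmax choice are assumptions; the claim only requires a fully connected network for fusion and a classifier over the event representation vector.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_event_category(vectors, W_fuse, b_fuse, W_cls, b_cls):
    """vectors: the first, second and four associated implicit vectors."""
    x = np.concatenate(vectors)                # fuse the six implicit vectors
    event_repr = np.tanh(W_fuse @ x + b_fuse)  # event representation vector
    return softmax(W_cls @ event_repr + b_cls) # distribution over event categories
```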
27. The apparatus of claim 26, comprising a model training unit configured to:
acquiring a current category label of the current interaction event;
determining a prediction loss based at least on the determined event category and the current category label; and
jointly training the first neural network model, the second neural network model, the fully connected neural network and the classifier according to the prediction loss.
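A concrete instance of the prediction loss in claims 27/28 is cross-entropy between the predicted category distribution and the current category label. The cross-entropy choice is an assumption; the claims only require a loss determined from the predicted category and the label, used to train all model components jointly.

```python
import numpy as np

def prediction_loss(probs, label):
    """probs: predicted category distribution; label: current category label
    (an integer index). Small epsilon guards against log(0)."""
    return -np.log(probs[label] + 1e-12)
```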
28. The apparatus of claim 15, comprising a model training unit configured to:
acquiring a current category label of the current interaction event;
determining a prediction loss based at least on the determined event category and the current category label; and
training the first neural network model and the second neural network model according to the prediction loss.
29. A computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of any of claims 1-14.
30. A computing device comprising a memory and a processor, wherein the memory has executable code stored therein, which when executed by the processor, implements the method of any of claims 1-14.
CN201910803312.9A 2019-08-28 2019-08-28 Method and device for processing interaction event Active CN110689110B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910803312.9A CN110689110B (en) 2019-08-28 2019-08-28 Method and device for processing interaction event


Publications (2)

Publication Number Publication Date
CN110689110A CN110689110A (en) 2020-01-14
CN110689110B true CN110689110B (en) 2023-06-02

Family

ID=69108428


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582873B (en) * 2020-05-07 2023-01-17 支付宝(杭州)信息技术有限公司 Method and device for evaluating interaction event, electronic equipment and storage medium
CN111476223B (en) * 2020-06-24 2020-09-22 支付宝(杭州)信息技术有限公司 Method and device for evaluating interaction event
CN112085279B (en) * 2020-09-11 2022-09-06 支付宝(杭州)信息技术有限公司 Method and device for training interactive prediction model and predicting interactive event
CN112541129B (en) * 2020-12-06 2023-05-23 支付宝(杭州)信息技术有限公司 Method and device for processing interaction event

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101600167A (en) * 2008-06-06 2009-12-09 Cienet Technologies (Beijing) Co., Ltd. Mobile-application-oriented adaptive interaction system and implementation method thereof
CN103902966A (en) * 2012-12-28 2014-07-02 Peking University Video interaction event analysis method and device based on sequential spatio-temporal cube features

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8463721B2 (en) * 2010-08-05 2013-06-11 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for recognizing events
US9535960B2 (en) * 2014-04-14 2017-01-03 Microsoft Corporation Context-sensitive search using a deep learning model
US10706840B2 (en) * 2017-08-18 2020-07-07 Google Llc Encoder-decoder models for sequence to sequence mapping
US11379715B2 (en) * 2017-12-15 2022-07-05 Meta Platforms, Inc. Deep learning based distribution of content items describing events to users of an online system
CN108764011B (en) * 2018-03-26 2021-05-18 青岛科技大学 Group identification method based on graphical interaction relation modeling



Similar Documents

Publication Publication Date Title
CN110689110B (en) Method and device for processing interaction event
CN110543935B (en) Method and device for processing interactive sequence data
WO2021027260A1 (en) Method and device for processing interaction sequence data
CN111210008B (en) Method and device for processing interactive data by using LSTM neural network model
CN110555469B (en) Method and device for processing interactive sequence data
CN112000819B (en) Multimedia resource recommendation method and device, electronic equipment and storage medium
US20190272553A1 (en) Predictive Modeling with Entity Representations Computed from Neural Network Models Simultaneously Trained on Multiple Tasks
CN110490274B (en) Method and device for evaluating interaction event
CN111242283B (en) Training method and device for evaluating self-encoder of interaction event
TW202008264A (en) Method and apparatus for recommendation marketing via deep reinforcement learning
CN111815415A (en) Commodity recommendation method, system and equipment
CN111695965B (en) Product screening method, system and equipment based on graph neural network
CN111476223B (en) Method and device for evaluating interaction event
CN111523682B (en) Method and device for training interactive prediction model and predicting interactive object
CN112085293B (en) Method and device for training interactive prediction model and predicting interactive object
CN112580789B (en) Training graph coding network, and method and device for predicting interaction event
CN110705688A (en) Neural network system, method and device for risk assessment of operation event
CN113610610B (en) Session recommendation method and system based on graph neural network and comment similarity
WO2021139513A1 (en) Method and apparatus for processing interaction sequence data
CN113449176A (en) Recommendation method and device based on knowledge graph
CN112085279B (en) Method and device for training interactive prediction model and predicting interactive event
CN115564532A (en) Training method and device of sequence recommendation model
JP6558860B2 (en) Estimation device, prediction device, method, and program
CN115858766B (en) Interest propagation recommendation method and device, computer equipment and storage medium
CN113704626B (en) Conversation social recommendation method based on reconstructed social network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201012

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: Fourth Floor, P.O. Box 847, Capital Building, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

Effective date of registration: 20201012

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

GR01 Patent grant