Disclosure of Invention
In view of the above, the present invention provides a dynamic social user alignment method based on a graph convolutional network (GCN).
In order to achieve the above purpose, the invention provides the following technical scheme:
A dynamic social user alignment method based on a graph convolutional network (GCN) comprises the following steps:
acquiring network structure information and tag information of anchor-node users from a plurality of social networks, and fusing them into a combined network according to the rules of the combined network;
intercepting network snapshots t_1 to t_n in time order as the social network platforms change over time, forming them into combined networks Z_1 to Z_n, and obtaining the adjacency matrices A_1, A_2, ..., A_n of the combined networks;
inputting A_1 into a GCN layer to obtain a basic representation and hidden state matrix of the whole network; to capture the dynamics of the network and improve the training efficiency of the model, the difference between the matrices at times t_n and t_(n-1) is then used as the input of the GCN layer;
sequentially inputting the hidden state matrices H_1, ΔH_2, ..., ΔH_n obtained from the GCN layer into GRU layers, and obtaining through the GRU layers a hidden state matrix h_n that stores the time information of the network;
outputting a training result through a fully connected layer, in which a loss function is defined to perform binary classification of the nodes: a node with classification result 1 is a potential anchor node, and a node with classification result 0 is a non-anchor node;
a node classified as 1 in the combined network corresponds to a pair of potential anchor nodes in a source network and a target network, i.e. network accounts belonging to the same person in reality.
Optionally, the fusing into a combined network according to the rules of the combined network specifically includes:
setting the source network as network X, the target network as network Y, and the combined network as network Z; G_X, G_Y, G_Z are undirected graphs representing networks X, Y, Z respectively; G_X = (V_X, E_X), where V_X is the set of nodes in network X and E_X is the set of edges in network X; G_Y = (V_Y, E_Y), where V_Y is the set of nodes in network Y and E_Y is the set of edges in network Y; G_Z = (V_Z, E_Z), where V_Z is the set of nodes in network Z and E_Z is the set of edges in network Z;
a node v_i u_j in the combined network is composed of v_i ∈ V_X and u_j ∈ V_Y;
if edges (v_i, v_k) and (u_j, u_l) exist in G_X and G_Y respectively, then the edge (v_i u_j, v_k u_l) exists in the combined network Z.
Optionally, the operation formula of the GCN layer is as follows:
H_n = σ(Â H_(n-1) W)
where H_n is the hidden state matrix generated by the n-th GCN layer; Â = D̃^(-1/2) Ã D̃^(-1/2) is the regularized adjacency matrix with self-loops, a deformation of the adjacency matrix; Ã = A + I_N is the adjacency matrix with self-loops; D̃_ii = Σ_j Ã_ij is the degree of node i; W is the weight matrix trained by the GCN layer; and σ is the ReLU activation function.
Optionally, in the dynamic network, as the network dynamically changes over time, network X and network Y are each divided into n network snapshots according to the time sequence T_1, T_2, ..., T_n; a combined network Z is generated from the snapshots of network X and network Y, yielding the adjacency matrices A_1, A_2, ..., A_n of the combined network Z. To capture the network dynamics and improve model training efficiency, A_1 is first input into the GCN layer to obtain a basic representation of the whole network, and then the difference ΔA_n = A_n - A_(n-1) between the matrices at times t_n and t_(n-1) is used as the input of the GCN layer; the matrix inputs of the GCN layer are thus A_1, ΔA_2, ΔA_3, ..., ΔA_n.
Optionally, the GRU layer is used to represent the time information of the network's dynamic changes, and the formula of the GRU layer is as follows:
h_n = GRU(h_(n-1), H_n)
where h_n is the hidden state matrix output by the n-th GRU, and H_n is the hidden state matrix generated by the n-th GCN layer.
Optionally, the forward propagation formulas of the GRU layer are as follows:
r_t = σ(W_r x_t + U_r h_(t-1))
z_t = σ(W_z x_t + U_z h_(t-1))
h̃_t = tanh(W_h x_t + U_h (r_t ⊙ h_(t-1)))
h_t = (1 - z_t) ⊙ h_(t-1) + z_t ⊙ h̃_t
where r_t is the reset gate of the GRU and z_t is the update gate of the GRU; W and U are weight matrices learned in training; x_t is the input at the current time t; h_(t-1) is the hidden layer state at time t-1; σ is the sigmoid function; h̃_t is the candidate hidden state computed with the reset gate; h_t is the hidden state updated by the update gate; and ⊙ is the Hadamard product, which multiplies corresponding elements of the matrices.
Optionally, the nodes are classified by the fully connected layer: a node classified as 1 is a potential anchor node pair, and a node classified as 0 is a non-anchor node. A loss function for this binary classification is defined at the fully connected layer, where f(h_v) is the function representing the entire deep model and y_v is the node classification label.
The invention has the following beneficial effects: the invention completes the user alignment task for dynamic networks by constructing a deep neural network model. Compared with traditional models, it can effectively store multidimensional information such as network structure information, attribute information, and time information; it can still achieve good accuracy when label information is missing; and it can effectively address the retraining problem and the single-source-of-training-information problem of dynamic network user alignment models, with a certain improvement in model efficiency.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are for the purpose of illustrating the invention only and are not intended to limit it; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged, or reduced, and do not represent the size of an actual product; and it will be understood by those skilled in the art that certain well-known structures in the drawings, and descriptions thereof, may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
As shown in fig. 1, the present invention provides a dynamic social network user alignment method based on a GCN, which comprises the following steps:
101. acquiring network structure information and tag information of anchor-node users from a plurality of social networks, and fusing them into a combined network according to the rules of the combined network;
102. intercepting network snapshots t_1 to t_n in time order, forming combined networks Z_1 to Z_n, and obtaining the adjacency matrices A_1, A_2, ..., A_n of the combined networks;
103. inputting A_1, followed by the differences between the matrices at times t_n and t_(n-1), into a GCN layer to obtain a basic representation and hidden state matrices of the network;
104. sequentially inputting the hidden state matrices H_1, ΔH_2, ..., ΔH_n obtained from the GCN layer into the GRU layer, and obtaining through the GRU layer a hidden state matrix that stores the time information;
105. outputting the training result through the fully connected layer, in which a loss function is defined to perform binary classification of the nodes.
In step 101, the method obtains the network structure information of a plurality of social networks and the tag information of anchor-node users, and fuses them into a combined network. Network alignment refers to finding social accounts belonging to the same person across multiple different social networks; users with known identity attributes are referred to as anchor users. In the invention, the users of the social network platforms serve as graph nodes, and a social network graph is constructed from the social relations within each network. The two social networks are fused into one network to train the model and thereby achieve network alignment. The rule for fusing two networks into a combined network is as follows:
Set the source network as network X, the target network as network Y, and the combined network as network Z; G_X, G_Y, G_Z are undirected graphs representing networks X, Y, Z respectively. G_X = (V_X, E_X), where V_X is the set of nodes in network X and E_X is the set of edges in network X; G_Y = (V_Y, E_Y), where V_Y is the set of nodes in network Y and E_Y is the set of edges in network Y; G_Z = (V_Z, E_Z), where V_Z is the set of nodes in network Z and E_Z is the set of edges in network Z.
A node v_i u_j in the combined network is composed of v_i ∈ V_X and u_j ∈ V_Y.
If edges (v_i, v_k) and (u_j, u_l) exist in G_X and G_Y respectively, then the edge (v_i u_j, v_k u_l) exists in the combined network Z.
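By way of illustration only (and not as part of the claimed method), the combined-network rule above can be sketched in pure Python; the function name and the edge-list representation are assumptions made for this sketch. The rule is, in graph-theoretic terms, the tensor (Kronecker) product of the two graphs.

```python
from itertools import product

def build_combined_network(edges_x, edges_y):
    """Fuse source network X and target network Y into combined network Z.

    A node of Z is a pair (v_i, u_j) with v_i in V_X and u_j in V_Y; an edge
    ((v_i, u_j), (v_k, u_l)) exists in Z iff (v_i, v_k) is an edge of X and
    (u_j, u_l) is an edge of Y, both networks taken as undirected.
    """
    def symmetrize(edges):
        # Undirected graphs: keep each edge in both directions.
        return {(a, b) for a, b in edges} | {(b, a) for a, b in edges}

    ex, ey = symmetrize(edges_x), symmetrize(edges_y)
    nodes_x = {v for edge in ex for v in edge}
    nodes_y = {u for edge in ey for u in edge}
    nodes_z = set(product(nodes_x, nodes_y))
    edges_z = set()
    for vi, vk in ex:
        for uj, ul in ey:
            # Canonical (sorted) form so each undirected edge is stored once.
            edges_z.add(tuple(sorted([(vi, uj), (vk, ul)])))
    return nodes_z, edges_z

# Example: X has the single edge (a, b); Y has the single edge (u1, u2).
nodes_z, edges_z = build_combined_network([("a", "b")], [("u1", "u2")])
```

With one edge in each input network, Z has the four nodes (a,u1), (a,u2), (b,u1), (b,u2) and the two undirected edges ((a,u1),(b,u2)) and ((a,u2),(b,u1)).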
In step 102, network snapshots are intercepted in the time order t_1 to t_n and the combined networks Z_1 to Z_n are constructed. According to the time sequence T_1, T_2, ..., T_n, network X and network Y are each divided into n network snapshots; a combined network Z is generated from the snapshots of network X and network Y, yielding the adjacency matrices A_1, A_2, ..., A_n of the combined network Z. To capture the dynamic information of the network and improve the training efficiency of the model, A_1 is first input into the GCN layer to obtain a basic representation of the whole network, and then the difference ΔA_n = A_n - A_(n-1) between the matrices at times t_n and t_(n-1) is used as input to the GCN layer. The matrix inputs of the GCN layer are therefore A_1, ΔA_2, ΔA_3, ..., ΔA_n.
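The construction of this input sequence can be sketched as follows (an illustrative pure-Python sketch; the function name and the list-of-lists matrix representation are assumptions):

```python
def gcn_input_sequence(snapshots):
    """Build the GCN input sequence A_1, dA_2, ..., dA_n from the
    per-snapshot adjacency matrices A_1..A_n of the combined network,
    where dA_k = A_k - A_{k-1}. Matrices are lists of lists of ints."""
    def diff(cur, prev):
        return [[c - p for c, p in zip(rc, rp)] for rc, rp in zip(cur, prev)]

    inputs = [snapshots[0]]                 # A_1: the full first snapshot
    for prev, cur in zip(snapshots, snapshots[1:]):
        inputs.append(diff(cur, prev))      # later steps: only what changed
    return inputs

# Three nodes; between t_1 and t_2 an edge between nodes 0 and 2 appears.
A1 = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
A2 = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
seq = gcn_input_sequence([A1, A2])
```

Passing the sparse difference matrices instead of each full snapshot is what the method relies on to avoid retraining on the whole network at every time step.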
In step 103, the GCN layer represents the network structure information. The operation formula of the GCN layer is:
H_n = σ(Â H_(n-1) W)
where H_n is the hidden state matrix generated by the n-th GCN layer; Â = D̃^(-1/2) Ã D̃^(-1/2) is the regularized adjacency matrix with self-loops, a deformation of the adjacency matrix; Ã = A + I_N is the adjacency matrix with self-loops; D̃_ii = Σ_j Ã_ij is the degree of node i; W is the weight matrix trained by the GCN layer; and σ is the ReLU activation function.
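A minimal, dependency-free sketch of this propagation rule (illustrative only; all function and variable names are assumptions of this sketch, not of the patent):

```python
import math

def matmul(a, b):
    """Multiply two matrices given as lists of lists."""
    bt = list(zip(*b))
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

def gcn_layer(A, H, W):
    """One GCN propagation step:
    H_out = ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
    where D is the diagonal degree matrix of A + I (self-loops added)."""
    n = len(A)
    a_tilde = [[A[i][j] + (1 if i == j else 0) for j in range(n)]
               for i in range(n)]                          # A + I
    inv_sqrt_deg = [1.0 / math.sqrt(sum(row)) for row in a_tilde]
    a_hat = [[inv_sqrt_deg[i] * a_tilde[i][j] * inv_sqrt_deg[j]
              for j in range(n)] for i in range(n)]        # normalized adjacency
    z = matmul(matmul(a_hat, H), W)
    return [[max(0.0, v) for v in row] for row in z]       # ReLU

# Two connected nodes, identity features, one output channel.
out = gcn_layer([[0, 1], [1, 0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0], [1.0]])
```

For the two-node example, every entry of the normalized adjacency is 1/2, so each node's output averages both nodes' features before the linear map and ReLU.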
In step 104, the time information of the network's dynamic changes is represented by the GRU layer, and the formula of the GRU layer is:
h_n = GRU(h_(n-1), H_n)
where h_n is the hidden state matrix output by the n-th GRU layer, and H_n is the hidden state matrix generated by the n-th GCN layer.
The forward propagation formulas of the GRU layer are as follows:
r_t = σ(W_r x_t + U_r h_(t-1))
z_t = σ(W_z x_t + U_z h_(t-1))
h̃_t = tanh(W_h x_t + U_h (r_t ⊙ h_(t-1)))
h_t = (1 - z_t) ⊙ h_(t-1) + z_t ⊙ h̃_t
where r_t is the reset gate of the GRU and z_t is the update gate of the GRU; W and U are weight matrices learned in training; x_t is the input at the current time t; h_(t-1) is the hidden layer state at time t-1; σ is the sigmoid function; h̃_t is the candidate hidden state computed with the reset gate; h_t is the hidden state updated by the update gate; and ⊙ is the Hadamard product, which multiplies corresponding elements of the matrices.
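The gate equations can be checked with a minimal one-dimensional GRU step, in which the Hadamard product reduces to ordinary multiplication. This is an illustrative sketch (standard GRU formulation), not the patent's implementation; all names are assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x_t, h_prev, W_r, U_r, W_z, U_z, W_h, U_h):
    """One GRU step for a scalar hidden state."""
    r_t = sigmoid(W_r * x_t + U_r * h_prev)               # reset gate
    z_t = sigmoid(W_z * x_t + U_z * h_prev)               # update gate
    h_cand = math.tanh(W_h * x_t + U_h * (r_t * h_prev))  # candidate state
    return (1.0 - z_t) * h_prev + z_t * h_cand            # gated update

# With all weights zero: r_t = z_t = 0.5 and the candidate is tanh(0) = 0,
# so the new hidden state is half of the previous one.
h = gru_step(0.0, 1.0, 0, 0, 0, 0, 0, 0)
```

The update gate z_t interpolates between keeping the old state and adopting the candidate, which is how the layer retains time information across snapshots.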
In step 105, the nodes are classified by the fully connected layer: a node classified as 1 is a potential anchor node pair, and a node classified as 0 is a non-anchor node. A loss function for this binary classification is defined at the fully connected layer, where f(h_v) is the function representing the entire deep model and y_v is the node classification label.
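For binary node classification with a model output f(h_v) in (0, 1) and labels y_v in {0, 1}, the standard choice of loss is the binary cross-entropy. The sketch below illustrates that choice; it is an assumption for illustration and not necessarily the exact loss of the embodiment.

```python
import math

def binary_cross_entropy(preds, labels):
    """Mean binary cross-entropy between predictions f(h_v) in (0, 1)
    and node labels y_v in {0, 1}."""
    eps = 1e-12  # guard against log(0)
    total = 0.0
    for p, y in zip(preds, labels):
        p = min(max(p, eps), 1.0 - eps)
        total += -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))
    return total / len(preds)

# A maximally uncertain prediction (0.5) on an anchor node costs log 2.
loss = binary_cross_entropy([0.5], [1])
```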
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.