CN112395512B - Method for constructing complex attribute network representation model based on path aggregation - Google Patents

Info

Publication number
CN112395512B
Authority
CN (China)
Prior art keywords
node, attribute, representation, vector, nodes
Legal status
Active
Application number
CN202011228523.3A
Other languages
Chinese (zh)
Other versions
CN112395512A
Inventors
印鉴 (Yin Jian), 肖想 (Xiao Xiang), 邱爽 (Qiu Shuang), 刘威 (Liu Wei), 余建兴 (Yu Jianxing)
Current and original assignee
Sun Yat-sen University
Application filed by Sun Yat-sen University
Priority to CN202011228523.3A
Publication of CN112395512A
Application granted; publication of CN112395512B

Classifications

    • G06F16/9536 Search customisation based on social or collaborative filtering (G06F: electric digital data processing; retrieval from the web)
    • G06N3/045 Combinations of networks (G06N: computing arrangements based on specific computational models; neural network architectures)
    • G06N3/047 Probabilistic or stochastic networks
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 Learning methods
    • G06Q50/01 Social networking (G06Q: ICT specially adapted for administrative, commercial, financial, managerial or supervisory purposes)

Abstract

The invention provides a method for constructing a complex attribute network representation model based on path aggregation. The method exploits more of the available network information, such as node attribute information and the global structure information of the network, and processes it at a finer granularity, so that the node representation vectors better express the role and position of each node in the network and make subsequent data mining tasks more efficient and accurate. When capturing the global structure information of the network, the method improves the original random walk algorithm; compared with the prior art, it retains the efficiency with which the random walk algorithm captures information while improving the accuracy of that capture. When extracting node attribute information, the design is finer-grained and more efficient than the prior art: separate methods are provided for extracting inter-type and intra-type attribute information, which preserves the distinctiveness of different attribute types while also obtaining the high-order interaction relations between them.

Description

Method for constructing complex attribute network representation model based on path aggregation
Technical Field
The invention relates to the technical field of representation learning, in particular to a method for constructing a complex attribute network representation model based on path aggregation.
Background
Nowadays the internet has entered countless households, and communication between people is no longer limited to telephone calls, text messages, or even face-to-face exchanges in physical space: activity on various social networking platforms has become part of everyday life. The rise of social networks brings massive amounts of data and many business opportunities, and how to mine the latent information in such network-structured big data is a current research hotspot.
Network representation is one of the important tools for solving data mining problems on networks. It is a method for extracting and representing node information that maps each node of the original network into a low-dimensional representation space, i.e. represents each node by a low-dimensional vector. As the first step of many data mining tasks, the network representation determines the upper bound of all subsequent operations, so its importance is self-evident. To mine the information in the network as fully as possible, a network representation module should preserve as much of the original network structure and of each node's attribute information as it can. However, real networks are highly nonlinear and sparse, and the latent network information is complicated and diverse, so designing a good network representation module is a complex and troublesome problem.
Current network representation modules usually use a depth model to represent network nodes as low-dimensional vectors while retaining as much of the structural information of the original network as possible. For example, many network representation modules exploit the physical-space similarity of nodes in the original network that are directly connected or share multiple common neighbors to constrain the similarity of their corresponding representation vectors in the low-dimensional representation space, thereby preserving the structural information of the original network. In a social network, a person's interest tags, personalized signature, published pictures, published videos and the like can be regarded as the attribute information of a node; such rich attribute information supplements the network structure information and is very important for learning network representations. It is worth mentioning that existing work is mostly oriented toward local information and network structure information. Because real network structures are often quite sparse, it is difficult to obtain a good node representation by considering only local information and network structure information. The rich diversity of node attribute information brings both new opportunities and new challenges for network representation learning, and how to extract and utilize as much information as possible from the rich and diverse node attributes while also considering the global information of the network has become an urgent problem in the field of network representation.
The patent specification with application number 201810922587.X discloses a network representation method based on a deep network structure and node attributes. The method constructs an adjacency matrix and an attribute relation matrix between nodes; obtains a structure probability transition matrix and an attribute probability transition matrix between nodes; obtains a multi-order probability relation matrix from the two transition matrices using a personalized random walk model; combines the multi-order probability relation matrices through an attenuation function to obtain a global information matrix; and inputs the global information matrix into an autoencoder, training the autoencoder to obtain low-dimensional feature representations of the network nodes. The method alleviates the data sparsity problem and encodes the global information of the nodes into a low-dimensional, dense vector space by constructing a deep neural network, so that the nodes in the network are accurately represented. However, that patent does not provide separate methods for extracting inter-type and intra-type attribute information, and therefore can neither maintain the distinctiveness of different attribute types nor obtain the high-order interaction relations between them.
Disclosure of Invention
The invention provides a method for constructing a complex attribute network representation model based on path aggregation, which makes tasks such as data mining in social networks more effective.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
a method for constructing a complex attribute network representation model based on path aggregation comprises the following steps:
S1: constructing a node attribute embedding module, which takes as input the attributes of the nodes in a social network graph and outputs a set of attribute representation vectors for each node;
S2: constructing a feature fusion module, whose input is the node attribute representation vectors output by the node attribute embedding module and whose output is the src vector of the node, the src vector being a high-order feature representation vector of the node;
S3: constructing a random walk module, which takes the structure of the social network graph as input and outputs random walk sequences of nodes;
S4: constructing a node aggregation module, which takes the random walk sequences output by the random walk module as input and outputs the dst vector of the node;
S5: constructing a structural modeling module, which constrains the dst vectors of the nodes and uses the similarity of the nodes in the physical space of the network to constrain the similarity of the representation vectors in the representation space, i.e. the dst vector and the src vector of a node constrain each other; and splicing the trained src vector and dst vector of each node, i.e. inputting them into a concat layer, the output vector of the concat layer being the final node representation vector.
Further, the input of the node attribute embedding module is the attributes of the nodes in the social network graph, and the output is a set of attribute representation vectors for each node; the node attribute embedding module preprocesses the attributes, one-hot encoding discrete attributes directly and discretizing continuous attributes before one-hot encoding them; the node attribute embedding module then maps the one-hot encoded attributes into a low-dimensional representation space to obtain the attribute representation vectors.
Furthermore, the feature fusion module divides the set of attribute representation vectors of a node by attribute type, so that each node obtains n groups of attribute representation vectors, n being the number of attribute types; the feature fusion module applies a mean-pooling operation to attribute representation vectors of the same type, i.e. the n groups each pass through a mean-pooling layer, yielding n attribute representation vectors; the feature fusion module splices the attribute representation vectors of the different types, i.e. the n vectors pass through a concat layer, yielding one attribute representation vector which is the initial feature representation vector of the node; the feature fusion module extracts high-order cross features from the initial feature representation vector to obtain the high-order feature representation vector of the node, which serves as the node's src vector; concretely, the initial feature representation vector of the node is input to an MLP layer, and the output of the MLP layer is the high-order feature representation vector of the node.
Further, the specific process of step S1 is:
the node attributes describe the characteristics of the node; attributes describing different aspects of the node are regarded as different types of attributes, and attributes describing the same aspect of the node are regarded as the same type of attributes; attributes of different types are mapped into a low-dimensional representation space through their own type-specific transformation matrices:

f_i^{A_j} = W^{A_j} x_i^{A_j}

where x_i^{A_j} ∈ {0,1}^{K_{A_j}} is the binary vector representation of the i-th attribute of type A_j, W^{A_j} ∈ ℝ^{d_{A_j} × K_{A_j}} is a weight matrix, K_{A_j} is the binary vector dimension of the attributes of type A_j, and d_{A_j} is the dimension of the low-dimensional representation space into which the attributes of type A_j are converted; the attributes of a node can then be represented by a set of attribute representation vectors F = {f_1, f_2, ..., f_m}.
Further, the specific process of step S2 is:
S21: divide the set of attribute representation vectors of a node by type to obtain N_A groups of attribute representation vectors, where N_A denotes the total number of attribute types, i.e. convert F = {f_1, f_2, ..., f_m} into {F^{A_1}, F^{A_2}, ..., F^{A_{N_A}}};
S22: aggregate the attribute features of the same type by mean pooling:

f^{A_j} = (1 / |A_j|) Σ_{i=1}^{|A_j|} f_i^{A_j}

where |A_j| denotes the number of attributes of type A_j; from {F^{A_1}, ..., F^{A_{N_A}}} we thus obtain {f^{A_1}, ..., f^{A_{N_A}}};
S23: after the feature representations of the different attribute types of the node are obtained, they are input to a concat layer and spliced together in proportion, and the weights of the different attribute types are normalized, i.e. Σ_{j=1}^{N_A} λ_j = 1, N_A being the total number of attribute types:

u_i = concat(λ_1 f^{A_1}, λ_2 f^{A_2}, ..., λ_{N_A} f^{A_{N_A}})

S24: pass the resulting initial feature representation vector of the node through a multi-layer perceptron (MLP) to cross the features of different types and obtain a representation vector with richer content:

h_i^{src} = MLP(u_i)

h_i^{src} represents the information of the node itself and is the first representation vector of the node.
Further, in step S23, λ_j represents the weight of attribute type A_j; the larger the weight, the greater the effect that, according to prior knowledge, the features of this attribute type can play.
Further, the specific process of step S3 is:
for each node pair <i, j>, the amount of information transferred between the pair can be estimated as:

M_{<i,j>} = w_{i,j} (d_i / d_j)^α

where w_{i,j} represents the weight of edge e_{i,j}, d_i represents the degree of node i, and α represents a smoothing coefficient; M_{<i,j>} is not symmetric, i.e. M_{<i,j>} ≠ M_{<j,i>} whenever d_i ≠ d_j, and M_{i→j} denotes the amount of information transmitted from node i to node j; taking the directionality of edges into account, the formula is extended to the more general case:

M_{i→j} = w_{i→j} (d_i^{in} / d_j^{out})^α

where w_{i→j} represents the weight of the directed edge e_{i→j}, d_i^{in} represents the in-degree of node i, and d_j^{out} represents the out-degree of node j; given the node set V = {v_1, v_2, ..., v_n}, the weight matrix W of the edges, the out-degree set of the nodes D^{out} = {d_1^{out}, ..., d_n^{out}}, the in-degree set of the nodes D^{in} = {d_1^{in}, ..., d_n^{in}}, and the walk length L, the random walk sequence set Walk = {w_1, w_2, ..., w_n} is obtained.
Further, the specific process of step S4 is:
S41: one-hot encode all nodes according to their node numbers to obtain the binary representation vector x_i ∈ {0,1}^{N_S} of each node;
S42: map the one-hot encoded nodes into a low-dimensional representation space to obtain the node structure representation vectors:

z_i = W^S x_i

S43: according to the degree-based random walk algorithm, obtain the random walk sequence set Walk = {w_1, w_2, ..., w_n}, where w_i = {s_1, s_2, ..., s_k, ..., s_n} denotes the sequence obtained by a random walk starting from node v_i, i.e. s_1 = v_i; regard w_i − s_1 as the sampling neighbors of node i, i.e. SN(i) = {s_2, ..., s_n}; the sampling neighbors retain their sequential order in the walk sequence, and the sampling neighbors of node i reflect the process by which information in the network is transmitted to node i as well as the status and role of node i in the network;
S44: aggregate the sampling neighbors with a long short-term memory (LSTM) network:

h_i^{dst} = LSTM(z_{s_n}, z_{s_{n-1}}, ..., z_{s_2})

h_i^{dst} represents the influence of the neighbor nodes on node i and is the second representation vector of the node.
Furthermore, when the long short-term memory network processes a time series, inputs closer to the current time step have a larger influence on the output at the current time step, which matches the fact that nodes closer to the current node have a larger influence on it; the order of the sequence input into the long short-term memory network is therefore exactly the reverse of the order generated by the random walk, which means that s_n is input into the long short-term memory network unit at the first time step while s_2 is input at the last time step.
Further, the specific process of step S5 is:
S51: the representation vector of the target node should embody the information of its sampling neighbors, that is, the structure representation vector of the target node should be as close as possible to the aggregation of the representation vectors of its sampling neighbors:

z_i ≈ h_i^{dst} = LSTM(z_{s_n}, ..., z_{s_2})

through one iteration of computation, the representation vector of each node contains the representation vector information of its sampling neighbors, so that the representation vector of each node contains the information of the node within the local network structure; by continuously updating the iteration, the representation vector of each node gradually comes to contain information from more distant nodes, until the representation vectors of the nodes contain the global structure information of the network; because time steps farther in the past have less influence, it is difficult to guarantee that earlier sequence information is well utilized, and to solve this problem, when the long short-term memory network is used as the aggregation function, an auxiliary task is introduced to supervise its training: the output of the long short-term memory network unit at time step t is used to predict the input at time step t+1, i.e. for the input sequence S_input = {s_n, s_{n-1}, ..., s_2} the network is expected to accurately predict the next-node sequence S_next = {s_{n-1}, s_{n-2}, ..., s_1}, i.e. to maximize the probability of predicting the next node:

P(S_next | S_input)

after a simple linear transformation, the output h_t of the t-th time step is converted into a vector of dimension N_S, where N_S represents the total number of network nodes; it is normalized with a softmax activation function, converted into the probability of belonging to each node, and evaluated with a cross-entropy loss function:

L_pred = − Σ_t log softmax(W_s h_t)_{y_{t+1}}

where h_t represents the output of the t-th time step, y_{t+1} represents the input node of the (t+1)-th time step, W_s ∈ ℝ^{N_S × K_h} represents a weight matrix, and K_h represents the hidden-layer dimension of the long short-term memory network;
S52: the similarity of the nodes in the physical space of the network is used to constrain the similarity of the representation vectors in the representation space; for a directed node pair <i→j>, both node i and node j contribute to the formation of the edge, which also indicates that some relation exists between them, but in the directed edge e_{i→j} the roles of nodes i and j are different: node i plays the leading role and is the transmitter of the information, while node j is the receiver of the information; the vector h_i^{src}, which represents the information of node i itself, and the vector h_j^{dst}, which represents the influence of the neighbor nodes on node j, are therefore expected to be similar in the representation space, making the directed edge e_{i→j} more interpretable, and the loss function of this part is defined as:

L_edge = − Σ_{e_{i→j}} [ log σ(h_i^{src} · h_j^{dst}) + Σ_{k∈Ω⁻} log σ(−h_i^{src} · h_k^{dst}) ]

where Ω⁻ represents the set of negative samples drawn for the node pair and σ represents the sigmoid function; the local structure information of the network is thus embodied in the representation vectors, the two representation vectors perceive and constrain each other, and the learning ability of the model is enhanced;
S53: after the two representation vectors h_i^{src} and h_i^{dst} are trained, they are used as the input of a concat layer and spliced to obtain the final node representation vector:

h_i = concat(h_i^{src}, h_i^{dst})

h_i is the final node representation of node v_i.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention utilizes more network information, such as attribute information and the global structure information of the network, and is more precise in processing, and the node expression vector can better express the role and the status of the node in the network, so that the subsequent data mining task is more efficient and accurate. When the global structure information of the network is captured, the method improves the original random walk algorithm, and compared with the prior art, the method not only keeps the high efficiency of the random walk algorithm for capturing the information, but also improves the accuracy of information capture. When the node attribute information is extracted, compared with the prior art, the invention considers and designs more finely and efficiently, and respectively provides methods for extracting the attribute information between types and in the types, thereby not only keeping the distinctiveness of different types of attributes, but also obtaining the high-order interaction relationship of the different types of attributes.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
fig. 2 is an exemplary graph of a degree-based random walk algorithm.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
As shown in fig. 1, a method for constructing a complex attribute network representation model based on path aggregation includes the following steps:
S1: the input of the node attribute embedding module is the attributes of the nodes in the social network graph, and the output is a set of attribute representation vectors for each node. This can be divided into two steps: first, the attributes are preprocessed: discrete attributes are one-hot encoded directly, while continuous attributes are discretized and then one-hot encoded; second, the one-hot encoded attributes are mapped into a low-dimensional representation space to obtain the attribute representation vectors. After these two steps, a set of attribute representation vectors is generated for each node;
S2: the input of the feature fusion module is the output of the node attribute embedding module (S1), i.e. the node's set of attribute representation vectors, and the output is the node's src vector, the high-order feature representation vector of the node. This can be divided into four steps: first, the set of attribute representation vectors of the node is divided by attribute type, so that each node obtains n groups of attribute representation vectors, n being the number of attribute types; second, the n groups are each passed through a mean-pooling layer, applying a mean-pooling operation to attribute representation vectors of the same type and yielding n attribute representation vectors; third, the attribute representation vectors of the different types are spliced, i.e. the n vectors pass through a concat layer, yielding one attribute representation vector that is the initial feature representation vector of the node; fourth, high-order cross features are extracted from the initial feature representation vector to obtain the high-order feature representation vector of the node, which serves as the node's src vector; concretely, the initial feature representation vector is input to an MLP layer, and the output of the MLP layer is the high-order feature representation vector of the node;
S3: the input of the degree-based random walk module is the structure of the social network graph, and the output is random walk sequences of ordered nodes;
S4: the input of the node aggregation module is the output of the degree-based random walk module (S3), i.e. the random walk sequences, and the output is the node's dst vector. This can be divided into three steps: first, every node is one-hot encoded; second, the one-hot encoded nodes are mapped into a low-dimensional representation space to obtain the node structure representation vectors; third, the structure representation vectors of the nodes in a random walk sequence are aggregated to obtain the aggregate representation vector of the sequence, which serves as the neighbor aggregation vector of the head node (i.e. the first node) of the sequence and also as the dst vector of that node;
S5: the inputs of the structural modeling module are the output of S2 and the output of S4, i.e. the src vectors and the dst vectors of the nodes, and its output is the final node representation vector. This can be divided into three steps: first, the dst vectors of the nodes are constrained; second, the similarity of the nodes in the physical space of the network is used to constrain the similarity of the representation vectors in the representation space, i.e. the dst vector and the src vector of a node constrain each other; third, the trained src vector and dst vector of each node are spliced, i.e. input into a concat layer, and the output vector of the concat layer is the final node representation vector.
The invention is further explained and analyzed below at the level of theory and formulas.
1) Constructing the node attribute embedding model
Due to the high complexity of the real world, the node attributes of real social networks are extremely rich. Different attributes generally serve different functions, and nodes play different roles in the network. To avoid over-specializing the model when the degree to which some attribute influences a node's role in the network is unclear, the influence of the different attributes on the network is not set manually; instead, the model selects the important node attributes automatically through iterative updates of its parameters. Conversely, introducing suitable prior knowledge can make model training faster and better:
S11: discrete attributes can be segmented, e.g. gender into {male, female} and major into {computer science, finance, medicine, ...}. Attributes of this class are typically converted into binary vectors using one-hot encoding; correspondingly, {male, female} → {[0,1], [1,0]} and {computer science, finance, medicine, ...} → {[1,0,0,0,...], [0,1,0,0,...], [0,0,1,0,...], ...};
continuous attributes are also very common in the real world, e.g. video, audio, and text, and their processing is more complex than that of discrete attributes. The continuous attributes used here are mainly text, so only the processing of text attributes is described: text attributes are processed with a bag-of-words model whose input is the text attributes of all nodes and whose output is a word-presence vector for each node; the vector consists of 0s and 1s, where 0 indicates that the corresponding word does not occur in the node's text attribute and 1 indicates that it occurs one or more times. After the bag-of-words model, a text attribute is therefore also converted into a binary vector;
S12: the node attributes characterize the node; attributes characterizing different aspects of the node are regarded as different types of attributes, and attributes characterizing the same aspect are regarded as the same type. Attributes of different types are mapped into the low-dimensional representation space through their own type-specific transformation matrices:

f_i^{A_j} = W^{A_j} x_i^{A_j}

where x_i^{A_j} ∈ {0,1}^{K_{A_j}} is the binary vector representation of the i-th attribute of type A_j, W^{A_j} ∈ ℝ^{d_{A_j} × K_{A_j}} is a weight matrix, K_{A_j} is the binary vector dimension of the attributes of type A_j, and d_{A_j} is the dimension of the low-dimensional representation space into which the attributes of type A_j are converted; finally, the attributes of a node can be represented by a set of attribute representation vectors F = {f_1, f_2, ..., f_m};
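For concreteness, a minimal Python sketch of steps S11-S12 follows; the vocabularies, dimensions, and all function names are illustrative assumptions rather than code prescribed by the patent:

import numpy as np

rng = np.random.default_rng(0)

def one_hot(value, vocabulary):
    # Binary vector for a discrete attribute value, e.g. gender or major.
    vec = np.zeros(len(vocabulary), dtype=np.float32)
    vec[vocabulary.index(value)] = 1.0
    return vec

def bag_of_words(text, vocabulary):
    # Binary word-presence vector for a text attribute: 1 if the word
    # occurs one or more times in the node's text, otherwise 0.
    tokens = set(text.split())
    return np.array([1.0 if w in tokens else 0.0 for w in vocabulary],
                    dtype=np.float32)

# One transformation matrix W^{A_j} per attribute type A_j, shape (d_Aj, K_Aj).
binary_dims = {"gender": 2, "text": 5000}  # K_Aj per type (assumed sizes)
embed_dims = {"gender": 8, "text": 32}     # d_Aj per type (assumed sizes)
W = {t: rng.normal(0.0, 0.1, size=(embed_dims[t], binary_dims[t]))
     for t in binary_dims}

def embed_attribute(attr_type, binary_vec):
    # f_i^{A_j} = W^{A_j} x_i^{A_j}
    return W[attr_type] @ binary_vec

f_gender = embed_attribute("gender", one_hot("female", ["male", "female"]))  # example usage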
2) Constructing the feature fusion model
S21: divide the set of attribute representation vectors of a node by type to obtain N_A groups of attribute representation vectors, where N_A denotes the total number of attribute types, i.e. convert F = {f_1, f_2, ..., f_m} into {F^{A_1}, F^{A_2}, ..., F^{A_{N_A}}};
S22: aggregate attribute features of the same type by mean pooling:

f^{A_j} = (1 / |A_j|) Σ_{i=1}^{|A_j|} f_i^{A_j}

where |A_j| denotes the number of attributes of type A_j; thus from {F^{A_1}, ..., F^{A_{N_A}}} we obtain {f^{A_1}, ..., f^{A_{N_A}}};
S23: after the feature representations of the different attribute types of the node are obtained, they are input to a concat layer and spliced together in proportion, where λ_j represents the weight of attribute type A_j: the larger the weight, the greater the effect that, according to prior knowledge, the features of this type can play. At the same time, the weights of the different attribute types are normalized, i.e. Σ_{j=1}^{N_A} λ_j = 1, N_A representing the total number of attribute types:

u_i = concat(λ_1 f^{A_1}, λ_2 f^{A_2}, ..., λ_{N_A} f^{A_{N_A}})

S24: pass the resulting initial feature representation vector of the node through an MLP (multi-layer perceptron) to cross features of different types, obtaining a representation vector with richer content:

h_i^{src} = MLP(u_i)

h_i^{src} represents the information of the node itself and is the first representation vector of the node.
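Continuing the sketch, steps S21-S24 of the feature fusion module can be written as follows; the two-layer ReLU perceptron standing in for the MLP layer and all dimensions are assumptions:

import numpy as np

def make_mlp(d_in, d_hidden, d_out, rng=np.random.default_rng(0)):
    # A small two-layer perceptron standing in for the MLP layer.
    W1 = rng.normal(0.0, 0.1, (d_hidden, d_in))
    W2 = rng.normal(0.0, 0.1, (d_out, d_hidden))
    return lambda u: W2 @ np.maximum(W1 @ u, 0.0)

def fuse_features(grouped, weights, mlp):
    # grouped: dict A_j -> list of vectors f_i^{A_j}  (S21: split by type)
    # weights: dict A_j -> lambda_j, with sum(lambda_j) = 1
    pooled = []
    for attr_type, vectors in grouped.items():
        f_j = np.mean(vectors, axis=0)           # S22: mean pooling within a type
        pooled.append(weights[attr_type] * f_j)  # S23: proportional weighting
    u = np.concatenate(pooled)                   # S23: concat layer
    return mlp(u)                                # S24: feature crossing -> h^src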
3) Constructing the random walk module
Generally, the first-order neighbors of a node have the greatest impact on it in the graph. However, restricting attention to first-order neighbors loses much of the information carried by second-order and even higher-order neighbors, so a random-walk-based method is adopted to sample neighbor nodes and obtain more distant information. To ensure that as much information as possible is obtained, a degree-based random walk is used, with the aim of characterizing the global structure information through the amount of information transferred between nodes.
In many real-world networks, an edge between two nodes represents some similarity between them: in a social network an edge represents a friendship between two people, and in a citation network an edge represents a citation relationship between two documents. Many random-walk-based methods describe the similarity between nodes by defining a first-order or second-order proximity, but such similarity does not reflect the amount of information transferred between nodes well: the amount transferred is determined not only by the similarity between nodes but also by the amount of information the nodes themselves carry. As shown in fig. 2, from the viewpoint of first-order proximity the value for node pair <i, j> is 1 and the value for node pair <i, k> is also 1; from the viewpoint of second-order proximity the value for <i, j> is 1 while the value for <i, k> is 0; combining the two proximities, node i is therefore more similar to node j than to node k. From an information-transfer perspective, however, node k will transfer more information to node i than node j will, because its surrounding neighbors are richer, and it should therefore be selected with greater probability during the random walk. Here the invention uses a degree-based random walk that mainly evaluates the amount of information transfer between nodes. For each node pair <i, j>, the amount of information transferred between the pair can be estimated as:

M_{<i,j>} = w_{i,j} (d_i / d_j)^α

where w_{i,j} represents the weight of edge e_{i,j}, d_i indicates the degree of node i, and α indicates a smoothing coefficient. It is worth mentioning that M_{<i,j>} is not symmetric, i.e. M_{<i,j>} ≠ M_{<j,i>} whenever d_i ≠ d_j; M_{i→j} denotes the amount of information transmitted from node i to node j. Taking the directionality of edges into account, the formula is extended to the more general case:

M_{i→j} = w_{i→j} (d_i^{in} / d_j^{out})^α

where w_{i→j} represents the weight of the directed edge e_{i→j}, d_i^{in} represents the in-degree of node i, and d_j^{out} represents the out-degree of node j. In fig. 2, assuming that all edges have weight 1, M_{k→i} > M_{j→i}, which means that the next hop from node i will select node k with greater probability, consistent with the intended target;
for convenience of calculation, the transfer amounts are normalized, and the transition probability formula is defined as follows:

P(v_j | v_i) = M_{j→i} / Σ_{k∈N(i)} M_{k→i}

where N(i) denotes the neighbors of node i.
The pseudo code is described as follows:

Input:
node set V = {v_1, v_2, ..., v_n}
weight matrix W of the edges
out-degree set of the nodes D^{out} = {d_1^{out}, ..., d_n^{out}}
in-degree set of the nodes D^{in} = {d_1^{in}, ..., d_n^{in}}
walk length L

Output:
random walk sequence set Walk = {w_1, w_2, ..., w_n}

Walk = {}
for i = 1 : n do
    w_i = {v_i}
    now = i
    for l = 1 : L do
        P = {}
        for j = 1 : n do
            compute M_{j→now} according to the formula above
            add M_{j→now} to the set P
        end for
        normalize the set P according to the transition probability formula
        select a probability from the set P with a roulette-wheel algorithm and find the corresponding node v_k
        add node v_k to the sequence w_i
        now = k
    end for
    add w_i to the set Walk
end for
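A runnable Python sketch of the degree-based walk follows; it assumes the reconstructed transfer amount M_{j→now} = w_{j,now} (d_j^{in} / d_now^{out})^α given above, restricts candidates to the in-neighbors of the current node for efficiency, and uses illustrative identifiers throughout:

import numpy as np

def degree_based_walks(W, walk_length, alpha=0.5, seed=0):
    # W: (n, n) array of non-negative edge weights, W[i, j] = weight of edge i -> j.
    # Returns one walk (a list of node indices) starting from every node.
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    d_out = W.sum(axis=1)  # out-degree of every node
    d_in = W.sum(axis=0)   # in-degree of every node
    walks = []
    for i in range(n):
        walk, now = [i], i
        for _ in range(walk_length):
            nbrs = np.flatnonzero(W[:, now])  # candidates j with an edge j -> now
            if nbrs.size == 0:
                break
            # M_{j -> now}: estimated information transferred from j to `now`
            M = W[nbrs, now] * (d_in[nbrs] / max(d_out[now], 1e-12)) ** alpha
            now = int(rng.choice(nbrs, p=M / M.sum()))  # roulette-wheel selection
            walk.append(now)
        walks.append(walk)
    return walks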
4) Constructing the node aggregation module
S41: one-hot encode all nodes according to their node numbers to obtain the binary representation vector x_i ∈ {0,1}^{N_S} of each node;
S42: map the one-hot encoded nodes into a low-dimensional representation space to obtain the node structure representation vectors:

z_i = W^S x_i

S43: according to the degree-based random walk algorithm, obtain the random walk sequence set Walk = {w_1, w_2, ..., w_n}, where w_i = {s_1, s_2, ..., s_k, ..., s_n} denotes the sequence obtained by a random walk starting from node v_i, i.e. s_1 = v_i; regard w_i − s_1 as the sampling neighbors of node i, i.e. SN(i) = {s_2, ..., s_n}. The sampling neighbors retain their sequential order in the walk sequence; the sampling neighbors of node i reflect the process by which information in the network is transmitted to node i, and also reflect the status and role of node i in the network.
The application provides the following three ways to aggregate the sampling neighbors:
mean-pooling aggregation:

h_i^{dst} = (1 / |SN(i)|) Σ_{s∈SN(i)} z_s

linear aggregation:

h_i^{dst} = Σ_{k=2}^{n} β_k z_{s_k}

long short-term memory network aggregation:

h_i^{dst} = LSTM(z_{s_n}, z_{s_{n-1}}, ..., z_{s_2})

A long short-term memory network handles sequences well; generally, inputs closer to the current time step have a larger influence on the output at the current time step, which matches the fact that in the graph nodes closer to the current node have a larger influence on it. The sequence input into the long short-term memory network is therefore exactly the reverse of the sequence generated by the random walk, which means that s_n is input into the long short-term memory network unit at the first time step while s_2 is input at the last time step;
h_i^{dst} represents the influence of the neighbor nodes on node i and is the second representation vector of the node.
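A hedged PyTorch sketch of the three aggregation functions follows; the batch handling, the learned position weights of the linear aggregator, and the use of the final hidden state as the LSTM output are assumptions layered on the text rather than the patent's own code:

import torch
import torch.nn as nn

class NeighborAggregator(nn.Module):
    # Aggregates the structure vectors z of a node's sampling neighbors;
    # z has shape (batch, seq_len, dim), ordered s_2 ... s_n along seq_len.
    def __init__(self, dim, hidden, max_len=80):
        super().__init__()
        self.lstm = nn.LSTM(input_size=dim, hidden_size=hidden, batch_first=True)
        self.beta = nn.Parameter(torch.zeros(max_len))  # weights for linear aggregation

    def mean_pool(self, z):
        return z.mean(dim=1)

    def linear(self, z):
        w = torch.softmax(self.beta[: z.size(1)], dim=0)
        return torch.einsum("k,bkd->bd", w, z)

    def lstm_agg(self, z):
        # Reverse the walk order so that s_n enters at the first time step
        # and s_2 at the last, giving nodes nearest the target node the
        # strongest influence on the output.
        out, (h_n, _) = self.lstm(torch.flip(z, dims=[1]))
        return h_n.squeeze(0)  # the dst vector h^dst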
5) Constructing the structural modeling model
S51: the representation vector of the target node should embody the information of its sampling neighbors, that is, the structure representation vector of the target node should be as close as possible to the aggregation of the representation vectors of its sampling neighbors:

z_i ≈ h_i^{dst} = Aggregate({z_s : s ∈ SN(i)})

Through one iteration of calculation, the representation vector of each node contains the representation vector information of its sampling neighbors, and therefore contains the information of the node within the local structure of the network. By constantly updating the iterations, the representation vector of each node gradually comes to contain information from more distant nodes. Eventually, the representation vectors of the nodes contain the global structure information of the network.
In general, when sequences are long the training of a long short-term memory network becomes difficult, because time steps farther in the past have less influence, which makes it hard to guarantee that earlier sequence information is well utilized. To solve this problem, when the long short-term memory network is used as the aggregation function, an auxiliary task is introduced to supervise its training: the output of the long short-term memory network unit at time step t is used to predict the input at time step t+1, i.e. for the input sequence S_input = {s_n, s_{n-1}, ..., s_2} the network is expected to accurately predict the next-node sequence S_next = {s_{n-1}, s_{n-2}, ..., s_1}, i.e. to maximize the probability of predicting the next node:

P(S_next | S_input)

After a simple linear transformation, the output h_t of the t-th time step is converted into a vector of dimension N_S, where N_S represents the total number of network nodes; it is normalized with a softmax activation function, converted into the probability of belonging to each node, and evaluated with a cross-entropy loss function:

L_pred = − Σ_t log softmax(W_s h_t)_{y_{t+1}}

where h_t represents the output of the t-th time step, y_{t+1} represents the input node of the (t+1)-th time step, W_s ∈ ℝ^{N_S × K_h} represents a weight matrix, and K_h represents the hidden-layer dimension of the long short-term memory network.
S52: the similarity of the nodes in the physical space of the network is also utilized to constrain the similarity of the representation vectors in the representation space. For a directed node pair <i→j>, both node i and node j contribute to the formation of the edge, which also indicates that some relation exists between them; but in the directed edge e_{i→j} the roles of nodes i and j are different: node i plays the leading role and is the transmitter of the information, while node j is the receiver of the information. The vector h_i^{src}, which represents the information of node i itself, and the vector h_j^{dst}, which represents the influence of the neighbor nodes on node j, are therefore expected to be similar in the representation space, making the directed edge e_{i→j} more interpretable. The loss function of this part is therefore defined as:

L_edge = − Σ_{e_{i→j}} [ log σ(h_i^{src} · h_j^{dst}) + Σ_{k∈Ω⁻} log σ(−h_i^{src} · h_k^{dst}) ]

where Ω⁻ represents the set of negative samples drawn for the node pair and σ represents the sigmoid function.
This not only embodies the local structure information of the network in the representation vectors, but also makes the two representation vectors perceive and constrain each other, enhancing the learning ability of the model.
S53: after the two representation vectors h_i^{src} and h_i^{dst} are trained, they are used as the input of a concat layer and spliced to obtain the final node representation vector:

h_i = concat(h_i^{src}, h_i^{dst})

h_i is the final node representation of node v_i.
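The two training objectives and the final concatenation can be sketched in the same PyTorch vocabulary; the dot-product-plus-sigmoid form of the edge loss and the joint weighting of the two losses are reconstructions from the text (the original formulas are carried by figures not reproduced in this record), and every name is illustrative:

import torch
import torch.nn as nn

def next_node_loss(lstm_outputs, next_nodes, proj):
    # lstm_outputs: (batch, T, K_h) hidden outputs h_t of the LSTM
    # next_nodes:   (batch, T) integer indices y_{t+1} of the true next nodes
    # proj:         nn.Linear(K_h, N_S), the linear transformation W_s
    logits = proj(lstm_outputs)  # (batch, T, N_S); softmax + log live inside cross_entropy
    return nn.functional.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                       next_nodes.reshape(-1))

def edge_loss(h_src_i, h_dst_j, h_dst_neg):
    # h_src_i:   (batch, d) src vectors of the edge transmitters i
    # h_dst_j:   (batch, d) dst vectors of the edge receivers j
    # h_dst_neg: (batch, K, d) dst vectors of K negative samples per edge
    pos = torch.log(torch.sigmoid((h_src_i * h_dst_j).sum(-1)) + 1e-10)
    neg = torch.sigmoid(-(h_src_i.unsqueeze(1) * h_dst_neg).sum(-1))
    neg = torch.log(neg + 1e-10).sum(dim=1)
    return -(pos + neg).mean()

def final_representation(h_src, h_dst):
    # Concat layer: h_i = concat(h_i^src, h_i^dst)
    return torch.cat([h_src, h_dst], dim=-1)

# Training sketch: minimize edge_loss(...) + next_node_loss(...) jointly
# (the relative weighting is a modeling choice), then concatenate each
# node's trained src and dst vectors once via final_representation.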
The same or similar reference numerals correspond to the same or similar parts;
the positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (9)

1. A method for constructing a complex attribute network representation model based on path aggregation, characterized by comprising the following steps:
S1: constructing a node attribute embedding module, which takes as input the attributes of the nodes in a social network graph and outputs a set of attribute representation vectors for each node;
S2: constructing a feature fusion module, whose input is the node attribute representation vectors output by the node attribute embedding module and whose output is the src vector of the node, the src vector being a high-order feature representation vector of the node;
S3: constructing a random walk module, which takes the structure of the social network graph as input and outputs random walk sequences of nodes;
S4: constructing a node aggregation module, which takes the random walk sequences output by the random walk module as input and outputs the dst vector of the node;
the specific process of step S4 being:
S41: one-hot encoding all nodes according to their node numbers to obtain the binary representation vector x_i ∈ {0,1}^{N_S} of each node;
S42: mapping the one-hot encoded nodes into a low-dimensional representation space to obtain the node structure representation vectors:

z_i = W^S x_i

S43: according to the degree-based random walk algorithm, obtaining the random walk sequence set Walk = {w_1, w_2, ..., w_n}, where w_i = {s_1, s_2, ..., s_k, ..., s_n} denotes the sequence obtained by a random walk starting from node v_i, i.e. s_1 = v_i; regarding w_i − s_1 as the sampling neighbors of node i, i.e. SN(i) = {s_2, ..., s_n}; the sampling neighbors retain their sequential order in the walk sequence, and the sampling neighbors of node i reflect the process by which information in the network is transmitted to node i as well as the status and role of node i in the network;
S44: aggregating the sampling neighbors with a long short-term memory network:

h_i^{dst} = LSTM(z_{s_n}, z_{s_{n-1}}, ..., z_{s_2})

h_i^{dst} representing the influence of the neighbor nodes on node i and being the second representation vector of the node;
S5: constructing a structural modeling module, which constrains the dst vectors of the nodes and uses the similarity of the nodes in the physical space of the network to constrain the similarity of the representation vectors in the representation space, i.e. the dst vector and the src vector of a node constrain each other; and splicing the trained src vector and dst vector of each node, i.e. inputting them into a concat layer, the output vector of the concat layer being the final node representation vector.
2. The method for constructing a complex attribute network representation model based on path aggregation according to claim 1, characterized in that the input of the node attribute embedding module is the attributes of the nodes in the social network graph and the output is a set of attribute representation vectors for each node; the node attribute embedding module preprocesses the attributes, one-hot encoding discrete attributes directly and discretizing continuous attributes before one-hot encoding them; and the node attribute embedding module maps the one-hot encoded attributes into a low-dimensional representation space to obtain the attribute representation vectors.
3. The method for constructing a complex attribute network representation model based on path aggregation according to claim 2, characterized in that the feature fusion module divides the set of attribute representation vectors of a node by attribute type, each node obtaining n groups of attribute representation vectors, n being the number of attribute types; the feature fusion module applies a mean-pooling operation to attribute representation vectors of the same type, i.e. the n groups each pass through a mean-pooling layer, yielding n attribute representation vectors; the feature fusion module splices the attribute representation vectors of different types, i.e. the n vectors pass through a concat layer, yielding one attribute representation vector which is the initial feature representation vector of the node; the feature fusion module extracts high-order cross features from the initial feature representation vector of the node to obtain the high-order feature representation vector of the node, which serves as the node's src vector; concretely, the initial feature representation vector of the node is input to an MLP layer, and the output of the MLP layer is the high-order feature representation vector of the node.
4. The method for constructing a complex attribute network representation model based on path aggregation according to claim 3, characterized in that the specific process of step S1 is:
the node attributes describe the characteristics of the node; attributes describing different aspects of the node are regarded as different types of attributes, and attributes describing the same aspect of the node are regarded as the same type of attributes; attributes of different types are mapped into a low-dimensional representation space through their own type-specific transformation matrices:

f_i^{A_j} = W^{A_j} x_i^{A_j}

where x_i^{A_j} ∈ {0,1}^{K_{A_j}} is the binary vector representation of the i-th attribute of type A_j, W^{A_j} ∈ ℝ^{d_{A_j} × K_{A_j}} is a weight matrix, K_{A_j} is the binary vector dimension of the attributes of type A_j, and d_{A_j} is the dimension of the low-dimensional representation space into which the attributes of type A_j are converted; the attributes of a node can then be represented by a set of attribute representation vectors F = {f_1, f_2, ..., f_m}.
5. The method for constructing a complex attribute network representation model based on path aggregation according to claim 4, characterized in that the specific process of step S2 is:
S21: dividing the set of attribute representation vectors of a node by type to obtain N_A groups of attribute representation vectors, where N_A denotes the total number of attribute types, i.e. converting F = {f_1, f_2, ..., f_m} into {F^{A_1}, F^{A_2}, ..., F^{A_{N_A}}};
S22: aggregating the attribute features of the same type by mean pooling:

f^{A_j} = (1 / |A_j|) Σ_{i=1}^{|A_j|} f_i^{A_j}

where |A_j| denotes the number of attributes of type A_j; from {F^{A_1}, ..., F^{A_{N_A}}} we obtain {f^{A_1}, ..., f^{A_{N_A}}};
S23: after the feature representations of the different attribute types of the node are obtained, inputting them to a concat layer so that they are spliced together in proportion, the weights of the different attribute types being normalized, i.e. Σ_{j=1}^{N_A} λ_j = 1, N_A representing the total number of attribute types:

u_i = concat(λ_1 f^{A_1}, λ_2 f^{A_2}, ..., λ_{N_A} f^{A_{N_A}})

S24: passing the resulting initial feature representation vector of the node through a multi-layer perceptron MLP to cross the features of different types and obtain a representation vector with richer content:

h_i^{src} = MLP(u_i)

h_i^{src} representing the information of the node itself and being the first representation vector of the node.
6. The method for constructing a complex attribute network representation model based on path aggregation according to claim 5, characterized in that in step S23, λ_j represents the weight of attribute type A_j; the larger the weight, the greater the effect that, according to prior knowledge, the features of this attribute type can play.
7. The method for constructing a complex attribute network representation model based on path aggregation according to claim 6, characterized in that the specific process of step S3 is:
for each node pair <i, j>, the amount of information transferred between the pair can be estimated as:

M_{<i,j>} = w_{i,j} (d_i / d_j)^α

where w_{i,j} represents the weight of edge e_{i,j}, d_i represents the degree of node i, and α represents a smoothing coefficient; M_{<i,j>} is not symmetric, i.e. M_{<i,j>} ≠ M_{<j,i>} whenever d_i ≠ d_j, and M_{i→j} denotes the amount of information transmitted from node i to node j; taking the directionality of edges into account, the formula is extended to the more general case:

M_{i→j} = w_{i→j} (d_i^{in} / d_j^{out})^α

where w_{i→j} represents the weight of the directed edge e_{i→j}, d_i^{in} represents the in-degree of node i, and d_j^{out} represents the out-degree of node j; given the node set V = {v_1, v_2, ..., v_n}, the weight matrix W of the edges, the out-degree set of the nodes D^{out} = {d_1^{out}, ..., d_n^{out}}, the in-degree set of the nodes D^{in} = {d_1^{in}, ..., d_n^{in}}, and the walk length L, the random walk sequence set Walk = {w_1, w_2, ..., w_n} is obtained.
8. The method for constructing a complex attribute network representation model based on path aggregation according to claim 7, characterized in that, when the long short-term memory network processes a time series, inputs closer to the current time step have a larger influence on the output at the current time step, which matches the fact that nodes closer to the current node have a larger influence on it, so that the order of the sequence input into the long short-term memory network is exactly the reverse of the order generated by the random walk, which means that s_n is input into the long short-term memory network unit at the first time step while s_2 is input at the last time step.
9. The method for constructing the complex attribute network representation model based on path aggregation according to claim 8, wherein the specific process of step S5 is:
s51: the representation vector of the target node should be able to embody the information of its sampling neighbors, that is to say the structural representation vector of the target node should be as close as possible to the aggregation of the representation vectors of its sampling neighbors:
Figure FDA0003746912410000036
through one-time iterative computation, the representation vector of each node contains the representation vector information of the sampling neighbor, so that the representation vector of each node contains the information of the node in the local network structure; by continuously updating the iteration, the representation vector of each node gradually contains more distant node informationInformation; the representation vectors of the nodes all contain the global structure information of the network; in order to solve the problem that the influence of time steps at longer distances is smaller, so that it is difficult to ensure that the previous sequence information is well utilized, when the long-short-term memory network is used as an aggregation function, a task is additionally introduced to supervise the training of the long-short-term memory network: using the output of the t-th time step of the long-short term memory network unit to predict the input of the t + 1-th time step, i.e. the input sequence S for the long-short term memory network input ={s n ,s n-1 ,…,…,s 2 Hope that it will accurately predict the next node sequence S next ={s n-1 ,s n-2 ,…,…,s 1 I.e. maximizing the probability of predicting the next node:
$$P(S_{next} \mid S_{input})$$
after a simple linear transformation, the output $h_t$ of the $t$-th time step is converted into an $N_S$-dimensional vector, where $N_S$ represents the total number of network nodes; this vector is normalized with a softmax activation function and converted into the probability of belonging to each node, which is evaluated with a cross-entropy loss function:
$$\mathcal{L}_{walk} = -\sum_{t} \log \frac{\exp\!\big((W_p h_t)_{y_{t+1}}\big)}{\sum_{k=1}^{N_S} \exp\!\big((W_p h_t)_k\big)}$$

where $h_t$ represents the output of the $t$-th time step, $y_{t+1}$ represents the input node of the $(t+1)$-th time step, $W_p \in \mathbb{R}^{N_S \times K_h}$ represents a weight matrix, and $K_h$ represents the hidden-layer dimension of the long short-term memory network;
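A hedged PyTorch sketch of this auxiliary next-node prediction; the projection name `to_node` and the exact loss wiring are illustrative assumptions, not the patented code:

```python
import torch
import torch.nn as nn

num_nodes, emb_dim, hidden = 100, 32, 64
embed = nn.Embedding(num_nodes, emb_dim)
lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
to_node = nn.Linear(hidden, num_nodes)        # plays the role of W_p: hidden -> N_S scores
loss_fn = nn.CrossEntropyLoss()               # softmax + cross-entropy in one call

walk = torch.flip(torch.tensor([[5, 17, 3, 42]]), dims=[1])  # reversed walk: s_n ... s_2/s_1
inputs, targets = walk[:, :-1], walk[:, 1:]   # step t predicts the input of step t+1
out, _ = lstm(embed(inputs))                  # (batch, T, hidden)
logits = to_node(out)                         # (batch, T, N_S)
loss = loss_fn(logits.reshape(-1, num_nodes), targets.reshape(-1))
loss.backward()
```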
S52: the similarity of nodes in the physical space of the network is used to constrain the similarity of their representation vectors in the representation space. For a directed node pair $\langle i \to j \rangle$, node $i$ and node $j$ both contribute to the formation of the edge, which indicates that some connection exists between node $i$ and node $j$; within the directed edge $e_{i \to j}$, however, the roles of nodes $i$ and $j$ differ: node $i$ plays the leading role and is the transmitter of the information, while node $j$ is the receiver of the information. Therefore, the representation vector expressing the information of node $i$, denoted $h_i^{out}$, and the representation vector reflecting the influence of the neighbor nodes of node $j$ on it, denoted $h_j^{in}$, are expected to be similar in the representation space, so that the directed edge $e_{i \to j}$ is more interpretable; the loss function of this part is therefore defined as:
$$\mathcal{L}_{edge} = -\sum_{e_{i \to j}} \Big( \log \sigma\big(h_i^{out} \cdot h_j^{in}\big) + \sum_{v_k \in \Omega^-} \log \sigma\big(-\,h_i^{out} \cdot h_k^{in}\big) \Big)$$
where $\Omega^-$ represents the set of negative samples drawn for the node pair, and $\sigma$ represents the sigmoid function; the local structure information of the network is embodied in the representation vectors, the two representation vectors perceive and constrain each other, and the learning capability of the model is enhanced;
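A minimal PyTorch sketch of this edge loss with negative sampling; modeling the two roles as separate embedding tables `h_out` and `h_in` is an illustrative assumption:

```python
import torch
import torch.nn.functional as F

num_nodes, dim, num_neg = 100, 64, 5
h_out = torch.nn.Embedding(num_nodes, dim)    # sender-side representations
h_in = torch.nn.Embedding(num_nodes, dim)     # receiver-side representations

src = torch.tensor([0, 1, 2])                 # directed edges 0->1, 1->2, 2->0
dst = torch.tensor([1, 2, 0])
neg = torch.randint(num_nodes, (src.size(0), num_neg))  # negative receivers, Omega^-

pos_score = (h_out(src) * h_in(dst)).sum(-1)              # h_i_out . h_j_in
neg_score = (h_out(src).unsqueeze(1) * h_in(neg)).sum(-1) # scores against negatives
loss = -(F.logsigmoid(pos_score).sum() + F.logsigmoid(-neg_score).sum())
loss.backward()
```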
S53: after the well-trained $h_i^{out}$ and $h_i^{in}$ are obtained, the two representation vectors are used as the input of a concat layer and spliced to obtain the final node representation vector:

$$h_i = \mathrm{concat}\big(h_i^{out},\, h_i^{in}\big)$$

where $h_i$ is the final node representation of node $v_i$.
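The splicing itself is a plain concatenation; a short PyTorch sketch, with vector names following the reconstruction above:

```python
import torch

h_out_i = torch.randn(64)                  # representation expressing node i's information
h_in_i = torch.randn(64)                   # representation of neighbors' influence on node i
h_i = torch.cat([h_out_i, h_in_i], dim=0)  # final 128-dimensional node representation
print(h_i.shape)                           # torch.Size([128])
```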
CN202011228523.3A 2020-11-06 2020-11-06 Method for constructing complex attribute network representation model based on path aggregation Active CN112395512B (en)

Publications (2)

Publication Number | Publication Date
CN112395512A (en) | 2021-02-23
CN112395512B (en) | 2022-09-16


Families Citing this family (2)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN113807457A * | 2021-09-26 | 2021-12-17 | 北京市商汤科技开发有限公司 | Method, device and equipment for determining road network characterization information and storage medium
WO2024000519A1 * | 2022-06-30 | 2024-01-04 | 华为技术有限公司 | Trajectory data characterization method and apparatus

Family Cites Families (3)

Publication number | Priority date | Publication date | Assignee | Title
US8874477B2 * | 2005-10-04 | 2014-10-28 | Steven Mark Hoffberg | Multifactorial optimization system and method
CN111523003A * | 2020-04-27 | 2020-08-11 | 北京图特摩斯科技有限公司 | Data application method and platform with time sequence dynamic map as core
CN111709474A * | 2020-06-16 | 2020-09-25 | 重庆大学 | Graph embedding link prediction method fusing topological structure and node attributes

Patent Citations (1)

Publication number | Priority date | Publication date | Assignee | Title
CN111325326A * | 2020-02-21 | 2020-06-23 | 北京工业大学 | Link prediction method based on heterogeneous network representation learning



Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant