CN112395512A - Method for constructing complex attribute network representation model based on path aggregation - Google Patents
Method for constructing a complex attribute network representation model based on path aggregation
- Publication number
- CN112395512A (application number CN202011228523.3A)
- Authority
- CN
- China
- Legal status: Granted
Classifications
- G06F16/9536—Search customisation based on social or collaborative filtering
- G06N3/045—Combinations of networks
- G06N3/047—Probabilistic or stochastic networks
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
- G06N3/08—Learning methods
- G06Q50/01—Social networking
Abstract
The invention provides a method for constructing a complex attribute network representation model based on path aggregation. The method exploits more of the available network information, such as node attribute information and the global structure of the network, and processes it at a finer granularity, so that each node representation vector better captures the role and position of the node in the network and subsequent data mining tasks become more efficient and accurate. To capture the global structure of the network, the method improves on the original random walk algorithm: compared with the prior art, it retains the efficiency with which random walks capture information while improving the accuracy of that capture. For node attribute information, the design is finer and more efficient than the prior art: separate procedures are provided for extracting inter-type and intra-type attribute information, which preserves the distinctions between different attribute types while also capturing their high-order interactions.
Description
Technical Field
The invention relates to the technical field of representation learning, and in particular to a method for constructing a complex attribute network representation model based on path aggregation.
Background
The internet now reaches into nearly every household, and communication between people is no longer limited to telephone calls, text messages, or face-to-face conversation in physical space: people are active on a variety of social networking platforms, which have become part of daily life. The rise of social networks has produced enormous amounts of data and equally large business opportunities, and how to mine latent information from such network-structured big data is a current research hotspot.
Network representation is one of the important tools for solving data mining problems on networks. It is a method of extracting and representing node information that maps each node of the original network into a low-dimensional representation space, i.e. represents the node with a low-dimensional vector. As the first step of many data mining tasks, the network representation determines the upper bound of all subsequent operations, so its importance is self-evident. To mine the information in the network well, the representation should preserve as much as possible of the structure of the original network and of the attribute information of each node. However, real networks are highly nonlinear and sparse, and the latent network information is complex and varied, so designing a good network representation module is a complex and difficult problem.
Current network representation modules usually use a deep model to represent network nodes as low-dimensional vectors while retaining as much of the structural information of the original network as possible. For example, many modules exploit the physical-space similarity of nodes that are directly connected or share many common neighbors in the original network to constrain the similarity of their representation vectors in the low-dimensional representation space, thereby preserving the structural information of the original network. The attribute information of nodes in real networks is rich and diverse: in a social network, for example, a person's interest tags, personalized signature, and published pictures and videos can all be regarded as node attributes. This rich attribute information complements the network structure information and is very important for learning network representations. It is worth noting that existing work is mostly oriented toward local information and network structure information. Because real network structures are often quite sparse, it is difficult to obtain good node representations by considering local and structural information alone. The richness and diversity of node attribute information brings both new opportunities and new challenges for network representation learning, and how to extract and use as much information as possible from rich, diverse node attributes while also considering global network information has become an urgent problem in the field of network representation.
The patent specification with application number 201810922587.X discloses a network representation method based on a deep network structure and node attributes. The method constructs an adjacency matrix and an attribute relation matrix between nodes; obtains a structure probability transition matrix and an attribute probability transition matrix between nodes; obtains a multi-order probability relation matrix from these transition matrices using a personalized random walk model; merges the multi-order probability relation matrices through an attenuation function to obtain a global information matrix; and inputs the global information matrix into an autoencoder, training it to obtain low-dimensional feature representations of the network nodes. That method addresses the problem of data sparsity and encodes the global information of the nodes into a low-dimensional, dense vector space by constructing a deep neural network, so that the nodes are represented accurately. However, it does not provide separate methods for extracting inter-type and intra-type attribute information, and therefore can neither preserve the distinctions between different attribute types nor capture their high-order interaction relations.
Disclosure of Invention
The invention provides a method for constructing a complex attribute network representation model based on path aggregation, which makes subsequent tasks such as data mining in social networks more effective.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
a method for constructing a complex attribute network representation model based on path aggregation comprises the following steps:
s1: constructing a node attribute embedding module, which takes the attributes of the nodes in the social network graph as input and outputs a set of attribute representation vectors for each node;
s2: constructing a feature fusion module, whose input is the node attribute representation vectors output by the node attribute embedding module and whose output is the src vector of the node, i.e. the node's high-order feature representation vector;
s3: constructing a random walk module, which takes the structure of the social network graph as input and outputs random walk sequences of nodes;
s4: constructing a node aggregation module, which takes the random walk sequences output by the random walk module as input and outputs the dst vector of the node;
s5: constructing a structure modeling module, which constrains the similarity of the representation vectors in the representation space by the similarity of the nodes in the physical space of the network, subject to the constraint on the dst vectors of the nodes, i.e. the dst vector and the src vector of a node constrain each other; the trained src and dst vectors of each node are then spliced, i.e. input to a concat layer, and the output vector of the concat layer is the final node representation vector.
Further, the input of the node attribute embedding module is the attributes of the nodes in the social network graph, and the output is a set of attribute representation vectors for each node. The module preprocesses the attributes: discrete attributes are one-hot coded directly, while continuous attributes are first discretized and then one-hot coded. The module then converts the one-hot coded attributes into a low-dimensional representation space to obtain the attribute representation vectors.
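The preprocessing step above can be sketched as follows. This is an illustrative fragment only, not part of the patented method: the helper names (`one_hot`, `discretize`) and the example bin edges are hypothetical.

```python
import numpy as np

def one_hot(index, size):
    """Binary one-hot vector with a 1 at `index`."""
    v = np.zeros(size, dtype=np.int64)
    v[index] = 1
    return v

def discretize(value, bin_edges):
    """Map a continuous value to a bin index before one-hot encoding."""
    return int(np.digitize(value, bin_edges))

# Discrete attribute: gender in {male, female}
genders = ["male", "female"]
g = one_hot(genders.index("female"), len(genders))

# Continuous attribute: an age value, discretized into bins then one-hot coded
age_bins = [18, 30, 45, 60]                      # 5 resulting bins
a = one_hot(discretize(27, age_bins), len(age_bins) + 1)
```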
Furthermore, the feature fusion module divides the set of attribute representation vectors of a node by attribute type, so that each node obtains n groups of attribute representation vectors, where n is the number of attribute types. The module then applies mean-pooling to the attribute representation vectors of the same type, i.e. each of the n groups passes through a mean-pooling layer, yielding n attribute representation vectors. Next, the module splices the attribute representation vectors of different types, i.e. the n vectors pass through a concat layer, yielding a single attribute representation vector, the initial feature representation vector of the node. Finally, the module extracts high-order cross features from the initial feature representation vector to obtain the high-order feature representation vector of the node, which is used as its src vector; concretely, the initial feature representation vector is input to an MLP layer, whose output is the high-order feature representation vector of the node.
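A minimal sketch of this fusion pipeline (mean-pooling within each type, concatenation across types, then a nonlinearity standing in for the MLP). The single-layer `fuse` function, the tanh activation, and the weight shapes are illustrative assumptions, not the patent's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_pool(vectors):
    """Aggregate same-type attribute vectors by element-wise mean."""
    return np.mean(vectors, axis=0)

def fuse(attr_vectors_by_type, mlp_weight):
    """Mean-pool within each type, concatenate across types, then apply a
    one-layer perceptron (a stand-in for the MLP) for high-order crossing."""
    pooled = [mean_pool(vs) for vs in attr_vectors_by_type]
    initial = np.concatenate(pooled)       # initial feature representation vector
    return np.tanh(mlp_weight @ initial)   # src vector of the node

# Two attribute types, each with several 4-dimensional attribute vectors
by_type = [rng.normal(size=(3, 4)), rng.normal(size=(2, 4))]
W = rng.normal(size=(8, 8))                # hypothetical MLP weights
src = fuse(by_type, W)
```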
Further, the specific process of step S1 is:
The node attributes describe the characteristics of a node; attributes describing different aspects of a node are regarded as different attribute types, and attributes describing the same aspect as the same type. Attributes of each type are converted into the low-dimensional representation space through a type-specific conversion matrix:

$u_i^{A_j} = W^{A_j} x_i^{A_j}$

where $x_i^{A_j}$ is the binary (one-hot) vector of the $i$-th attribute of type $A_j$, $W^{A_j} \in \mathbb{R}^{k_{A_j} \times d_{A_j}}$ is the weight matrix, $d_{A_j}$ is the dimension of the binary vectors of type-$A_j$ attributes, and $k_{A_j}$ is the dimension of the low-dimensional space into which type-$A_j$ attributes are converted. The attributes of a node can then be represented by a set of attribute representation vectors $U_i = \{u_i^{A_j}\}$.
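The type-specific conversion amounts to multiplying the one-hot attribute vector by the weight matrix, which simply selects one column of that matrix. The dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

k = 8                            # low-dimensional embedding size (hypothetical)
d_A = 5                          # binary-vector dimension of attribute type A_j
W_A = rng.normal(size=(k, d_A))  # unique conversion matrix for this type

x = np.zeros(d_A); x[2] = 1      # one-hot vector of one attribute of type A_j
u = W_A @ x                      # attribute representation vector

# Multiplying by a one-hot vector selects a single column of W_A
assert np.allclose(u, W_A[:, 2])
```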
Further, the specific process of step S2 is:
s21: dividing a group of attribute expression vectors expressing a node according to types to obtain NAGroup attribute represents a vector, NAIndicates the total number of attribute types, i.e. to beIs converted into
S22: for the attribute characteristics of the same type, aggregation is carried out by adopting a mean pooling mode,
S23: after the characteristic expressions of different types of attributes of the nodes are obtained, the characteristic expressions are input into a concat layer, so that the characteristic expressions are spliced together in proportion, and the weights of the different types of attributes are normalized, namelyNATotal number of attribute types represented:
s24: and performing feature crossing on different types of features by using the obtained initial feature expression vector of the node through a multi-layer perceptron MLP to obtain an expression vector with richer connotation:
Further, in step S23, where λjRepresenting attribute type AjThe larger the weight of (a) is, the larger the effect that the type attribute feature can play according to the priori knowledge is.
Further, the specific process of step S3 is:
For each node pair $\langle i, j \rangle$, the amount of information transferred between the pair can be estimated as:

$M_{\langle i,j \rangle} = \dfrac{w_{i,j}}{d_i + \alpha}$

where $w_{i,j}$ is the weight of edge $e_{i,j}$, $d_i$ is the degree of node $i$, and $\alpha$ is a smoothing factor. $M_{\langle i,j \rangle}$ is not symmetric, i.e. in general $M_{\langle i,j \rangle} \neq M_{\langle j,i \rangle}$: it denotes the amount of information transmitted from node $i$ to node $j$. Taking the directionality of edges into account, the formula extends to the more general case using the weight $w_{i \to j}$ of the directed edge $e_{i \to j}$, the in-degree $d_i^{in}$ of node $i$, and the out-degree $d_j^{out}$ of node $j$.
Given the node set $V = \{v_1, v_2, \dots, v_n\}$, the weight matrix $W$ of the edges, the out-degree and in-degree sets of the nodes, and the walk length $L$, the set of random walk sequences is $\mathrm{Walk} = \{w_1, w_2, \dots, w_n\}$.
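The degree-based walk can be sketched as follows, under the assumption (the exact formula is not reproduced in the source) that the transfer amount takes the form of the edge weight divided by the smoothed degree, and that the next node is sampled with probability proportional to it. All names and the toy graph are illustrative.

```python
import numpy as np

def transfer_amounts(i, neighbors, weights, degree, alpha=1.0):
    """Hypothetical transfer amount M<i,j>: edge weight w_ij damped by the
    smoothed degree, as suggested by the symbols defined in the text."""
    return np.array([weights[(i, j)] / (degree[i] + alpha) for j in neighbors])

def degree_based_walk(start, adj, weights, degree, length, rng):
    """One walk of the given length: at each step, sample the next node with
    probability proportional to the transfer amount toward it."""
    walk = [start]
    while len(walk) < length:
        nbrs = adj[walk[-1]]
        if not nbrs:
            break
        m = transfer_amounts(walk[-1], nbrs, weights, degree)
        walk.append(int(rng.choice(nbrs, p=m / m.sum())))
    return walk

# Toy triangle graph with unit edge weights
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
weights = {(i, j): 1.0 for i in adj for j in adj[i]}
degree = {0: 2, 1: 2, 2: 2}
walk = degree_based_walk(0, adj, weights, degree, length=5,
                         rng=np.random.default_rng(3))
```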
Further, the specific process of step S4 is:
s41: performing one-hot coding on all nodes according to the node numbers to obtain binary representation vectors of the nodes
S42: converting the nodes represented by the one-hot coding into a low-dimensional representation space to obtain a node structure representation vector:
s43: according to the random Walk algorithm based on the degrees, a random Walk sequence set Walk ═ w is obtained1,w2,...,wnIn which wi={s1,s2,...,sk,...,snDenotes a slave node viStarting the sequence obtained by random walk, i.e. s1=viW is to bei-s1Viewed as sampling neighbors of node i, i.e.The sampling neighbors retain the sequential information of the sampling neighbors in the wandering sequence, and the sampling neighbors of the node i reflect the process of transmitting information in the network to the node i and also reflect the status and the action of the node i in the network:
S44: aggregate the sampling neighbors with a long short-term memory (LSTM) network; the result represents the effect of the neighbor nodes on the node and serves as the node's second representation vector.
Further, when a long short-term memory network processes a time sequence, inputs closer to the current time step have a larger influence on the current output. Consistently, nodes closer to the current node in the walk should have a larger influence on it, so the sequence is fed into the long short-term memory network in exactly the reverse of the order generated by the random walk: $s_n$ is input to the LSTM unit at the first time step, and $s_2$ at the last time step.
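The reversed-order aggregation can be sketched with a toy recurrent cell standing in for the LSTM unit; the cell, the weight shapes, and the function name are illustrative, not the patent's architecture.

```python
import numpy as np

def aggregate_walk(walk_vectors, W_h, W_x):
    """Feed the walk in reverse (s_n first, s_2 last) through a toy recurrent
    cell, so the nodes nearest the head of the walk are seen last and weigh
    most heavily on the final state, which serves as the dst vector."""
    h = np.zeros(W_h.shape[0])
    for x in reversed(walk_vectors[1:]):   # drop s_1 (the head node itself)
        h = np.tanh(W_h @ h + W_x @ x)
    return h

rng = np.random.default_rng(4)
walk_vecs = [rng.normal(size=4) for _ in range(5)]   # s_1 .. s_5
W_h = rng.normal(size=(6, 6)) * 0.1
W_x = rng.normal(size=(6, 4)) * 0.1
dst = aggregate_walk(walk_vecs, W_h, W_x)
```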
Further, the specific process of step S5 is:
S51: the representation vector of a target node should embody the information of its sampling neighbors, that is, the structure representation vector of the target node should be as close as possible to the aggregation of the representation vectors of its sampling neighbors:
After one iteration of this computation, the representation vector of each node contains the representation-vector information of its sampling neighbors, and therefore information about the node's local structure in the network. As the iterations continue, the representation vector of each node gradually incorporates information from more distant nodes, until the representation vectors of all nodes contain the global structure information of the network. Because the influence of distant time steps is small, it is difficult to guarantee that earlier sequence information is well used; therefore, when the long short-term memory network is used as the aggregation function, an auxiliary task is introduced to supervise its training: the output of the $t$-th time step of the LSTM unit is used to predict the input of the $(t+1)$-th time step. That is, for the input sequence $S_{input} = \{s_n, s_{n-1}, \dots, s_2\}$, the network is expected to accurately predict the next-node sequence $S_{next} = \{s_{n-1}, s_{n-2}, \dots, s_1\}$, i.e. to maximize the probability of predicting the next node:
$P(S_{next} \mid S_{input})$
After a simple linear transformation, the output $h_t$ of the $t$-th time step is converted into an $N_S$-dimensional vector, where $N_S$ is the total number of network nodes; it is normalized with a softmax activation function into a probability of belonging to each node and evaluated with a cross-entropy loss function:
where $h_t$ denotes the output of the $t$-th time step, $y_{t+1}$ the input node of the $(t+1)$-th time step, $W$ a weight matrix, and $K_h$ the hidden-layer dimension of the long short-term memory network;
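The prediction loss of this auxiliary task can be sketched as follows; the projection matrix `W_s`, the sizes, and the function names are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())        # subtract max for numerical stability
    return e / e.sum()

def next_node_loss(h_t, y_next, W_s):
    """Project the step-t output h_t to N_S node scores, normalize with
    softmax, and take the cross-entropy against the true input node of
    step t+1."""
    p = softmax(W_s @ h_t)
    return -np.log(p[y_next])

rng = np.random.default_rng(5)
N_S, K_h = 7, 6                    # node count and LSTM hidden size
W_s = rng.normal(size=(N_S, K_h))
h_t = rng.normal(size=K_h)
loss = next_node_loss(h_t, y_next=3, W_s=W_s)
```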
S52: the similarity of nodes in the physical space of the network is used to constrain the similarity of their representation vectors in the representation space. For a directed node pair $\langle i \to j \rangle$, both nodes contribute to the formation of the edge, which indicates some relation between them; but within a directed edge $e_{i \to j}$ the roles of nodes $i$ and $j$ differ: node $i$ plays the leading role as the transmitter of information, while node $j$ is the receiver. We therefore expect the vector representing the information of node $i$ (its src vector) to be similar, in the representation space, to the vector representing the influence of node $j$'s neighbors on it (its dst vector), so that the directed edge $e_{i \to j}$ becomes more interpretable. The loss function of this part is therefore defined as:
where $\Omega$ denotes the set of negative samples drawn for the node pair and $\sigma$ the sigmoid function. The local structure information of the network is embodied in the representation vectors; the two representation vectors perceive and constrain each other, which enhances the learning capacity of the model;
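A sketch of a loss of this shape, assuming the common skip-gram-with-negative-sampling form: the sigmoid of the src/dst inner product for the observed edge, and negated inner products for the sampled nodes in Ω. This specific form is an assumption, since the source formula itself is not reproduced.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def directed_edge_loss(src_i, dst_j, negative_dsts):
    """Pull the src vector of sender i toward the dst vector of receiver j,
    and push it away from the dst vectors of the sampled negative nodes."""
    pos = -np.log(sigmoid(src_i @ dst_j))
    neg = -sum(np.log(sigmoid(-(src_i @ d))) for d in negative_dsts)
    return pos + neg

rng = np.random.default_rng(6)
src_i = rng.normal(size=8)
dst_j = rng.normal(size=8)
negatives = [rng.normal(size=8) for _ in range(5)]
loss = directed_edge_loss(src_i, dst_j, negatives)
```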
S53: after training, the src and dst representation vectors of each node are used as the input of a concat layer and spliced to obtain the final node representation vector:
$h_i$ is the final representation of node $v_i$.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
The invention exploits more of the available network information, such as node attribute information and the global structure of the network, and processes it at a finer granularity, so that each node representation vector better captures the role and status of the node in the network and subsequent data mining tasks become more efficient and accurate. When capturing the global structure of the network, the method improves on the original random walk algorithm: compared with the prior art, it retains the efficiency with which random walks capture information while improving the accuracy of that capture. When extracting node attribute information, the design is finer and more efficient than the prior art: separate methods are provided for extracting inter-type and intra-type attribute information, which preserves the distinctions between different attribute types while also capturing their high-order interactions.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
fig. 2 is an exemplary graph of a degree-based random walk algorithm.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
As shown in fig. 1, a method for constructing a complex attribute network representation model based on path aggregation includes the following steps:
s1: the input of the node attribute embedding module is the attributes of the nodes in the social network graph, and the output is a set of attribute representation vectors for each node. This can be divided into two steps. First, the attributes are preprocessed: discrete attributes are one-hot coded directly, while continuous attributes are discretized and then one-hot coded. Second, the one-hot coded attributes are converted into a low-dimensional representation space to obtain attribute representation vectors. After these two steps, a set of attribute representation vectors is generated for each node;
s2: the input of the feature fusion module is the output of the node attribute embedding module (S1), i.e. the node's set of attribute representation vectors, and the output is the src vector of the node, its high-order feature representation vector. This can be divided into four steps. First, the set of attribute representation vectors is divided by attribute type, so that each node obtains n groups of attribute representation vectors, where n is the number of attribute types. Second, the attribute representation vectors of the same type are mean-pooled: each of the n groups passes through a mean-pooling layer, yielding n attribute representation vectors. Third, the attribute representation vectors of different types are spliced: the n vectors pass through a concat layer, yielding one attribute representation vector, the initial feature representation vector of the node. Fourth, high-order cross features are extracted from the initial feature representation vector to obtain the high-order feature representation vector of the node, which serves as its src vector; concretely, the initial feature representation vector is input to an MLP layer, whose output is the high-order feature representation vector;
s3: the input of the degree-based random walk module is the structure of the social network graph, and the output is random walk sequences of nodes in walk order;
s4: the input of the node aggregation module is the output of the degree-based random walk module (S3), i.e. the random walk sequences, and the output is the dst vector of the node. This can be divided into three steps. First, each node is one-hot coded. Second, the one-hot coded nodes are converted into a low-dimensional representation space to obtain node structure representation vectors. Third, the structure representation vectors of the nodes in a random walk sequence are aggregated to obtain the aggregate representation vector of the sequence, which is taken as the neighbor aggregation vector of the head node (i.e. the first node) of the sequence and also as that node's dst vector;
s5: the inputs of the structure modeling module are the outputs of S2 and S4, i.e. the src and dst vectors of the nodes, and the output is the final node representation vector. This can be divided into three steps. First, the dst vectors of the nodes are constrained. Second, the similarity of the nodes in the physical space of the network is used to constrain the similarity of their representation vectors in the representation space, i.e. the dst vector and the src vector of a node constrain each other. Third, the trained src and dst vectors of each node are spliced: they are input to a concat layer, whose output vector is the final node representation vector.
The invention will be further explained and analyzed at a theoretical and formula level.
1) Building the node attribute embedding model
Due to the high complexity of the real world, the node attributes of real social networks are extremely rich. Different attributes generally serve different functions, and nodes play different roles in the network. To avoid biasing the model when the degree to which a given attribute influences a node's role in the network is unknown, the degrees of influence of the different attributes are not set manually; instead, the model selects the important node attributes automatically through iterative updates of its parameters. Conversely, when suitable prior knowledge is available, introducing it can make model training faster and better:
S11: Discrete attributes can be enumerated, e.g., gender as {male, female} and major as {computer, finance, medicine, ...}. Attributes of this class are converted into binary vectors using one-hot encoding; for example, {male, female} becomes {[1, 0], [0, 1]} and {computer, finance, medicine, ...} becomes {[1, 0, 0, ...], [0, 1, 0, ...], [0, 0, 1, ...], ...};
Continuous attributes are also very common in the real world, e.g., video, audio, and text, and they are more complex to process than discrete attributes. Since the continuous attributes used here are mainly text, only the processing of text attributes is described. Text attributes are processed with a bag-of-words model: the text attributes of all nodes are fed into the model, which outputs a word-frequency vector for each node. The vector consists of 0s and 1s, where 0 means the corresponding word does not appear in the node's text attribute and 1 means it appears one or more times. After bag-of-words processing, a text attribute is thus also converted into a binary vector;
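The preprocessing in S11 can be sketched in a few lines; the function names and vocabularies below are illustrative, not part of the patent:

```python
# Minimal sketch of the S11 attribute preprocessing (names are illustrative).

def one_hot(value, vocabulary):
    """Encode a discrete attribute value as a binary one-hot vector."""
    vec = [0] * len(vocabulary)
    vec[vocabulary.index(value)] = 1
    return vec

def bag_of_words(text, vocabulary):
    """Encode a text attribute as a binary word-presence vector:
    1 if the word occurs at least once in the text, 0 otherwise."""
    words = set(text.lower().split())
    return [1 if w in words else 0 for w in vocabulary]
```

For example, `one_hot("female", ["male", "female"])` yields `[0, 1]`, matching the {male, female} encoding above.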
S12: Node attributes characterize the node; attributes characterizing different aspects of the node are regarded as different attribute types, and attributes characterizing the same aspect are regarded as the same type. Attributes of each type are transformed into the low-dimensional representation space by a type-specific transformation matrix:
where a_i^{A_j} is the binary vector representation of the i-th attribute of type A_j, W^{A_j} is the weight matrix for type A_j, d_{A_j} is the binary-vector dimension of type-A_j attributes, and k_{A_j} is the dimension of the low-dimensional representation space into which type-A_j attributes are converted. Finally, the attributes of a node can be represented by a set of attribute representation vectors {u_i^{A_j}};
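The type-specific linear mapping of S12 can be sketched in numpy as follows. The function name, shapes, and random initialization are assumptions for illustration; in the real model each type's weight matrix is a trained parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_attributes(binary_vecs_by_type, out_dim):
    """Map each attribute type's binary vectors into a shared low-dimensional
    space with one weight matrix per type (shared by all attributes of that
    type). Here the matrices are randomly initialized stand-ins for the
    learned parameters."""
    weights = {t: rng.normal(size=(vecs.shape[1], out_dim)) * 0.1
               for t, vecs in binary_vecs_by_type.items()}
    return {t: vecs @ weights[t] for t, vecs in binary_vecs_by_type.items()}
```

Each type may have a different binary-vector dimension, but all are projected to the same `out_dim`, so they can later be pooled and concatenated.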
2) Building the feature fusion model
S21: dividing a group of attribute expression vectors expressing a node according to types to obtain NAGroup attribute represents a vector, NAIndicates the total number of attribute types, i.e. to beIs converted into
S22: For attribute features of the same type, aggregate by mean pooling: u^{A_j} = (1/|A_j|) Σ_i u_i^{A_j}, where |A_j| denotes the number of attributes of type A_j;
S23: After obtaining the feature representations of a node's different attribute types, they are fed into a concat layer and spliced together in proportion, where λ_j denotes the weight of attribute type A_j: the larger λ_j, the larger the role the features of that type are expected to play according to prior knowledge. The invention normalizes the weights of the different attribute types, i.e., Σ_{j=1}^{N_A} λ_j = 1, where N_A denotes the total number of attribute types;
S24: Pass the resulting initial feature representation vector of the node through a multi-layer perceptron (MLP) to cross features of different types, obtaining a representation vector with richer content:
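Steps S21-S24 can be sketched end to end in numpy. The weighted-concat form and the one-hidden-layer MLP are assumptions (the patent does not fix the MLP depth), and all names are illustrative:

```python
import numpy as np

def fuse_features(type_vectors, type_weights, mlp_weights):
    """S21-S24 sketch: mean-pool each attribute type, concatenate the pooled
    vectors scaled by normalized prior weights lambda_j, then push the result
    through a one-hidden-layer MLP to cross features of different types."""
    lam = np.asarray(type_weights, dtype=float)
    lam = lam / lam.sum()                           # normalize: sum(lambda_j) = 1
    pooled = [v.mean(axis=0) for v in type_vectors]  # mean pooling per type (S22)
    initial = np.concatenate([l * p for l, p in zip(lam, pooled)])  # concat (S23)
    W1, W2 = mlp_weights
    hidden = np.tanh(initial @ W1)                  # feature crossing via MLP (S24)
    return hidden @ W2                              # src vector of the node
```

The returned vector is the node's src vector, i.e., its high-order feature representation.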
3) Building the random walk module
Generally, a node's first-order neighbors have the greatest impact on it in the graph. However, if attention is limited to first-order neighbors, much of the information carried by second-order and even higher-order neighbors is lost, so a random-walk-based method is used to sample neighbor nodes and capture information from farther away. To gather as much information as possible, a degree-based random walk is used, with the aim of characterizing the global structure through the amount of information transferred between nodes.
In many real-world networks, an edge between two nodes indicates some similarity between them: in a social network an edge represents a friendship between two people, and in a citation network an edge represents a citation relationship between two documents. Many random-walk methods describe inter-node similarity via first-order or second-order proximity, but such similarity does not reflect the amount of information transferred between nodes well: that amount is determined not only by the similarity between nodes but also by the amount of information the nodes themselves carry. As shown in Fig. 2, the first-order proximity of node pair <i, j> is 1 and that of <i, k> is also 1, while the second-order proximity of <i, j> is 1 and that of <i, k> is 0; combining the two, node i appears more similar to node j than to node k. From an information-transfer perspective, however, node k transfers more information to node i than node j does, because its surrounding neighbors are richer, so it should be selected with greater probability during a random walk. The invention therefore uses a degree-based random walk that evaluates the amount of information transferred between nodes. For each node pair <i, j>, this amount can be estimated as:
where w_{i,j} is the weight of edge e_{i,j}, d_i is the degree of node i, and α is a smoothing coefficient. It is worth noting that M_{<i,j>} is not symmetric, i.e., in general M_{<i,j>} ≠ M_{<j,i>}; M_{<i,j>} denotes the amount of information transmitted from node i to node j. Taking the directionality of edges into account, the formula extends to the more general case:
where w_{i→j} is the weight of directed edge e_{i→j}, d_i^{in} is the in-degree of node i, and d_j^{out} is the out-degree of node j. In Fig. 2, if all edge weights are assumed to be 1, then M_{<i,k>} > M_{<i,j>}, which means that node i's next hop selects node k with greater probability, consistent with the intended behavior;
For convenience of calculation, the transfer amounts are normalized into transition probabilities: the transition probability from node i to node j is M_{<i,j>} divided by the sum of M_{<i,k>} over all candidate next-hop nodes k:
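The patent's transfer-amount formula appears only as an image in the source. A numpy sketch under the assumption that M_{<i,j>} is proportional to w_{i,j} · d_j^α (which matches the Fig. 2 discussion, where high-degree neighbors are preferred) would be:

```python
import numpy as np

def transition_probs(weights, degrees, neighbors, i, alpha=0.5):
    """Normalized degree-based transition probabilities from node i.
    ASSUMPTION: the transfer amount M_(i,j) is taken as w_(i,j) * d_j**alpha,
    a stand-in for the patent's (missing) formula that reproduces the stated
    behavior: richer neighbors receive higher probability."""
    m = np.array([weights[(i, j)] * degrees[j] ** alpha for j in neighbors[i]])
    return m / m.sum()  # normalize so the probabilities sum to 1
```

For example, with two equally weighted neighbors of degrees 1 and 4, the degree-4 neighbor gets the larger probability.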
The pseudo code is described as:
Input:
Node set V = {v1, v2, ..., vn}
Edge weight matrix W
Walk length L
Output:
Random walk sequence set Walk = {w1, w2, ..., wn}:
1.Walk={}
2.for i=1:n do
3.wi={vi}
4.now=i
5.for l=1:L do
6.P={}
7.for j=1:n do
8.    compute the transfer amount M<now,j> (0 if there is no edge e_{now,j})
9.    add M<now,j> to the set P
10.end for
11. Normalizing the set P according to a formula
12. Selecting a probability from the set P by using a roulette algorithm, and finding out a node v corresponding to the probabilityk
13. Add node vk to sequence wi
14.now=k
15.end for
16. Add wi to the set Walk
17.end for
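The pseudocode above can be turned into a runnable sketch. As before, the transfer amount is assumed proportional to w · d^α (the patent's formula is an image), and roulette-wheel selection is implemented with `random.choices`, which samples proportionally to the given weights:

```python
import random

def degree_based_walks(neighbors, weights, degrees, L, alpha=0.5, seed=0):
    """Runnable version of the degree-based random walk pseudocode.
    ASSUMPTION: M_(now,j) = w_(now,j) * degrees[j]**alpha stands in for the
    patent's missing transfer-amount formula."""
    rng = random.Random(seed)
    walks = []
    for v in neighbors:                      # one walk per start node
        walk, now = [v], v
        for _ in range(L):
            cand = neighbors[now]
            if not cand:                     # dead end: stop this walk early
                break
            scores = [weights[(now, j)] * degrees[j] ** alpha for j in cand]
            now = rng.choices(cand, weights=scores, k=1)[0]  # roulette wheel
            walk.append(now)
        walks.append(walk)
    return walks
```

`random.choices` already normalizes the scores internally, so the explicit normalization step of the pseudocode is folded into the sampling call.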
4) Building the node aggregation module
S41: performing one-hot coding on all nodes according to the node numbers to obtain binary representation vectors of the nodes
S42: converting the nodes represented by the one-hot coding into a low-dimensional representation space to obtain a node structure representation vector:
S43: The degree-based random walk algorithm yields a walk sequence set Walk = {w1, w2, ..., wn}, where wi = {s1, s2, ..., sk, ..., sn} denotes the sequence obtained by a random walk starting from node vi, i.e., s1 = vi. The set wi − {s1} is regarded as the sampled neighbors of node i, and the sampled neighbors retain their order from the walk sequence. The sampled neighbors of node i reflect the process by which information in the network is transmitted to node i, as well as the position and role of node i in the network.
The application provides the following three ways to aggregate the sampled neighbors:
Mean pooling aggregation:
Linear aggregation:
Long short-term memory (LSTM) network aggregation:
An LSTM network handles sequences well. Generally, inputs closer to the current time step have a larger influence on the current output, which matches the fact that nodes closer to the current node in the graph influence it more. The sequence fed into the LSTM is therefore the reverse of the sequence generated by the random walk: sn is input to the LSTM unit at the first time step, and s2 is input at the last time step. The resulting vector represents the effect of a node's neighbors on it and is the node's second representation vector (its dst vector).
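The reversed-sequence LSTM aggregation can be sketched with a from-scratch single LSTM cell in numpy. The single-cell simplification, shapes, and parameter names are assumptions, not the patent's exact network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_aggregate(neighbor_vecs, Wx, Wh, b):
    """Aggregate a node's sampled-neighbor vectors with one LSTM cell.
    The sequence is fed in REVERSE walk order, so the neighbor closest to
    the head node (s_2) enters at the last time step and has the most
    influence on the final hidden state, as the text requires.
    Wx: (4k, d), Wh: (4k, k), b: (4k,) with gates stacked [i, f, g, o]."""
    k = Wh.shape[1]
    h = np.zeros(k)
    c = np.zeros(k)
    for x in reversed(neighbor_vecs):        # s_n first, ..., s_2 last
        z = Wx @ x + Wh @ h + b
        i, f, g, o = np.split(z, 4)          # input, forget, cell, output gates
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h                                 # dst vector of the head node
```

In practice a framework LSTM would be used; the point here is only the reversed feeding order and that the final hidden state serves as the dst vector.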
5) Building the structural modeling model
S51: The representation vector of the target node should embody the information of its sampled neighbors; that is, the structural representation vector of the target node should be as close as possible to the aggregation of its sampled neighbors' representation vectors.
After one iteration of computation, each node's representation vector contains the representation information of its sampled neighbors and therefore encodes the node's position in the local structure of the network. Through repeated iterative updates, each node's representation vector gradually incorporates information from more distant nodes, until the representation vectors of the nodes contain the global structure information of the network.
Generally, when sequences are long, training an LSTM becomes difficult: time steps farther in the past have less influence, making it hard to ensure that earlier sequence information is well utilized. To address this, when the LSTM is used as the aggregation function, an auxiliary task is introduced to supervise its training: the output of the LSTM unit at the t-th time step is used to predict the input at the (t+1)-th time step. That is, for the input sequence S_input = {s_n, s_{n-1}, ..., s_2}, the network should accurately predict the next-node sequence S_next = {s_{n-1}, s_{n-2}, ..., s_1}, i.e., maximize the probability of predicting the next node:
P(Snext|Sinput)
After a simple linear transformation, the output h_t of the t-th time step is converted into an N_S-dimensional vector, where N_S is the total number of network nodes; it is normalized with a softmax activation into a probability distribution over the nodes and evaluated with a cross-entropy loss function:
wherein h istDenotes the t thOutput of time step, yt+1The input node representing the t +1 time step,representing a weight matrix, KhRepresenting the hidden layer dimension of the long-short term memory network.
S52: The similarity of nodes in the physical space of the network is also used to constrain the similarity of their representation vectors in the representation space. For a directed node pair <i→j>, both nodes contribute to the formation of the edge, which indicates some relation between them; but their roles with respect to the directed edge e_{i→j} differ: node i plays the leading role as the sender of information, while node j is the receiver. The vector representing node i's own information (its src vector) is therefore expected to be similar, in the representation space, to the vector representing the influence of node j's neighbors on it (its dst vector), making the directed edge e_{i→j} more interpretable. The loss function of this part is defined as:
where Ω⁻ denotes the set of negative samples drawn for the node pair and σ denotes the sigmoid function.
This not only embeds the local structure information of the network into the representation vectors, but also makes the two representation vectors aware of and constrained by each other, enhancing the learning ability of the model.
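The patent's loss formula is an image in the source; a standard form consistent with the description of Ω⁻ and σ is the negative-sampling logistic loss, sketched here (all names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def edge_loss(src_i, dst_j, negative_srcs):
    """ASSUMED form of the S52 loss: pull node i's src vector toward node j's
    dst vector, and push the src vectors of the negative samples (the set
    Omega-) away from it."""
    loss = -np.log(sigmoid(src_i @ dst_j))            # positive (observed) edge
    for src_v in negative_srcs:
        loss -= np.log(sigmoid(-(src_v @ dst_j)))     # negative samples
    return loss
```

An aligned src/dst pair yields a lower loss than an opposed pair, which is the intended mutual constraint between the two representation vectors.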
S53: After training, the two representation vectors of a node (its src vector and its dst vector) are used as the input of a concat layer and spliced to obtain the final node representation vector:
where h_i is the final representation of node v_i.
The same or similar reference numerals correspond to the same or similar parts;
the positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
It should be understood that the above-described embodiments of the present invention are merely examples intended to clearly illustrate the invention, not to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively here. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the claims of the present invention.
Claims (10)
1. A method for constructing a complex attribute network representation model based on path aggregation is characterized by comprising the following steps:
s1: constructing a node attribute embedding module, which takes as input the attributes of nodes in the social network graph and outputs a group of attribute representation vectors for each node;
s2: constructing a feature fusion module, whose input is the node attribute representation vectors output by the node attribute embedding module and whose output is the src vector of the node, the src vector being a high-order feature representation vector of the node;
s3: constructing a random walk module, which takes as input the structure of the social network graph and outputs random walk sequences of nodes;
s4: constructing a node aggregation module, whose input is the random walk sequences output by the random walk module and whose output is the dst vector of the node;
s5: constructing a structural modeling module, which constrains the dst vectors of the nodes and uses the similarity of nodes in the network's physical space to constrain the similarity of their representation vectors in the representation space, so that a node's dst vector and src vector constrain each other; the trained src vector and dst vector of a node are then spliced, i.e., fed into a concat layer, whose output vector is the final node representation vector.
2. The method for constructing a complex attribute network representation model based on path aggregation according to claim 1, wherein the input of the node attribute embedding module is the attributes of the nodes in the social network graph and the output is a group of attribute representation vectors for each node; the node attribute embedding module preprocesses the attributes, applying one-hot encoding directly to discrete attributes and discretizing continuous attributes before one-hot encoding them; the node attribute embedding module then converts the one-hot-encoded attributes into a low-dimensional representation space to obtain the attribute representation vectors.
3. The method for constructing a complex attribute network representation model based on path aggregation as claimed in claim 2, wherein the feature fusion module divides a node's group of attribute representation vectors by attribute type, each node obtaining n groups of attribute representation vectors, n being the number of attribute types; the feature fusion module applies a mean-pooling operation to the attribute representation vectors of the same type, i.e., the n groups pass through a mean-pooling layer respectively to obtain n attribute representation vectors; the feature fusion module splices the attribute representation vectors of different types, i.e., the n vectors pass through a concat layer to obtain a single attribute representation vector, which is the initial feature representation vector of the node; the feature fusion module extracts high-order cross features from the initial feature representation vector to obtain the node's high-order feature representation vector, which serves as the node's src vector; concretely, the initial feature representation vector is fed into an MLP layer, whose output is the high-order feature representation vector of the node.
4. The method for constructing the complex attribute network representation model based on path aggregation according to claim 3, wherein the specific process of step S1 is as follows:
the node attributes describe the characteristics of the node; attributes describing different aspects of the node are regarded as different attribute types, and attributes describing the same aspect are regarded as the same type; attributes of each type are converted into the low-dimensional representation space through a type-specific conversion matrix;
where a_i^{A_j} is the binary vector representation of the i-th attribute of type A_j, W^{A_j} is the weight matrix for type A_j, d_{A_j} is the binary-vector dimension of type-A_j attributes, and k_{A_j} is the dimension of the low-dimensional representation space; the attributes of a node may then be represented by a set of attribute representation vectors {u_i^{A_j}}.
5. The method for constructing the complex attribute network representation model based on path aggregation according to claim 4, wherein the specific process of the step S2 is as follows:
s21: dividing a group of attribute expression vectors expressing a node according to types to obtain NAGroup attribute represents a vector, NAIndicates the total number of attribute types, i.e. to beIs converted into
S22: for the attribute characteristics of the same type, aggregation is carried out by adopting a mean pooling mode,
S23: after obtaining the feature representations of the node's different attribute types, feeding them into a concat layer so that they are spliced together in proportion, and normalizing the weights of the different attribute types, i.e., Σ_{j=1}^{N_A} λ_j = 1, N_A denoting the total number of attribute types:
s24: passing the obtained initial feature representation vector of the node through a multi-layer perceptron (MLP) to cross features of different types, obtaining a representation vector with richer content:
6. The method for constructing a complex attribute network representation model based on path aggregation as claimed in claim 5, wherein λ_j denotes the weight of attribute type A_j: the larger λ_j, the larger the role the features of that attribute type are expected to play according to prior knowledge.
7. The method for constructing the complex attribute network representation model based on path aggregation according to claim 6, wherein the specific process of the step S3 is:
for each node pair < i, j >, the amount of inter-node pair information transfer can be estimated as:
where w_{i,j} is the weight of edge e_{i,j}, d_i is the degree of node i, and α is a smoothing coefficient; M_{<i,j>} is not symmetric, i.e., in general M_{<i,j>} ≠ M_{<j,i>}; M_{<i,j>} denotes the amount of information transmitted from node i to node j; taking the directionality of edges into account, the formula is extended to the more general case:
where w_{i→j} is the weight of directed edge e_{i→j}, d_i^{in} is the in-degree of node i, and d_j^{out} is the out-degree of node j, i.e.:
8. The method for constructing a complex attribute network representation model based on path aggregation according to claim 7, wherein the specific process of step S4 is:
s41: performing one-hot coding on all nodes according to the node numbers to obtain binary representation vectors of the nodes
S42: converting nodes represented using one-hot encoding to low-dimensional tablesIn the space representation, a node structure representation vector is obtained:
s43: according to the degree-based random walk algorithm, obtaining a walk sequence set Walk = {w1, w2, ..., wn}, where wi = {s1, s2, ..., sk, ..., sn} denotes the sequence obtained by a random walk starting from node vi, i.e., s1 = vi; the set wi − {s1} is regarded as the sampled neighbors of node i; the sampled neighbors retain their order from the walk sequence, and the sampled neighbors of node i reflect the process by which information in the network is transmitted to node i as well as the status and role of node i in the network:
s44: aggregating the sampled neighbors with a long short-term memory network:
9. The method as claimed in claim 8, wherein, when the long short-term memory network processes the time sequence, inputs closer to the current time have a greater influence on the current output, and nodes closer to the current node have a greater influence on it; the sequence order input into the long short-term memory network is therefore exactly opposite to the sequence order generated by the random walk, meaning that sn is input into the long short-term memory network unit at the first time step and s2 is input at the last time step.
10. The method for constructing a complex attribute network representation model based on path aggregation according to claim 9, wherein the specific process of step S5 is:
s51: the representation vector of the target node should be able to embody the information of its sampling neighbors, that is to say the structural representation vector of the target node should be as close as possible to the aggregation of the representation vectors of its sampling neighbors:
through one iterative computation, the representation vector of each node contains the representation information of its sampled neighbors, and therefore the information of the node within the local network structure; through continuous iterative updates, the representation vector of each node gradually incorporates information from more distant nodes, until the representation vectors of the nodes contain the global structure information of the network; to address the problem that time steps farther away have less influence, making it hard to ensure that earlier sequence information is well utilized, an auxiliary task is introduced to supervise the training of the long short-term memory network when it is used as the aggregation function: the output of the t-th time step of the long short-term memory network unit is used to predict the input of the (t+1)-th time step, i.e., for the input sequence S_input = {s_n, s_{n-1}, ..., s_2}, the network should accurately predict the next-node sequence S_next = {s_{n-1}, s_{n-2}, ..., s_1}, i.e., maximize the probability of predicting the next node:
P(Snext|Sinput)
after a simple linear transformation, the output h_t of the t-th time step is converted into an N_S-dimensional vector, N_S being the total number of network nodes; it is normalized with a softmax activation into a probability of belonging to each node and evaluated with a cross-entropy loss function:
wherein h istOutput representing the t-th time step, yt+1The input node representing the t +1 time step,representing a weight matrix, KhRepresenting hidden layer dimensions of the long-short term memory network;
s52: the similarity of nodes in the network's physical space is used to constrain the similarity of their representation vectors in the representation space; for a directed node pair <i→j>, both nodes contribute to the formation of the edge, indicating some relation between them, but their roles with respect to the directed edge e_{i→j} differ: node i plays the leading role as the sender of information, while node j is the receiver; the vector representing node i's own information is therefore expected to be similar, in the representation space, to the vector representing the influence of node j's neighbors on it, making the directed edge e_{i→j} more interpretable; the loss function of this part is therefore defined as:
where Ω⁻ denotes the set of negative samples drawn for the node pair and σ denotes the sigmoid function; the local structure information of the network is thereby embedded in the representation vectors, the two representation vectors perceive and constrain each other, and the learning ability of the model is enhanced;
s53: after training, the two representation vectors of a node (its src vector and its dst vector) are used as the input of a concat layer and spliced to obtain the final node representation vector:
where h_i is the final representation of node v_i.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011228523.3A CN112395512B (en) | 2020-11-06 | 2020-11-06 | Method for constructing complex attribute network representation model based on path aggregation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112395512A true CN112395512A (en) | 2021-02-23 |
CN112395512B CN112395512B (en) | 2022-09-16 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113807457A (en) * | 2021-09-26 | 2021-12-17 | 北京市商汤科技开发有限公司 | Method, device and equipment for determining road network characterization information and storage medium |
WO2024000519A1 (en) * | 2022-06-30 | 2024-01-04 | 华为技术有限公司 | Trajectory data characterization method and apparatus |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070087756A1 (en) * | 2005-10-04 | 2007-04-19 | Hoffberg Steven M | Multifactorial optimization system and method |
CN111325326A (en) * | 2020-02-21 | 2020-06-23 | 北京工业大学 | Link prediction method based on heterogeneous network representation learning |
CN111523003A (en) * | 2020-04-27 | 2020-08-11 | 北京图特摩斯科技有限公司 | Data application method and platform with time sequence dynamic map as core |
CN111709474A (en) * | 2020-06-16 | 2020-09-25 | 重庆大学 | Graph embedding link prediction method fusing topological structure and node attributes |
Non-Patent Citations (1)
Title |
---|
JIWON JUNG ET AL.: "Location-Aware Point-to-Point RPL in Indoor", 《MDPI》 * |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | GR01 | Patent grant | |