CN112417159A - Cross-language entity alignment method of context alignment enhanced graph attention network - Google Patents
- Publication number
- CN112417159A (application number CN202011201832.1A)
- Authority
- CN
- China
- Prior art keywords
- entity
- graph
- knowledge
- entities
- aligned
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F16/367 — Information retrieval of unstructured textual data; creation of semantic tools, e.g. ontology or thesauri; ontology
- G06F40/211 — Handling natural language data; natural language analysis; parsing; syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
- G06F40/295 — Handling natural language data; natural language analysis; recognition of textual entities; named entity recognition
- G06N3/045 — Computing arrangements based on biological models; neural networks; architecture; combinations of networks
- G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods
Abstract
The invention provides a cross-language entity alignment method based on a context-alignment-enhanced graph attention network. A first knowledge graph and a second knowledge graph are introduced; an aligned seed entity set is screened out, and the name of each entity is translated into English. A training set and a test set are constructed from the translated aligned seed set, and each entity name is converted into word vectors by the word2vec algorithm; the initial features of the two graphs are constructed separately by summing the word vectors of each entity name. The training set is divided into a context alignment seed set and a target alignment seed set, which, together with the initial features, serve as input data. The features of each entity, containing cross-graph fusion information and multi-hop neighbor information, are obtained through a cross-knowledge-graph aggregation layer and an attention-based graph neural network. The method makes full use of the context alignment seed set: information between the graphs is transmitted through the cross-knowledge-graph aggregation layer, and entity neighbor information together with cross-graph entity alignment information is collected by the attention-based graph neural network.
Description
Technical Field
The invention relates to a cross-language entity alignment method, and in particular to a method that uses a cross-knowledge-graph neural network model to collect and propagate heterogeneous knowledge graph information in order to solve the cross-language knowledge graph entity alignment problem. The model comprises a cross-knowledge-graph aggregation layer and an attention-based information propagation layer. Pre-aligned seed entity pairs are regarded as the medium for information transfer between two heterogeneous knowledge graphs in different languages; information is transferred between the two graphs so that equivalent entities in different knowledge graphs obtain more neighbor alignment features, and equivalent entities in the different knowledge graphs are then predicted from the learned feature representations.
Background
In recent years, knowledge graphs have shown great potential in many natural language processing tasks, such as language modeling and question answering. With the rapid growth of multi-language knowledge graphs (e.g., DBpedia, YAGO), cross-language entity alignment has attracted much attention from researchers because of the lack of connections between cross-language entities. The cross-language entity alignment task aims to automatically find equivalent entities in different single-language knowledge graphs so as to bridge the gap between different languages.
Recently, many methods based on graph neural networks (GNNs) have been proposed for the entity alignment task. GNN-based methods achieve good performance because a GNN can learn representations of graph-structured data by aggregating neighborhood information. However, existing GNN-based approaches model the two cross-language knowledge graphs separately, ignoring the useful pre-aligned connections (seed entities) between the two knowledge graphs. These methods only use the seed entities to optimize the objective function during training, and do not make full use of the context alignment information the seed entities provide, resulting in sub-optimal results.
Disclosure of Invention
The invention provides a cross-language entity alignment method based on a context alignment enhancement graph attention network, aiming at the defects of the prior art in cross-language entity alignment, and the method comprises a cross-knowledge graph aggregation layer and an attention-based cross-knowledge graph propagation layer. The information across the knowledge graphs is propagated through the pre-aligned pairs of seed entities, so that feature representations of different knowledge graphs are obtained, and then equivalent entities in different knowledge graphs are predicted through learned feature representations (embedding).
In order to achieve the above object, the present invention is conceived as follows: firstly, regarding a seed entity pair as a medium for information transfer between two heterogeneous knowledge graphs, and collecting cross-knowledge graph information by using a cross-knowledge graph aggregation layer; the neighborhood information for the entity is then collected using an attention-based graph neural network. And learning multi-hop neighbor information by stacking a plurality of the two layers. Finally, model parameters are optimized using an edge-based loss function.
According to the conception, the invention adopts a technical scheme that: a cross-language entity alignment method of a context alignment enhanced graph attention network is provided, which comprises the following steps:
step 1: introducing a first knowledge graph and a second knowledge graph, screening an aligned seed entity set according to the first knowledge graph and the second knowledge graph, translating the name of each entity in each aligned entity pair in the aligned seed entity set into English, defining the translated name of each entity in each aligned entity pair in the aligned seed entity set, and constructing a training set and a test set from the translated aligned seed entity set;
step 2: converting the translated name of each entity in the aligned entity pairs of the aligned seed entity set into word vectors of the entity name by using the word2vec algorithm, summing the word vectors of each entity name as the initialization feature of the entity, and respectively constructing the initialization feature of the first knowledge graph and the initialization feature of the second knowledge graph;
step 3: randomly dividing the training set into a context alignment seed set and a target alignment seed set, and constructing the input data of the neural network from the context alignment seed set, the target alignment seed set, the initialization feature of the first knowledge graph, and the initialization feature of the second knowledge graph;
step 4: information transmission between the different knowledge graphs is carried out through the cross-knowledge-graph aggregation layer;
step 5: collecting the neighbor information of each entity in the first knowledge graph and the second knowledge graph through the attention-based graph neural network;
step 6: inputting the context alignment seed set and the initialized features of the entities into the model, and obtaining, through steps 4 and 5, the features of each entity containing graph fusion information and multi-hop neighbor information;
preferably, the first knowledge-graph in step 1 is:
G1=(E1,R1,T1)
step 1 the second knowledge-graph is:
G2=(E2,R2,T2)
wherein E1 represents the entity set of the first knowledge graph, R1 represents the relation set of the first knowledge graph, T1 represents the triple set of the first knowledge graph, E2 represents the entity set of the second knowledge graph, R2 represents the relation set of the second knowledge graph, and T2 represents the triple set of the second knowledge graph;
step 1, the aligned seed entity set is:
A={ak=(ek,1,i,ek,2,j)|ek,1,i∈E1,ek,2,j∈E2}
i∈[1,M],j∈[1,N]
k∈[1,K]
wherein A represents the aligned seed entity set, ak represents the k-th aligned entity pair, K represents the number of aligned entity pairs in the aligned seed entity set, ek,1,i represents the i-th entity from the entity set of the first knowledge graph in the k-th aligned entity pair, ek,2,j represents the j-th entity from the entity set of the second knowledge graph in the k-th aligned entity pair, ek,1,i and ek,2,j have the same Chinese meaning, M represents the number of entities in A from the first knowledge graph, and N represents the number of entities in A from the second knowledge graph;
step 1, translating the name of each entity in each aligned entity pair of the aligned seed entity set into English is as follows:
the names of the entities in each aligned entity pair in A, i.e. ek,1,i and ek,2,j, are all translated into English, and the set of aligned seed entities after translation is marked as A*, specifically defined as:
A*={a* k=(e* k,1,i,e* k,2,j)}
i∈[1,M],j∈[1,N]
k∈[1,K]
wherein A* represents the set of post-translation aligned seed entities, a*k represents the k-th aligned entity pair after translation, K represents the number of aligned entity pairs in the set of post-translation aligned seed entities, e*k,1,i represents the translated i-th entity from the entity set of the first knowledge graph in the k-th aligned entity pair, e*k,2,j represents the translated j-th entity from the entity set of the second knowledge graph in the k-th aligned entity pair, M represents the number of entities in A* from the first knowledge graph, and N represents the number of entities in A* from the second knowledge graph;
step 1, defining the name of each translated entity as:
the translated name of each entity consists of a plurality of English words, specifically represented as:
e*k,1,i = (wordk,1,i,1, wordk,1,i,2, …, wordk,1,i,n)
e*k,2,j = (wordk,2,j,1, wordk,2,j,2, …, wordk,2,j,m)
wherein wordk,1,i,t represents the t-th word of the translated i-th entity from the entity set of the first knowledge graph in the k-th aligned entity pair, wordk,2,j,t represents the t-th word of the translated j-th entity from the entity set of the second knowledge graph in the k-th aligned entity pair, n is the total number of words of the translated i-th entity from the entity set of the first knowledge graph in the k-th aligned entity pair, and m is the total number of words of the translated j-th entity from the entity set of the second knowledge graph in the k-th aligned entity pair;
step 1, the construction of a training set and a test set from the set of translated aligned seed entities is as follows:
from the set of post-translation aligned seed entities A*, P of the K aligned entity pairs are randomly selected as the training set, represented by Atrain; the remaining K − P aligned entity pairs in A* are used as the test set, represented by Atest;
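As a minimal illustration of this random split (the helper name and the fixed random seed are assumptions for the sketch; the patent fixes no implementation), the K translated aligned pairs can be shuffled and cut at index P:

```python
import random

def split_seed_set(aligned_pairs, p, seed=0):
    """Shuffle the translated aligned seed pairs and split them into
    a training set of p pairs (A_train) and a test set of the rest (A_test)."""
    rng = random.Random(seed)
    pairs = list(aligned_pairs)
    rng.shuffle(pairs)
    return pairs[:p], pairs[p:]

# Example: 10 toy pairs, 4 for training, 6 for testing.
a_star = [("e1_%d" % k, "e2_%d" % k) for k in range(10)]
a_train, a_test = split_seed_set(a_star, p=4)
```

Fixing the seed makes the split reproducible across runs, which matters when comparing models on the same Atrain/Atest partition.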
preferably, the word vector of each entity name in step 2 is:
vk,1,i,t = word2vec(wordk,1,i,t)
vk,2,j,t = word2vec(wordk,2,j,t)
i∈[1,M], j∈[1,N]
k∈[1,K]
wherein vk,1,i,t represents the word vector of the t-th word of the translated i-th entity from the entity set of the first knowledge graph in the k-th aligned entity pair, vk,2,j,t represents the word vector of the t-th word of the translated j-th entity from the entity set of the second knowledge graph in the k-th aligned entity pair, K represents the number of aligned entity pairs in the set of post-translation aligned seed entities, M represents the number of entities in A* from the first knowledge graph, and N represents the number of entities in A* from the second knowledge graph;
step 2, the word vectors of each entity name are summed as the initialization feature of the entity, specifically:
h0k,1,i = vk,1,i,1 + vk,1,i,2 + … + vk,1,i,n
h0k,2,j = vk,2,j,1 + vk,2,j,2 + … + vk,2,j,m
wherein n is the total number of words of the translated i-th entity from the entity set of the first knowledge graph in the k-th aligned entity pair, m is the total number of words of the translated j-th entity from the entity set of the second knowledge graph in the k-th aligned entity pair, h0k,1,i represents the initialization feature of the translated i-th entity from the entity set of the first knowledge graph in the k-th aligned entity pair, and h0k,2,j represents the initialization feature of the translated j-th entity from the entity set of the second knowledge graph in the k-th aligned entity pair;
step 2, the initialization features of the first knowledge graph and of the second knowledge graph are respectively constructed as follows:
the initialization features of the first knowledge graph are noted H1(0) and the initialization features of the second knowledge graph are noted H2(0), specifically defined by stacking the initialization features h0k,1,i of the entities in E1 and h0k,2,j of the entities in E2;
wherein E1 represents the entity set of the first knowledge graph, E2 represents the entity set of the second knowledge graph, ek,1,i represents the i-th entity from the entity set of the first knowledge graph in the k-th aligned entity pair, ek,2,j represents the j-th entity from the entity set of the second knowledge graph in the k-th aligned entity pair, A* represents the set of post-translation aligned seed entities, e*k,1,i represents the translated i-th entity from the entity set of the first knowledge graph in the k-th aligned entity pair, and e*k,2,j represents the translated j-th entity from the entity set of the second knowledge graph in the k-th aligned entity pair;
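A sketch of this initialization step (the word2vec vectors are stubbed with fixed arrays here, and `entity_init_feature` is a hypothetical name):

```python
import numpy as np

def entity_init_feature(word_vectors):
    """Initialization feature of an entity: the sum of the word
    vectors of the words in its translated English name."""
    return np.sum(np.stack(word_vectors), axis=0)

# A two-word entity name with stub 2-dimensional word vectors.
vecs = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
h0 = entity_init_feature(vecs)  # array([1., 2.])
```

Summing (rather than averaging) keeps the operation order-independent while still reflecting every word of the translated name, matching the definition above.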
preferably, the training set in step 3 is Atrain;
step 3, the context alignment seed set is Actx, which serves as input data of the model and is used to transmit information between the first knowledge graph and the second knowledge graph;
step 3, the target alignment seed set is Aobj, which is used to calculate the loss function;
step 3, the initialization feature of the first knowledge graph is H1(0), which serves as input data of the model;
step 3, the initialization feature of the second knowledge graph is H2(0), which serves as input data of the model;
preferably, the information propagation between the different knowledge graphs through the cross-knowledge-graph aggregation layer in step 4 is as follows:
step 4.1: according to the features of the translated entity from the entity set of the first knowledge graph and of the translated entity from the entity set of the second knowledge graph in each aligned entity pair of the context alignment seed set, the weights for fusing the information of the first knowledge graph and the second knowledge graph are calculated with a gate mechanism, specifically:
g(l)k,1,i = σ(W[h(l)k,1,i || h(l)k,2,j] + b)
g(l)k,2,j = σ(W[h(l)k,2,j || h(l)k,1,i] + b)
i∈[1,M], j∈[1,N]
k∈[1,K]
wherein || represents the concatenation operation of vectors, W is a weight matrix learned during training, b is the bias, and σ is the sigmoid activation function. h(l)k,1,i is the embedding of the i-th entity from knowledge graph G1 in the k-th entity pair of A*, and h(l)k,2,j is the embedding of the j-th entity from knowledge graph G2 in the k-th entity pair of A*. l refers to the l-th layer of the network. K represents the number of aligned entity pairs in the set of post-translation aligned seed entities, M represents the number of entities in A* from the first knowledge graph G1, and N represents the number of entities in A* from the second knowledge graph G2; g(l)k,1,i represents the fusion weight of the i-th entity from knowledge graph G1 in the k-th entity pair of A*, and g(l)k,2,j represents the fusion weight of the j-th entity from knowledge graph G2 in the k-th entity pair of A*;
step 4.2: the information of the first knowledge graph and the second knowledge graph is fused with the gate mechanism to obtain the fused representations of the two graphs, specifically:
ĥ(l)k,1,i = g(l)k,1,i ⊙ h(l)k,1,i + (1 − g(l)k,1,i) ⊙ h(l)k,2,j
ĥ(l)k,2,j = g(l)k,2,j ⊙ h(l)k,2,j + (1 − g(l)k,2,j) ⊙ h(l)k,1,i
wherein g(l)k,1,i represents the fusion weight of the i-th entity from knowledge graph G1 in the k-th entity pair of A*, and g(l)k,2,j represents the fusion weight of the j-th entity from knowledge graph G2 in the k-th entity pair of A*; h(l)k,1,i represents the embedding of the i-th entity from knowledge graph G1 in the k-th entity pair of A*, and h(l)k,2,j represents the embedding of the j-th entity from knowledge graph G2 in the k-th entity pair of A*; ĥ(l)k,1,i represents the embedding, after the gate mechanism, of the i-th entity from knowledge graph G1 in the k-th entity pair of A*, and ĥ(l)k,2,j represents the embedding, after the gate mechanism, of the j-th entity from knowledge graph G2 in the k-th entity pair of A*;
step 4.3: the feature matrices after the cross-knowledge-graph aggregation layer are calculated for all entities in the first knowledge graph and the second knowledge graph;
the feature matrices after the cross-knowledge-graph aggregation layer are:
H̃1(l) = crossAggr(H1(l), H2(l), Actx)
H̃2(l) = crossAggr(H2(l), H1(l), Actx)
wherein crossAggr applies the gate-mechanism fusion of steps 4.1 and 4.2 to the entities appearing in the context alignment seed set; ĥ(l)k,1,i represents the feature representation of the i-th entity from the first knowledge graph in the k-th entity pair of the post-translation aligned seed entity set A*, and ĥ(l)k,2,j represents the feature representation of the j-th entity from the second knowledge graph in the k-th entity pair of A*;
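The gate-based fusion of steps 4.1 and 4.2 can be sketched for a single aligned pair as follows (a simplified illustration: `gate_fuse`, `W`, and `b` are placeholder names, and the gate formula is a reconstruction consistent with the description, not a verbatim quote of the patent's equation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_fuse(h1, h2, W, b):
    """Step 4.1: fusion weight g = sigmoid(W[h1 || h2] + b);
    step 4.2: fused embedding g * h1 + (1 - g) * h2."""
    g = sigmoid(np.concatenate([h1, h2]) @ W + b)
    return g * h1 + (1.0 - g) * h2

# With W = 0 and b = 0 the gate is 0.5, i.e. a plain average of the
# two aligned entities' embeddings.
h1 = np.array([2.0, 0.0])
h2 = np.array([0.0, 2.0])
fused = gate_fuse(h1, h2, W=np.zeros((4, 2)), b=np.zeros(2))  # array([1., 1.])
```

The learned gate lets the model decide, per dimension, how much cross-graph information from the aligned counterpart to mix into each entity's representation.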
preferably, in step 5 the neighbor information of each entity in the first knowledge graph and the second knowledge graph is collected by the attention-based graph neural network as follows:
step 5.1: the weights, i.e. attention, are calculated from the neighbor entities of the entity, with the specific formulas:
ck,1,i,p = LeakyRelu(V[ĥ(l)k,1,i || ĥ(l)k,1,i,p])
αk,1,i,p = exp(ck,1,i,p) / Σp′∈N(ek,1,i) exp(ck,1,i,p′)
wherein N(ek,1,i) represents the set of neighbor entities linked to entity ek,1,i in the first knowledge graph, and exp is the exponential function; ĥ(l)k,1,i represents the feature representation, after the gate mechanism, of the i-th entity from the first knowledge graph in the k-th entity pair of the post-translation aligned seed entity set; ĥ(l)k,1,i,p is the feature representation, after the gate mechanism, of the p-th neighbor entity of entity ek,1,i; LeakyRelu is the activation function, V is a network parameter matrix to be learned, [· || ·] represents the concatenation of the two feature representations, and αk,1,i,p represents the attention weight of the p-th neighbor of the i-th entity from the first knowledge graph G1 in the k-th entity pair;
step 5.2: the neighbor information is fused according to the weights calculated in step 5.1 so as to collect the neighbor information of the entity, with the calculation formula:
h(l+1)k,1,i = Relu(Σp αk,1,i,p ⊙ ĥ(l)k,1,i,p)
wherein Relu is the activation function, and αk,1,i,p is the weight of the p-th neighbor of the i-th entity in the first knowledge graph G1; crossAtt is the attention-based graph neural network layer, and crossAggr is the cross-knowledge-graph aggregation layer; ĥ(l)k,1,i represents the feature representation, after the gate mechanism, of the i-th entity from knowledge graph G1 in the k-th entity pair of A*, and ĥ(l)k,1,i,p represents the feature representation of the p-th neighbor of the i-th entity from the first knowledge graph in the k-th entity pair of A*;
step 5.3: steps 5.1 and 5.2 are applied to all entities of the first knowledge graph and the second knowledge graph, i.e. the weights of all entities are calculated from their neighbor entities and the neighbor information is fused according to the calculated weights; the feature matrices after the neighbor information is collected by the attention-based graph neural network layer are thereby obtained, namely:
H1(l+1) = crossAtt(H̃1(l))
H2(l+1) = crossAtt(H̃2(l))
wherein crossAtt is the attention-based graph neural network layer, H̃1(l) is the output feature representation of the first knowledge graph through the cross-knowledge-graph aggregation layer, and H̃2(l) is the output feature representation of the second knowledge graph through the cross-knowledge-graph aggregation layer;
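Steps 5.1 and 5.2 amount to GAT-style attention over an entity's neighbors. A minimal single-entity sketch (illustrative names; `V` is reduced to a learned vector producing a scalar score, a simplifying assumption):

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def attend_neighbors(h_i, neighbor_hs, V):
    """Step 5.1: attention weight of each neighbor via a softmax over
    LeakyReLU(V [h_i || h_p]); step 5.2: ReLU of the weighted sum."""
    scores = np.array([leaky_relu(np.concatenate([h_i, h_p]) @ V)
                       for h_p in neighbor_hs])
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over neighbors
    agg = sum(w * h_p for w, h_p in zip(weights, neighbor_hs))
    return np.maximum(agg, 0.0)  # ReLU

# With V = 0 every neighbor scores equally, so the weights are uniform.
h_i = np.array([1.0, 1.0])
neighbors = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]
out = attend_neighbors(h_i, neighbors, V=np.zeros(4))
```

Because the softmax normalizes over each entity's own neighborhood, entities with different numbers of neighbors still receive comparably scaled aggregated features.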
preferably, the features of each entity containing the graph fusion information and the multi-hop neighbor information in step 6 are:
H1(l+1) = crossAtt(crossAggr(H1(l), H2(l), Actx))
H2(l+1) = crossAtt(crossAggr(H2(l), H1(l), Actx))
wherein H1(l+1) is the output of the first knowledge graph through the l-th CGAT layer, H2(l+1) is the output of the second knowledge graph through the l-th CGAT layer, H1(0) is the initialization feature of the first knowledge graph, H2(0) is the initialization feature of the second knowledge graph, crossAtt is the attention-based graph neural network layer, and crossAggr is the cross-knowledge-graph aggregation layer. By stacking L CGAT layers, the features of the first knowledge graph and the second knowledge graph are updated L times, finally outputting the updated features H1(L) and H2(L).
the target loss function is calculated on the target alignment seed set, with the formula:
Loss = Σ(e1,e2)∈Aobj Σ(e1,e−)∈A− [f(e1, e2) + γ − f(e1, e−)]+
wherein f(e1, e2) = ||h(L)e1 − h(L)e2||1 represents the L1 distance between the final embeddings h(L) of two entities, γ is the margin, φ is the set of trainable model parameters, e− refers to a negative entity sampled for a given entity, A− is the set of negative pairs, and Aobj is the data used to optimize the model parameters.
And optimizing and updating model parameters phi by using an Adam algorithm, wherein the model parameters phi comprise parameters which are trainable across the knowledge graph aggregation layer and the attention-based graph neural network layer, and constructing a context-based alignment enhancement graph attention network model according to the optimized parameters phi.
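A sketch of such a margin-based objective on Aobj (the margin value and the one-to-one pairing of negatives are assumptions for the illustration; the Adam update of the parameters φ is omitted):

```python
import numpy as np

def l1(a, b):
    """L1 distance between two entity embeddings."""
    return np.abs(np.asarray(a) - np.asarray(b)).sum()

def margin_loss(pos_pairs, neg_pairs, margin=1.0):
    """Hinge loss: pull aligned pairs together, push negative pairs
    at least `margin` further apart than the aligned ones."""
    return sum(max(0.0, l1(p1, p2) + margin - l1(n1, n2))
               for (p1, p2), (n1, n2) in zip(pos_pairs, neg_pairs))

pos = [([0.0, 0.0], [0.0, 0.0])]        # aligned pair, distance 0
neg_far = [([0.0, 0.0], [3.0, 0.0])]    # negative pair, distance 3
loss_far = margin_loss(pos, neg_far)    # 0.0: negative already beyond margin
neg_close = [([0.0, 0.0], [0.5, 0.0])]  # negative pair, distance 0.5
loss_close = margin_loss(pos, neg_close)
```

The hinge goes to zero once a negative pair is pushed past the margin, so training effort concentrates on negatives that are still confusable with true alignments.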
The method has the advantage that the context alignment seed set is fully utilized: information between the graphs is transmitted through the cross-knowledge-graph aggregation layer, and entity neighbor information together with cross-graph entity alignment information is collected by the attention-based graph neural network.
Drawings
FIG. 1 is a schematic diagram of a context enhancement graph attention network in the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples for the purpose of facilitating understanding and practice of the invention by those of ordinary skill in the art, and it is to be understood that the embodiments described herein are merely illustrative and explanatory of the invention and are not restrictive thereof.
The following describes an embodiment of the present invention with reference to fig. 1:
step 1, introducing a first knowledge graph and a second knowledge graph, screening an aligned seed entity set according to the first knowledge graph and the second knowledge graph, translating the name of each entity in each aligned entity pair in the aligned seed entity set into English, defining the translated name of each entity in each aligned entity pair in the aligned seed entity set, and constructing a training set and a test set from the translated aligned seed entity set;
step 1 the first knowledge-graph is:
G1=(E1,R1,T1)
step 1 the second knowledge-graph is:
G2=(E2,R2,T2)
wherein E1 represents the entity set of the first knowledge graph, R1 represents the relation set of the first knowledge graph, T1 represents the triple set of the first knowledge graph, E2 represents the entity set of the second knowledge graph, R2 represents the relation set of the second knowledge graph, and T2 represents the triple set of the second knowledge graph;
step 1, the aligned seed entity set is:
A={ak=(ek,1,i,ek,2,j)|ek,1,i∈E1,ek,2,j∈E2}
i∈[1,M],j∈[1,N]
k∈[1,K]
wherein A represents the aligned seed entity set, ak represents the k-th aligned entity pair, K = 15000 represents the number of aligned entity pairs in the aligned seed entity set, ek,1,i represents the i-th entity from the entity set of the first knowledge graph in the k-th aligned entity pair, ek,2,j represents the j-th entity from the entity set of the second knowledge graph in the k-th aligned entity pair, ek,1,i and ek,2,j both belong to the k-th aligned entity pair and have the same Chinese meaning, M = 66469 represents the number of entities in A from the first knowledge graph, and N = 98125 represents the number of entities in A from the second knowledge graph;
step 1, translating the name of each entity in each aligned entity pair of the aligned seed entity set into English is as follows:
the names of the entities in each aligned entity pair in A, i.e. ek,1,i and ek,2,j, are all translated into English, and the set of aligned seed entities after translation is marked as A*, specifically defined as:
A*={a* k=(e* k,1,i,e* k,2,j)}
i∈[1,M],j∈[1,N]
k∈[1,K]
wherein A* represents the set of post-translation aligned seed entities, a*k represents the k-th aligned entity pair after translation, K = 15000 represents the number of aligned entity pairs in the set of post-translation aligned seed entities, e*k,1,i represents the translated i-th entity from the entity set of the first knowledge graph in the k-th aligned entity pair, e*k,2,j represents the translated j-th entity from the entity set of the second knowledge graph in the k-th aligned entity pair, M = 66469 represents the number of entities in A* from the first knowledge graph, and N = 98125 represents the number of entities in A* from the second knowledge graph;
step 1, the name of each translated entity is defined as:
the translated name of each entity consists of a plurality of English words, specifically represented as:
e*k,1,i = (wordk,1,i,1, wordk,1,i,2, …, wordk,1,i,n)
e*k,2,j = (wordk,2,j,1, wordk,2,j,2, …, wordk,2,j,m)
wherein wordk,1,i,t represents the t-th word of the translated i-th entity from the entity set of the first knowledge graph in the k-th aligned entity pair, wordk,2,j,t represents the t-th word of the translated j-th entity from the entity set of the second knowledge graph in the k-th aligned entity pair, n is the total number of words of the translated i-th entity from the entity set of the first knowledge graph in the k-th aligned entity pair, and m is the total number of words of the translated j-th entity from the entity set of the second knowledge graph in the k-th aligned entity pair;
step 1, the construction of a training set and a test set from the set of translated aligned seed entities is as follows:
from the set of post-translation aligned seed entities A*, P = 4500 of the 15000 aligned entity pairs are randomly selected as the training set, represented by Atrain; the remaining K − P = 10500 aligned entity pairs in A* are used as the test set, represented by Atest;
step 2, the translated name of each entity in the aligned entity pairs of the aligned seed entity set is converted into word vectors of the entity name by using the word2vec algorithm, the word vectors of each entity name are summed as the initialization feature of the entity, and the initialization features of the first knowledge graph and of the second knowledge graph are respectively constructed;
step 2, the word vector of each entity name is:
vk,1,i,t = word2vec(wordk,1,i,t)
vk,2,j,t = word2vec(wordk,2,j,t)
i∈[1,M], j∈[1,N]
k∈[1,K]
wherein vk,1,i,t represents the word vector of the t-th word of the translated i-th entity from the entity set of the first knowledge graph in the k-th aligned entity pair, vk,2,j,t represents the word vector of the t-th word of the translated j-th entity from the entity set of the second knowledge graph in the k-th aligned entity pair, K = 4500 represents the number of aligned entity pairs in the set of post-translation aligned seed entities, M = 4500 represents the number of entities in A* from the first knowledge graph, and N = 4500 represents the number of entities in A* from the second knowledge graph;
step 2, summing the word vectors of each entity name as the initialization feature of the entity, specifically as follows:
wherein n is the total number of words of the translated i-th entity from the entity set of the first knowledge-graph in the k-th aligned entity pair, m is the total number of words of the translated j-th entity from the entity set of the second knowledge-graph in the k-th aligned entity pair, and the two sums are, respectively, the initialization feature of the translated i-th entity from the entity set of the first knowledge-graph and the initialization feature of the translated j-th entity from the entity set of the second knowledge-graph in the k-th aligned entity pair;
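The word-vector summation of step 2 can be sketched as follows (the toy word-vector table and the zero-vector fallback for unknown words are assumptions for illustration):

```python
import numpy as np

def entity_init_feature(entity_name, word_vectors, dim):
    """Initialization feature of an entity: the sum of the word2vec
    vectors of the words in its translated English name."""
    vecs = [word_vectors.get(w, np.zeros(dim)) for w in entity_name.split()]
    return np.sum(vecs, axis=0)

# Toy word-vector table standing in for pretrained word2vec embeddings.
wv = {"new": np.full(4, 0.5), "york": np.full(4, 0.25)}
feat = entity_init_feature("new york", wv, dim=4)
```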
step 2, respectively constructing the initialization features from the first knowledge graph and the initialization features from the second knowledge graph as follows:
the initialization features of the first knowledge-graph and of the second knowledge-graph are noted as follows, with the specific definition:
wherein E1 represents the entity set of the first knowledge-graph, E2 represents the entity set of the second knowledge-graph, e_{k,1,i} represents the i-th entity from the entity set of the first knowledge-graph in the k-th aligned entity pair, e_{k,2,j} represents the j-th entity from the entity set of the second knowledge-graph in the k-th aligned entity pair, A* represents the set of post-translation aligned seed entities, e*_{k,1,i} represents the translated i-th entity from the entity set of the first knowledge-graph in the k-th aligned entity pair, and e*_{k,2,j} represents the translated j-th entity from the entity set of the second knowledge-graph in the k-th aligned entity pair;
step 3, the context-alignment enhanced graph attention network (CGAT) mainly comprises a cross-knowledge-graph aggregation layer and an attention-based graph neural network layer. The cross-knowledge-graph aggregation layer is used to pass cross-knowledge-graph information between the two knowledge-graphs, while the attention-based graph neural network layer is used to collect neighbor information for each entity in the knowledge-graphs. By stacking multiple CGAT layers, multi-hop cross-knowledge-graph information and neighbor information are propagated within each knowledge-graph. The construction and training process of the model is described in detail below.
Step 4: randomly dividing the training set into a context alignment seed set and a target alignment seed set, and constructing the input data of the neural network from the context alignment seed set, the target alignment seed set, the initialization features of the first knowledge-graph, and the initialization features of the second knowledge-graph;
step 4, the training set is A_train;
step 4, the context alignment seed set is A_ctx, which serves as input data of the model and is used for transmitting information between the first knowledge-graph and the second knowledge-graph;
step 4, the target alignment seed set is A_obj, which is used for calculating the loss function;
step 4, the initialization features of the first knowledge-graph serve as input data of the model;
step 4, the initialization features of the second knowledge-graph serve as input data of the model;
Step 5: propagating information between the different knowledge-graphs through the cross-knowledge-graph aggregation layer;
step 5, the information propagation between different knowledge-graphs through the cross-knowledge-graph aggregation layer comprises the following steps:
step 5.1: using a gate mechanism, calculating the weights for fusing the information of the first knowledge-graph and the second knowledge-graph from the initialization features of the translated entities from the entity set of the first knowledge-graph and the initialization features of the translated entities from the entity set of the second knowledge-graph in the aligned entity pairs of the context alignment seed set; the specific calculation is as follows:
i∈[1,M],j∈[1,N]
k∈[1,K]
wherein || represents the concatenation operation of vectors, W is a weight matrix learned during training, b is a bias, and σ is the sigmoid activation function; the inputs are the embedding of the i-th entity from knowledge-graph G1 and the embedding of the j-th entity from knowledge-graph G2 in the k-th entity pair of A*; l refers to the l-th layer of the network; K = 4500 represents the number of aligned entity pairs in the set of post-translation aligned seed entities, M = 4500 represents the number of entities in A* from the first knowledge-graph G1, and N = 4500 represents the number of entities in A* from the second knowledge-graph G2; the outputs are the fusion weight of the i-th entity from knowledge-graph G1 and the fusion weight of the j-th entity from knowledge-graph G2 in the k-th entity pair of A*;
step 5.2: fusing the information of the first knowledge-graph and the second knowledge-graph by using the gate mechanism to obtain fused representations of the first knowledge-graph and the second knowledge-graph; the specific calculation is as follows:
wherein the fusion weight of the i-th entity from knowledge-graph G1 and the fusion weight of the j-th entity from knowledge-graph G2 in the k-th entity pair of A* are as computed in step 5.1; the inputs are the embedding of the i-th entity from knowledge-graph G1 and the embedding of the j-th entity from knowledge-graph G2 in the k-th entity pair of A*; the outputs are the embedding, after the gate mechanism, of the i-th entity from knowledge-graph G1 and the embedding, after the gate mechanism, of the j-th entity from knowledge-graph G2 in the k-th entity pair of A*;
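Steps 5.1 and 5.2 together amount to a gated mixture of the two graphs' embeddings. A minimal sketch, assuming one common gating form (the patent's exact combination may differ):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_fuse(h1, h2, W, b):
    """Cross-knowledge-graph gate mechanism: a fusion weight in (0, 1) is
    computed from the concatenated embeddings with a learned W and bias b,
    then used to mix each entity's embedding with its counterpart's."""
    g = sigmoid(W @ np.concatenate([h1, h2]) + b)  # fusion weight
    fused1 = g * h1 + (1.0 - g) * h2               # G1 entity after the gate
    fused2 = g * h2 + (1.0 - g) * h1               # G2 entity after the gate
    return fused1, fused2

d = 4
rng = np.random.default_rng(0)
h1, h2 = rng.standard_normal(d), rng.standard_normal(d)
W, b = rng.standard_normal((d, 2 * d)), rng.standard_normal(d)
f1, f2 = gate_fuse(h1, h2, W, b)
```

Because the gate output lies in (0, 1), each fused component is a convex combination of the two original embeddings.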
step 5.3: calculating the feature matrices after the cross-knowledge-graph aggregation layer for all entities in the first knowledge-graph and the second knowledge-graph;
the feature matrices after the cross-knowledge-graph aggregation layer are as follows:
wherein, the calculation formula of crossAggr is as follows:
wherein the former represents the feature representation of the i-th entity from the first knowledge-graph in the k-th entity pair of the set of post-translation aligned seed entities, and the latter represents the feature representation of the i-th entity from the second knowledge-graph in the k-th entity pair of the post-translation aligned seed entity set A*;
step 6: collecting neighbor information of each entity in the first knowledge-graph and the second knowledge-graph through the attention-based graph neural network layer;
step 6, the collection of neighbor information of each entity in the first knowledge-graph and the second knowledge-graph through the attention-based graph neural network layer is as follows:
step 6.1: the weights, i.e. attention, are calculated according to the neighbor entities of the entity, and the specific calculation formula is as follows:
wherein the summation runs over the set of neighbor entities linked to entity e_{k,1,i} in the first knowledge-graph, and exp is the exponential function; the feature representation, after the gate mechanism, of the i-th entity from the first knowledge-graph in the k-th entity pair of the post-translation aligned seed entity set is used; a_{1,k,i} represents the weight of the i-th entity from the first knowledge-graph G1 in the k-th entity pair of A;
wherein LeakyReLU is an activation function, V is a learnable network parameter matrix, || indicates that the two feature representations are concatenated, one input denotes a certain neighbor entity of entity e_{k,1,i}, and the other represents the feature representation, after the gate mechanism, of the i-th entity from the first knowledge-graph in the k-th entity pair of the post-translation aligned seed entity set;
step 6.2: fusing the neighbor information according to the weights calculated in step 6.1 so as to collect the neighbor information of the entity; the calculation formula is as follows:
wherein ReLU is the activation function and α_{k,1,i,p} is the weight of the p-th neighbor of the i-th entity in the first knowledge-graph G1; crossAtt is the attention-based graph neural network layer and crossAggr is the cross-knowledge-graph aggregation layer; the inputs include the feature representation of the i-th entity from knowledge-graph G1 in the k-th entity pair of A*, the feature representation, after the gate mechanism, of the i-th entity from the first knowledge-graph in the k-th entity pair of A*, and the feature representation of the p-th neighbor of the i-th entity from the first knowledge-graph in the k-th entity pair of A*; α_{1,k,i} represents the weight of the i-th entity from the first knowledge-graph G1 in the k-th entity pair of A;
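The attention computation of steps 6.1 and 6.2 can be sketched as follows (the parameter V and the LeakyReLU slope of 0.2 are illustrative assumptions):

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def attend_neighbors(h_i, neighbors, V):
    """Steps 6.1-6.2 in miniature: attention scores from LeakyReLU over
    concatenated features, softmax normalization over the neighbor set,
    then ReLU over the weighted neighbor sum."""
    scores = np.array([leaky_relu(V @ np.concatenate([h_i, h_p]))
                       for h_p in neighbors])
    alpha = np.exp(scores) / np.exp(scores).sum()      # softmax weights
    agg = sum(a * h_p for a, h_p in zip(alpha, neighbors))
    return np.maximum(agg, 0.0), alpha                 # ReLU activation

d = 4
rng = np.random.default_rng(1)
h_i = rng.standard_normal(d)
nbrs = [rng.standard_normal(d) for _ in range(3)]
V = rng.standard_normal(2 * d)
out, alpha = attend_neighbors(h_i, nbrs, V)
```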
step 6.3: calculating, for all entities of the first knowledge-graph and the second knowledge-graph, the weights from their neighbor entities and fusing the neighbor information according to the calculated weights so as to collect the neighbor information of each entity (i.e. applying steps 6.1 and 6.2), thereby obtaining the feature matrices after neighbor information is collected by the attention-based graph neural network layer, namely:
wherein crossAtt is the attention-based graph neural network layer; the first input is the output feature representation of the first knowledge-graph after the cross-knowledge-graph aggregation layer, and the second input is the output feature representation of the second knowledge-graph after the cross-knowledge-graph aggregation layer;
Step 7: the context alignment seed set and the initialized features of the entities are input into the model; through steps 5 and 6, the feature representation of each entity, containing the cross-graph fusion information and the multi-hop neighbor information, is obtained, with the formula:
wherein the first output is that of the first knowledge-graph through the l-th CGAT layer and the second is that of the second knowledge-graph through the l-th CGAT layer, starting from the initialized features of the first knowledge-graph and of the second knowledge-graph; crossAtt is the attention-based graph neural network layer, and crossAggr is the cross-knowledge-graph aggregation layer. By stacking L CGAT layers, the features of the first knowledge-graph and the second knowledge-graph are updated L times, and the finally updated features are output.
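The stacking of L CGAT layers can be sketched as a simple loop (layer internals are stubbed with toy numeric functions, purely for illustration):

```python
def cgat_forward(x1, x2, layers):
    """Stacking L CGAT layers: each layer first exchanges cross-graph
    information (crossAggr) and then aggregates neighbor information
    (crossAtt) for both knowledge-graphs, updating the features L times."""
    for cross_aggr, cross_att in layers:
        x1, x2 = cross_aggr(x1, x2)            # cross-knowledge-graph aggregation
        x1, x2 = cross_att(x1), cross_att(x2)  # attention-based GNN layer
    return x1, x2

# Numeric stub layers standing in for the real crossAggr / crossAtt.
stub_layers = [(lambda a, b: (a + 1, b + 1), lambda x: x * 2)] * 2
out1, out2 = cgat_forward(0, 0, stub_layers)
```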
calculating a target loss function on the target alignment seed set, wherein the formula is as follows:
wherein the first term represents the final embedding of an entity, the bracketed term represents the L1 distance between two entities, φ denotes the trainable model parameters, e⁻ refers to a negative entity corresponding to a certain entity, and A_obj is the data used to optimize the model parameters.
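The margin-based target loss above can be sketched as follows (the margin value, hinge form, and toy vectors are assumptions made for illustration):

```python
import numpy as np

def alignment_loss(pos_pairs, neg_pairs, margin=1.0):
    """Margin-based target loss over A_obj: aligned entities are pulled
    together under L1 distance, while sampled negative pairs are pushed
    at least `margin` farther apart."""
    def l1(a, b):
        return np.abs(a - b).sum()
    loss = 0.0
    for (e1, e2), (n1, n2) in zip(pos_pairs, neg_pairs):
        loss += max(0.0, l1(e1, e2) + margin - l1(n1, n2))  # hinge term
    return loss

z = np.zeros(2)
pos = [(z, z), (z, np.array([1.0, 1.0]))]
neg = [(z, np.array([2.5, 2.5])), (z, np.array([0.5, 0.5]))]
loss = alignment_loss(pos, neg)
```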
It should be understood that parts of the application not described in detail are prior art.
It should be understood that the above description of the preferred embodiments is given for clearness of understanding and no unnecessary limitations should be understood therefrom, and all changes and modifications may be made by those skilled in the art without departing from the scope of the invention as defined by the appended claims.
Claims (7)
1. A cross-language entity alignment method of a context alignment enhanced graph attention network is characterized by comprising the following steps:
step 1: introducing a first knowledge graph and a second knowledge graph, screening an aligned seed entity set according to the first knowledge graph and the second knowledge graph, translating the name of each entity in each aligned entity pair in the aligned seed entity set into English, defining the translated name of each entity in each aligned entity pair in the aligned seed entity set, and constructing a training set and a test set from the translated aligned seed entity set;
step 2: converting the translated name of each entity in the aligned entity pairs of the aligned seed entity set into word vectors of the entity name by using the word2vec algorithm, summing the word vectors of each entity name to serve as the initialization feature of the entity, and respectively constructing the initialization features of the first knowledge-graph and of the second knowledge-graph;
step 3: randomly dividing the training set into a context alignment seed set and a target alignment seed set, and constructing the input data of the neural network from the context alignment seed set, the target alignment seed set, the initialization features of the first knowledge-graph, and the initialization features of the second knowledge-graph;
step 4: propagating information between the different knowledge-graphs through the cross-knowledge-graph aggregation layer;
step 5: collecting neighbor information of each entity in the first knowledge-graph and the second knowledge-graph through the attention-based graph neural network;
step 6: inputting the context alignment seed set and the initialized features of the entities into the model, and obtaining, through steps 4 and 5, the features of each entity containing the cross-graph fusion information and the multi-hop neighbor information.
2. The method of cross-language entity alignment for a context alignment enhanced graph attention network of claim 1, wherein:
step 1 the first knowledge-graph is:
G1=(E1,R1,T1)
step 1 the second knowledge-graph is:
G2=(E2,R2,T2)
wherein E1 represents the entity set of the first knowledge-graph, R1 represents the relation set of the first knowledge-graph, T1 represents the triplet set of the first knowledge-graph, E2 represents the entity set of the second knowledge-graph, R2 represents the relation set of the second knowledge-graph, and T2 represents the triplet set of the second knowledge-graph;
step 1, aligning the seed entity sets:
A = {a_k = (e_{k,1,i}, e_{k,2,j}) | e_{k,1,i} ∈ E1, e_{k,2,j} ∈ E2}
i∈[1,M],j∈[1,N]
k∈[1,K]
wherein A represents the aligned seed entity set, a_k represents the k-th aligned entity pair, K represents the number of aligned entity pairs in the aligned seed entity set, e_{k,1,i} represents the i-th entity from the entity set of the first knowledge-graph in the k-th aligned entity pair, e_{k,2,j} represents the j-th entity from the entity set of the second knowledge-graph in the k-th aligned entity pair, e_{k,1,i} and e_{k,2,j} have the same Chinese meaning, M represents the number of entities in A from the first knowledge-graph, and N represents the number of entities in A from the second knowledge-graph;
step 1, translating the name of each aligned entity pair in the aligned seed entity set into english as follows:
the names of the entities in each aligned entity pair in A, i.e. e_{k,1,i} and e_{k,2,j}, are all translated into English, and the set of aligned seed entities after translation is denoted A*, specifically defined as:
A* = {a*_k = (e*_{k,1,i}, e*_{k,2,j})}
i∈[1,M],j∈[1,N]
k∈[1,K]
wherein A* represents the set of post-translation aligned seed entities, a*_k represents the k-th aligned entity pair after translation, K represents the number of aligned entity pairs in the set of post-translation aligned seed entities, e*_{k,1,i} represents the translated i-th entity from the entity set of the first knowledge-graph in the k-th aligned entity pair, e*_{k,2,j} represents the translated j-th entity from the entity set of the second knowledge-graph in the k-th aligned entity pair, M represents the number of entities in A* from the first knowledge-graph, and N represents the number of entities in A* from the second knowledge-graph;
step 1, defining the name of each translated entity as:
the translated name of each entity comprises a plurality of English words, specifically represented as follows:
wherein word_{k,1,i,t} represents the t-th word of the translated i-th entity from the entity set of the first knowledge-graph in the k-th aligned entity pair, word_{k,2,j,t} represents the t-th word of the translated j-th entity from the entity set of the second knowledge-graph in the k-th aligned entity pair, n is the total number of words of the translated i-th entity from the entity set of the first knowledge-graph in the k-th aligned entity pair, and m is the total number of words of the translated j-th entity from the entity set of the second knowledge-graph in the k-th aligned entity pair;
step 1, the construction of a training set and a test set from the set of post-translation aligned seed entities is as follows: from the K aligned entity pairs of the post-translation aligned seed entity set A*, P pairs are randomly selected as the training set, denoted A_train; the remaining K − P aligned entity pairs of A* are used as the test set, denoted A_test.
3. The method of cross-language entity alignment for a context alignment enhanced graph attention network of claim 1, wherein:
step 2, the word vector of each entity name is:
i∈[1,M],j∈[1,N]
k∈[1,K]
wherein the former is the word vector of the t-th word of the translated i-th entity from the entity set of the first knowledge-graph in the k-th aligned entity pair, and the latter is the word vector of the t-th word of the translated j-th entity from the entity set of the second knowledge-graph in the k-th aligned entity pair; K represents the number of aligned entity pairs in the set of post-translation aligned seed entities, M represents the number of entities in A* from the first knowledge-graph, and N represents the number of entities in A* from the second knowledge-graph;
step 2, summing the word vectors of each entity name as the initialization feature of the entity, specifically as follows:
wherein n is the total number of words of the translated i-th entity from the entity set of the first knowledge-graph in the k-th aligned entity pair, m is the total number of words of the translated j-th entity from the entity set of the second knowledge-graph in the k-th aligned entity pair, and the two sums are, respectively, the initialization feature of the translated i-th entity from the entity set of the first knowledge-graph and the initialization feature of the translated j-th entity from the entity set of the second knowledge-graph in the k-th aligned entity pair;
step 2, respectively constructing the initialization features from the first knowledge graph and the initialization features from the second knowledge graph as follows:
the initialization features of the first knowledge-graph and of the second knowledge-graph are noted as follows, with the specific definition:
wherein E1 represents the entity set of the first knowledge-graph, E2 represents the entity set of the second knowledge-graph, e_{k,1,i} represents the i-th entity from the entity set of the first knowledge-graph in the k-th aligned entity pair, e_{k,2,j} represents the j-th entity from the entity set of the second knowledge-graph in the k-th aligned entity pair, A* represents the set of post-translation aligned seed entities, e*_{k,1,i} represents the translated i-th entity from the entity set of the first knowledge-graph in the k-th aligned entity pair, and e*_{k,2,j} represents the translated j-th entity from the entity set of the second knowledge-graph in the k-th aligned entity pair.
4. The method of cross-language entity alignment for a context alignment enhanced graph attention network of claim 1, wherein:
step 3 the training set is Atrain;
step 3, the context alignment seed set is A_ctx, which serves as input data of the model and is used for transmitting information between the first knowledge-graph and the second knowledge-graph;
step 3, the target alignment seed set is AobjFor calculating a loss function;
step 3, the initialization characteristic of the first knowledge graph isAs input data for the model;
5. The method of cross-language entity alignment for a context alignment enhanced graph attention network of claim 1, wherein:
step 4, the information propagation between different knowledge-graphs through the cross-knowledge-graph aggregation layer comprises the following steps:
step 4.1: using a gate mechanism, calculating the weights for fusing the information of the first knowledge-graph and the second knowledge-graph from the initialization features of the translated entities from the entity set of the first knowledge-graph and the initialization features of the translated entities from the entity set of the second knowledge-graph in the aligned entity pairs of the context alignment seed set; the specific calculation is as follows:
i∈[1,M],j∈[1,N]
k∈[1,K]
wherein || represents the concatenation operation of vectors, W is a weight matrix learned during training, b is a bias, and σ is the sigmoid activation function; the inputs are the feature representation of the i-th entity from knowledge-graph G1 and the feature representation of the j-th entity from knowledge-graph G2 in the k-th entity pair of A*; l refers to the l-th layer of the network; K represents the number of aligned entity pairs in the set of post-translation aligned seed entities, M represents the number of entities in A* from the first knowledge-graph G1, and N represents the number of entities in A* from the second knowledge-graph G2; the outputs are the fusion weight of the i-th entity from knowledge-graph G1 and the fusion weight of the j-th entity from knowledge-graph G2 in the k-th entity pair of A*;
step 4.2: fusing the information of the first knowledge-graph and the second knowledge-graph by using the gate mechanism to obtain fused representations of the first knowledge-graph and the second knowledge-graph; the specific calculation is as follows:
wherein the fusion weight of the i-th entity from knowledge-graph G1 and the fusion weight of the j-th entity from knowledge-graph G2 in the k-th entity pair of A* are as computed in step 4.1; the inputs are the feature representation of the i-th entity from knowledge-graph G1 and the feature representation of the j-th entity from knowledge-graph G2 in the k-th entity pair of A*; the outputs are the feature representation, after the gate mechanism, of the i-th entity from knowledge-graph G1 and the feature representation, after the gate mechanism, of the j-th entity from knowledge-graph G2 in the k-th entity pair of A*;
step 4.3: calculating the feature matrices after the cross-knowledge-graph aggregation layer for all entities in the first knowledge-graph and the second knowledge-graph;
the feature matrices after the cross-knowledge-graph aggregation layer are as follows:
wherein, the calculation formula of crossAggr is as follows:
wherein the former represents the feature representation of the i-th entity from the first knowledge-graph in the k-th entity pair of the set of post-translation aligned seed entities, and the latter represents the feature representation of the i-th entity from the second knowledge-graph in the k-th entity pair of the post-translation aligned seed entity set A*.
6. The method of cross-language entity alignment for a context alignment enhanced graph attention network of claim 1, wherein:
step 5, the collection of neighbor information of each entity in the first knowledge-graph and the second knowledge-graph through the attention-based graph neural network layer is as follows:
step 5.1: the weights, i.e. attention, are calculated according to the neighbor entities of the entity, and the specific calculation formula is as follows:
wherein the summation runs over the set of neighbor entities linked to entity e_{k,1,i} in the first knowledge-graph, and exp is the exponential function; the feature representation, after the gate mechanism, of the i-th entity from the first knowledge-graph in the k-th entity pair of the post-translation aligned seed entity set is used; a_{1,k,i} represents the weight of the i-th entity from the first knowledge-graph G1 in the k-th entity pair of A;
wherein LeakyReLU is an activation function, V is a learnable network parameter matrix, || indicates that the two feature representations are concatenated, one input denotes a certain neighbor entity of entity e_{k,1,i}, and the other represents the feature representation, after the gate mechanism, of the i-th entity from the first knowledge-graph in the k-th entity pair of the post-translation aligned seed entity set;
step 5.2: fusing the neighbor information according to the weights calculated in step 5.1 so as to collect the neighbor information of the entity; the calculation formula is as follows:
wherein ReLU is the activation function and α_{k,1,i,p} is the weight of the p-th neighbor of the i-th entity in the first knowledge-graph G1; crossAtt is the attention-based graph neural network layer and crossAggr is the cross-knowledge-graph aggregation layer; the inputs include the feature representation of the i-th entity from knowledge-graph G1 in the k-th entity pair of A*, the feature representation, after the gate mechanism, of the i-th entity from the first knowledge-graph in the k-th entity pair of A*, and the feature representation of the p-th neighbor of the i-th entity from the first knowledge-graph in the k-th entity pair of A*; α_{1,k,i} represents the weight of the i-th entity from the first knowledge-graph G1 in the k-th entity pair of A;
step 5.3: calculating weights of all entities of the first knowledge graph and the second knowledge graph according to neighbor entities of the entities and fusing neighbor information according to the calculated weights so as to collect the neighbor information of the entities, namely step 5.1 and step 5.2, and then obtaining a feature matrix after the neighbor information is collected by a graph neural network layer based on attention, namely:
wherein crossAtt is the attention-based graph neural network layer; the first input is the output feature representation of the first knowledge-graph after the cross-knowledge-graph aggregation layer, and the second input is the output feature representation of the second knowledge-graph after the cross-knowledge-graph aggregation layer.
7. The method of cross-language entity alignment for a context alignment enhanced graph attention network of claim 1, wherein:
step 6, the characteristics of each entity containing the map fusion information and the multi-hop neighbor information are as follows:
wherein the first output is that of the first knowledge-graph through the l-th CGAT layer and the second is that of the second knowledge-graph through the l-th CGAT layer, starting from the initialized features of the first knowledge-graph and of the second knowledge-graph; crossAtt is the attention-based graph neural network layer, and crossAggr is the cross-knowledge-graph aggregation layer; by stacking L CGAT layers, the features of the first knowledge-graph and the second knowledge-graph are updated L times, and the finally updated features are output;
calculating a target loss function on the target alignment seed set, wherein the formula is as follows:
wherein the first term represents the final feature representation of an entity, the bracketed term represents the L1 distance between two entities, φ denotes the trainable model parameters, e⁻ refers to a negative entity corresponding to a certain entity, and A_obj is the data used to optimize the model parameters;
and optimizing and updating the model parameters φ, which comprise the trainable parameters of the cross-knowledge-graph aggregation layer and of the attention-based graph neural network layer, by using the Adam algorithm, and constructing the context-alignment enhanced graph attention network model from the optimized parameters φ.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011201832.1A CN112417159B (en) | 2020-11-02 | 2020-11-02 | Cross-language entity alignment method of context alignment enhanced graph attention network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112417159A true CN112417159A (en) | 2021-02-26 |
CN112417159B CN112417159B (en) | 2022-04-15 |
Family
ID=74827829
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011201832.1A Active CN112417159B (en) | 2020-11-02 | 2020-11-02 | Cross-language entity alignment method of context alignment enhanced graph attention network |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112905807A (en) * | 2021-03-25 | 2021-06-04 | 北京邮电大学 | Multi-source space-time knowledge fusion method |
CN113157935A (en) * | 2021-03-16 | 2021-07-23 | 中国科学技术大学 | Graph neural network model and method for entity alignment based on relationship context |
CN113392216A (en) * | 2021-06-23 | 2021-09-14 | 武汉大学 | Remote supervision relation extraction method and device based on consistency text enhancement |
CN113407759A (en) * | 2021-08-18 | 2021-09-17 | 中国人民解放军国防科技大学 | Multi-modal entity alignment method based on adaptive feature fusion |
CN113704495A (en) * | 2021-08-30 | 2021-11-26 | 合肥智能语音创新发展有限公司 | Entity alignment method and device, electronic equipment and storage medium |
CN113761221A (en) * | 2021-06-30 | 2021-12-07 | 中国人民解放军32801部队 | Knowledge graph entity alignment method based on graph neural network |
CN115080761A (en) * | 2022-06-08 | 2022-09-20 | 昆明理工大学 | Semantic perception-based low-resource knowledge graph entity alignment method |
CN116257643A (en) * | 2023-05-09 | 2023-06-13 | 鹏城实验室 | Cross-language entity alignment method, device, equipment and readable storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110929041A (en) * | 2019-11-20 | 2020-03-27 | 北京邮电大学 | Entity alignment method and system based on layered attention mechanism |
CN111723587A (en) * | 2020-06-23 | 2020-09-29 | 桂林电子科技大学 | Chinese-Thai entity alignment method oriented to cross-language knowledge graph |
Also Published As
Publication number | Publication date |
---|---|
CN112417159B (en) | 2022-04-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112417159B (en) | Cross-language entity alignment method of context alignment enhanced graph attention network | |
Han et al. | A survey on metaheuristic optimization for random single-hidden layer feedforward neural network | |
CN108095716B (en) | Electrocardiosignal detection method based on confidence rule base and deep neural network | |
CN106650789A (en) | Image description generation method based on depth LSTM network | |
CN114898121A (en) | Concrete dam defect image description automatic generation method based on graph attention network | |
CN112685504B (en) | Production process-oriented distributed migration chart learning method | |
US20230401390A1 (en) | Automatic concrete dam defect image description generation method based on graph attention network | |
CN112015868A (en) | Question-answering method based on knowledge graph completion | |
CN110322959B (en) | Deep medical problem routing method and system based on knowledge | |
CN113190688A (en) | Complex network link prediction method and system based on logical reasoning and graph convolution | |
CN114491039B (en) | Primitive learning few-sample text classification method based on gradient improvement | |
CN112529188B (en) | Knowledge distillation-based industrial process optimization decision model migration optimization method | |
CN114818703B (en) | Multi-intention recognition method and system based on BERT language model and TextCNN model | |
CN114564596A (en) | Cross-language knowledge graph link prediction method based on graph attention mechanism | |
CN114969367B (en) | Cross-language entity alignment method based on multi-aspect subtask interaction | |
CN113344044A (en) | Cross-species medical image classification method based on domain self-adaptation | |
CN117131933A (en) | Multi-mode knowledge graph establishing method and application | |
CN115330142B (en) | Training method of joint capacity model, capacity demand matching method and device | |
CN114580388A (en) | Data processing method, object prediction method, related device and storage medium | |
CN115101145A (en) | Medicine virtual screening method based on adaptive meta-learning | |
CN118171231A (en) | Multi-dimensional feature fused dynamic graph neurocognitive diagnosis method | |
Zheng et al. | Learning from the guidance: Knowledge embedded meta-learning for medical visual question answering | |
CN115936073A (en) | Language-oriented convolutional neural network and visual question-answering method | |
Cao et al. | [Retracted] Online and Offline Interaction Model of International Chinese Education Based on Few‐Shot Learning | |
Remya | An adaptive neuro-fuzzy inference system to monitor and manage the soil quality to improve sustainable farming in agriculture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||