WO2022011681A1 - Knowledge graph fusion method based on iterative completion - Google Patents
Knowledge graph fusion method based on iterative completion
- Publication number: WO2022011681A1 (application PCT/CN2020/102683)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- entity
- vector
- knowledge graph
- similarity
- entities
- Prior art date
Classifications
- G06N20/00: Machine learning
- G06F16/28: Databases characterised by their database models, e.g. relational or object models
- G06F16/36: Creation of semantic tools, e.g. ontology or thesauri
- G06N3/042: Knowledge-based neural networks; logical representations of neural networks
- G06N3/044: Recurrent networks, e.g. Hopfield networks
- G06N3/045: Combinations of networks
- G06N3/08: Learning methods
- G06N3/084: Backpropagation, e.g. using gradient descent
- G06N5/022: Knowledge engineering; knowledge acquisition
Definitions
- the invention belongs to the technical field of natural language processing, relates to knowledge graph generation and fusion, and in particular relates to a knowledge graph fusion method based on iterative completion.
- a feasible method is to introduce relevant knowledge from other knowledge graphs, because knowledge graphs constructed in different ways have knowledge redundancy and complementarity.
- a general knowledge graph constructed from web pages may only contain the names of scientists, while more information can be found in an academic knowledge graph constructed based on academic data.
- the most important step is to align the different knowledge graphs.
- the entity alignment (EA) task was proposed and received extensive attention. This task aims to find entity pairs expressing the same meaning in different knowledge graphs. These entity pairs serve as a hub to link different knowledge graphs and serve subsequent tasks.
- mainstream entity alignment methods mainly rely on the structural features of knowledge graphs to determine whether two entities point to the same thing. Such methods assume that entities expressing the same meaning in different knowledge graphs have similar adjacency information.
- structure-based methods have achieved certain results, for example the approach of Lingbing Guo et al., which generates a structure vector for each entity and then identifies aligned entity pairs (Reference: Lingbing Guo, Zequn Sun, and Wei Hu. 2019. Learning to Exploit Long-term Relational Dependencies in Knowledge Graphs. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, 2505-2514); on artificially constructed datasets, this kind of method achieved the best experimental results.
- however, the knowledge graphs in these artificially constructed datasets are denser than real-world knowledge graphs, and entity alignment methods based on structural features are much less effective on knowledge graphs with a natural (long-tailed) degree distribution.
- the purpose of the present invention is to propose a knowledge graph fusion method based on iterative completion that overcomes the deficiencies of the prior art. The method identifies and aligns the same or similar entities across multiple knowledge graphs, thereby fusing the knowledge of the multiple graphs and improving the coverage and accuracy of the resulting knowledge graph.
- a knowledge graph fusion method based on iterative completion includes the following steps:
- Step 1: obtain data from multiple knowledge graphs and identify all entities in the knowledge graphs;
- Step 2: perform structure vector representation learning on all entities to obtain the structure vector of each entity; perform entity name vector representation learning on all entities to obtain the entity name vector of each entity;
- Step 3: calculate the structural similarity between entities from the structure vectors, and the entity name similarity between entities from the entity name vectors;
- Step 4: build a degree-aware mutual attention network and calculate the fused entity similarity between entities;
- Step 5: select high-confidence entity pairs according to the entity similarity, and use iterative training to complete the knowledge graphs, obtaining the fused knowledge graph.
- the calculation process of the mutual attention network includes the following steps:
- Step 401, construct the feature matrix: for each entity, build a feature matrix composed of the entity name vector n_e, the structure vector s_e, and the entity degree vector g_e, where g_e = W_g · h_e, h_e is the one-hot vector of the entity's degree, W_g is a fully connected parameter matrix, and d_g is the dimension of the degree vector. For entity e_1, the feature matrix is further expressed as F_{e_1} = [n_{e_1}; s_{e_1}; g_{e_1}] ∈ R^{3×d_m}, where ";" denotes concatenation along columns (each vector padded to the common width), d_m = max{d_n, d_s, d_g}, d_n is the dimension of the entity name vector, and d_s is the dimension of the structure vector;
- Step 402, compute the mutual attention similarity matrix: to dynamically characterize the correlation between the feature matrix F_{e_1} of entity e_1 and the feature matrix F_{e_2} of entity e_2, construct a mutual attention similarity matrix S ∈ R^{3×3}, where S_{ij} measures the similarity between the i-th feature of e_1 and the j-th feature of e_2;
- Step 403, assign weights and calculate the entity similarity: use the mutual attention similarity matrix S to generate attention vectors a_1 and a_2; S is first fed into a softmax layer and then into an averaging layer to produce the attention vectors, where a_1 characterizes how strongly the features of e_1 correlate with the features of e_2, and a_2 characterizes how strongly the features of e_2 correlate with the features of e_1. Finally, the similarity value of each feature is multiplied by its weight, giving the fused entity similarity Sim(e_1, e_2) = w_1 · Sim_s(e_1, e_2) + w_2 · Sim_t(e_1, e_2), where the first and second components w_1 and w_2 of the attention weight vector correspond to the structural similarity Sim_s(e_1, e_2) and the entity name similarity Sim_t(e_1, e_2), respectively.
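Steps 401-403 can be sketched in a few lines of numpy. This is a minimal illustration rather than the patented implementation: cosine similarity is assumed for the feature-level entries S_ij, the two attention vectors are assumed to be averaged into the final weight vector, and all function names are invented for the example.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fused_similarity(feat1, feat2, sim_s, sim_t):
    """Degree-aware mutual attention fusion (sketch).

    feat1, feat2: 3 x d_m feature matrices (rows: name, structure,
    degree vectors, each padded to the common width d_m).
    sim_s, sim_t: precomputed structural / entity-name similarities.
    """
    # S[i, j]: similarity between the i-th feature of e1 and the
    # j-th feature of e2 (cosine is an assumption here)
    S = np.array([[cosine(f1, f2) for f2 in feat2] for f1 in feat1])
    # softmax each row / column, then average -> one attention vector per view
    a1 = np.mean([softmax(row) for row in S], axis=0)    # e1's view of e2
    a2 = np.mean([softmax(col) for col in S.T], axis=0)  # e2's view of e1
    w = (a1 + a2) / 2  # assumed combination of the two attention vectors
    # per the method, the first two weights belong to the structural
    # similarity and the entity-name similarity
    return w[0] * sim_s + w[1] * sim_t
```

Because the weights come out of a softmax, the fused value stays bounded by the larger of the two input similarities, which keeps the score comparable across entity pairs.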
- the entity name vector may be a power mean word vector.
- the word vectors of all words that constitute the entity name are expressed in matrix form as W = [w_1, ..., w_l] ∈ R^{l×d}, where l is the number of words and d is the embedding dimension; a power mean operation on W generates the power mean word vector.
- the power mean operation formula is: s_p = ((w_1^p + w_2^p + ... + w_l^p) / l)^{1/p}, applied element-wise, where p is the power mean exponent.
- alternatively, the entity name vector is the concatenation of K power mean word vectors.
- the word vectors of all the words that constitute the entity name are expressed in matrix form as W ∈ R^{l×d}, where l is the number of words and d is the embedding dimension.
- K power mean word vectors with different exponents are computed from the word vectors of the entity name and concatenated to generate the entity name vector.
- the specific values of the K different power mean exponents are 1, negative infinity, and positive infinity (the arithmetic mean, the element-wise minimum, and the element-wise maximum, respectively).
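A minimal sketch of the concatenated power mean name vector: for the three exponents used (1, negative infinity, positive infinity), the power mean reduces to the element-wise mean, minimum, and maximum of the word vectors. The function names are illustrative only.

```python
import numpy as np

def power_mean(W, p):
    """Element-wise power mean of the rows of W (shape l x d).

    Only the three exponents the method uses are handled:
    p = 1 (arithmetic mean), p = -inf (minimum), p = +inf (maximum).
    """
    if p == float("inf"):
        return W.max(axis=0)
    if p == float("-inf"):
        return W.min(axis=0)
    if p == 1:
        return W.mean(axis=0)
    raise ValueError("unsupported exponent")

def entity_name_vector(W):
    """Concatenate the K = 3 power mean vectors into one 3d-dim name vector."""
    return np.concatenate([power_mean(W, p)
                           for p in (1, float("-inf"), float("inf"))])
```

For a two-word name with d = 2 this yields a 6-dimensional vector: the per-dimension mean, then minimum, then maximum of the word embeddings.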
- the structural similarity Sim_s(e_1, e_2) is the cosine similarity of the structure vectors s_{e_1} and s_{e_2} of the two entities.
- the entity name similarity Sim_t(e_1, e_2) is the cosine similarity of the entity name vectors n_{e_1} and n_{e_2} of the two entities.
- the step of selecting a high-confidence entity pair is: for each entity e_1 in the original knowledge graph, assume the most similar entity in the external knowledge graph is e_2 and the second most similar entity is e'_2, with similarity difference Δ_1 = Sim(e_1, e_2) - Sim(e_1, e'_2); if, for e_2 in the external knowledge graph, the most similar entity in the original knowledge graph is exactly e_1, the second most similar entity is e'_1, with similarity difference Δ_2 = Sim(e_2, e_1) - Sim(e_2, e'_1), and both Δ_1 and Δ_2 are higher than a predetermined threshold, then (e_1, e_2) is considered a high-confidence entity pair;
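The mutual-best selection with margin thresholds can be sketched as follows; the similarity-matrix layout (rows: original KG, columns: external KG) and the function name are assumptions for the example.

```python
import numpy as np

def high_confidence_pairs(sim, threshold):
    """Select mutually-best entity pairs from a similarity matrix.

    sim[i, j]: fused similarity between entity i of the original KG
    and entity j of the external KG. A pair (i, j) is kept when each
    entity is the other's best match and both margins over the
    second-best match exceed `threshold`.
    """
    pairs = []
    for i in range(sim.shape[0]):
        order = np.argsort(sim[i])[::-1]          # external entities, best first
        j, j2 = order[0], order[1]
        delta1 = sim[i, j] - sim[i, j2]           # margin seen from e1
        col = np.argsort(sim[:, j])[::-1]         # original entities, best first
        if col[0] != i:                           # mutual best match required
            continue
        delta2 = sim[col[0], j] - sim[col[1], j]  # margin seen from e2
        if delta1 > threshold and delta2 > threshold:
            pairs.append((i, j))
    return pairs
```

Requiring both margins filters out entities whose top matches are nearly tied, which is exactly where alignment errors tend to occur.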
- the iterative training process for knowledge graph completion runs over multiple rounds. For each triple in the external knowledge graph whose head entity and tail entity both align to entities in the original knowledge graph, the external entities are replaced by their corresponding original entities and the rewritten triple is added to the original knowledge graph. The augmented knowledge graph is then used to re-learn the structure vectors, recalculate entity similarities, and generate new high-confidence entity pairs; addition and completion continue until the stop condition is met, at which point iterative training stops.
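One completion round of this iterative training can be sketched as a triple-transfer step. Re-learning the structure vectors between rounds is omitted here, and the data layout (triples as tuples, alignment as a dict from external to original entities) is an assumption for the example.

```python
def complete_kg(original_triples, external_triples, aligned):
    """One completion round (sketch).

    Copy every external triple whose head and tail both map to
    original-KG entities, rewriting it through the alignment.
    `aligned` maps external entity -> original entity.
    """
    added = set()
    for h, r, t in external_triples:
        if h in aligned and t in aligned:
            added.add((aligned[h], r, aligned[t]))
    # union with the original triples gives the augmented KG
    return original_triples | added
```

In the full method this round is repeated: the augmented triple set feeds the next structure-vector training, which in turn yields new high-confidence pairs to extend `aligned`.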
- the present invention has the following advantages and beneficial effects:
- a degree-aware mutual attention network is proposed to fuse entity name information and structural information, achieving a better alignment effect;
- Fig. 1 is a schematic diagram of the overall flow of an embodiment of the present invention.
- Fig. 2 is a structural diagram of the mutual attention network according to an embodiment of the present invention.
- Fig. 3 is an overall framework diagram of an embodiment of the present invention.
- a knowledge graph fusion method based on iterative completion includes the following steps:
- Step 1: obtain data from multiple knowledge graphs and identify all entities in the knowledge graphs;
- Step 2: perform structure vector representation learning on all entities to obtain the structure vector of each entity; perform entity name vector representation learning on all entities to obtain the entity name vector of each entity;
- Step 3: calculate the structural similarity between entities from the structure vectors, and the entity name similarity between entities from the entity name vectors;
- Step 4: build a degree-aware mutual attention network and calculate the fused entity similarity between entities;
- Step 5: select high-confidence entity pairs according to the entity similarity, and use iterative training to complete the knowledge graphs, obtaining the fused knowledge graph.
- the structure vectors can be generated with an existing method from the background art; the structure matrix is expressed as M_s ∈ R^{n×d_s}, where n is the number of entities and d_s is the dimension of the structure vector.
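Given two such structure matrices, the pairwise structural (cosine) similarities of step 3 reduce to a normalized matrix product. This sketch assumes a row-per-entity layout; the function name is illustrative.

```python
import numpy as np

def structural_similarity(S1, S2):
    """Pairwise cosine similarity between the rows of two structure
    matrices of shapes (n1, d_s) and (n2, d_s); returns an (n1, n2)
    similarity matrix."""
    A = S1 / np.linalg.norm(S1, axis=1, keepdims=True)  # unit-normalize rows
    B = S2 / np.linalg.norm(S2, axis=1, keepdims=True)
    return A @ B.T
```

Computing the whole matrix at once is what makes the later mutual-best selection over all entity pairs practical.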
- the entity name vector can be a power mean word vector.
- the word vectors of all the words constituting the entity name are expressed in matrix form as W = [w_1, ..., w_l] ∈ R^{l×d}, where l is the number of words and d is the embedding dimension; a power mean operation on W generates the power mean word vector.
- the power mean operation formula is: s_p = ((w_1^p + w_2^p + ... + w_l^p) / l)^{1/p}, applied element-wise, where p is the power mean exponent.
- alternatively, the entity name vector can be generated by concatenating K power mean word vectors computed with different exponents.
- K denotes the number of different power mean exponents used.
- the specific values of the K different power mean exponents are 1, negative infinity, and positive infinity.
- the concatenated power mean word vector captures more information about the entity name and reduces the uncertainty of the vector representation.
- the inputs of the mutual attention network are the structural similarity Sim_s(e_1, e_2) between the two entities, the entity name similarity Sim_t(e_1, e_2), and the degrees of the entities.
- the calculation process includes the following steps:
- Step 401, construct the feature matrix: for each entity, build a feature matrix composed of the entity name vector n_e, the structure vector s_e, and the entity degree vector g_e, where g_e = W_g · h_e, h_e is the one-hot vector of the entity's degree, W_g is a fully connected parameter matrix, and d_g is the dimension of the degree vector. For entity e_1, the feature matrix is further expressed as F_{e_1} = [n_{e_1}; s_{e_1}; g_{e_1}] ∈ R^{3×d_m}, where ";" denotes concatenation along columns (each vector padded to the common width), d_m = max{d_n, d_s, d_g}, d_n is the dimension of the entity name vector, and d_s is the dimension of the structure vector;
- Step 402, compute the mutual attention similarity matrix: to dynamically characterize the correlation between the feature matrix F_{e_1} of entity e_1 and the feature matrix F_{e_2} of entity e_2, construct a mutual attention similarity matrix S ∈ R^{3×3}, where S_{ij} measures the similarity between the i-th feature of e_1 and the j-th feature of e_2;
- Step 403, assign weights and calculate the entity similarity: use the mutual attention similarity matrix S to generate attention vectors a_1 and a_2; S is first fed into a softmax layer and then into an averaging layer to produce the attention vectors, where a_1 characterizes how strongly the features of e_1 correlate with the features of e_2, and a_2 characterizes how strongly the features of e_2 correlate with the features of e_1. Finally, the similarity value of each feature is multiplied by its weight, giving the fused entity similarity Sim(e_1, e_2) = w_1 · Sim_s(e_1, e_2) + w_2 · Sim_t(e_1, e_2), where the first and second components w_1 and w_2 of the attention weight vector correspond to the structural similarity Sim_s(e_1, e_2) and the entity name similarity Sim_t(e_1, e_2), respectively.
- Long-tail entities may have little structural information in the original knowledge graph, but richer structural information in the external knowledge graph. If the structural information in the external knowledge graph can be introduced to supplement the structural information of the long-tail entities in the original knowledge graph, the long-tail problem can be alleviated to a certain extent and the coverage of the knowledge graph can be improved.
- the augmented knowledge graph can generate more accurate structure vectors and improve the effect of entity alignment.
- the step of selecting a high-confidence entity pair is: for each entity e_1 in the original knowledge graph, assume the most similar entity in the external knowledge graph is e_2 and the second most similar entity is e'_2, with similarity difference Δ_1 = Sim(e_1, e_2) - Sim(e_1, e'_2); if, for e_2 in the external knowledge graph, the most similar entity in the original knowledge graph is exactly e_1, the second most similar entity is e'_1, with similarity difference Δ_2 = Sim(e_2, e_1) - Sim(e_2, e'_1), and both Δ_1 and Δ_2 are higher than a predetermined threshold, then (e_1, e_2) is considered a high-confidence entity pair;
- the iterative training process for knowledge graph completion runs over multiple rounds. For each triple in the external knowledge graph whose head entity and tail entity both align to entities in the original knowledge graph, the external entities are replaced by their corresponding original entities and the rewritten triple is added to the original knowledge graph. The augmented knowledge graph is then used to re-learn the structure vectors, recalculate entity similarities, and generate new high-confidence entity pairs; addition and completion continue until the stop condition is met, at which point iterative training stops.
- the method of the present invention proposes a new entity alignment framework, as shown in Figure 3, so as to better realize the fusion of knowledge graphs.
- the main technical effects of the present invention are as follows:
- the method of the present invention uses the entity name as a new source of alignment information.
- the method of the present invention treats the entity name as a separate feature and represents it by concatenating power mean word vectors, which captures more information about the entity name and reduces the uncertainty of the vector representation. In the alignment stage, it is observed that structural information and entity name information differ in importance for entities of different degrees, so a mutual attention network is designed that determines the weights of the different features under the guidance of entity degrees and effectively integrates multi-source information. In the post-alignment processing stage, an iterative training algorithm based on knowledge graph completion is proposed, which supplements the structural information of the knowledge graph while iteratively improving the entity alignment effect, making long-tail entities easier to align.
Claims (7)
- A knowledge graph fusion method based on iterative completion, characterized by comprising the following steps: Step 1, obtain data from multiple knowledge graphs and identify all entities in the knowledge graphs; Step 2, perform structure vector representation learning on all entities to obtain the structure vector of each entity, and perform entity name vector representation learning on all entities to obtain the entity name vector of each entity; Step 3, calculate the structural similarity between entities from the structure vectors, and the entity name similarity between entities from the entity name vectors; Step 4, build a degree-aware mutual attention network and calculate the fused entity similarity between entities; Step 5, select high-confidence entity pairs according to the entity similarity, and use iterative training to complete the knowledge graphs, obtaining the fused knowledge graph.
- The knowledge graph fusion method according to claim 1, characterized in that the calculation process of the mutual attention network comprises the following steps: Step 401, construct the feature matrix: build a feature matrix for each entity, composed of the entity name vector, the structure vector, and the entity degree vector of the entity; the entity degree vector is g_e = W_g · h_e, where h_e is the one-hot vector of the entity's degree, W_g is a fully connected parameter matrix, and d_g is the dimension of the degree vector; for entity e_1, the feature matrix is further expressed as F_{e_1} = [n_{e_1}; s_{e_1}; g_{e_1}], where ";" denotes concatenation along columns, d_m = max{d_n, d_s, d_g}, d_n is the dimension of the entity name vector, and d_s is the dimension of the structure vector; Step 403, assign weights and calculate the entity similarity: use the mutual attention similarity matrix to generate attention vectors a_1 and a_2 by first feeding the matrix into a softmax layer and then into an averaging layer; a_1 characterizes how strongly the features of e_1 correlate with the features of e_2, and a_2 characterizes how strongly the features of e_2 correlate with the features of e_1; finally, multiply the similarity values of the different features by their weights to obtain the fused entity similarity value.
- The knowledge graph fusion method according to claim 4, characterized in that the specific values of the K different power means are 1, negative infinity, and positive infinity.
- The knowledge graph fusion method according to claim 1, characterized in that the step of selecting a high-confidence entity pair is: for each entity e_1 in the original knowledge graph, assume its most similar entity in the external knowledge graph is e_2 and the second most similar entity is e'_2, with similarity difference Δ_1; if, for e_2 in the external knowledge graph, its most similar entity in the original knowledge graph is exactly e_1, the second most similar entity is e'_1, with similarity difference Δ_2, and both Δ_1 and Δ_2 are above a preset value, then (e_1, e_2) is considered a high-confidence entity pair; the iterative training process of knowledge graph completion runs over multiple rounds: for each triple in the external knowledge graph whose head entity and tail entity are both in the original knowledge graph, the external entities are replaced with the corresponding entities of the original knowledge graph and the triple is added into the original knowledge graph; the augmented knowledge graph is then used to re-learn the structure vectors, calculate entity similarities, and generate new high-confidence entity pairs, and completion continues until the stop condition is met, at which point iterative training stops.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/102683 WO2022011681A1 (zh) | 2020-07-17 | 2020-07-17 | 一种基于迭代补全的知识图谱融合方法 |
US18/097,292 US20230206127A1 (en) | 2020-07-17 | 2023-01-16 | Knowledge graph fusion method based on iterative completion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/102683 WO2022011681A1 (zh) | 2020-07-17 | 2020-07-17 | 一种基于迭代补全的知识图谱融合方法 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/097,292 Continuation US20230206127A1 (en) | 2020-07-17 | 2023-01-16 | Knowledge graph fusion method based on iterative completion |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022011681A1 true WO2022011681A1 (zh) | 2022-01-20 |
Family
ID=79554468
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/102683 WO2022011681A1 (zh) | 2020-07-17 | 2020-07-17 | 一种基于迭代补全的知识图谱融合方法 |
Country Status (2)
Country | Link |
---|---|
US (1) | US20230206127A1 (zh) |
WO (1) | WO2022011681A1 (zh) |
Also Published As
Publication number | Publication date |
---|---|
US20230206127A1 (en) | 2023-06-29 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20945094 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20945094 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 060723) |
|