CN114625886A - Entity query method and system based on knowledge graph small sample relation learning model - Google Patents
- Publication number
- CN114625886A (application CN202210242159.9A)
- Authority
- CN
- China
- Prior art keywords
- entity
- information interaction
- sample
- queried
- relation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3347—Query execution using vector based model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The invention belongs to the technical field of knowledge graph data processing and provides an entity query method and system based on a knowledge graph small-sample relation learning model. The method includes: acquiring the information-interaction data to be queried and the relation-entity pairs it contains, and encoding them to obtain a query vector; feature-encoding the head-tail entity pairs contained in the query vector on the basis of the knowledge graph to obtain the corresponding triple representation; and matching, via an attention mechanism, the triple representation of the information-interaction data to be queried against the triple representations of pre-clustered groups of information-interaction reference small samples, so that the most similar group of reference small samples is obtained as the query result.
Description
Technical Field
The invention belongs to the technical field of knowledge graph data processing, and in particular relates to an entity query method and system based on a knowledge graph small-sample relation learning model.
Background Art
The statements in this section merely provide background information related to the present invention and do not necessarily constitute prior art.
At present, in unstructured environments such as homes and hospitals, service robots cannot navigate efficiently or safely execute various service tasks, because they do not truly possess human-like capabilities for autonomous learning, knowledge sharing, and emotional interaction. For the human-robot interaction problems of service robots, knowledge graphs (KGs) are a research focus. To further expand the coverage of KGs, traditional KG completion methods require a large number of training instances (i.e., head-tail entity pairs) for each relation. In practice, however, long-tail relations are more common in KGs, and such newly added relations usually lack many known training triples.
In practical applications, human-robot interaction samples for service robots are scarce. Compared with traditional knowledge graph learning, small-sample knowledge representation learning must account not only for the difference in the number of reference samples, but also for how the semantic information and result information among reference samples are exploited. Current algorithms do not consider the dynamic properties of entity triples: an entity may play different roles in task relations, and different reference samples may contribute differently to a query sample. Existing entity embedding methods, such as modules designed to enhance an entity with its local graph neighbors, do not make full use of supervision information. Some existing technical solutions address the problem by learning static representations of entities and references, representing a relation simply through its entities while ignoring the feature connections between relations; that is, the features of the relations in the reference set may contribute differently to the query.
During research and development, the inventors found that existing small-sample knowledge representation learning methods suffer from major shortcomings such as insufficient use of reference-sample information and severe noise interference, and that previous studies did not combine the dynamic properties of samples with fine-grained sample semantics. This reduces the accuracy of question answering in actual human-robot interaction and degrades the user experience of service robots.
Summary of the Invention
In order to solve the technical problems existing in the above background art, the present invention provides an entity query method and system based on a knowledge graph small-sample relation learning model, which can improve the accuracy of question answering in human-robot interaction with service robots and provide a better user experience.
In order to achieve the above object, the present invention adopts the following technical solutions:
A first aspect of the present invention provides an entity query method based on a knowledge graph small-sample relation learning model, comprising:
acquiring the information-interaction data to be queried and the relation-entity pairs it contains, and encoding them to obtain a query vector;
feature-encoding the head-tail entity pairs contained in the query vector on the basis of the knowledge graph to obtain the corresponding triple representation; and
matching, via an attention mechanism, the triple representation of the information-interaction data to be queried against the triple representations of pre-clustered groups of information-interaction reference small samples, and taking the most similar group of reference small samples as the query result.
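The three steps above can be sketched as a minimal pipeline. Everything below is illustrative: the embedding table, the group centroids, and the `encode`/`query` helpers are hypothetical stand-ins for the modules described in the claims, not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(token_ids, table):
    """Toy encoder: average the embedding rows of the given ids."""
    return table[token_ids].mean(axis=0)

# Hypothetical embedding table and pre-clustered reference-group centroids.
emb = rng.normal(size=(100, 16))
groups = {g: rng.normal(size=16) for g in ["work_for", "born_in"]}

def query(triple_ids):
    q = encode(triple_ids, emb)          # step 1: query vector
    # step 2 would refine q into a triple representation; identity here
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    # step 3: attention-style match against each group centroid
    scores = {g: cos(q, c) for g, c in groups.items()}
    return max(scores, key=scores.get)   # most similar group = query result
```

A call such as `query([1, 2, 3])` returns the name of the best-matching reference group.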
A second aspect of the present invention provides an entity query system based on a knowledge graph small-sample relation learning model, comprising:
a query vector encoding module, configured to acquire the information-interaction data to be queried and the relation-entity pairs it contains, and encode them to obtain a query vector;
a triple representation module, configured to feature-encode the head-tail entity pairs contained in the query vector on the basis of the knowledge graph to obtain the corresponding triple representation; and
a matching and retrieval module, configured to match, via an attention mechanism, the triple representation of the information-interaction data to be queried against the triple representations of pre-clustered groups of information-interaction reference small samples, and take the most similar group of reference small samples as the query result.
A third aspect of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the entity query method based on a knowledge graph small-sample relation learning model as described above.
A fourth aspect of the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the entity query method based on a knowledge graph small-sample relation learning model as described above.
Compared with the prior art, the beneficial effects of the present invention are:
The invention effectively exploits the distinctive advantages of the attention mechanism and the entity coupling method: it assigns contribution weights at a finer granularity, optimizes the proportions of the score elements, makes full use of reference-sample information, and strengthens entity embeddings, thereby improving the accuracy of question answering in human-robot interaction with service robots and providing a better user experience.
Advantages of additional aspects of the invention will be set forth in part in the description that follows, and in part will become apparent from the description, or may be learned by practice of the invention.
Brief Description of the Drawings
The accompanying drawings, which form a part of the present invention, are provided to give a further understanding of the invention; the exemplary embodiments of the invention and their descriptions serve to explain the invention and do not constitute an improper limitation of it.
Fig. 1 is the flow framework of the entity query method based on a knowledge graph small-sample relation learning model according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the cloud service platform resource scheduling consumption model according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the LSTM structure in the recurrent processor according to an embodiment of the present invention;
Fig. 4 is a visualization of relation frequencies in the NELL dataset according to an embodiment of the present invention;
Fig. 5(a) is a schematic diagram of the feature clustering simulation results for relation r on the NELL dataset according to an embodiment of the present invention;
Fig. 5(b) is a schematic diagram of the feature clustering simulation results for relation r on the WiKi dataset according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the specific structure of the feature clustering encoding module according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the specific structure of the transformer feature encoding module based on head-tail entity pairs according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the specific structure of the matching score network module according to an embodiment of the present invention;
Fig. 9(a) is an analysis of the MRR experimental results exploring the influence of the number of samples K on the prediction results according to an embodiment of the present invention;
Fig. 9(b) is an analysis of the Hits@10 experimental results exploring the influence of the number of samples K on the prediction results according to an embodiment of the present invention;
Fig. 9(c) is an analysis of the Hits@5 experimental results exploring the influence of the number of samples K on the prediction results according to an embodiment of the present invention;
Fig. 9(d) is an analysis of the Hits@1 experimental results exploring the influence of the number of samples K on the prediction results according to an embodiment of the present invention;
Fig. 10(a) is an analysis of the MRR experimental results exploring the influence of the number of reference-sample neighbors on the prediction results according to an embodiment of the present invention;
Fig. 10(b) is an analysis of the Hits@10 experimental results exploring the influence of the number of reference-sample neighbors on the prediction results according to an embodiment of the present invention;
Fig. 10(c) is an analysis of the Hits@5 experimental results exploring the influence of the number of reference-sample neighbors on the prediction results according to an embodiment of the present invention;
Fig. 10(d) is an analysis of the Hits@1 experimental results exploring the influence of the number of reference-sample neighbors on the prediction results according to an embodiment of the present invention.
Detailed Description
The present invention will be further described below with reference to the accompanying drawings and embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide a further explanation of the invention. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It should be noted that the terminology used herein is for describing specific embodiments only and is not intended to limit the exemplary embodiments according to the present invention. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise; furthermore, it should be understood that the terms "comprising" and/or "including", when used in this specification, indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.
Embodiment 1
Referring to Fig. 1, this embodiment provides an entity query method based on a knowledge graph small-sample relation learning model, comprising:
S101: Acquire the information-interaction data to be queried and the relation-entity pairs it contains, and encode them to obtain a query vector.
As shown in Fig. 2, in a knowledge graph an entity may simultaneously be the head entity of one triple and the tail entity of several other triples; that is, an entity may play different roles in task relations. Depending on the triple relation, the meaning an entity expresses also differs. Therefore, when predicting a hidden relation, the model can exploit its neighbor nodes, such as the relations among nodes 5/6/7/8 in Fig. 4, to learn the connections between relations.
For example, for the query triple (Ming Yao, Work For, Chinese men's basketball team), (Ming Yao, Chinese men's basketball team) is the head-tail entity pair, Yao Ming and the Chinese men's basketball team are respectively the head and tail entities of the pair, and Work For is the query relation vector to be processed by clustering encoding.
On the other hand, reference sets with different semantics may contribute differently to the query. For example, for the relation "work for" in the query, the relations "work for" and "work as" in the reference samples are more similar to the query sample than "subject to" or "famous for", and therefore exert a greater influence on the matching score; the model uses a fine-grained attention mechanism to highlight the contribution of strongly correlated samples.
In a specific implementation, the groups of information-interaction reference small samples are formed by unsupervised clustering: a preset number of center points is generated, and the vector representations of the relations in the reference small samples are clustered.
For example, through the K-means centroid clustering algorithm and cosine similarity computation, N relation clusters are obtained from the reference samples, and each relation cluster generates a common relation feature, so that the embedding data can be learned and weighted at a finer granularity.
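As an illustration of this step, the sketch below clusters toy relation vectors with a plain K-means and weights a query relation toward the most similar cluster center via cosine similarity. The random vectors, the helper names, and the choice of 3 clusters are demonstration assumptions, not part of the disclosed system.

```python
import numpy as np

rng = np.random.default_rng(42)

def kmeans(X, k, iters=20):
    """Plain K-means: returns cluster centers and point assignments."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy relation vectors standing in for the reference-sample relations.
rel_vecs = rng.normal(size=(30, 8))
centers, labels = kmeans(rel_vecs, k=3)

# The center of each cluster plays the role of its common relation feature;
# a query relation is weighted toward the most similar cluster.
q = rng.normal(size=8)
alphas = np.array([cosine(q, c) for c in centers])
best_cluster = int(alphas.argmax())
```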
In some implementations, a feature clustering encoding module is used to cluster each group of information-interaction reference small samples, for example as shown in Fig. 6. First, relation clustering is performed on the given reference sample data to obtain n groups of known facts [ri]. The reference relation-entity pairs in each group are encoded to probe their degree of correlation. The module uses the following function f, which satisfies the above properties:
where the output is the feature representation of the relation-entity pair (r_i, t_i), and σ is the activation function, set to σ = tanh.
A feed-forward layer is then applied to encode the interactions in this tuple:
where ω_c and b_c are learnable parameters and the remaining operator denotes concatenation.
After the relation function f is obtained, when a query sample is input, the module encodes the relations and the neighboring head/tail entities of the reference samples and of the query sample respectively, obtaining:
where s and q are the relational-neighbor vector representations of the reference sample and the query sample respectively; r_i^s, h_i^s, and t_i^s are the feature representations of the relation and the head/tail entity neighbors in the reference sample, and r_i^q, h_i^q, and t_i^q are the corresponding feature representations in the query sample.
By analyzing the correlation between the query relation and each relation group [ri], when a query sample q_i is input, the clustering encoding module performs cosine-similarity analysis between the query sample and each relation category and assigns the highest weight to the most similar relation category. Based on the relational-neighbor vector representations of the reference and query samples, weights are assigned via the cosine similarity function:
The greater the correlation between the relation of the query sample and a given group, the higher the weight α_i; through this preliminary classification weighting, the query sample is assigned to the reference-sample relation group with which it is most correlated.
The module then performs fine-grained weight matching of the query sample according to the correlation between the task relation and the reference relations. First, a metric function ψ is defined, and their correlation interaction representation is computed via a bilinear dot product:
ψ(r_s, r_q) = r_s^T W r_q + b
where r_s and r_q are the vector representations of the reference-sample and query-sample relations respectively, and W and b are learnable parameters.
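A minimal sketch of the bilinear metric ψ, assuming randomly initialized parameters in place of learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Learnable parameters W and b, randomly initialized for this sketch.
W = rng.normal(size=(d, d))
b = 0.0

def psi(r_s, r_q):
    """Bilinear dot-product interaction: psi(r_s, r_q) = r_s^T W r_q + b."""
    return float(r_s @ W @ r_q + b)

r_s, r_q = rng.normal(size=d), rng.normal(size=d)
score = psi(r_s, r_q)
```

In training, W and b would be optimized jointly with the rest of the model; here they simply demonstrate the form of the interaction term.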
On top of the initial weight α_i, the model further weights the similarity between each single sample in a group and the query sample to obtain the secondary weight β_ij:
where m_n denotes the n-th reference sample in the m-th relation group.
The initial and secondary weights between the query sample and each reference sample are thus obtained, and a dot-product operation yields the attention parameters of the feature clustering encoding model based on relation r:
The neighbor vectors of the relation are then attention-weighted to obtain the output of the clustering encoding model, i.e., the pre-trained entity embedding; taking the head entity as an example:
where σ is the activation function, set to σ = tanh. The entity representation obtained in this way by the feature clustering encoding model based on relation r preserves the individual attributes produced by the current embedding model and, since different reference samples play different roles for a given query sample, describes the correlation between reference samples and the query sample at a finer granularity. The above formulas also apply to the candidate tail entity t.
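The two-level weighting described above (initial weight α_i over relation groups, secondary weight β_ij over samples within a group, followed by a tanh-activated weighted sum of neighbor vectors) can be sketched as follows. The softmax normalization and the toy data are assumptions for illustration; the source does not specify the normalization used.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def attend_head(query_rel, group_centers, group_samples, neighbor_vecs):
    """Two-level attention: alpha over relation groups (initial weight),
    beta over the samples inside each group (secondary weight); their
    product weights the neighbor vectors, then a tanh activation."""
    alpha = softmax(np.array([cos(query_rel, c) for c in group_centers]))
    out = np.zeros(d)
    for a, samples, neighs in zip(alpha, group_samples, neighbor_vecs):
        beta = softmax(np.array([cos(query_rel, s) for s in samples]))
        out += a * (beta @ neighs)   # attention-weighted neighbor sum
    return np.tanh(out)              # sigma = tanh, as in the text

# Hypothetical toy data: 2 relation groups, 3 samples each.
centers = [rng.normal(size=d) for _ in range(2)]
samples = [[rng.normal(size=d) for _ in range(3)] for _ in range(2)]
neighbors = [rng.normal(size=(3, d)) for _ in range(2)]
h = attend_head(rng.normal(size=d), centers, samples, neighbors)
```

The same routine would apply unchanged to the candidate tail entity t.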
S102: Feature-encode the head-tail entity pairs contained in the query vector on the basis of the knowledge graph to obtain the corresponding triple representation.
For the reference samples in each relation group [ri], the relation group [ri] is hidden and each entity pair (h_ij, [ri], t_ij) with the task relation is treated as a sequence X; each entity embedding is coupled with its role-aware neighbor embedding. After all input representations are constructed, they are fed into a stack of Transformer blocks, X is encoded to obtain the encoded triple representation, and the desired representations are ranked and selected to obtain the matching triple.
Specifically, feature encoding uses the transformer feature encoding module based on head-tail entity pairs, whose operating mechanism is as follows:
The specific structure of the Transformer feature encoding module is shown in Fig. 7. Given a triple in task r, i.e., (h_ij, [ri], t_ij) ∈ D_r, for the reference samples in each relation group [ri] we hide the relation group [ri] and treat each entity pair (h_ij, [ri], t_ij) with the task relation as a sequence X = (x1, x2, x3), where the first/last element is the head/tail entity and the middle element is the hidden task relation. Through the clustering module, the head and tail entities of the reference samples are represented as the feature embeddings obtained after training. To enhance the entity embedding, taking the head entity h as an example, the module couples the entity embedding h with its role-aware neighbor embedding. h can be expressed as:
F_C(h) = σ[ω_1 h_i^s + ω_2 F_&(h_i^s)]
where σ is the activation function, set to σ = relu, and ω_1 and ω_2 are learnable parameters.
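A hedged sketch of this coupling step, where the neighbor feature F_& is assumed (for illustration only) to be the mean of the neighbor embeddings, and ω_1, ω_2 are fixed scalars standing in for learned parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
w1, w2 = 0.7, 0.3   # stand-ins for the learnable parameters omega_1, omega_2

def relu(x):
    return np.maximum(x, 0.0)

def neighbor_feature(neighbors):
    """Stand-in for F_&: mean of the role-aware neighbor embeddings."""
    return neighbors.mean(axis=0)

def couple(h, neighbors):
    """F_C(h) = relu(w1*h + w2*F_&(h)): couples an entity embedding with
    its neighbor feature before Transformer encoding."""
    return relu(w1 * h + w2 * neighbor_feature(neighbors))

head = rng.normal(size=8)
neigh = rng.normal(size=(4, 8))
coupled = couple(head, neigh)
```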
After all input representations are constructed, the encoding module feeds the feature embeddings into the stack of Transformer blocks, encodes X, and obtains:
Z_i^k = Transformer(Z_i^(k-1)), k = 1, 2, ..., L
where Z_i^k is the hidden state of the entity pair after the k-th layer, and L is the number of hidden layers of the Transformer.
The final hidden state Z_2^L serves as the expected representation of the entity pair's relation and is used as the relation embedding of the new triple when computing the matching score. This representation encodes the semantic role of each entity and thus helps identify the fine-grained meaning of the task relation associated with different entity pairs.
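The stacked encoding Z^k = Transformer(Z^(k-1)) can be illustrated with a stripped-down self-attention layer; this toy block (no feed-forward sublayer, no layer normalization) is an assumption for demonstration and is much simpler than a full Transformer block.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def block(Z, Wq, Wk, Wv):
    """One minimal self-attention layer with a residual connection."""
    Q, K, V = Z @ Wq, Z @ Wk, Z @ Wv
    A = softmax(Q @ K.T / np.sqrt(d))
    return Z + A @ V

# X = (head, hidden relation, tail): three input embeddings.
Z = rng.normal(size=(3, d))
L = 2
params = [tuple(rng.normal(size=(d, d)) * 0.1 for _ in range(3))
          for _ in range(L)]
for Wq, Wk, Wv in params:       # Z_k = Transformer(Z_{k-1}), k = 1..L
    Z = block(Z, Wq, Wk, Wv)
rel_embedding = Z[1]            # Z_2^L: middle position = relation slot
```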
To exploit the limited reference samples at a finer granularity, so that reference samples more relevant to the query play a greater role, the module samples and sorts the entity-pair hidden states Z_1^L / Z_3^L finally output by the Transformer; according to the different weights of the relation groups, the sample correlations are ranked by the Euclidean distance formula:
g_i^h = Z_1^L(S_i), g_i^t = Z_3^L(S_i)
Sort1 = {MIN[F(h_i^q, t_i^q)], (h_i^q, r_i^q, t_i^q) ∈ G', r_i^q ∈ [ri]}
where g_i^h / g_i^t are the entity-pair hidden states Z_1^L / Z_3^L output by the Transformer for a reference sample. The smaller the distance between a reference-sample entity pair and a query-sample entity pair of a given group, the greater the correlation between the query sample and that group of reference samples, and the higher the probability that they fall into the same class during clustering. Sort1 outputs the minimum distance between the reference-sample entity pairs and the query-sample entity pair; the groups corresponding to the top n sorted outputs are taken as the reference embeddings for the next step.
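The Sort1 ranking can be sketched as follows, with the assumption (for illustration) that the distance F between entity pairs is the Euclidean norm of the concatenated head/tail hidden states; the group names and toy vectors are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 8

def pair_distance(ref_pair, query_pair):
    """Euclidean distance between concatenated head/tail hidden states."""
    return float(np.linalg.norm(np.concatenate(ref_pair)
                                - np.concatenate(query_pair)))

def rank_groups(groups, query_pair, top_n=2):
    """Sort1: rank relation groups by the minimum entity-pair distance to
    the query and keep the top_n closest groups as reference embeddings."""
    best = {g: min(pair_distance(p, query_pair) for p in pairs)
            for g, pairs in groups.items()}
    return sorted(best, key=best.get)[:top_n]

groups = {g: [(rng.normal(size=d), rng.normal(size=d)) for _ in range(3)]
          for g in ["work_for", "born_in", "capital_of"]}
query_pair = (rng.normal(size=d), rng.normal(size=d))
top = rank_groups(groups, query_pair)
```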
S103: Match the triple representation of the information interaction data to be queried against the triple representations of the pre-clustered groups of information-interaction reference samples using an attention mechanism, and return the most similar group of reference samples as the query result.
In a specific implementation, similarity is measured by Euclidean distance. During attention matching, a cosine similarity function assigns the weights between the triple representation of the data to be queried and the triple representations of each group of reference samples.
To determine the most similar group of reference samples, the correlation between the query data and each group is ranked using the Euclidean distance formula, weighted by each relation group.
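The cosine-similarity weighting can be sketched as below; the softmax normalization of the similarities into weights is an assumption added to make the sketch complete:

```python
import numpy as np

def attention_weights(query_triple, group_triples):
    """Assign an attention weight to each reference group via cosine
    similarity with the query triple representation.

    query_triple  : (d,) vector for the query triple
    group_triples : list of (d,) vectors, one per reference group
    Returns an array of weights summing to 1.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    sims = np.array([cosine(query_triple, g) for g in group_triples])
    # softmax over similarities -> weights that sum to 1 (assumed here)
    e = np.exp(sims - sims.max())
    return e / e.sum()
```

Groups whose triple representation points in the same direction as the query receive the largest weight.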
Multi-step matching is performed by the LSTM in the recurrent processor of Figure 3. The inputs to the LSTM hidden layer are the hidden-layer state c_{t-1} of the previous time step, the hidden-layer output vector h_{t-1} of the previous time step, and the sequence input x_t at the current time step. The forget gate of the LSTM controls the memory of the previous cell state, deciding how much of c_{t-1} is carried over into the current state c_t; the input gate decides how much of the current input x_t is stored in c_t; the output gate produces the current output h_t from the new state c_t. The LSTM update can be written as:
f_t = σ(W_xf·x_t + W_hf·h_{t-1} + b_f)
i_t = σ(W_xi·x_t + W_hi·h_{t-1} + b_i)
o_t = σ(W_xo·x_t + W_ho·h_{t-1} + b_o)
c̃_t = tanh(W_xc·x_t + W_hc·h_{t-1} + b_c)
c_t = f_t·c_{t-1} + i_t·c̃_t
h_t = o_t·tanh(c_t)
where c_t is the cell state at the current time step, c̃_t is the candidate state accumulated at the current time step, W denotes the weight matrix of each gate, b the corresponding bias term, and σ and tanh denote the sigmoid and tanh activation functions, respectively.
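A minimal NumPy sketch of one LSTM update following these equations; the dictionary layout of W and b is an illustrative assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM update following the equations above.

    W is a dict of weight matrices W['xf'], W['hf'], ... and b a dict of
    bias vectors, matching the subscripts in the formulas.
    """
    f_t = sigmoid(W['xf'] @ x_t + W['hf'] @ h_prev + b['f'])    # forget gate
    i_t = sigmoid(W['xi'] @ x_t + W['hi'] @ h_prev + b['i'])    # input gate
    o_t = sigmoid(W['xo'] @ x_t + W['ho'] @ h_prev + b['o'])    # output gate
    c_hat = np.tanh(W['xc'] @ x_t + W['hc'] @ h_prev + b['c'])  # candidate state
    c_t = f_t * c_prev + i_t * c_hat                            # new cell state
    h_t = o_t * np.tanh(c_t)                                    # new output
    return h_t, c_t
```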
In a specific implementation, a matching-score network module performs the matching query; it operates as follows:
The structure of the matching-score network module is shown in Figure 8. It receives the embeddings of the reconstructed new-relation query sample and of the most similar group of reference samples, then matches each query triple (h_i, r_new, t_i) against the reference set (h, [r_i], t) to predict a matching score.
To measure the similarity between the two vectors, a recurrent processor F(m) performs multi-step matching. The t-th processing step is expressed as:
μ_1 = (h_i, r_new, t_i),  μ_2 = (h, [r_i], t)
g'_t, c_t = RNN_match(μ_1, g_{t-1}, c_{t-1})
g_t = g'_t + μ_1
where RNN_match is an LSTM cell with input μ_1, hidden state g_t, and cell state c_t. The final hidden state g_T after the T processing steps is the refined embedding of the query triple: μ_1 = g_T. The matching-score module uses the inner product between μ_1 and μ_2 as the similarity score for the subsequent ranking optimization.
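The multi-step matching can be sketched as follows, reusing the LSTM gate equations from above. Zero-initializing g and c and folding the gates inline are assumptions made to keep the sketch self-contained; it is not the patent's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def matching_score(mu1, mu2, T, W, b):
    """Refine the query embedding mu1 over T LSTM matching steps, then
    score it against the reference embedding mu2 with an inner product.

    W / b follow the same dictionary layout as the LSTM sketch above.
    """
    g = np.zeros_like(mu1)  # hidden state g_t (assumed zero-initialized)
    c = np.zeros_like(mu1)  # cell state c_t
    for _ in range(T):
        f = sigmoid(W['xf'] @ mu1 + W['hf'] @ g + b['f'])
        i = sigmoid(W['xi'] @ mu1 + W['hi'] @ g + b['i'])
        o = sigmoid(W['xo'] @ mu1 + W['ho'] @ g + b['o'])
        c = f * c + i * np.tanh(W['xc'] @ mu1 + W['hc'] @ g + b['c'])
        g_prime = o * np.tanh(c)       # g'_t from the LSTM cell
        g = g_prime + mu1              # g_t = g'_t + mu_1
    return float(np.dot(g, mu2))       # inner-product similarity score
```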
This embodiment is verified by simulation.
The model generates a large amount of raw data during training. This raw data contains many missing values and much noise, which seriously degrades data quality and hinders the mining of useful information; methods such as data splitting can improve the data quality. Data preprocessing partitions the training data of the public NELL dataset: by frequency of occurrence, the relations in the training set are divided into low, medium, and high partitions, corresponding to the ranges [0, 200), [200, 400), and [400, +∞). A visual analysis of the relation frequencies, shown in Figure 4, confirms that the relation frequencies in the dataset follow a long-tailed distribution, which meets the model's training requirements.
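The frequency-based partitioning can be sketched as below, assuming triples are given as (head, relation, tail) tuples; the function name is illustrative:

```python
from collections import Counter

def partition_relations(triples):
    """Split relations into low/medium/high-frequency partitions using
    the [0,200), [200,400), [400,+inf) cut-offs described above.

    triples : iterable of (head, relation, tail)
    """
    freq = Counter(r for _, r, _ in triples)
    parts = {"low": [], "medium": [], "high": []}
    for rel, n in freq.items():
        if n < 200:
            parts["low"].append(rel)
        elif n < 400:
            parts["medium"].append(rel)
        else:
            parts["high"].append(rel)
    return parts
```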
Figures 5(a) and 5(b) show the simulation results of the feature-cluster encoding of the reference-sample triple relation r. We run K-means center clustering on the public NELL and WiKi datasets and visualize the results, obtaining the optimal clustering for different numbers of centers K. Taking K=6 and K=7 as examples, the numbers of reference-sample relation-category clusters in the NELL and WiKi datasets are N=3 and N=5, respectively, and the clustering completeness is high.
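A plain NumPy sketch of K-means center clustering as used here; the initialisation scheme and iteration count are assumptions, and a library implementation such as scikit-learn's KMeans would serve equally well:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain NumPy K-means on the rows of X. Returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    # pick k distinct points as initial centroids (assumed init scheme)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels
```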
Experiments are conducted on the public NELL and WiKi datasets. Relations without too many triples are selected as one-shot task relations: the latest dump of relations is taken, and inverse relations are removed. Relations with fewer than 500 but more than 50 triples are selected as one-shot tasks.
The FANC model and several baseline models are trained from the same pre-trained parameters. For all implemented few-shot learning methods, entity embeddings are initialized with TransE. Entity neighbors are randomly sampled and fixed before training. The model is trained on the relations and entity pairs in the NELL and WiKi training data, and tuned and evaluated on the relations in the validation and test data, respectively. The experiments use the top-k hit rate (Hits@k) and the mean reciprocal rank (MRR) to evaluate the performance of the different methods, with k set to 1, 5, and 10.
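Hits@k and MRR can be computed from the rank of each correct entity as sketched below; the 1-based rank input format is an assumption:

```python
def hits_and_mrr(ranks, ks=(1, 5, 10)):
    """Compute Hits@k and MRR from the 1-based rank of each correct
    entity among the candidates.

    ranks : list of ints, rank of the true entity per query
    Returns ({k: Hits@k}, MRR).
    """
    n = len(ranks)
    # Hits@k: fraction of queries whose true entity ranks within top k
    hits = {k: sum(r <= k for r in ranks) / n for k in ks}
    # MRR: mean of reciprocal ranks
    mrr = sum(1.0 / r for r in ranks) / n
    return hits, mrr
```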
Taking the NELL dataset as an example, the performance of all models on it is as follows:
The experimental results clearly demonstrate that, compared with traditional KG embedding methods, the FANC model achieves better performance on both datasets, and that the few-shot relation learning network model based on attention clustering of knowledge graphs is better suited to solving few-shot problems.
Taking the NELL dataset as an example, Figures 9(a) to 9(d) show the experimental results on the influence of the sample size K on prediction. Under different K, the FANC model outperforms all baselines, demonstrating the new model's effectiveness in few-shot scenarios. When K grows beyond a certain point, the traditional models' performance stagnates while FANC keeps improving. This shows that in few-shot scenarios a larger reference set does not always yield better performance, because few-shot settings make performance sensitive to the available references and introduce noise. The Transformer encoding module in the new model, however, selects the n most relevant groups of samples, significantly reducing interference from irrelevant reference data and the introduction of noise, so the model's predictions are improved.
Taking the NELL dataset as an example, Figures 10(a) to 10(d) show the experimental results on the influence of the number of reference-sample neighbors on prediction. For each entity, encoding more neighbors with traditional models sometimes yields worse performance, because for some entity pairs certain local connections are irrelevant and feed noisy information into the model. In the FANC model, triples are modeled by a clustered neighbor encoder: neighbors are clustered by relation to identify their task-oriented roles and are weighted according to their contributions. Both the reference set and the query set can capture the fine-grained semantic meaning of neighbors; highly relevant neighbor vectors receive larger weights, yielding a more expressive representation and effectively avoiding noise interference.
The few-shot relation learning network framework based on attention clustering of knowledge graphs proposed in this embodiment uses a relation-clustering algorithm and a coupled ranking algorithm to optimize feature triples at a finer granularity, improving both the accuracy of the model's predictions and the convergence speed of the algorithm. Simulation experiments verify the algorithm's efficiency and low cost.
The advantages of the present invention are:
(1) The present invention proposes a new process framework for few-shot knowledge graph representation learning that performs similarity metric learning between queries and reference samples; its core is two encoding functions, F(&) and F(T). Therefore, for any query relation r_q, as long as a known fact group [r_i] exists, the model can pass through the entity and relation encoding modules into the matching-score network and predict the matching score of the test triples (h_q, r_q, t_q) and (h, [r_i], t), optimizing the accuracy of the prediction results.
(2) The present invention proposes a new feature-cluster encoding model based on the relation r. The entity representations obtained by this clustering model retain the individual attributes produced by the current embedding model, and, according to the different roles that different reference samples play for a query sample, the correlation between reference samples and query samples is described at a finer granularity.
(3) The present invention innovatively builds an expectation ranking module into the Transformer encoding model: based on the correlation between reference-sample entity pairs and grouped query-sample entity pairs, the n most relevant groups of samples are selected, which significantly reduces interference from irrelevant reference data and the introduction of noise, improving the model's prediction results.
(4) The present invention effectively exploits the unique advantages of the attention mechanism and the entity-coupling method: contribution weights are assigned at a finer granularity, the proportions of the score elements are optimized, reference-sample information is fully used, and the entity embeddings are strengthened.
Embodiment 2
This embodiment provides an entity query system based on a knowledge graph small-sample relation learning model, which includes the following modules:
a query vector encoding module, used to obtain the information interaction data to be queried and the relation-entity pairs it contains, and to encode them into the query vector;
a triple representation module, used to feature-encode the head-tail entity pairs contained in the query vector based on the knowledge graph, obtaining the corresponding triple representation;
a matching and lookup module, used to match the triple representation of the query data against the triple representations of the pre-clustered groups of reference samples using an attention mechanism, and to return the most similar group of reference samples as the query result.
It should be noted that the modules in this embodiment correspond one-to-one to the steps in Embodiment 1; their specific implementation is the same and is not repeated here.
Embodiment 3
This embodiment provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the entity query method based on a knowledge graph small-sample relation learning model described above.
Embodiment 4
This embodiment provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the program, the steps of the entity query method based on a knowledge graph small-sample relation learning model described above are implemented.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data-processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data-processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within its scope of protection.
Priority application: CN202210242159.9A, filed 2022-03-11, priority date 2022-03-11: Entity query method and system based on knowledge graph small sample relationship learning model.
Publications: CN114625886A, published 2022-06-14; CN114625886B (granted), published 2024-11-22.