WO2022057669A1 - Knowledge graph pre-training method based on structured context information - Google Patents

Knowledge graph pre-training method based on structured context information

Info

Publication number
WO2022057669A1
Authority
WO
WIPO (PCT)
Prior art keywords
context
triple
vector
triplet
seq
Prior art date
Application number
PCT/CN2021/116769
Other languages
English (en)
French (fr)
Inventor
陈华钧
叶橄强
张文
Original Assignee
浙江大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江大学 filed Critical 浙江大学
Priority to US17/791,897 priority Critical patent/US20240177047A1/en
Publication of WO2022057669A1 publication Critical patent/WO2022057669A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367Ontology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition

Definitions

  • The invention belongs to the technical field of data storage and processing, and in particular relates to a knowledge graph pre-training method based on structured context information.
  • A Knowledge Graph can be regarded as a directed labeled graph, and the facts in the graph are represented as triples of the form (head entity, relation, tail entity), abbreviated as (h, r, t).
  • knowledge graphs have developed rapidly in both construction and application, and have broad application prospects in artificial intelligence fields such as semantic search, information extraction, and question answering.
  • the representation learning method embeds entities and relationships into a continuous vector space, can automatically extract structural features, and derive new triples through algebraic operations.
  • This method was first proposed by the TransE model, which effectively simplifies the mathematical operations in the knowledge graph.
  • the TransH model proposes that an entity has different representations under different relations to overcome the limitations of the multi-relation problem; the TransR model introduces a relation-specific space to solve the multi-relation problem.
  • the TransD model decomposes the projection matrix into the product of two vectors, which further improves the performance of various tasks in the knowledge graph.
  • pre-trained language models such as Bert have achieved significant improvements in a variety of downstream tasks of natural language processing.
  • the main differences between pre-trained language models and knowledge graph representation learning models lie in two points: first, language is represented as sequence data, with word context as the key information for detecting the semantics of words and sentences, whereas a knowledge graph is represented as structured graph data;
  • second, the input of the downstream tasks of a pre-trained language model can be unified into two sentences, whereas the input of a knowledge graph is a triple.
  • the main challenges for a pre-trained knowledge graph representation learning model that adapts to different tasks are: (1) regardless of the specific downstream knowledge graph task, the pre-trained model should be able to automatically capture the deep structural context information of a given triple; (2) the representations of entities and relations need to be trained in different ways according to different downstream tasks and the different structural features of their input data, so as to improve robustness.
  • the combination of knowledge graphs and pre-trained language models has attracted increasing attention from researchers.
  • the K-Bert model injects knowledge from knowledge graphs into sentences to generate knowledge-rich language representations.
  • the KG-Bert model uses a pre-trained language model to complete the knowledge graph, and the ERNIE model integrates the entity representations of the knowledge module into the semantic module to enhance text representation capabilities.
  • KEPLER incorporates the structural information of the knowledge graph into the text representation vector of the entity.
  • the KnowBert model proposes a knowledge-enhancing representation method, which aligns entities in sentences with entities in knowledge graphs, and fuses the vector representations of the two to improve the effect of prediction tasks.
  • the purpose of the present invention is to provide a knowledge graph pre-training method based on structured context information. The structure representation vectors of triples obtained by this pre-training method incorporate context information, and the model only needs to be trained once in the pre-training stage; in the fine-tuning stage, training can then be completed more quickly and better experimental results can be achieved on various downstream knowledge graph tasks.
  • a knowledge graph pre-training method based on structured context information adopts a pre-training model including a triple integration module, a structured information module and a general task module to train triples in the knowledge graph, and the specific training process includes:
  • For the target triple, construct an instance composed of context triples, and use the triple integration module to encode each context triple of the instance to obtain an integration vector;
  • the integrated vectors of all context triples for the instance are formed into a context vector sequence, and the structured information module is used to encode the context vector sequence to obtain the structure representation vector of the triple;
  • the general task module is used to compute on the structure representation vector of the triple, and the label prediction value of the triple is obtained. Based on the cross-entropy loss between the label prediction value and the label true value of the triple, the parameters of the triple integration module, the structured information module and the general task module, as well as the structure representation vector of the triple, are updated until training ends, and the optimized structure representation vector of the target triple is obtained.
  • the triple integration module adopts the Transformer model, and a triple tag [TRI] is assigned to each context triple, where the triple tag vector k_[TRI] together with the head entity representation h′, the relation representation r′ and the tail entity representation t′ of the context triple forms the sequence <k_[TRI], h′, r′, t′>.
  • the sequence <k_[TRI], h′, r′, t′> is used as the input of the triple integration module. After computation by the triple integration module, the output corresponding to the triple tag k_[TRI] is taken as the integration vector.
  • the structured information module adopts a Transformer model, and the context vector sequence is represented as <seq_h, seq_r, seq_t>, where seq_h, seq_r and seq_t are the sequences of context triples of the head entity h, the relation r and the tail entity t, respectively.
  • the segment type to which each context triple belongs is added to its integration vector, i.e. the segment vector s_h of the head entity, the segment vector s_r of the relation, or the segment vector s_t of the tail entity is added to the corresponding integration vector.
  • the general task module includes at least one fully connected layer and a softmax layer; the fully connected layer performs a fully connected computation on the input sequence to obtain the deep context information of the target triple, and the softmax layer computes the label prediction value of the deep context information.
  • the instances of the target triple include positive instances and negative instances, and the numbers of positive and negative instances are kept the same.
  • The construction method is: a positive instance is constructed from the context triple sequence of the target triple, and new triples are obtained by replacing the head entity, relation or tail entity of the target triple; negative instances are constructed from the context triple sequences of the new triples. The label true value of the target triple is 1, and the label true value of a new triple is 0.
  • the instance size of the target triple is defined to be fixed at n, i.e. each instance contains n context triples. During construction, if the number of context triples is greater than n, n context triples are randomly sampled from them to form the instance; otherwise zeros are padded directly after all the context triples to make up n.
  • the optimized structure representation vector of the triple is used as the input of the specific task module, and the optimized structure representation vector of the triple is used to fine-tune the parameters of the specific task module.
  • the beneficial effects of the present invention at least include:
  • the invention can automatically encode the deep graph structure by using structural context triples and dynamically obtain the structural information of entities and relations; it also achieves good experimental results on various downstream knowledge graph tasks; moreover, after one pre-training, it can quickly reach good test metrics on a variety of downstream knowledge graph tasks.
  • FIG. 1 is a schematic structural diagram of a pre-training model provided by an embodiment
  • FIG. 2 is a schematic structural diagram of a triplet integration module provided by an embodiment
  • FIG. 3 is a schematic structural diagram of a structured information module provided by an embodiment.
  • the knowledge graph pre-training based on structured context information provided by the embodiment adopts a pre-training model including a triple integration module, a structured information module and a general task module to train the triples in the knowledge graph, and the specific training process is as follows:
  • Step 1 use the triple integration module to encode each context triple to obtain an integration vector.
  • the input of the model includes not only the target triple (h, r, t) but also the structured context triple sequences of the target triple, i.e. the sequences of neighbor triples of h, r and t, denoted C(h), C(r) and C(t).
  • each context triple is encoded into a vector c = T_Mod(<h′, r′, t′>), where T_Mod(·) denotes the encoding of the input data by the triple integration module; a triple integration module based on the Transformer model is used, since the Transformer has been widely adopted for its good performance and parallel computing architecture.
  • a triple tag [TRI] used to aggregate the triple is prepended to <h′, r′, t′>, and the result of the integration is the vector c. This triple tag [TRI] is assigned a triple tag vector, denoted k_[TRI]. The combined sequence <k_[TRI], h′, r′, t′> is therefore input into the multi-layer bidirectional Transformer encoder, and after encoding, the output corresponding to the triple tag [TRI] is taken as the integration vector.
  • all context triples are encoded and calculated in parallel by a unified triple integration module to obtain integration vectors.
  • Step 2 using the structured information module to encode the context vector sequence composed of the integrated vectors of all the context triples to obtain the structure representation vector of the triples.
  • the Structure Module (S-Mod) takes the context triple representations of h, r and t as input, and the corresponding outputs are denoted h_s, r_s and t_s respectively, so the framework of S-Mod can be expressed as h_s, r_s, t_s = S_Mod(<seq_h, seq_r, seq_t>), where seq_h = <c_h^1, ..., c_h^n>, seq_r = <c_r^1, ..., c_r^n> and seq_t = <c_t^1, ..., c_t^n> are the sequences of integration vectors of the context triples of h, r and t.
  • a segment type is added to each triple representation, indicating whether it belongs to the head entity h, the relation r or the tail entity t; the corresponding segment vectors are denoted s_h, s_r and s_t, and the integration vectors with the corresponding segment types added are c_h^i + s_h, c_r^i + s_r and c_t^i + s_t.
  • the structured information module encodes the input sequence i using a multi-layer bidirectional Transformer encoder configured differently from that of the triple integration module. From the last Transformer layer, the outputs h_s, r_s and t_s corresponding to the positions of [HEA], [REL] and [TAI] are taken as the structure representation vectors of the head entity h, the relation r and the tail entity t, respectively, which together form the structure representation vectors of the triple.
  • the structured vector h_s in the model depends not only on its own structural context triples, but also on the context triples of r and t; the same goes for the structured vectors r_s and t_s. Therefore, even for the same entity or the same relation in different target triples at the input, the structured vectors obtained after the structured information module are different.
  • step 3 the general task module is used to calculate the structure representation vector of the triplet to obtain the label prediction value of the triplet.
  • the deep context information is computed as v_τ = [h_s; r_s; t_s] W_int + b, where [h_s; r_s; t_s] denotes the concatenation of h_s, r_s and t_s, W_int is the weight matrix, and b is the bias vector.
  • a softmax layer is used to obtain the label prediction value s_τ = softmax(v_τ W_cls) from the deep context information v_τ; the component of s_τ for τ_0 corresponds to the triple being labeled correct, and the component for τ_1 to the triple being labeled incorrect.
  • Step 4: update the parameters of the triple integration module, the structured information module and the general task module, as well as the structure representation vector of the triple, based on the cross-entropy loss between the label prediction value and the label true value of the triple, until training ends, obtaining the optimized structure representation vector of the target triple.
  • a positive instance can be constructed from the context triple sequence of the current target triple, and new triples are obtained by replacing the head entity, relation or tail entity of the target triple; the context triples of these new triples are used to construct negative instances.
  • the length of the context triple sequence in the instance needs to be fixed.
  • the number of context triples of a head entity h, relation r or tail entity t varies widely, from zero to several hundred. That is to say, some entities have very rich neighbor triples, while some have almost no adjacent neighbor triples. Therefore, when generating instances, the sequence length must be unified to the specified size n to ensure that the model works properly.
  • the rule is defined as follows: if the number of context triples is greater than the specified size n, a context sequence of the fixed sequence length is randomly sampled from the context triples; otherwise the context triples are directly padded with zeros to meet the above requirement.
  • the sequence length of the context triples should be set as long as possible.
  • since the training time and space complexity of the Transformer model are quadratic in the sequence length, the longer the sequence, the more time-consuming and costly the training becomes.
  • the numbers of layers and heads of Self-Attention in the Transformer model are denoted L and A respectively, and the hidden dimension of the representation vector is denoted H; the triple integration module (T-Mod) is configured with L = 6, A = 3 and H = 768, and the structured information module (S-Mod) with L = 12, A = 12 and H = 768.
  • model inputs and intermediate processing are appropriately adjusted for different downstream tasks.
  • the knowledge graph pre-training model focuses mainly on the knowledge graph field and draws on the idea of pre-trained language models. It only needs to be trained once on the complete knowledge graph, from which structured context information is extracted; in the fine-tuning stage it can then improve the performance of various downstream knowledge graph tasks, including link prediction and entity alignment, and can also perform better on some downstream tasks combined with natural language processing datasets, including relation extraction, entity linking and knowledge question answering.
  • In addition, it is more competitive in terms of training time and number of training parameters for these tasks, which is precisely why the whole knowledge graph pre-training model has stronger versatility, robustness and generalization ability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Databases & Information Systems (AREA)
  • Machine Translation (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention discloses a knowledge graph pre-training method based on structured context information, comprising: for a target triple, constructing an instance composed of context triples, and encoding each context triple of the instance with a triple integration module to obtain integration vectors; forming the integration vectors of all context triples of the instance into a context vector sequence, and encoding the context vector sequence with a structured information module to obtain the structure representation vector of the triple; computing on the structure representation vector of the triple with a general task module to obtain the label prediction value of the triple, and updating the structure representation vector of the triple based on the cross-entropy loss between the label prediction value and the label true value of the triple until training ends, obtaining the optimized structure representation vector of the target triple. The structure representation vector of a triple obtained by this method incorporates context information.

Description

Knowledge Graph Pre-training Method Based on Structured Context Information
Technical Field
The present invention belongs to the technical field of data storage and processing, and in particular relates to a knowledge graph pre-training method based on structured context information.
Background Art
A knowledge graph can be regarded as a directed labeled graph in which facts are represented as triples of the form (head entity, relation, tail entity), abbreviated as (h, r, t). In recent years, knowledge graphs have developed rapidly in both construction and application, and have broad application prospects in artificial intelligence fields such as semantic search, information extraction and question answering.
Since the graph structure of a knowledge graph contains a large amount of valuable information, extracting deep structural information is crucial for various knowledge graph tasks such as entity typing, link prediction and entity alignment. Representation learning methods embed entities and relations into a continuous vector space, can automatically extract structural features, and derive new triples through algebraic operations. This approach was first proposed by the TransE model, which effectively simplifies the mathematical operations in the knowledge graph. Later, the TransH model proposed that an entity has different representations under different relations to overcome the limitations of the multi-relation problem, and the TransR model introduced relation-specific spaces to solve the multi-relation problem. The TransD model decomposes the projection matrix into the product of two vectors, further improving performance on various knowledge graph tasks.
For a variety of knowledge-graph-specific tasks such as entity classification, entity alignment, link prediction and recommendation, previous studies have proposed different representation learning methods to suit different knowledge graph training tasks.
In the field of natural language processing, pre-trained language models such as BERT have achieved significant improvements on a variety of downstream tasks. The main differences between pre-trained language models and knowledge graph representation learning models lie in two points: first, language is represented as sequence data, with word context as the key information for detecting the semantics of words and sentences, whereas a knowledge graph is represented as structured graph data; second, the input of the downstream tasks of a pre-trained language model can be unified into two sentences, whereas the input of a knowledge graph is a triple. Therefore, the main challenges for a pre-trained knowledge graph representation learning model that adapts to different tasks are: (1) regardless of the specific downstream knowledge graph task, the pre-trained model should be able to automatically capture the deep structural context information of a given triple; (2) the representations of entities and relations need to be trained in different ways according to different downstream tasks and the different structural features of their input data, so as to improve robustness.
The combination of knowledge graphs and pre-trained language models has attracted increasing attention from researchers. The K-BERT model injects knowledge from knowledge graphs into sentences to generate knowledge-rich language representations. The KG-BERT model uses a pre-trained language model to complete the knowledge graph, and the ERNIE model integrates entity representations from the knowledge module into the semantic module to enhance text representation. KEPLER incorporates the structural information of the knowledge graph into the textual representation vectors of entities. The KnowBert model proposes a knowledge-enhanced representation method that aligns entities in sentences with entities in the knowledge graph and fuses the two vector representations to improve prediction tasks.
Summary of the Invention
The purpose of the present invention is to provide a knowledge graph pre-training method based on structured context information. The structure representation vectors of triples obtained by this pre-training method incorporate context information, and the model only needs to be trained once in the pre-training stage; in the fine-tuning stage it can then complete training more quickly on a variety of downstream knowledge graph tasks and achieve better experimental results.
To achieve the above purpose, the present invention provides the following technical solution:
A knowledge graph pre-training method based on structured context information, which trains the triples in a knowledge graph with a pre-training model comprising a triple integration module, a structured information module and a general task module, the specific training process comprising:
for a target triple, constructing an instance composed of context triples, and encoding each context triple of the instance with the triple integration module to obtain integration vectors;
forming the integration vectors of all context triples of the instance into a context vector sequence, and encoding the context vector sequence with the structured information module to obtain the structure representation vector of the triple;
computing on the structure representation vector of the triple with the general task module to obtain the label prediction value of the triple, and updating the parameters of the triple integration module, the structured information module and the general task module, as well as the structure representation vector of the triple, based on the cross-entropy loss between the label prediction value and the label true value of the triple, until training ends, obtaining the optimized structure representation vector of the target triple.
Preferably, the triple integration module adopts a Transformer model, and a triple tag [TRI] is assigned to each context triple; the triple tag vector k_[TRI] together with the head entity representation h′, the relation representation r′ and the tail entity representation t′ of the context triple forms the representation sequence <k_[TRI], h′, r′, t′> as the input of the triple integration module, and after computation by the triple integration module, the output corresponding to the triple tag k_[TRI] is taken as the integration vector.
Preferably, the structured information module adopts a Transformer model, and the context vector sequence is represented as <seq_h, seq_r, seq_t>, where seq_h, seq_r and seq_t are the sequences of context triples of the head entity h, the relation r and the tail entity t, respectively, of the form:
seq_h = <c_h^1, c_h^2, ..., c_h^n>
seq_r = <c_r^1, c_r^2, ..., c_r^n>
seq_t = <c_t^1, c_t^2, ..., c_t^n>
where c_h^i denotes the i-th integration vector of the head entity h; similarly, c_r^i denotes the i-th integration vector of the relation r, and c_t^i denotes the i-th integration vector of the tail entity t.
A head entity tag [HEA], a relation tag [REL] and a tail entity tag [TAI] are assigned to seq_h, seq_r and seq_t; seq_h, seq_r, seq_t together with the head entity tag vector k_[HEA], the relation tag vector k_[REL] and the tail entity tag vector k_[TAI] form the sequence <k_[HEA], seq_h, k_[REL], seq_r, k_[TAI], seq_t> as the input of the structured information module.
Preferably, the segment type to which each context triple belongs is added to its integration vector, namely:
c_h^i + s_h, c_r^i + s_r, c_t^i + s_t
where s_h denotes the segment vector of the head entity; similarly, s_r denotes the segment vector of the relation, and s_t denotes the segment vector of the tail entity.
The sequences of context triples with segment types added, seq'_h, seq'_r and seq'_t, are then:
seq'_h = <c_h^1 + s_h, c_h^2 + s_h, ..., c_h^n + s_h>
seq'_r = <c_r^1 + s_r, c_r^2 + s_r, ..., c_r^n + s_r>
seq'_t = <c_t^1 + s_t, c_t^2 + s_t, ..., c_t^n + s_t>
and the sequence input to the structured information module is:
i = <k_[HEA], seq'_h, k_[REL], seq'_r, k_[TAI], seq'_t>
Preferably, the general task module comprises at least one fully connected layer and a softmax layer; the fully connected layer performs a fully connected computation on the input sequence to obtain the deep context information of the target triple, and the softmax layer computes the label prediction value from the deep context information.
The instances of a target triple include positive instances and negative instances, and the numbers of positive and negative instances are kept the same. The construction method is: a positive instance is constructed from the context triple sequences of the target triple; new triples are obtained by replacing the head entity, relation or tail entity of the target triple, and negative instances are constructed from the context triple sequences of the new triples; the label true value of the target triple is 1, and the label true value of a new triple is 0.
Preferably, the instance size of a target triple is fixed to n, i.e. each instance contains n context triples. During construction, if the number of context triples is greater than n, n context triples are randomly sampled from them to form the instance; otherwise zeros are padded directly after all the context triples to make up n.
When training for a specific task, the optimized structure representation vectors of triples are used as the input of the task-specific module, and the parameters of the task-specific module are fine-tuned with the optimized structure representation vectors of the triples.
Compared with the prior art, the beneficial effects of the present invention at least include:
The present invention can automatically encode the deep graph structure using structural context triples and dynamically obtain the structural information of entities and relations; it achieves good experimental results on a variety of downstream knowledge graph tasks; moreover, after a single pre-training, it can quickly reach good test metrics on a variety of downstream knowledge graph tasks.
Brief Description of the Drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of the pre-training model provided by the embodiment;
FIG. 2 is a schematic structural diagram of the triple integration module provided by the embodiment;
FIG. 3 is a schematic structural diagram of the structured information module provided by the embodiment.
Detailed Description of the Embodiments
In order to make the purpose, technical solution and advantages of the present invention clearer, the present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and do not limit its protection scope.
The knowledge graph pre-training based on structured context information provided by the embodiment trains the triples in the knowledge graph with a pre-training model comprising a triple integration module, a structured information module and a general task module. The specific training process is as follows:
Step 1: encode each context triple with the triple integration module to obtain integration vectors.
Since the pre-training model needs to capture and integrate various kinds of deep structured information in the knowledge graph, the input of the model includes not only the target triple (h, r, t) but also the structured context triple sequences of the target triple, i.e. the sequences of neighbor triples of h, r and t, denoted C(h), C(r) and C(t).
For a given target triple τ = (h, r, t), the triple integration module (Triple Module, T-Mod) first encodes each context triple c = (h′, r′, t′) ∈ C(h) ∪ C(r) ∪ C(t) into a vector c, so that
c = T_Mod(<h′, r′, t′>)
where <h′, r′, t′> denotes the sequence of the vectors h′, r′, t′, and T_Mod(·) denotes the encoding of the input by the triple integration module. A triple integration module based on the Transformer model is used, since the Transformer has been widely adopted for its good performance and parallel computing architecture.
As shown in FIG. 2, before the triple <h′, r′, t′> is input into the Transformer model, a triple tag [TRI] used to aggregate the triple is prepended to <h′, r′, t′>; the result of the integration is the vector c. This triple tag [TRI] is assigned a triple tag vector, denoted k_[TRI]. The combined sequence <k_[TRI], h′, r′, t′> is therefore input into a multi-layer bidirectional Transformer encoder, and after encoding, the output corresponding to the triple tag [TRI] is taken as the integration vector.
In this embodiment, all context triples are encoded by the same triple integration module and computed in parallel to obtain their integration vectors.
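To make the data flow of this step concrete, the following is a minimal PyTorch sketch of the triple integration module (T-Mod). It assumes integer-indexed entities and relations looked up from randomly initialized embeddings; the class name, the embedding scheme and the default sizes are illustrative assumptions, not the reference implementation of the embodiment.

import torch
import torch.nn as nn

class TripleModule(nn.Module):
    # Encodes one context triple as the sequence <k_[TRI], h', r', t'> and returns the [TRI] output.
    def __init__(self, num_entities, num_relations, d_model=768, n_layers=6, n_heads=3):
        super().__init__()
        self.ent_emb = nn.Embedding(num_entities, d_model)
        self.rel_emb = nn.Embedding(num_relations, d_model)
        self.tri_tag = nn.Parameter(torch.randn(d_model))   # triple tag vector k_[TRI]
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, h_idx, r_idx, t_idx):
        # Build the sequence <k_[TRI], h', r', t'> for a batch of context triples.
        b = h_idx.size(0)
        tag = self.tri_tag.view(1, 1, -1).expand(b, 1, -1)
        seq = torch.cat([tag,
                         self.ent_emb(h_idx).unsqueeze(1),
                         self.rel_emb(r_idx).unsqueeze(1),
                         self.ent_emb(t_idx).unsqueeze(1)], dim=1)   # (b, 4, d_model)
        out = self.encoder(seq)
        return out[:, 0]   # output at the [TRI] position is the integration vector c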
Step 2: encode the context vector sequence formed by the integration vectors of all context triples with the structured information module to obtain the structure representation vectors of the triple.
As shown in FIG. 3, the structured information module (Structure Module, S-Mod) takes the context triple representations of h, r and t as input, and the corresponding outputs are denoted h_s, r_s and t_s respectively, so the framework of S-Mod can be expressed as:
h_s, r_s, t_s = S_Mod(<seq_h, seq_r, seq_t>)
where seq_h, seq_r, seq_t are the sequences of context triples of h, r, t, of the form:
seq_h = <c_h^1, c_h^2, ..., c_h^n>
seq_r = <c_r^1, c_r^2, ..., c_r^n>
seq_t = <c_t^1, c_t^2, ..., c_t^n>
where c_h^i denotes the i-th integration vector of the head entity h; similarly, c_r^i denotes the i-th integration vector of the relation r, and c_t^i denotes the i-th integration vector of the tail entity t.
To enhance the independence of the different elements of the target triple τ used for training, a segment type is added to each triple representation, indicating whether it belongs to the head entity h, the relation r or the tail entity t; the corresponding segment vectors are denoted s_h, s_r and s_t, so the integration vectors with segment types added are:
c_h^i + s_h, c_r^i + s_r, c_t^i + s_t
and the sequences of context triples with segment types added, seq'_h, seq'_r and seq'_t, are:
seq'_h = <c_h^1 + s_h, c_h^2 + s_h, ..., c_h^n + s_h>
seq'_r = <c_r^1 + s_r, c_r^2 + s_r, ..., c_r^n + s_r>
seq'_t = <c_t^1 + s_t, c_t^2 + s_t, ..., c_t^n + s_t>
After adding the segment vectors, in order to further distinguish which element the current context triple belongs to, three tags similar to the triple tag of the triple integration module are introduced: [HEA], [REL] and [TAI]. [HEA] is added in front of the first head entity context triple, [REL] in front of the first relation context triple, and [TAI] in front of the first tail entity context triple, and their corresponding vectors are k_[HEA], k_[REL] and k_[TAI]. The format of the input sequence can therefore be expressed as:
i = <k_[HEA], seq'_h, k_[REL], seq'_r, k_[TAI], seq'_t>
The structured information module encodes the input sequence i with a multi-layer bidirectional Transformer encoder configured differently from that of the triple integration module. From the last Transformer layer, the outputs h_s, r_s and t_s at the positions corresponding to [HEA], [REL] and [TAI] are taken as the structure representation vectors of the head entity h, the relation r and the tail entity t, respectively, which together form the structure representation vectors of the triple.
To give the model the ability to dynamically generate contextual representations of entities and relations, the structured vector h_s depends not only on the structural context triples of h itself but also on the context triples of r and t, and the same holds for the structured vectors r_s and t_s. Therefore, even for the same entity or the same relation appearing in different target triples at the input, the structured vectors obtained after the structured information module are different.
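A companion sketch of the structured information module (S-Mod) follows, assembling the input sequence <k_[HEA], seq'_h, k_[REL], seq'_r, k_[TAI], seq'_t> and reading the outputs at the three tag positions. It rests on the same assumptions as the T-Mod sketch above; the learnable tag and segment parameters stand in for k_[HEA], k_[REL], k_[TAI] and s_h, s_r, s_t.

import torch
import torch.nn as nn

class StructureModule(nn.Module):
    # Encodes <k_[HEA], seq'_h, k_[REL], seq'_r, k_[TAI], seq'_t> and returns h_s, r_s, t_s.
    def __init__(self, d_model=768, n_layers=12, n_heads=12):
        super().__init__()
        self.tags = nn.Parameter(torch.randn(3, d_model))   # k_[HEA], k_[REL], k_[TAI]
        self.segs = nn.Parameter(torch.randn(3, d_model))   # segment vectors s_h, s_r, s_t
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, seq_h, seq_r, seq_t):
        # seq_*: (batch, n, d_model) integration vectors produced by the triple integration module.
        b, n, d = seq_h.shape
        parts = []
        for i, seq in enumerate((seq_h, seq_r, seq_t)):
            tag = self.tags[i].view(1, 1, d).expand(b, 1, d)
            parts.append(torch.cat([tag, seq + self.segs[i].view(1, 1, d)], dim=1))
        x = torch.cat(parts, dim=1)            # the full input sequence i
        out = self.encoder(x)
        # Outputs at the [HEA], [REL] and [TAI] positions are h_s, r_s and t_s.
        return out[:, 0], out[:, n + 1], out[:, 2 * (n + 1)]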
Step 3: compute on the structure representation vectors of the triple with the general task module to obtain the label prediction value of the triple.
The three structure representation vectors h_s, r_s and t_s are input into the general task module (tasK Module, K-Mod), and a simple fully connected neural network integrates the deep context information v_τ of the target triple τ = (h, r, t):
v_τ = [h_s; r_s; t_s] W_int + b
where [h_s; r_s; t_s] denotes the concatenation of h_s, r_s and t_s, W_int is the weight matrix, and b is the bias vector.
A softmax layer is used to obtain the label prediction value s_τ from the deep context information v_τ:
s_τ = f(h, r, t) = softmax(v_τ W_cls)
where W_cls is the classification weight matrix and s_τ is a two-dimensional real vector whose components s_τ0 and s_τ1 sum to 1 after the softmax operation; τ_0 denotes the triple being labeled correct, and τ_1 denotes the triple being labeled incorrect.
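A matching sketch of the general task module (K-Mod) is given below: the three structure representation vectors are concatenated, passed through one fully connected layer (W_int and b) to give v_τ, and mapped by a softmax classifier (W_cls) to the label prediction value s_τ. The class name and the bias-free classifier layer are assumptions.

import torch
import torch.nn as nn

class TaskModule(nn.Module):
    # Computes v_tau = [h_s; r_s; t_s] W_int + b and s_tau = softmax(v_tau W_cls).
    def __init__(self, d_model=768):
        super().__init__()
        self.fc = nn.Linear(3 * d_model, d_model)      # W_int and bias b
        self.cls = nn.Linear(d_model, 2, bias=False)   # classification weight W_cls

    def forward(self, h_s, r_s, t_s):
        v = self.fc(torch.cat([h_s, r_s, t_s], dim=-1))   # deep context information v_tau
        return torch.softmax(self.cls(v), dim=-1)         # label prediction value s_tau

Chained with the TripleModule and StructureModule sketches above, a single scoring pass over one toy instance might look like this (all sizes are arbitrary toy values):

t_mod = TripleModule(num_entities=100, num_relations=20)
s_mod = StructureModule()
k_mod = TaskModule()

n = 4                                                  # toy fixed context length
ent = lambda: torch.randint(0, 100, (n,))
rel = lambda: torch.randint(0, 20, (n,))

seq_h = t_mod(ent(), rel(), ent()).unsqueeze(0)        # (1, n, 768) integration vectors for C(h)
seq_r = t_mod(ent(), rel(), ent()).unsqueeze(0)        # for C(r)
seq_t = t_mod(ent(), rel(), ent()).unsqueeze(0)        # for C(t)
h_s, r_s, t_s = s_mod(seq_h, seq_r, seq_t)             # structure representation vectors
s_tau = k_mod(h_s, r_s, t_s)                           # label prediction value, shape (1, 2)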
Step 4: update the parameters of the triple integration module, the structured information module and the general task module, as well as the structure representation vectors of the triple, based on the cross-entropy loss between the label prediction value and the label true value of the triple, until training ends, obtaining the optimized structure representation vectors of the target triple.
Given the correspondingly constructed set of positive triple examples D+ and set of negative triple examples D-, the cross-entropy loss can be computed from s_τ and the triple labels:
L = - Σ_{τ ∈ D+ ∪ D-} ( y_τ log s_τ0 + (1 - y_τ) log s_τ1 )
where y_τ ∈ {0, 1} is the label of the triple τ: the label y_τ is 1 when τ ∈ D+ and 0 when τ ∈ D-. The set of negative triple examples D- is generated by replacing the head entity h or the tail entity t with another random entity e ∈ ε, or by replacing the relation r with another randomly chosen relation.
When generating training instances for each target triple, the numbers of positive and negative instances need to be kept the same. A positive instance can be constructed from the context triple sequences of the current target triple, while new triples are obtained by replacing the head entity, relation or tail entity of the target triple, and the context triples of these new triples are used to construct negative instances.
In this embodiment, the following rule is defined for replacing one element of the target triple: the head entity h or the tail entity t is replaced with a random entity e ∈ ε; similarly, the relation r is replaced with a random relation or with a relation r′ connected to h or t, and the probabilities of these two kinds of relation replacement are set to be equal.
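To illustrate this step, the sketch below generates corrupted triples for negative instances and evaluates the cross-entropy objective, following the replacement policy described in the text (replace h or t with a random entity, or replace r with a random relation or with a relation connected to h or t, with equal probability). The helper names and the relations_of lookup table are assumptions introduced only for the example.

import random
import torch

def corrupt(triple, num_entities, num_relations, relations_of=None):
    # Return a corrupted copy of (h, r, t) from which a negative instance is built.
    h, r, t = triple
    slot = random.choice(["h", "r", "t"])
    if slot == "h":
        return (random.randrange(num_entities), r, t)
    if slot == "t":
        return (h, r, random.randrange(num_entities))
    # For the relation: with equal probability use a random relation or one connected to h or t.
    if relations_of is not None and random.random() < 0.5:
        candidates = relations_of.get(h, []) + relations_of.get(t, [])
        if candidates:
            return (h, random.choice(candidates), t)
    return (h, random.randrange(num_relations), t)

def cross_entropy_loss(s_tau, y):
    # s_tau: (batch, 2) softmax outputs; y: 1.0 for positive triples, 0.0 for negatives.
    return -(y * torch.log(s_tau[:, 0] + 1e-9)
             + (1.0 - y) * torch.log(s_tau[:, 1] + 1e-9)).mean()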
In the above knowledge graph pre-training method based on structured context information, the length of the context triple sequences in an instance needs to be fixed. The number of context triples of a head entity h, relation r or tail entity t varies widely, from zero to several hundred; that is, some entities have very rich neighbor triples, while others have almost no adjacent neighbor triples. Therefore, when generating instances, the sequence length must be unified to the specified size n to ensure that the model works properly. To this end, the rule is defined as follows: if the number of context triples is greater than the specified size n, a context sequence of the fixed sequence length is randomly sampled from the context triples; otherwise the context triples are padded with zeros to meet the above requirement.
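A small sketch of the fixed-length rule just described: randomly sample n context triples when there are more than n, otherwise pad up to n. Representing the zero padding as an all-zero placeholder triple is an assumption about how the padding is realised.

import random

def fix_length(context_triples, n, pad=(0, 0, 0)):
    # Return exactly n context triples: a random sample if there are too many, zero-padded if too few.
    if len(context_triples) > n:
        return random.sample(context_triples, n)
    return list(context_triples) + [pad] * (n - len(context_triples))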
In the above knowledge graph pre-training method based on structured context information, in order for the context triples to cover the deep structural information of the knowledge graph as comprehensively as possible, the sequence length of the context triples should be set as long as possible. However, since the training time and space complexity of the Transformer model are quadratic in the sequence length, longer sequences take more time and cost more to train.
To balance this conflict, the distribution of the context triple lengths of entities and relations was analyzed. Specifically, in WN18RR, 20 context triples already cover 96.28% of the entities and relations, whereas covering 99% would require 115 context triples, so the marginal benefit diminishes rapidly. Therefore, the length of the context triples of h, r or t is set to 20, and, taking the extra tags [HEA], [REL] and [TAI] into account, the length of the input sequence of the pre-training model is set to 64. Similarly, for the FB15k-237 dataset, 128 is chosen as the input sequence length.
For simplicity, the numbers of Self-Attention layers and heads in the Transformer model are denoted L and A respectively, and the hidden dimension of the representation vectors is denoted H. The triple integration module (T-Mod) is configured with L = 6, A = 3 and H = 768, and the structured information module (S-Mod) with L = 12, A = 12 and H = 768. The learning rate is set to 2e-4 and the batch size to 64.
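For reference, the hyperparameters reported in this embodiment can be gathered into a single configuration; the dictionary layout below is only an illustrative convention.

config = {
    "T_Mod": {"layers": 6,  "heads": 3,  "hidden": 768},   # triple integration module
    "S_Mod": {"layers": 12, "heads": 12, "hidden": 768},   # structured information module
    "learning_rate": 2e-4,
    "batch_size": 64,
    "input_seq_len": {"WN18RR": 64, "FB15k-237": 128},     # per-dataset input sequence length
}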
When training for a specific task, the model input and intermediate processing are adjusted appropriately for different downstream tasks. For example, for the entity alignment task, only two entities and the structured triple sequence information of these two entities need to be input, and part of the parameters of each module are fine-tuned on the entity alignment dataset, finally yielding a complete model adapted to the entity alignment task. In this way the model achieves very good experimental results on a variety of downstream tasks.
The knowledge graph pre-training model focuses mainly on the knowledge graph field and draws on the idea of pre-trained language models. It only needs to be trained once on the complete knowledge graph, from which structured context information is extracted; in the fine-tuning stage it can then improve the performance of a variety of downstream knowledge graph tasks, including link prediction and entity alignment, and can also perform better on some downstream tasks combined with natural language processing datasets, including relation extraction, entity linking and knowledge question answering. In addition, compared with other task-specific downstream models, it is more competitive in training time and number of training parameters, which is precisely why the whole knowledge graph pre-training model has stronger versatility, robustness and generalization ability.
The specific embodiments described above explain the technical solution and beneficial effects of the present invention in detail. It should be understood that the above is only the most preferred embodiment of the present invention and is not intended to limit it; any modifications, supplements and equivalent substitutions made within the scope of the principles of the present invention shall be included within the protection scope of the present invention.

Claims (8)

  1. A knowledge graph pre-training method based on structured context information, characterized in that the triples in a knowledge graph are trained with a pre-training model comprising a triple integration module, a structured information module and a general task module, the specific training process comprising:
    for a target triple, constructing from the knowledge graph an instance composed of context triples, and encoding each context triple of the instance with the triple integration module to obtain integration vectors;
    forming the integration vectors of all context triples of the instance into a context vector sequence, and encoding the context vector sequence with the structured information module to obtain the structure representation vector of the triple;
    computing on the structure representation vector of the triple with the general task module to obtain the label prediction value of the triple, and updating the parameters of the triple integration module, the structured information module and the general task module, as well as the structure representation vector of the triple, based on the cross-entropy loss between the label prediction value and the label true value of the triple, until training ends, obtaining the optimized structure representation vector of the target triple;
    wherein a triple tag [TRI] is assigned to each context triple, the triple tag vector k_[TRI] together with the head entity representation h′, the relation representation r′ and the tail entity representation t′ of the context triple forms the sequence <k_[TRI], h′, r′, t′> as the input of the triple integration module, and after computation by the triple integration module, the output corresponding to the triple tag k_[TRI] is taken as the integration vector;
    the context vector sequence is represented as <seq_h, seq_r, seq_t>, where seq_h, seq_r and seq_t are the sequences of context triples of the head entity h, the relation r and the tail entity t, respectively, of the form:
    seq_h = <c_h^1, c_h^2, ..., c_h^n>
    seq_r = <c_r^1, c_r^2, ..., c_r^n>
    seq_t = <c_t^1, c_t^2, ..., c_t^n>
    where c_h^i denotes the i-th integration vector of the head entity h, c_r^i denotes the i-th integration vector of the relation r, and c_t^i denotes the i-th integration vector of the tail entity t; and
    a head entity tag [HEA], a relation tag [REL] and a tail entity tag [TAI] are assigned to seq_h, seq_r and seq_t, and seq_h, seq_r, seq_t together with the head entity tag vector k_[HEA], the relation tag vector k_[REL] and the tail entity tag vector k_[TAI] form the sequence <k_[HEA], seq_h, k_[REL], seq_r, k_[TAI], seq_t> as the input of the structured information module.
  2. The knowledge graph pre-training method based on structured context information according to claim 1, characterized in that the triple integration module adopts a Transformer model.
  3. The knowledge graph pre-training method based on structured context information according to claim 1, characterized in that the segment type to which each context triple belongs is added to its integration vector, namely:
    c_h^i + s_h, c_r^i + s_r, c_t^i + s_t
    where s_h denotes the segment vector of the head entity, s_r denotes the segment vector of the relation, and s_t denotes the segment vector of the tail entity;
    the sequences of context triples with segment types added, seq'_h, seq'_r and seq'_t, are then:
    seq'_h = <c_h^1 + s_h, c_h^2 + s_h, ..., c_h^n + s_h>
    seq'_r = <c_r^1 + s_r, c_r^2 + s_r, ..., c_r^n + s_r>
    seq'_t = <c_t^1 + s_t, c_t^2 + s_t, ..., c_t^n + s_t>
    and the sequence input to the structured information module is:
    i = <k_[HEA], seq'_h, k_[REL], seq'_r, k_[TAI], seq'_t>.
  4. The knowledge graph pre-training method based on structured context information according to claim 1, characterized in that the structured information module adopts a Transformer model.
  5. The knowledge graph pre-training method based on structured context information according to claim 1, characterized in that the general task module comprises at least one fully connected layer and a softmax layer; the fully connected layer performs a fully connected computation on the input sequence to obtain the deep context information of the target triple, and the softmax layer computes the label prediction value from the deep context information.
  6. The knowledge graph pre-training method based on structured context information according to claim 1, characterized in that the instances of the target triple include positive instances and negative instances, the numbers of positive and negative instances are kept the same, and the construction method is: a positive instance is constructed from the context triple sequence of the target triple; new triples are obtained by replacing the head entity, relation or tail entity of the target triple, and negative instances are constructed from the context triple sequences of the new triples; the label true value of the target triple is 1, and the label true value of a new triple is 0.
  7. The knowledge graph pre-training method based on structured context information according to claim 1, characterized in that the instance size of the target triple is fixed to n, i.e. each instance contains n context triples; during construction, if the number of context triples is greater than n, n context triples are randomly sampled from them to form the instance, otherwise zeros are padded directly after all the context triples to make up n.
  8. The knowledge graph pre-training method based on structured context information according to claim 1, characterized in that when training for a specific task, the optimized structure representation vector of the triple is used as the input of a task-specific module, and the parameters of the task-specific module are fine-tuned with the optimized structure representation vector of the triple.
PCT/CN2021/116769 2020-09-16 2021-09-06 基于结构化上下文信息的知识图谱预训练方法 WO2022057669A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/791,897 US20240177047A1 (en) 2020-09-16 2021-09-06 Knowledge grap pre-training method based on structural context infor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010975552.X 2020-09-16
CN202010975552.XA CN112100404B (zh) 2020-09-16 2020-09-16 基于结构化上下文信息的知识图谱预训练方法

Publications (1)

Publication Number Publication Date
WO2022057669A1 true WO2022057669A1 (zh) 2022-03-24

Family

ID=73760415

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116769 WO2022057669A1 (zh) 2020-09-16 2021-09-06 基于结构化上下文信息的知识图谱预训练方法

Country Status (3)

Country Link
US (1) US20240177047A1 (zh)
CN (1) CN112100404B (zh)
WO (1) WO2022057669A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114724010A (zh) * 2022-05-16 2022-07-08 中译语通科技股份有限公司 一种待训练样本的确定方法、装置、设备及可读存储介质
CN115062587A (zh) * 2022-06-02 2022-09-16 北京航空航天大学 一种基于周围信息的知识图谱嵌入及回复生成方法
CN115564049A (zh) * 2022-12-06 2023-01-03 北京航空航天大学 一种双向编码的知识图谱嵌入方法
CN115936737A (zh) * 2023-03-10 2023-04-07 云筑信息科技(成都)有限公司 一种确定建材真伪的方法和系统
CN116187446A (zh) * 2023-05-04 2023-05-30 中国人民解放军国防科技大学 基于自适应注意力机制的知识图谱补全方法、装置和设备
CN116340524A (zh) * 2022-11-11 2023-06-27 华东师范大学 一种基于关系自适应网络的小样本时态知识图谱补全方法
CN116910272A (zh) * 2023-08-09 2023-10-20 西安工程大学 基于预训练模型t5的学术知识图谱补全方法
CN117540035A (zh) * 2024-01-09 2024-02-09 安徽思高智能科技有限公司 一种基于实体类型信息融合的rpa知识图谱构建方法
CN115062587B (zh) * 2022-06-02 2024-05-31 北京航空航天大学 一种基于周围信息的知识图谱嵌入及回复生成方法

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112100404B (zh) * 2020-09-16 2021-10-15 浙江大学 基于结构化上下文信息的知识图谱预训练方法
CN112632290B (zh) * 2020-12-21 2021-11-09 浙江大学 一种融合图结构和文本信息的自适应知识图谱表示学习方法
CN112507706B (zh) * 2020-12-21 2023-01-31 北京百度网讯科技有限公司 知识预训练模型的训练方法、装置和电子设备
CN113377968B (zh) * 2021-08-16 2021-10-29 南昌航空大学 一种采用融合实体上下文的知识图谱链路预测方法
CN115051843A (zh) * 2022-06-06 2022-09-13 华北电力大学 基于kge的区块链威胁情报知识图谱推理方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200073996A1 (en) * 2018-08-28 2020-03-05 Stitched.IO Limited Methods and Systems for Domain-Specific Disambiguation of Acronyms or Homonyms
CN111198950A (zh) * 2019-12-24 2020-05-26 浙江工业大学 一种基于语义向量的知识图谱表示学习方法
CN111444721A (zh) * 2020-05-27 2020-07-24 南京大学 一种基于预训练语言模型的中文文本关键信息抽取方法
CN111626063A (zh) * 2020-07-28 2020-09-04 浙江大学 一种基于投影梯度下降和标签平滑的文本意图识别方法及系统
CN112100404A (zh) * 2020-09-16 2020-12-18 浙江大学 基于结构化上下文信息的知识图谱预训练方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10303999B2 (en) * 2011-02-22 2019-05-28 Refinitiv Us Organization Llc Machine learning-based relationship association and related discovery and search engines
CN109376864A (zh) * 2018-09-06 2019-02-22 电子科技大学 一种基于堆叠神经网络的知识图谱关系推理算法
CN110297870B (zh) * 2019-05-30 2022-08-30 南京邮电大学 一种金融领域中文新闻标题情感分类方法
CN111026875A (zh) * 2019-11-26 2020-04-17 中国人民大学 一种基于实体描述和关系路径的知识图谱补全方法
CN111444305B (zh) * 2020-03-19 2022-10-14 浙江大学 一种基于知识图谱嵌入的多三元组联合抽取方法
CN111428055B (zh) * 2020-04-20 2023-11-10 神思电子技术股份有限公司 一种面向行业的上下文省略问答方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200073996A1 (en) * 2018-08-28 2020-03-05 Stitched.IO Limited Methods and Systems for Domain-Specific Disambiguation of Acronyms or Homonyms
CN111198950A (zh) * 2019-12-24 2020-05-26 浙江工业大学 一种基于语义向量的知识图谱表示学习方法
CN111444721A (zh) * 2020-05-27 2020-07-24 南京大学 一种基于预训练语言模型的中文文本关键信息抽取方法
CN111626063A (zh) * 2020-07-28 2020-09-04 浙江大学 一种基于投影梯度下降和标签平滑的文本意图识别方法及系统
CN112100404A (zh) * 2020-09-16 2020-12-18 浙江大学 基于结构化上下文信息的知识图谱预训练方法

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114724010B (zh) * 2022-05-16 2022-09-02 中译语通科技股份有限公司 一种待训练样本的确定方法、装置、设备及可读存储介质
CN114724010A (zh) * 2022-05-16 2022-07-08 中译语通科技股份有限公司 一种待训练样本的确定方法、装置、设备及可读存储介质
CN115062587A (zh) * 2022-06-02 2022-09-16 北京航空航天大学 一种基于周围信息的知识图谱嵌入及回复生成方法
CN115062587B (zh) * 2022-06-02 2024-05-31 北京航空航天大学 一种基于周围信息的知识图谱嵌入及回复生成方法
CN116340524A (zh) * 2022-11-11 2023-06-27 华东师范大学 一种基于关系自适应网络的小样本时态知识图谱补全方法
CN116340524B (zh) * 2022-11-11 2024-03-08 华东师范大学 一种基于关系自适应网络的小样本时态知识图谱补全方法
CN115564049A (zh) * 2022-12-06 2023-01-03 北京航空航天大学 一种双向编码的知识图谱嵌入方法
CN115564049B (zh) * 2022-12-06 2023-05-09 北京航空航天大学 一种双向编码的知识图谱嵌入方法
CN115936737A (zh) * 2023-03-10 2023-04-07 云筑信息科技(成都)有限公司 一种确定建材真伪的方法和系统
CN115936737B (zh) * 2023-03-10 2023-06-23 云筑信息科技(成都)有限公司 一种确定建材真伪的方法和系统
CN116187446B (zh) * 2023-05-04 2023-07-04 中国人民解放军国防科技大学 基于自适应注意力机制的知识图谱补全方法、装置和设备
CN116187446A (zh) * 2023-05-04 2023-05-30 中国人民解放军国防科技大学 基于自适应注意力机制的知识图谱补全方法、装置和设备
CN116881471B (zh) * 2023-07-07 2024-06-04 深圳智现未来工业软件有限公司 一种基于知识图谱的大语言模型微调方法及装置
CN116910272A (zh) * 2023-08-09 2023-10-20 西安工程大学 基于预训练模型t5的学术知识图谱补全方法
CN116910272B (zh) * 2023-08-09 2024-03-01 西安工程大学 基于预训练模型t5的学术知识图谱补全方法
CN117540035A (zh) * 2024-01-09 2024-02-09 安徽思高智能科技有限公司 一种基于实体类型信息融合的rpa知识图谱构建方法
CN117540035B (zh) * 2024-01-09 2024-05-14 安徽思高智能科技有限公司 一种基于实体类型信息融合的rpa知识图谱构建方法

Also Published As

Publication number Publication date
CN112100404B (zh) 2021-10-15
US20240177047A1 (en) 2024-05-30
CN112100404A (zh) 2020-12-18

Similar Documents

Publication Publication Date Title
WO2022057669A1 (zh) 基于结构化上下文信息的知识图谱预训练方法
US11941522B2 (en) Address information feature extraction method based on deep neural network model
WO2023065545A1 (zh) 风险预测方法、装置、设备及存储介质
CN109902145B (zh) 一种基于注意力机制的实体关系联合抽取方法和系统
CN108073711B (zh) 一种基于知识图谱的关系抽取方法和系统
CN110245229B (zh) 一种基于数据增强的深度学习主题情感分类方法
CN106777125B (zh) 一种基于神经网络及图像关注点的图像描述生成方法
CN114169330B (zh) 融合时序卷积与Transformer编码器的中文命名实体识别方法
JP2022105126A (ja) 複数の言語タスク階層を通じてデータを処理するための深層ニューラルネットワークモデル
CN111753024B (zh) 一种面向公共安全领域的多源异构数据实体对齐方法
CN111079409B (zh) 一种利用上下文和方面记忆信息的情感分类方法
CN110196980A (zh) 一种基于卷积网络在中文分词任务上的领域迁移
CN112100332A (zh) 词嵌入表示学习方法及装置、文本召回方法及装置
CN109189862A (zh) 一种面向科技情报分析的知识库构建方法
CN112380863A (zh) 一种基于多头自注意力机制的序列标注方法
CN116932722A (zh) 一种基于跨模态数据融合的医学视觉问答方法及系统
CN113779225A (zh) 实体链接模型的训练方法、实体链接方法及装置
CN113591478A (zh) 一种基于深度强化学习的远程监督文本实体关系抽取方法
CN114429132A (zh) 一种基于混合格自注意力网络的命名实体识别方法和装置
CN115563314A (zh) 多源信息融合增强的知识图谱表示学习方法
CN116384371A (zh) 一种基于bert和依存句法联合实体及关系抽取方法
CN114444694A (zh) 一种开放世界知识图谱补全方法及装置
CN114048314A (zh) 一种自然语言隐写分析方法
CN114020900A (zh) 基于融合空间位置注意力机制的图表英语摘要生成方法
CN113268985A (zh) 基于关系路径的远程监督关系抽取方法、装置及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21868494

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 17791897

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21868494

Country of ref document: EP

Kind code of ref document: A1