WO2022057671A1 - Neural network-based knowledge graph inconsistency reasoning method - Google Patents

Neural network-based knowledge graph inconsistency reasoning method

Info

Publication number
WO2022057671A1
Authority
WO
WIPO (PCT)
Prior art keywords
axiom
triplet
neural network
representation
inconsistency
Prior art date
Application number
PCT/CN2021/116777
Other languages
English (en)
Chinese (zh)
Inventor
陈华钧
李娟
张文
Original Assignee
浙江大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江大学 filed Critical 浙江大学
Publication of WO2022057671A1 publication Critical patent/WO2022057671A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F 16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models

Definitions

  • The invention belongs to the field of knowledge graphs and neural networks, and in particular relates to a neural network-based knowledge graph inconsistency reasoning method.
  • A knowledge graph is a knowledge system formed by structuring knowledge, and has been widely used in knowledge-driven tasks such as search engines, recommendation systems, and question answering systems.
  • Large-scale knowledge graphs for open and vertical domains have been constructed using manual annotation, semi-automatic, or automated methods.
  • Classical knowledge graphs such as Wikidata, Freebase, and DBpedia use triples to store relationships between entities. Each triple corresponds to a piece of knowledge; for example, (China, the capital is, Beijing) means that the capital of China is Beijing, where "China" is called the head entity, "Beijing" is called the tail entity, and "the capital is" is called the relation.
  • However, the constructed graphs may have quality problems such as incomplete and erroneous knowledge.
  • Many knowledge graph representation learning algorithms have been proposed to predict new knowledge and thereby complete existing knowledge graphs, mainly by predicting a missing entity given an entity and a relation, or by predicting the possible relation between two given entities.
  • Knowledge errors fall into two situations: knowledge that does not conform to the axioms, and knowledge that conforms to the axioms but whose content is incorrect. Wrong knowledge is mainly detected through knowledge inconsistency reasoning so that the errors can be corrected.
  • Knowledge graph inconsistency reasoning detects wrong knowledge. It contains two types of tasks: detecting whether a triple is consistent, and detecting whether a triple is consistent with a given axiom; both are performed as binary classification.
  • Existing work on inconsistency reasoning over triples in a knowledge graph requires knowledge base ontology information or custom templates.
  • Existing knowledge bases usually do not have well-defined ontologies, and manual definition by experts is not only time-intensive but also leads to incomplete ontologies.
  • Current knowledge representation learning models include distance models, semantic models, neural network models, etc., which embed knowledge graphs into low-dimensional vector spaces and represent the entities and relations in triples as low-dimensional vectors.
  • These methods obtain the score of a triple through vector computation to determine whether a piece of knowledge is correct, which corresponds to the triple classification task.
  • These methods reflect the advantages of knowledge representation learning algorithms and neural networks in knowledge graph reasoning tasks.
  • However, these methods can only judge the consistency of a triple as a whole; they cannot judge in a fine-grained manner whether the axioms corresponding to the triple are consistent. It is therefore urgent to detect whether a triple is consistent with a particular axiom, so that wrong knowledge can be detected and corrected.
  • The purpose of the present invention is to provide a neural network-based knowledge graph inconsistency reasoning method that does not require given ontology information: it uses a neural network to learn the inconsistency axioms, and judges through the knowledge representation learning algorithm and the neural network whether a triple is inconsistent, and whether it is inconsistent with the given axioms.
  • a neural network-based knowledge graph inconsistency reasoning method comprising the following steps:
  • First, the knowledge representation learning algorithm is used to learn entity representations and relation representations for the triples in the knowledge graph, and the representation score of each triple is calculated.
  • The entity representations and relation representations are then used as the input of the neural network. The axioms are modeled through the neural network using the triples, so as to learn the neural network parameters that represent each axiom and obtain the axiom models. The axiom models are used to obtain the axiom prediction values of the triples, and the inconsistency of a triple and of its corresponding axioms is judged based on the triple's representation score and axiom prediction values.
  • the knowledge graph is embedded in a low-dimensional vector space, entities and relationships are represented as vectors, and the structure information of triples is preserved through the knowledge representation learning algorithm.
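As a concrete illustration of this embedding step, the sketch below scores a triple with a TransE-style distance. TransE is only one example here, since the text states that any knowledge graph representation learning algorithm can be used; the entity vocabulary, dimension, and random initialization are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Toy vocabularies with randomly initialized embeddings (stand-ins for
# the vectors that would be learned during training).
entities = {e: rng.normal(size=dim) for e in ["China", "Beijing"]}
relations = {r: rng.normal(size=dim) for r in ["capital_is"]}

def representation_score(s, r, o):
    """TransE-style score f_r(s, o) = ||s + r - o||: lower means the
    triple is more plausible under the learned representations."""
    return float(np.linalg.norm(entities[s] + relations[r] - entities[o]))

score = representation_score("China", "capital_is", "Beijing")
```

With trained embeddings, correct triples would receive low scores and corrupted ones high scores; with the random vectors above the value is only structurally meaningful.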
  • Each triple is denoted (s, r, o).
  • The input is the vector representations of entities and relations, and the output is the representation score of the current triple.
  • The knowledge representation learning algorithm and the neural network are trained simultaneously, so that the learned vector representations and neural network parameters not only encode the structural information of the triples but also make the triples conform to the relevant axioms.
  • The representation learning algorithm can be any knowledge graph representation learning algorithm, and the neural network performs binary classification, outputting the probability that each axiom is satisfied.
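A minimal sketch of one such binary-classification module, assuming (as in the formulas given later in the text) a single linear layer followed by a sigmoid. The weight shape and the choice of the domain axiom's concatenated [s; r] input are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

dim = 50
rng = np.random.default_rng(1)

# One axiom module in the g(W . [concatenated inputs] + b) form.
# W and b are random stand-ins for learned parameters.
W_dm = rng.normal(size=2 * dim)   # domain axiom looks at [s; r]
b_dm = 0.0

def p_domain(s_vec, r_vec):
    """Probability that head entity s is consistent with the domain of r."""
    return float(sigmoid(W_dm @ np.concatenate([s_vec, r_vec]) + b_dm))

p = p_domain(rng.normal(size=dim), rng.normal(size=dim))
```

Each of the five axioms gets its own such module; only the concatenated input differs.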
  • The specific input of the neural network during training comes from the elements of the current triple, or from elements of related triples found through the current triple.
  • For each axiom, the elements of the triples or related triples that need to be considered are analyzed; then, according to the assumptions of the axiom, the elements of the existing triples and their related triples are treated as inputs of the neural network, since the existing triples are taken to conform to the axiom constraints.
  • The positive samples of each neural network module are constructed under the closed-world assumption. This setting allows the model to learn the axioms using only the triples that exist in the knowledge graph, without ontology information.
  • A neural network model corresponding to each axiom is constructed; the positive samples of each neural network model are constructed according to the constraints or conditions of the inconsistency corresponding to the axiom, and the negative samples are constructed from the positive samples;
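One common way to realize "negative samples constructed from the positive samples" is to corrupt the head or tail entity of a positive triple with a random entity, keeping the result out of the known triple set (closed-world assumption). This corruption strategy is a standard technique assumed here for illustration, not a detail the text fixes.

```python
import random

random.seed(0)
known = {("s1", "r1", "o1"), ("s2", "r2", "o2"), ("s1", "r3", "o1")}
entity_list = ["s1", "s2", "o1", "o2"]

def corrupt(triple, entity_list, known):
    """Build a negative sample by replacing the head or the tail of a
    positive triple with a random entity not forming a known triple."""
    s, r, o = triple
    while True:
        e = random.choice(entity_list)
        cand = (e, r, o) if random.random() < 0.5 else (s, r, e)
        if cand not in known:
            return cand

neg = corrupt(("s1", "r1", "o1"), entity_list, known)
```

The relation is kept fixed so the negative pairs with its positive in the margin loss described below.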
  • The corresponding entity representations and relation representations are concatenated and input into the neural network models, and the prediction value corresponding to each axiom is calculated; the prediction score of the neural network models is obtained from the prediction values corresponding to all axioms, and is combined with the representation score calculated by the knowledge representation learning algorithm to obtain the total score of the triple;
  • A margin loss function is constructed from the total scores of the positive sample triples and the corresponding negative sample triples, and the margin loss function is used to jointly update and optimize the parameters of the knowledge representation learning algorithm and the neural network models, as well as the entity and relation representations.
  • The neural network models determined by the finally learned parameters serve as the axiom models, together with the entity and relation representations determined for the knowledge graph.
  • When selecting axioms for inconsistency detection, first determine whether an axiom can be used for inconsistency detection according to the conditions or constraints that its definition requires to be satisfied. Then select axioms from those usable for inconsistency detection, map the conditions or constraints of each selected axiom onto the relevant elements of the triples in the knowledge graph and the correlations between those elements, and thereby construct the positive samples for the axiom.
  • A relation threshold is set for each relation, and the total score of a triple is calculated with the knowledge representation learning algorithm and the axiom models, using the entity and relation representations determined at the end of optimization. When the total score of the triple is lower than the relation threshold, the triple is considered correct; otherwise it is a wrong triple, i.e., there is inconsistency.
  • An axiom threshold is set for each axiom, and the entity and relation vector representations determined when the knowledge representation learning model is generated are used. Each axiom model is used to calculate the triple's prediction value for that axiom; when the prediction value is lower than the corresponding axiom threshold, the triple is considered inconsistent with that axiom.
  • The selected axioms include:
  • Object Property Domain, referred to as the domain axiom: the type of the head entity s of relation r should conform to the corresponding category;
  • Object Property Range, referred to as the range axiom: the type of the tail entity o of relation r should conform to the corresponding category;
  • Disjoint Object Properties, referred to as the disjoint axiom: relation r and relation r1 are mutually exclusive, so the triples (s, r, o) and (s, r1, o) should not both exist in a knowledge graph;
  • Irreflexive Object Property, referred to as the irreflexive axiom: relation r is irreflexive, so a triple of the form (s, r, s) should not exist;
  • Asymmetric Object Property, referred to as the asymmetric axiom: relation r is asymmetric, so the triples (s, r, o) and (o, r, s) should not both exist;
  • The input of each neural network module consists of elements of the current triple or of triples related to it, and the output is the probability of conforming to the corresponding axiom.
  • The calculations of the neural network modules are as follows:
  • the asymmetric axiom prediction value:
  • P_asym(s, r, o, s1, r, o1) = g(W5 · [s; r; o; s1; r; o1] + b5)
  • the total score of the triple:
  • f(s, r, o) = f_r(s, o) + α(1 − P_dm) + β(1 − P_rg) + γ(1 − P_irre) + δ(1 − P_dis) + ε(1 − P_asym)
  • where s, s1, r, r1, o, o1 respectively denote the learned representation vectors of entity s, entity s1, relation r, relation r1, entity o, and entity o1; [;] denotes the concatenation (splicing) operation; g(·) denotes the sigmoid function; W1, W2, W3, W4, W5 are weight vectors and b1, b2, b3, b4, b5 are biases; α, β, γ, δ, ε are hyperparameters weighting the axiom terms; and f_r(s, o) is the representation score of the triple obtained with the knowledge representation learning model.
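A numeric sketch of the asymmetric-axiom prediction and of the total score. Random vectors stand in for learned representations, a TransE-style distance stands in for f_r(s, o), and the hyperparameter names and values, as well as the four fixed axiom probabilities, are placeholders chosen only to keep the example short.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

dim = 4
rng = np.random.default_rng(2)
# Illustrative vectors and parameters (random stand-ins for learned ones).
s, r, o, s1, o1 = (rng.normal(size=dim) for _ in range(5))
W5, b5 = rng.normal(size=6 * dim), 0.1

# Asymmetric-axiom prediction P_asym = g(W5 . [s; r; o; s1; r; o1] + b5)
p_asym = float(sigmoid(W5 @ np.concatenate([s, r, o, s1, r, o1]) + b5))

# Total score: representation score plus weighted (1 - P) penalties.
f_r = float(np.linalg.norm(s + r - o))       # TransE-style f_r(s, o)
alpha = beta = gamma = delta = eps = 0.1     # hyperparameter weights
p_dm = p_rg = p_irre = p_dis = 0.9           # fixed here for brevity
f_total = (f_r + alpha * (1 - p_dm) + beta * (1 - p_rg)
           + gamma * (1 - p_irre) + delta * (1 - p_dis)
           + eps * (1 - p_asym))
```

Each (1 − P) term is non-negative, so a triple that satisfies every axiom (all P near 1) keeps a total score close to its representation score, while axiom violations push the total up.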
  • For correct triples, the axiom probability output by each axiom model is higher than for wrong triples, and the above scoring function ensures that the total score of positive sample triples is lower than that of negative sample triples.
  • the constructed margin loss function is:
  • L = Σ_{(s,r,o)∈F} Σ_{(s',r',o')∈F'} max(0, λ + f(s, r, o) − f(s', r', o'))
  • where F represents the set of positive samples, F' represents the set of negative samples, λ is the margin, f(s, r, o) represents the total score of the triple (s, r, o), and f(s', r', o') represents the total score of the triple (s', r', o').
  • Given a triple (s, r, o), the entity and relation vector representations and the neural network model parameters learned in the training phase are used to calculate the final score f(s, r, o), as well as the value of each axiom module, including P_dm, P_rg, P_irre, P_dis, and P_asym.
  • A threshold is introduced for each relation.
  • If the score f(s, r, o) is lower than the threshold, the triple is a correct triple; otherwise it is a wrong triple, i.e., there is inconsistency.
  • A threshold is introduced for each axiom model according to the validation set. If P_dm, P_rg, P_irre, P_dis, or P_asym is lower than the corresponding axiom threshold, the triple is inconsistent with the domain axiom, range axiom, irreflexive axiom, disjoint axiom, or asymmetric axiom, respectively.
  • The beneficial effects of the present invention include at least the following:
  • The above knowledge graph inconsistency reasoning method does not require well-defined ontology information and uses only the existing triples in the knowledge graph for axiom learning, so that axioms can be captured and inconsistency reasoning performed even on knowledge graphs without ontology definitions or with incomplete ones, greatly reducing labor costs. Moreover, the knowledge representation learning model and the axiom models can be used with any knowledge graph.
  • The method converts axiom learning into neural network parameter learning, and jointly trains the knowledge representation learning algorithm and the neural network so that the model retains structural information while learning the axioms related to inconsistency.
  • Axiom learning allows the model not only to detect inconsistent triples, but also to detect inconsistencies for the considered axioms in a fine-grained manner. In this way, inconsistency reasoning is better realized, and subsequent correction of triples is made easier.
  • Figure 1 is a flowchart of a neural network-based knowledge graph inconsistency reasoning method.
  • Figure 1 is a flowchart of a neural network-based knowledge graph inconsistency reasoning method. As shown in Figure 1, for a given knowledge graph containing a large number of triples (s, r, o), the knowledge graph inconsistency reasoning method includes the following steps:
  • Step 1: select the following five axioms from the OWL2 object property axioms that can be used for inconsistency detection; the descriptions and judgment conditions of the five axioms in OWL2 are analyzed as follows:
  • Step 2: obtain the training samples of the neural network corresponding to each axiom according to its judgment conditions; that is, determine the specific input of each neural network according to the judgment conditions of the axiom.
  • the following is a sample knowledge graph to illustrate the input of each axiom model:
  • the given sample graph contains 6 triples, including entities (s 1 , s 2 , o 1 , o 2 ) and relations (r 1 , r 2 , r 3 ).
  • The domain axiom concerns relations and head entities; the module inputs are (r1,s1), (r2,s2), (r3,s1), (r1,s2), (r3,o1);
  • The range axiom concerns relations and tail entities; the module inputs are (r1,o1), (r2,o2), (r3,o1), (r1,o2), (r1,s1), (r3,s1);
  • The disjoint axiom concerns the correlation of relations in pairs of triples whose head and tail entities are both the same; the module inputs are (s1,r1,o1,s1,r3,o1), (s2,r2,o2,s2,r1,o2);
  • The irreflexive axiom concerns the current triple; the module inputs are (s1,r1,o1), (s2,r2,o2), (s1,r3,o1), (s2,r1,o2), (s1,r1,s1), (o1,r3,s1);
  • The asymmetric axiom concerns two triples with the same relation; the module inputs are (s1,r1,o1,s2,r1,o2), (s1,r1,o1,s1,r1,s1), (s2,r1,o2,s1,r1,s1), (s1,r3,o1,o1,r3,s1).
  • The knowledge representation learning algorithm concerns the current triple; the input is (s1,r1,o1), (s2,r2,o2), (s1,r3,o1), (s2,r1,o2), (s1,r1,s1), (o1,r3,s1).
  • the input of each of the above modules can be regarded as a positive sample for that module.
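The per-axiom positive-sample inputs listed above can be derived mechanically from the six sample triples. The sketch below reproduces those input sets; the pair-enumeration order is an implementation choice, and sets are used where duplicates collapse.

```python
# The six sample triples (s, r, o) of the example knowledge graph.
triples = [("s1","r1","o1"), ("s2","r2","o2"), ("s1","r3","o1"),
           ("s2","r1","o2"), ("s1","r1","s1"), ("o1","r3","s1")]

domain_inputs = {(r, s) for s, r, o in triples}    # relation + head entity
range_inputs = {(r, o) for s, r, o in triples}     # relation + tail entity
irreflexive_inputs = list(triples)                 # the triple itself

# Disjoint axiom: pairs of triples sharing head and tail but not relation.
disjoint_inputs = [(t1, t2) for i, t1 in enumerate(triples)
                   for t2 in triples[i + 1:]
                   if t1[0] == t2[0] and t1[2] == t2[2] and t1[1] != t2[1]]

# Asymmetric axiom: pairs of distinct triples sharing the same relation.
asym_inputs = [(t1, t2) for i, t1 in enumerate(triples)
               for t2 in triples[i + 1:] if t1[1] == t2[1]]
```

Running this recovers the five domain pairs, six range pairs, two disjoint pairs, and four asymmetric pairs enumerated in the text, confirming that the positive samples need nothing beyond the existing triples.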
  • Step 3: jointly train the knowledge representation learning algorithm and the neural networks corresponding to the axioms.
  • The entity and relation representations of each sample are concatenated and input into the neural network model, the prediction value corresponding to each axiom is calculated, and the prediction score of the neural network models is obtained from the prediction values corresponding to all axioms.
  • The prediction score of the axiom models is combined with the representation score of the triple to obtain the total score of the triple. Finally, the margin loss function is constructed from the total scores of the positive sample triples and the corresponding negative sample triples, and is used to jointly update and optimize the parameters of the knowledge representation learning algorithm and the neural network models.
  • The knowledge representation learning algorithm with the determined entity and relation vector representations is the knowledge representation learning model, and the neural network models determined by the learned parameters are the axiom models.
  • Step 4: with the knowledge representation learning model and the axiom models trained in Step 3, inconsistency reasoning over the knowledge graph can be realized. Given a triple, calculate its final score: if the score is lower than the threshold, the triple is judged correct; if it is higher, the triple is considered inconsistent. For each axiom module, calculate the probability of conforming to the axiom: if the probability is higher than the corresponding threshold, the triple is consistent with that axiom; otherwise there is inconsistency on that axiom.
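Step 4's decision rule can be sketched as simple threshold comparisons. The threshold values and probabilities below are invented for illustration; in practice both kinds of threshold are tuned on a validation set.

```python
# Per-relation score thresholds and per-axiom probability thresholds
# (illustrative values; tuned on a validation set in practice).
relation_threshold = {"r1": 2.0}
axiom_threshold = {"dm": 0.5, "rg": 0.5, "irre": 0.5, "dis": 0.5, "asym": 0.5}

def judge(relation, total_score, axiom_probs):
    """Return (triple is consistent overall, list of violated axioms).
    A triple is consistent when its total score is below the relation
    threshold; it violates an axiom when that axiom's probability falls
    below the axiom threshold."""
    consistent = total_score < relation_threshold[relation]
    violated = [a for a, p in axiom_probs.items() if p < axiom_threshold[a]]
    return consistent, violated

ok, violated = judge("r1", 1.2, {"dm": 0.9, "rg": 0.8, "irre": 0.3,
                                 "dis": 0.7, "asym": 0.6})
```

Here the triple passes the overall check but is flagged as violating the irreflexive axiom, illustrating the fine-grained per-axiom detection the method provides.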


Abstract

The present invention relates to a neural network-based knowledge graph inconsistency reasoning method, comprising the following steps: performing representation learning on a triple using a knowledge representation learning algorithm to obtain entity representations and a relation representation and to calculate a representation score; then using the entity representations and the relation representation as the input of a neural network; modeling an axiom by means of the neural network using the triple, so as to learn the parameters of the neural network used to represent the corresponding axiom, thereby obtaining an axiom model; obtaining an axiom prediction value of the triple using the axiom model; and determining the inconsistency of the triple and of the corresponding axiom on the basis of the representation score and the axiom prediction value of the triple. According to the method, ontology information does not need to be provided; an inconsistency axiom is learned using a neural network, and whether there is an inconsistency in a triple, and whether there is an inconsistency on the given axiom, are determined by means of a knowledge representation learning algorithm and the neural network.
PCT/CN2021/116777 2020-09-16 2021-09-06 Neural network-based knowledge graph inconsistency reasoning method WO2022057671A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010973433.0A CN112100403A (zh) 2020-09-16 2020-09-16 Neural network-based knowledge graph inconsistency reasoning method
CN202010973433.0 2020-09-16

Publications (1)

Publication Number Publication Date
WO2022057671A1 true WO2022057671A1 (fr) 2022-03-24

Family

ID=73759649

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116777 WO2022057671A1 (fr) 2020-09-16 2021-09-06 Neural network-based knowledge graph inconsistency reasoning method

Country Status (2)

Country Link
CN (1) CN112100403A (fr)
WO (1) WO2022057671A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114741460A (zh) * 2022-06-10 2022-07-12 山东大学 Knowledge graph data expansion method and system based on inter-rule association
CN116340524A (zh) * 2022-11-11 2023-06-27 华东师范大学 Few-shot temporal knowledge graph completion method based on a relation-adaptive network
CN117591657A (zh) * 2023-12-22 2024-02-23 宿迁乐享知途网络科技有限公司 AI-based intelligent dialogue management system and method

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112100403A (zh) * 2020-09-16 2020-12-18 浙江大学 Neural network-based knowledge graph inconsistency reasoning method
CN112633927B (zh) * 2020-12-23 2021-11-19 浙江大学 Combined-commodity mining method based on knowledge graph rule embedding
CN113449118B (zh) * 2021-06-29 2022-09-20 华南理工大学 Standard document conflict detection method and system based on a standards knowledge graph
CN114357192B (zh) * 2021-12-31 2024-04-19 海南大学 DIKW-based content integrity modeling and judgment method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200065668A1 (en) * 2018-08-27 2020-02-27 NEC Laboratories Europe GmbH Method and system for learning sequence encoders for temporal knowledge graph completion
CN111368096A (zh) * 2020-03-09 2020-07-03 中国平安人寿保险股份有限公司 Knowledge graph-based information analysis method, apparatus, device and storage medium
CN111444305A (zh) * 2020-03-19 2020-07-24 浙江大学 Multi-triple joint extraction method based on knowledge graph embedding
CN111582509A (zh) * 2020-05-07 2020-08-25 南京邮电大学 Collaborative recommendation method based on knowledge graph representation learning and neural networks
CN112100403A (zh) * 2020-09-16 2020-12-18 浙江大学 Neural network-based knowledge graph inconsistency reasoning method


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114741460A (zh) * 2022-06-10 2022-07-12 山东大学 Knowledge graph data expansion method and system based on inter-rule association
CN116340524A (zh) * 2022-11-11 2023-06-27 华东师范大学 Few-shot temporal knowledge graph completion method based on a relation-adaptive network
CN116340524B (zh) * 2022-11-11 2024-03-08 华东师范大学 Few-shot temporal knowledge graph completion method based on a relation-adaptive network
CN117591657A (zh) * 2023-12-22 2024-02-23 宿迁乐享知途网络科技有限公司 AI-based intelligent dialogue management system and method
CN117591657B (zh) * 2023-12-22 2024-05-07 宿迁乐享知途网络科技有限公司 AI-based intelligent dialogue management system and method

Also Published As

Publication number Publication date
CN112100403A (zh) 2020-12-18

Similar Documents

Publication Publication Date Title
WO2022057671A1 (fr) Neural network-based knowledge graph inconsistency reasoning method
CN112232416B (zh) Semi-supervised learning method based on pseudo-label weighting
CN111753101B (zh) Knowledge graph representation learning method fusing entity descriptions and types
CN111309912B (zh) Text classification method, apparatus, computer device and storage medium
CN108399428B (zh) Triplet loss function design method based on the trace-ratio criterion
CN108197290A (zh) Knowledge graph representation learning method fusing entity and relation descriptions
CN115511118A (zh) AI-based fault decision support method and system for heating systems
CN111914550B (zh) Knowledge graph updating method and system for restricted domains
TWI717826B (zh) Method and apparatus for extracting backbone words through reinforcement learning
CN114022904B (zh) Two-stage noise-robust pedestrian re-identification method
CN114998691B (zh) Semi-supervised ship classification model training method and apparatus
CN113032238A (zh) Real-time root cause analysis method based on an application knowledge graph
CN108596204B (zh) Semi-supervised modulation classification model method based on an improved SCDAE
CN116668083A (zh) Network traffic anomaly detection method and system
CN112348108A (zh) Sample annotation method based on a crowdsourcing model
CN117473102A (zh) BIM knowledge graph construction method and system based on label-confusion learning
CN117196033A (zh) Wireless communication network knowledge graph representation learning method based on heterogeneous graph neural networks
CN112579777A (zh) Semi-supervised classification method for unlabeled text
CN111694966A (zh) Multi-level knowledge graph construction method and system for the chemical industry
Cano et al. A score based ranking of the edges for the PC algorithm
WO2023273171A1 (fr) Image processing method and apparatus, device and storage medium
CN113283243B (zh) Method for joint extraction of entities and relations
Vagin et al. Inductive inference and argumentation methods in modern intelligent decision support systems
CN115620046A (zh) Multi-objective neural architecture search method based on a semi-supervised performance predictor
CN115292296 (zh) Method for improving crowdsourced annotation data quality based on federated learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21868496

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21868496

Country of ref document: EP

Kind code of ref document: A1