CN110909172B - Knowledge representation learning method based on entity distance - Google Patents


Info

Publication number
CN110909172B
CN110909172B (application CN201911004013.5A)
Authority
CN
China
Prior art keywords
entity
knowledge
relation
entities
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911004013.5A
Other languages
Chinese (zh)
Other versions
CN110909172A (en)
Inventor
张毅
曹万华
王振杰
饶子昀
刘俊涛
王军伟
高子文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
709th Research Institute of CSIC
Original Assignee
709th Research Institute of CSIC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 709th Research Institute of CSIC filed Critical 709th Research Institute of CSIC
Priority to CN201911004013.5A priority Critical patent/CN110909172B/en
Publication of CN110909172A publication Critical patent/CN110909172A/en
Application granted
Publication of CN110909172B publication Critical patent/CN110909172B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 — Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 — Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 — Ontology
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 — Computing arrangements using knowledge-based models
    • G06N5/02 — Knowledge representation; Symbolic representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Databases & Information Systems (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a knowledge representation learning method based on entity distance, aiming at solving the problem of low training efficiency caused by the unbalanced distribution of relations and entities in a knowledge graph. The method comprises the following steps: (1) representing the relations and entities in the training samples of the knowledge graph as vectors; (2) for each relation in the training samples, dividing all entities into two sets according to whether they appear in a triple formed with that relation; (3) selecting entities from the two sets divided in step (2) according to a preset probability to replace entities and generate negative examples; (4) defining the objective function for training; (5) substituting the entity-relation triples into the model for training and solving for the representation vectors of the entities and relations. The proposed method can effectively measure the semantic relevance of entities and relations in a knowledge graph and improve the performance of knowledge acquisition, fusion and reasoning.

Description

Knowledge representation learning method based on entity distance
Technical Field
The invention belongs to the technical field of artificial intelligence, and in particular relates to a dynamically vectorized knowledge representation learning method based on entity distance.
Background
People typically organize knowledge in a knowledge base in the form of a network, where each node represents an entity (a person, place, organization, concept, etc.) and each connecting edge represents a relationship between entities. Most knowledge can therefore be represented by triples (entity 1, relation, entity 2), each corresponding to a connecting edge and the two entities it connects in the knowledge-base network. Knowledge representation learning aims to dynamically vectorize the entities and relations of these triples in a low-dimensional real vector space, so as to effectively measure the semantic relevance of entities and relations in a knowledge graph and improve the performance of knowledge acquisition, fusion and reasoning. In recent years many knowledge representation models have emerged, such as the classic TransE and the later TransH, ComplEx, DistMult, and ANALOGY. Although these models have shown advantages and innovations in some respects, they usually rely only on uniform random sampling (unif) or Bernoulli random sampling (bern) of head and tail entities when constructing negative examples, without considering the distance between the substituted entities on the plane of the specific relation. For example, when constructing a negative example for the triple (James Cameron, gender, male), a model may generate the negative fact (James Cameron, gender, female); but if many negative examples such as (James Cameron, gender, Amada) or (James Cameron, gender, vegetables) are selected, the training effect and learning performance of the knowledge representation model suffer.
Disclosure of Invention
The invention aims to provide a knowledge representation learning method based on entity distance that is more targeted when constructing negative examples, so that semantic reasoning can be realized in vector space through distance calculation.
In order to achieve the above object, the present invention provides a knowledge representation learning method based on entity distance, including:
(1) representing the relations and entities in the training samples of the knowledge graph as vectors;
(2) for each relation in the training samples, dividing all entities into two sets according to whether they appear in a triple formed with that relation;
(3) selecting entities from the two sets divided in step (2) according to a preset probability to replace entities and generate negative examples;
(4) defining the objective function for training;
(5) substituting the entity-relation triples into the model for training and solving for the representation vectors of the entities and relations.
In one embodiment of the present invention, in step (1), the relations and entities in the training sample data of the knowledge graph are represented as vectors: the entity set is E = {e_1, e_2, …, e_n}, where each entity e_i is an m-dimensional vector; likewise, the relation set is R = {r_1, r_2, …, r_n}, where each relation r_i is also an m-dimensional vector, and n denotes the number of training samples.
In one embodiment of the invention, each entity e_i in the knowledge-graph entity set E = {e_1, e_2, …, e_n} is randomly initialized as an m-dimensional vector whose modulus (L2 norm) is limited to 1; similarly, each relation r_i in the relation set R = {r_1, r_2, …, r_n} is also randomly initialized as an m-dimensional vector whose modulus is limited to 1, where m is a natural number greater than 0.
In an embodiment of the present invention, in step (2), the entity set is partitioned as follows: for each relation r (r ∈ R), the entity set E is divided into E_r′ and E_r″. The set E_r′ contains the entities that have appeared in a triple formed with the relation r in the knowledge graph, i.e. E_r′ = {e | e ∈ E, ∃ h, t ∈ E such that (h, r, e) ∈ S or (e, r, t) ∈ S}; and E_r″ is the complement of E_r′, i.e. E_r″ = E − E_r′.
In one embodiment of the present invention, the method for generating negative examples by entity replacement in step (3) is: for a correct triple (h, r, t) in the training sample data, the head entity h or the tail entity t is replaced by a random entity to obtain a new head entity h′ or a new tail entity t′, where h′ and t′ also belong to the entity set E.
In an embodiment of the present invention, the method for generating negative examples by entity replacement is specifically: given E_r′ and E_r″, assign a probability ω, where ω is a real number and ω ∈ [0, 1]. Take a random value pr ∈ [0, 1]; when pr ∈ [0, ω], take an entity from the set E_r′ to construct the negative example; otherwise, take an entity from the set E_r″ to construct the negative example.
In one embodiment of the present invention, in step (4), the objective function of training is defined as

L = Σ_{(h,r,t)∈S} [ α · Σ_{(h′,r,t′)∈S′} [γ + f_r(h,t) − f_r(h′,t′)]_+ + β · Σ_{(h″,r,t″)∈S″} [γ + f_r(h,t) − f_r(h″,t″)]_+ ] + η · Σ_{e∈E∪R} ‖e‖₂²

where [x]_+ = max(0, x) and ‖x‖₂² denotes the squared L2 norm of the vector x; S_(h,r,t) denotes the set of triples present in the knowledge graph, S′_(h,r,t) denotes the set of negative triples generated by substituting entities selected from E_r′, and S″_(h,r,t) denotes the set of negative triples generated by substituting entities selected from E_r″; f_r(h, t) denotes the score function value of the correct triple, while f_r(h′, t′) and f_r(h″, t″) denote the score functions of the negative triples constructed with entities drawn from E_r′ and E_r″ respectively; α and β denote the weights of the score terms for negatives selected from S′ and S″; γ is a margin constant used to separate the positive and negative examples; and the tail term η · Σ_{e∈E∪R} ‖e‖₂² is used to prevent overfitting during training.
In an embodiment of the present invention, in step (5), the model is trained using a stochastic gradient descent method, and the representation vectors of the entities and relations are finally obtained.
In one embodiment of the invention, if the TransE knowledge representation model is employed, then

f_r(h, t) = ‖h + r − t‖

and if the TransH knowledge representation model is employed, then

f_r(h, t) = ‖(h − (w_rᵀh)·w_r) + r − (t − (w_rᵀt)·w_r)‖

where w_r is the normal vector of the relation-specific hyperplane, w_rᵀ is the transpose of w_r, and ‖·‖ denotes the L1 or L2 distance of the vector.
In one embodiment of the invention, a negative example is constructed using the Bernoulli random sampling method:
For the triples containing the relation r in the knowledge graph S, the following two quantities are counted:
a) the average number of tail entities per head entity, denoted tail_per_head;
b) the average number of head entities per tail entity, denoted head_per_tail.
The probability ε of replacing the head entity is then

ε = tail_per_head / (tail_per_head + head_per_tail)

Take a random number pr1 ∈ [0, 1]; if pr1 ∈ [0, ε], replace the head entity h in the original triple (h, r, t) to generate a negative example (h′, r, t), ensuring that (h′, r, t) is not in the knowledge graph S; otherwise, replace the tail entity t in the triple to generate a negative example (h, r, t′), ensuring that (h, r, t′) is not in the knowledge graph S.
Compared with the prior art, the invention has the following beneficial effects: in the training process of knowledge representation learning, the method classifies the entities in the training sample data according to their semantic distance under a specific relation, so the constructed negative examples are more targeted and the accuracy of the final vector representations of entities and relations is improved, allowing semantic reasoning to be realized in vector space through distance calculation.
Drawings
FIG. 1 is a flow chart of a knowledge representation learning method based on entity distance in accordance with the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
(1) Initializing entities and relationships:
carrying out vector representation on relationships and entities in training sample data of the knowledge graph, namely an entity set E ═ { E } 1 ,e 2 ,…,e n Wherein each entity e i Are all m-dimensional vectors; there is also a set of relationships R ═ R 1 ,r 2 ,…,r n In which each relation r i Is also a vector of m dimensions, and n represents the number of training sample data.
For example, in an embodiment of the present invention, E ═ E is set for all entities in the knowledge-graph 1 ,e 2 ,…,e n Each entity e in the (b) } is a node b in the network i Carrying out random initialization to form m-dimensional vectors, and limiting the modular length to be 1; similarly for all relationship sets R ═ R 1 ,r 2 ,…,r n Each of the relationships r i Also carries out random initialization to a vector with m dimensions, andthe mode length is limited to 1, wherein m is a natural number greater than 0.
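As an illustrative aid (not part of the patent text), step (1) can be sketched in Python; the function name, the uniform initialization range, and the use of plain lists are assumptions:

```python
import math
import random

def init_embeddings(n_entities, n_relations, m, seed=0):
    """Step (1): represent every entity and relation as a random
    m-dimensional vector whose modulus (L2 norm) is limited to 1."""
    rng = random.Random(seed)

    def unit_vector():
        v = [rng.uniform(-1.0, 1.0) for _ in range(m)]
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]  # normalize so the modulus is 1

    entities = [unit_vector() for _ in range(n_entities)]
    relations = [unit_vector() for _ in range(n_relations)]
    return entities, relations
```

The unit-norm constraint matches the "modulus limited to 1" requirement above; everything else is a sketch.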
(2) Entity grouping:
for training sample data, i.e. knowledge-graph S (h,r,t) (which can be abbreviated as a triple set S), each relation R (R epsilon. R) in the triple set is searched in the knowledge graph, the head and tail entities appearing in all the triples where the relation is located are put into a set S ', and the entities which do not appear are put into a set S'.
Specifically, for each relation r (r ∈ R) scanned in the knowledge graph, the head and tail entities of all triples (h, r, t) containing the relation r are stored into the set E_r′; from this, E_r″ = E − E_r′ can be determined.
For example, for a triple (h, r, t) in the knowledge-graph triple set S, the m-dimensional real vector corresponding to the head entity h (h ∈ E) is denoted h (in bold), the m-dimensional real vector corresponding to the relation r (r ∈ R) is denoted r (in bold), and the m-dimensional real vector corresponding to the tail entity t (t ∈ E) is denoted t (in bold).
For each relation r (r ∈ R), the entity set E is divided into E_r′ and E_r″. The set E_r′ contains the entities that have appeared in a triple formed with the relation r in the knowledge graph, i.e. E_r′ = {e | e ∈ E, ∃ h, t ∈ E such that (h, r, e) ∈ S or (e, r, t) ∈ S}; and E_r″ is the complement of E_r′, i.e. E_r″ = E − E_r′.
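The partition into E_r′ and E_r″ described above can be sketched as follows (a minimal illustration, not the patent's implementation; names are assumed):

```python
def partition_entities(entities, triples, r):
    """Split the entity set E into E_r' (entities appearing as head or
    tail in some triple with relation r) and its complement E_r''."""
    seen = set()
    for h, rel, t in triples:
        if rel == r:
            seen.add(h)
            seen.add(t)
    e_r_prime = set(entities) & seen
    e_r_double_prime = set(entities) - e_r_prime
    return e_r_prime, e_r_double_prime
```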
(3) Constructing a positive and negative sample training pair:
for the correct triplet (h, r, t) in the training sample data, the head entity h or the tail entity t is replaced by a random entity to be a new head entity h 'or a new tail entity t', wherein h 'and t' also belong to the entity set E.
Given E_r′ and E_r″, assign a probability ω (ω is a real number and ω ∈ [0, 1]; generally ω = 0.1). Take a random value pr ∈ [0, 1]; when pr ∈ [0, ω], take an entity from the set E_r′ to construct the negative example; otherwise, take an entity from the set E_r″ to construct the negative example.
After the entities are selected, head or tail entities in the original triples can be replaced by a replacement mode of uniform random sampling (unif) or Bernoulli random sampling (bern) to construct negative examples, and the constructed negative example triples are ensured not to exist in the knowledge graph.
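The ω-based choice between the two sets can be sketched as follows (illustrative only; sorting the candidate pool merely makes the draw reproducible):

```python
import random

def sample_replacement_entity(e_r_prime, e_r_double_prime, omega=0.1,
                              rng=random):
    """With probability omega, draw the replacement entity from E_r'
    (entities already seen with relation r); otherwise draw from E_r''.
    omega = 0.1 follows the value suggested in the text."""
    pool = e_r_prime if rng.random() <= omega else e_r_double_prime
    return rng.choice(sorted(pool))
```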
Here, the embodiment of the present invention takes bernoulli random sampling (bern) as an example to illustrate an alternative manner:
for the triplet containing the relation r in the knowledge-graph S, the following two quantities are counted:
a) the average tail entity number corresponding to each head entity is represented as tail _ per _ head;
b) the average number of head entities corresponding to each tail entity is denoted as head _ per _ tail.
Thus, the probability ε of replacing the head entity can be obtained as

ε = tail_per_head / (tail_per_head + head_per_tail)
Take a random number pr1 ∈ [0, 1]; if pr1 ∈ [0, ε], replace the head entity h in the original triple (h, r, t) to generate a negative example (h′, r, t), ensuring that (h′, r, t) is not in the knowledge graph S; otherwise, replace the tail entity t in the triple to generate a negative example (h, r, t′), ensuring that (h, r, t′) is not in the knowledge graph S.
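Counting tail_per_head and head_per_tail, and deriving ε from them, can be sketched as follows (an illustrative helper, not the patent's code):

```python
from collections import defaultdict

def bernoulli_head_probability(triples, r):
    """Return epsilon = tail_per_head / (tail_per_head + head_per_tail)
    for relation r, i.e. the probability of replacing the head entity."""
    tails_of = defaultdict(set)  # head -> distinct tails under r
    heads_of = defaultdict(set)  # tail -> distinct heads under r
    for h, rel, t in triples:
        if rel == r:
            tails_of[h].add(t)
            heads_of[t].add(h)
    tail_per_head = sum(map(len, tails_of.values())) / len(tails_of)
    head_per_tail = sum(map(len, heads_of.values())) / len(heads_of)
    return tail_per_head / (tail_per_head + head_per_tail)
```

For a one-to-many relation, tail_per_head dominates and ε rises, so the head is replaced more often, which keeps the corrupted triple more likely to be a true negative.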
(4) The optimization loss function is defined as:

L = Σ_{(h,r,t)∈S} [ α · Σ_{(h′,r,t′)∈S′} [γ + f_r(h,t) − f_r(h′,t′)]_+ + β · Σ_{(h″,r,t″)∈S″} [γ + f_r(h,t) − f_r(h″,t″)]_+ ] + η · Σ_{e∈E∪R} ‖e‖₂²

In the formula, [x]_+ = max(0, x) and ‖x‖₂² denotes the squared L2 norm of the vector x; S_(h,r,t) denotes the set of triples present in the knowledge graph, S′_(h,r,t) denotes the set of negative triples generated by substituting entities selected from E_r′, and S″_(h,r,t) denotes the set of negative triples generated by substituting entities selected from E_r″;
f_r(h, t) denotes the score function value of the correct triple. If the TransE knowledge representation model is used,

f_r(h, t) = ‖h + r − t‖

and if the TransH knowledge representation model is used,

f_r(h, t) = ‖(h − (w_rᵀh)·w_r) + r − (t − (w_rᵀt)·w_r)‖

f_r(h′, t′) and f_r(h″, t″) denote the score functions of the negative triples constructed with entities drawn from the sets E_r′ and E_r″ respectively; w_r is the normal vector of the relation-specific hyperplane, w_rᵀ is the transpose of w_r, and ‖·‖ denotes the L1 or L2 distance of the vector.
α and β (generally α = 0.1 and β = 0.9) denote the weights of the score terms for negatives selected from the sets S′ and S″ respectively;
γ is a margin constant used to separate the positive and negative examples, generally taking the value 1, 2 or 4;
the tail term η · Σ_{e∈E∪R} ‖e‖₂² (with η generally 0.001) is used to prevent overfitting during training.
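The score function and one hinge term of the loss above can be written out as follows (TransE scoring with the L2 distance; a sketch under the definitions above, not the patent's code):

```python
import math

def transe_score(h, r, t):
    """f_r(h, t) = ||h + r - t||_2; lower means a more plausible triple."""
    return math.sqrt(sum((hi + ri - ti) ** 2
                         for hi, ri, ti in zip(h, r, t)))

def weighted_hinge(pos_score, neg_score, gamma, weight):
    """One weighted term of the objective:
    weight * [gamma + f_r(h,t) - f_r(h',t')]_+, with [x]_+ = max(0, x)."""
    return weight * max(0.0, gamma + pos_score - neg_score)
```

A term vanishes once the negative triple scores worse than the positive one by at least the margin γ, which is exactly what the [·]_+ operator encodes.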
(5) Model training and solving are carried out to obtain the entity and relation vectors for entity-distance-based knowledge representation learning: all triples in the knowledge graph are trained against the objective function in step (4) using stochastic gradient descent (SGD), finally yielding the representation vectors of all entities and relations.
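A single SGD update with the unit-norm constraint re-applied might look like this (illustrative; the learning rate and the projection-by-renormalization step are assumptions):

```python
import math

def sgd_step(vec, grad, lr=0.01):
    """One stochastic-gradient-descent update, v <- v - lr * grad,
    followed by renormalization so the modulus stays 1."""
    updated = [v - lr * g for v, g in zip(vec, grad)]
    norm = math.sqrt(sum(x * x for x in updated)) or 1.0
    return [x / norm for x in updated]
```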
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A knowledge representation learning method based on entity distance is characterized by comprising the following steps:
(1) performing vector representation on the relation and the entity in the training sample of the knowledge graph;
(2) for each relation in the training samples, dividing all entities into two sets according to whether they appear in a triple formed with that relation; the entity set partition method is: for each relation r, r ∈ R, the entity set E is divided into E_r′ and E_r″, where the set E_r′ contains the entities that have appeared in a triple formed with the relation r in the knowledge graph, i.e. E_r′ = {e | e ∈ E, ∃ h, t ∈ E such that (h, r, e) ∈ S or (e, r, t) ∈ S}; and E_r″ is the complement of E_r′, i.e. E_r″ = E − E_r′; where h is a head entity and t is a tail entity;
(3) selecting entities from the two sets divided in step (2) according to a preset probability to replace entities and generate negative examples; the method for generating negative examples by entity replacement is specifically: given E_r′ and E_r″, assign a probability ω, where ω is a real number and ω ∈ [0, 1]; take a random value pr ∈ [0, 1]; when pr ∈ [0, ω], take an entity from the set E_r′ to construct the negative example; otherwise, take an entity from the set E_r″ to construct the negative example;
(4) defining a trained objective function;
(5) substituting the entity-relation triples into the model for training and solving for the representation vectors of the entities and relations, thereby realizing semantic reasoning in vector space through distance calculation.
2. The method of claim 1, wherein in step (1), the relations and entities in the training sample data of the knowledge graph are represented as vectors: the entity set is E = {e_1, e_2, …, e_n}, where each entity e_i is an m-dimensional vector; likewise, the relation set is R = {r_1, r_2, …, r_n}, where each relation r_i is also an m-dimensional vector, and n denotes the number of training samples.
3. The method of claim 2, wherein each entity e_i in the knowledge-graph entity set E = {e_1, e_2, …, e_n} is randomly initialized as an m-dimensional vector whose modulus is limited to 1; similarly, each relation r_i in the relation set R = {r_1, r_2, …, r_n} is also randomly initialized as an m-dimensional vector whose modulus is limited to 1, where m is a natural number greater than 0.
4. The method for learning knowledge representation based on entity distance according to any one of claims 1 to 3, wherein the method for generating negative examples by entity replacement in step (3) is as follows:
for a correct triple (h, r, t) in the training sample data, the head entity h or the tail entity t is replaced by a random entity to obtain a new head entity h′ or a new tail entity t′, where h′ and t′ also belong to the entity set E.
5. The method of claim 4, wherein in step (4), the objective function of training is defined as:

L = Σ_{(h,r,t)∈S} [ α · Σ_{(h′,r,t′)∈S′} [γ + f_r(h,t) − f_r(h′,t′)]_+ + β · Σ_{(h″,r,t″)∈S″} [γ + f_r(h,t) − f_r(h″,t″)]_+ ] + η · Σ_{e∈E∪R} ‖e‖₂²

In the formula, [x]_+ = max(0, x) and ‖x‖₂² denotes the squared L2 norm of the vector x; S_(h,r,t) denotes the set of triples present in the knowledge graph, S′_(h,r,t) denotes the set of negative triples generated by substituting entities selected from E_r′, and S″_(h,r,t) denotes the set of negative triples generated by substituting entities selected from E_r″; f_r(h, t) denotes the score function value of the positive triple, while f_r(h′, t′) and f_r(h″, t″) denote the score functions of the negative triples constructed with entities drawn from E_r′ and E_r″ respectively; α and β denote the weights of the score terms for negatives selected from S′ and S″; γ is a margin constant used to separate the positive and negative examples; and the tail term η · Σ_{e∈E∪R} ‖e‖₂² is used to prevent overfitting during training.
6. The method for learning knowledge representation based on entity distance according to any one of claims 1 to 3, wherein in step (5), the model is trained using a stochastic gradient descent (SGD) method, finally obtaining the representation vectors of the entities and relations.
7. The method of claim 5, wherein if the TransE knowledge representation model is employed, then

f_r(h, t) = ‖h + r − t‖

and if the TransH knowledge representation model is employed, then

f_r(h, t) = ‖(h − (w_rᵀh)·w_r) + r − (t − (w_rᵀt)·w_r)‖

where h is the m-dimensional real vector corresponding to the head entity h, r is the m-dimensional real vector corresponding to the relation r, t is the m-dimensional real vector corresponding to the tail entity t, w_r is the normal vector of the relation-specific hyperplane, w_rᵀ is the transpose of w_r, and ‖·‖ denotes the L1 or L2 distance of the vector.
8. The method of claim 1, wherein the negative examples are constructed using the Bernoulli random sampling method:
for the triples containing the relation r in the knowledge graph S, the following two quantities are counted:
a) the average number of tail entities per head entity, denoted tail_per_head;
b) the average number of head entities per tail entity, denoted head_per_tail;
the probability ε of replacing the head entity is then

ε = tail_per_head / (tail_per_head + head_per_tail)

take a random number pr1 ∈ [0, 1]; if pr1 ∈ [0, ε], replace the head entity h in the original triple (h, r, t) to generate a negative example (h′, r, t), ensuring that (h′, r, t) is not in the knowledge graph S; otherwise, replace the tail entity t in the triple to generate a negative example (h, r, t′), ensuring that (h, r, t′) is not in the knowledge graph S.
CN201911004013.5A 2019-10-22 2019-10-22 Knowledge representation learning method based on entity distance Active CN110909172B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911004013.5A CN110909172B (en) 2019-10-22 2019-10-22 Knowledge representation learning method based on entity distance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911004013.5A CN110909172B (en) 2019-10-22 2019-10-22 Knowledge representation learning method based on entity distance

Publications (2)

Publication Number Publication Date
CN110909172A CN110909172A (en) 2020-03-24
CN110909172B true CN110909172B (en) 2022-08-16

Family

ID=69815493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911004013.5A Active CN110909172B (en) 2019-10-22 2019-10-22 Knowledge representation learning method based on entity distance

Country Status (1)

Country Link
CN (1) CN110909172B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111523327B (en) * 2020-04-23 2023-08-22 北京市科学技术情报研究所 Text determination method and system based on voice recognition
CN111383116A (en) * 2020-05-28 2020-07-07 支付宝(杭州)信息技术有限公司 Method and device for determining transaction relevance
CN112711667B (en) * 2021-03-29 2021-07-06 上海旻浦科技有限公司 Knowledge graph complex relation reasoning method based on multidirectional semantics
CN113220833A (en) * 2021-05-07 2021-08-06 支付宝(杭州)信息技术有限公司 Entity association degree identification method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107729497A (en) * 2017-10-20 2018-02-23 Tongji University A word-embedding deep learning method based on knowledge graphs
CN107885760A (en) * 2016-12-21 2018-04-06 Guilin University of Electronic Technology A representation learning method based on multiple semantic knowledge graphs
CN108197290A (en) * 2018-01-19 2018-06-22 Guilin University of Electronic Technology A knowledge graph representation learning method fusing entity and relation descriptions
CN108509483A (en) * 2018-01-31 2018-09-07 Beijing University of Chemical Technology A knowledge-graph-based method for constructing a mechanical fault diagnosis knowledge base
CN109840283A (en) * 2019-03-01 2019-06-04 Northeastern University A locally adaptive knowledge graph optimization method based on transitive relations
CN109933674A (en) * 2019-03-22 2019-06-25 Information Science Academy of CETC A knowledge graph embedding method based on attribute aggregation and its storage medium
CN110275959A (en) * 2019-05-22 2019-09-24 Guangdong University of Technology A fast learning method for large-scale knowledge bases

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9110882B2 (en) * 2010-05-14 2015-08-18 Amazon Technologies, Inc. Extracting structured knowledge from unstructured text

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107885760A (en) * 2016-12-21 2018-04-06 Guilin University of Electronic Technology A representation learning method based on multiple semantic knowledge graphs
CN107729497A (en) * 2017-10-20 2018-02-23 Tongji University A word-embedding deep learning method based on knowledge graphs
CN108197290A (en) * 2018-01-19 2018-06-22 Guilin University of Electronic Technology A knowledge graph representation learning method fusing entity and relation descriptions
CN108509483A (en) * 2018-01-31 2018-09-07 Beijing University of Chemical Technology A knowledge-graph-based method for constructing a mechanical fault diagnosis knowledge base
CN109840283A (en) * 2019-03-01 2019-06-04 Northeastern University A locally adaptive knowledge graph optimization method based on transitive relations
CN109933674A (en) * 2019-03-22 2019-06-25 Information Science Academy of CETC A knowledge graph embedding method based on attribute aggregation and its storage medium
CN110275959A (en) * 2019-05-22 2019-09-24 Guangdong University of Technology A fast learning method for large-scale knowledge bases

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A Novel Negative Sample Generating Method for Knowledge Graph Embedding;Yi Zhang;《Proceedings of the 2019 International Conference on Embedded Wireless Systems and Networks》;20190315;2866–2872 *
Knowledge representation learning with entities, attributes and relations;Yankai Lin;《Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence》;20160709;401-406 *
Relation reasoning algorithm based on representation learning and semantic element awareness; Liu Qiao et al.; Journal of Computer Research and Development (No. 08); 2017-08-15; 57-67 *
Recommendation algorithm integrating knowledge graph representation learning and matrix factorization; Chen Pinghua et al.; Computer Engineering and Design (No. 10); 2018-10-16; 145-150 *
Research on representation learning for knowledge graphs; Li Yongfang; China Masters' Theses Full-text Database, Social Sciences II; 2019-01-15; H127-502 *

Also Published As

Publication number Publication date
CN110909172A (en) 2020-03-24

Similar Documents

Publication Publication Date Title
CN110909172B (en) Knowledge representation learning method based on entity distance
Nickel et al. Learning continuous hierarchies in the lorentz model of hyperbolic geometry
CN106503148B (en) A kind of table entity link method based on multiple knowledge base
CN107608953B (en) Word vector generation method based on indefinite-length context
CN112800770B (en) Entity alignment method based on heteromorphic graph attention network
CN110020711A (en) A kind of big data analysis method using grey wolf optimization algorithm
CN110263236B (en) Social network user multi-label classification method based on dynamic multi-view learning model
CN110175286A (en) It is combined into the Products Show method and system to optimization and matrix decomposition
CN107194818A (en) Label based on pitch point importance propagates community discovery algorithm
CN111611801B (en) Method, device, server and storage medium for identifying text region attribute
CN109710921A (en) Calculation method, device, computer equipment and the storage medium of Words similarity
CN108536844B (en) Text-enhanced network representation learning method
CN109446414A (en) A kind of software information website fast tag recommended method based on neural network classification
CN109117891B (en) Cross-social media account matching method fusing social relations and naming features
CN110390014A (en) A kind of Topics Crawling method, apparatus and storage medium
CN108052683B (en) Knowledge graph representation learning method based on cosine measurement rule
CN115062732A (en) Resource sharing cooperation recommendation method and system based on big data user tag information
Cui et al. Quantum-inspired moth-flame optimizer with enhanced local search strategy for cluster analysis
D'Alessandro et al. Multimodal parameter-efficient few-shot class incremental learning
JP7283554B2 (en) LEARNING DEVICE, LEARNING METHOD, AND PROGRAM
CN112132841A (en) Medical image cutting method and device
CN116383398A (en) Professional field term entity word vector self-correction method, system and device
US20160292300A1 (en) System and method for fast network queries
Looks et al. Streaming hierarchical clustering for concept mining
Rahman et al. Denclust: A density based seed selection approach for k-means

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant