CN112836007B - Relational meta-learning method based on a contextualized attention network

Relational meta-learning method based on a contextualized attention network

Info

Publication number
CN112836007B
CN112836007B
Authority
CN
China
Prior art keywords: entity, embedding, task, relation, relationship
Prior art date
Legal status: Active
Application number
CN202110094919.1A
Other languages
Chinese (zh)
Other versions
CN112836007A (en)
Inventor
史树敏
周月阳
黄河燕
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202110094919.1A
Publication of CN112836007A
Application granted
Publication of CN112836007B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval of unstructured textual data
    • G06F16/31 - Indexing; Data structures therefor; Storage structures
    • G06F16/36 - Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 - Ontology


Abstract

The invention discloses a relational meta-learning method based on a contextualized attention network, belonging to the technical field of knowledge graph completion and meta-learning framework application. Based on the observation that entities and relations in a knowledge graph carry different meanings in different contexts, the method obtains contextualized embeddings of entity pairs through a Transformer encoder, obtains attention weights for the different entity pairs through an attention network, derives the relation meta embedding by weighted summation, and finally feeds the relation meta into a meta-learning framework for training. The method reduces the dependence of knowledge graph completion on large-scale data sets, accounts for the context and the unequal contributions of different entity pairs, and improves the performance of few-shot knowledge graph completion.

Description

Relational meta-learning method based on a contextualized attention network
Technical Field
The invention relates to a few-shot knowledge graph completion method, in particular to a relational meta-learning method based on a contextualized attention network, and belongs to the technical field of knowledge graph completion and meta-learning framework application.
Background
In recent years, knowledge graphs have been applied to question answering, information extraction, and other fields. Although a knowledge graph can effectively represent structured information, problems such as data quality and data sparsity arise in applications. Knowledge graph completion techniques are used to solve these problems; the goal is to enrich an incomplete graph by assessing the likelihood of missing triples.
One common approach uses vector embeddings to learn low-dimensional representations of entities and relations. These typical embedding-based knowledge graph completion algorithms can be divided into fact-based models and additional-information-based models. Fact-based models, which learn embeddings using only the facts extracted from triples, can be further divided into translation distance models, which treat a relation as a translation measured by the distance between two entities, and semantic matching models, which match the underlying semantics in the representations of entities and relations. Additional-information-based models incorporate information beyond the facts, such as entity types, relation paths, and text descriptions.
The above approaches require a large number of entity pairs for each relation. However, in practical datasets the frequency distribution of relations often has a long tail, i.e., most relations have only a small number of entity pairs. How to handle relations with a limited (small) number of entity pairs is an important and challenging technical problem. Such a few-shot scenario makes the above models infeasible, because they assume that every relation has enough training instances.
In recent years, how to use a meta-learning framework to address the few-shot scenario of knowledge graph completion has become a popular research direction. For example, "Meta Relational Learning for Few-Shot Link Prediction in Knowledge Graphs" (Proceedings of EMNLP 2019, pages 4216-4225) proposes MetaR, a model for few-shot knowledge graph completion, which concatenates the embeddings of the support-set entity pairs, passes them through an L-layer fully connected neural network, obtains the relation meta by averaging, and feeds it into an embedding learner for training. However, when deriving the relation meta, MetaR considers neither the contexts of different entity pairs nor their unequal contributions, so the learned relation meta does not sufficiently capture the relation-specific information in the entity pairs, and the results are poor.
Disclosure of Invention
Aiming at the defect that the prior art cannot sufficiently extract the information in the input entity pairs, which leads to poor results, the invention provides a relational meta-learning method based on a contextualized attention network, in order to better use a meta-learning framework to address the few-shot scenario of knowledge graph completion.
The invention relates to the following definitions:
Definition 1: knowledge graph, which refers to a multi-relational graph composed of entities (nodes) and relations (edges of different types).
Definition 2: meta-learning, which refers to a method of training a model on a large number of tasks so that it can learn a new task faster from a small amount of data.
Definition 3: support set, which refers to the training set corresponding to each task in meta-learning.
Definition 4: query set, which refers to the validation set corresponding to each task in meta-learning.
Definition 5: contextualization, which refers to the property that an entity, relation, or word may have different meanings in different contexts.
Definition 6: attention, which refers to the phenomenon whereby a human, in order to make reasonable use of limited visual information processing resources, selects a specific part of the visual field and focuses on it.
Artificial intelligence exploits this phenomenon to give neural networks the ability to select specific inputs. In this method, attention means: if an entity pair in the support set is more relevant to the relation in the query set, that entity pair is given a higher weight.
Definition 7: relation meta, which refers to the relation between the head entity and the tail entity in the support set and the query set.
Definition 8: Hits@n, a model performance evaluation metric common in the knowledge graph completion field; the larger the Hits@n value, the better the model performance.
Definition 9: MRR, a model performance evaluation metric common in the knowledge graph completion field; the larger the MRR value, the better the model performance.
A relational meta-learning method based on a contextualized attention network comprises the following steps:
Step 1: pre-train on the few-shot relation dataset with the TransE model to obtain entity embeddings.
Entity embedding means representing the entities in the knowledge graph by low-dimensional vectors.
The few-shot relation dataset is constructed by selecting qualifying relations from NELL and Wikidata; NELL is a system that continuously collects structured knowledge by reading websites, and Wikidata is a project that structures the information in Wikipedia.
The TransE model is a classical model in the knowledge graph completion field.
Step 2: and calculating contextual embedding of entity pairs in the support set through the entity embedding obtained in the step 1.
Specifically, step 2 comprises the following steps:
Step 2.1: given an entity pair $(h_i, t_i)$ of the task relation r, with $(h_i, t_i) \in S_r$, where $S_r$ denotes the support set of the task relation r. Each entity pair, together with its task relation r, forms a sequence $X = (x_1, x_2, x_3)$, in which the first element is the head entity h, the middle element is the task relation r, and the last element is the tail entity t;
Step 2.2: for each element $x_j$ in X, compute its input representation $x_j^0$, as follows:
$$x_j^0 = x_j + p_j \tag{1}$$
where $x_j \in \mathbb{R}^d$ is the element embedding, d is the dimension of the entity embeddings, and $p_j$ is one of three position embeddings (the sequence has length 3); the element embeddings of $h_i$ and $t_i$ are the entity embeddings obtained in step 1;
Step 2.3: feed the $x_j^0$ into an L-layer Transformer encoder to encode the sequence X, as follows:
$$h_j^l = \mathrm{Transformer}(h_j^{l-1}), \quad l \in [1, L] \tag{2}$$
where $h_j^l$ is the hidden state of $x_j$ after l layers, with $h_j^0 = x_j^0$.
The Transformer encoder uses a multi-head self-attention mechanism, and the element $x_2$ representing the task relation r is masked out; the final hidden state $h_2^L$ serves as the contextualized embedding of the entity pair $(h_i, t_i)$ and represents the entity-pair-specific relation meta $R_{(h_i,t_i)}$.
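A minimal PyTorch sketch of steps 2.1 to 2.3 follows, as an illustration under stated assumptions rather than the exact patented implementation: the position embeddings are a learned table of length 3, the relation slot $x_2$ is masked by substituting a learned mask vector (the patent masks $x_2$ but does not spell out the mechanism), and L = 3 layers with 4 attention heads follow embodiment 2; all module and variable names are illustrative:

```python
import torch
import torch.nn as nn

class ContextualizedPairEncoder(nn.Module):
    def __init__(self, d=100, n_layers=3, n_heads=4):
        super().__init__()
        self.pos = nn.Embedding(3, d)                 # position embeddings, length 3
        self.mask_emb = nn.Parameter(torch.zeros(d))  # assumed mask vector for x_2
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, h_emb, t_emb):
        """h_emb, t_emb: (batch, d) entity embeddings from step 1.
        Returns the contextualized embedding R_(h_i,t_i) of each pair: (batch, d)."""
        b, d = h_emb.shape
        x2 = self.mask_emb.expand(b, d)               # masked task-relation slot
        seq = torch.stack([h_emb, x2, t_emb], dim=1)  # X = (x1, x2, x3)
        seq = seq + self.pos(torch.arange(3))         # formula (1): x_j + p_j
        hidden = self.encoder(seq)                    # formula (2): L encoder layers
        return hidden[:, 1, :]                        # final hidden state of the relation slot

enc = ContextualizedPairEncoder()
h, t = torch.randn(5, 100), torch.randn(5, 100)       # 5 support pairs, as in example 2
print(enc(h, t).shape)                                # torch.Size([5, 100])
```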
And step 3: will support k entity pairs (h) in the set i ,t i ) Specific relationship elements
Figure BDA00029137361200000312
The embedding of the relationship elements is obtained by attention mechanism aggregation.
Specifically, step 3 includes the steps of:
step 3.1: embedding z of computing task relationships r r The formula is as follows:
Figure BDA00029137361200000313
wherein z is r ∈R d Is the embedding of the task relationship r, k is the number of entity pairs supporting the centralized task relationship r,
Figure BDA00029137361200000314
is the tail entity embedding of the ith entity pair,
Figure BDA00029137361200000315
is the head entity embedding of the ith entity pair;
step 3.2: the attention weight is obtained by the attention network, and the formula is as follows:
Figure BDA0002913736120000041
wherein, alpha' i Is the attention weight that supports the ith entity pair in focus;
Figure BDA0002913736120000042
representing a vector transpose; v. of a ∈R d 、W a ∈R d*d 、U a ∈R d*d Are all global attention parameters; z is a radical of r Is the embedding of the task relation r obtained in step 3.1,
Figure BDA0002913736120000047
is an entity pair (h) i ,t i ) A specific relationship element;
step 3.3: normalization is performed using the softmax function, as follows:
Figure BDA0002913736120000043
wherein alpha is i Is a weight, α ', supporting the ith entity pair in the set' i Attention weight obtained in step 3.2, and the softmax function is a normalized exponential function;
step 3.4: calculating the final relation element in the current task relation r
Figure BDA0002913736120000044
The calculation formula is as the formula (6):
Figure BDA0002913736120000045
where k is support centralizationNumber of pairs of entities of task relationship r, α i Is the weight of the ith entity pair in the support set obtained in step 3.3,
Figure BDA0002913736120000046
is an entity pair (h) i ,t i ) A specific relationship element;
so far, from step 1 to step 3, the final relation element in the current task relation r is obtained
Figure BDA0002913736120000048
And completing the relational meta-learning based on the contextualized attention network.
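A minimal sketch of step 3 as formulas (3) to (6) describe it, using additive attention with the global parameters $v_a$, $W_a$, $U_a$; the class and variable names are illustrative:

```python
import torch
import torch.nn as nn

class RelationMetaAggregator(nn.Module):
    def __init__(self, d=100):
        super().__init__()
        self.W_a = nn.Linear(d, d, bias=False)   # W_a in formula (4)
        self.U_a = nn.Linear(d, d, bias=False)   # U_a in formula (4)
        self.v_a = nn.Parameter(torch.randn(d))  # v_a in formula (4)

    def forward(self, h_emb, t_emb, pair_metas):
        """h_emb, t_emb: (k, d) head/tail embeddings of the k support pairs.
        pair_metas: (k, d) entity-pair-specific relation metas from step 2."""
        z_r = (t_emb - h_emb).mean(dim=0)                                      # formula (3)
        scores = torch.tanh(self.W_a(z_r) + self.U_a(pair_metas)) @ self.v_a  # formula (4)
        alpha = torch.softmax(scores, dim=0)                                   # formula (5)
        return alpha @ pair_metas                                              # formula (6)

agg = RelationMetaAggregator()
h, t, metas = torch.randn(5, 100), torch.randn(5, 100), torch.randn(5, 100)
print(agg(h, t, metas).shape)  # torch.Size([100]) -- the final relation meta
```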
Advantageous effects
Compared with the prior art, the method of the invention has the following advantages:
the method reduces the dependence of the knowledge graph completion on a large-scale data set, considers the characteristics of contexts and unequal attributes of different entity pairs, and improves the performance of the knowledge graph completion method with few samples.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Detailed Description
The method of the present invention is further illustrated with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1, the present invention comprises the following steps:
Step A: pre-train entity embeddings;
Specifically, in this embodiment, 100-dimensional entity embeddings are obtained by pre-training on the few-shot relation dataset; otherwise this step is the same as step 1.
Step B: compute the representations of the sequence input;
Specifically, in this embodiment, the same as step 2.1 and step 2.2.
Step C: train the Transformer encoder;
In this embodiment, the same as step 2.3.
Step D: compute the task relation embedding;
In this embodiment, the same as step 3.1.
Step E: attention network training;
Specifically, in this embodiment, the same as step 3.2 and step 3.3.
Step F: generate the relation meta;
In this embodiment, the same as step 3.4.
Example 2
The specific operation steps of the method of the present invention are described in detail by taking the task relation SubPartOf in the NELL dataset and its 5-sample support set of entity pairs (Petersburg, Virgini), (Vacaville, California), (Prague, Czech), (Cavaliers, NBA), (L.A.Lakers, NBA) as an example.
As shown in fig. 1, the relational meta-learning method based on a contextualized attention network comprises the following steps:
Step A1: pre-train entity embeddings;
Specifically, in this embodiment, the few-shot relation dataset NELL is pre-trained with the TransE model to obtain 100-dimensional embeddings of all entities in the dataset; the embeddings of all entities involved in the 5-sample support set are shown in Table 1:
Table 1: entity embeddings in step A1 of example 2
Entity Entity embedding (first 5 dimensions)
Petersburg (-0.077614,0.004932,-0.050307,-0.037010,-0.091474)
Virgini (-0.003813,-0.050697,0.017735,0.014269,-0.025330)
Vacaville (0.007892,-0.070081,-0.074737,0.063790,-0.019323)
California (0.025901,0.065208,-0.028254,0.069672,-0.046585)
Prague (0.037457,0.051717,0.081376,0.006337,0.081004)
Czech (-0.047006,-0.021410,-0.022436,-0.042963,0.081536)
Cavaliers (0.002829,-0.074528,0.017209,0.025479,0.004750)
L.A.Lakers (-0.005583,0.019989,-0.009808,0.009675,0.113000)
NBA (-0.038429,-0.013440,0.023420,-0.015192,0.037002)
Step B1: compute the representations of the input sequence; specifically, in this embodiment, the computation for the entity pair (Petersburg, Virgini) is as follows:
For the head entity Petersburg and the tail entity Virgini, adding the task relation SubPartOf forms the sequence X = (Petersburg, SubPartOf, Virgini); from the entity embeddings obtained in step A1, the input representations of Petersburg and Virgini are obtained by formula (1), as shown in Table 2:
Table 2: input-sequence representations in step B1 of example 2
Entity Input representation (first 5 dimensions)
Petersburg (0.922386,0.004932,-0.050307,-0.037010,-0.091474)
Virgini (-0.003813,-0.050697,1.017735,0.014269,-0.025330)
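The jump from Table 1 to Table 2 is consistent with one-hot position embeddings, where position j contributes 1 to dimension j; the following small check applies formula (1) under that assumption to the first five dimensions printed above:

```python
import numpy as np

petersburg = np.array([-0.077614, 0.004932, -0.050307, -0.037010, -0.091474])
virgini    = np.array([-0.003813, -0.050697, 0.017735, 0.014269, -0.025330])

pos1 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # assumed one-hot embedding of position 1
pos3 = np.array([0.0, 0.0, 1.0, 0.0, 0.0])  # assumed one-hot embedding of position 3

print(petersburg + pos1)  # [ 0.922386  0.004932 -0.050307 -0.03701  -0.091474]
print(virgini + pos3)     # [-0.003813 -0.050697  1.017735  0.014269 -0.02533 ]
```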
Step C1: train the Transformer encoder;
Specifically, for the entity pair (Petersburg, Virgini) in this embodiment, the input representations of Petersburg and Virgini are fed into a 3-layer Transformer encoder (the multi-head self-attention mechanism uses 4 heads) to encode the sequence X, with the task relation SubPartOf masked; the hidden state of SubPartOf after the 3rd layer is the contextualized embedding of the entity pair (Petersburg, Virgini) and represents its entity-pair-specific relation meta. The specific relation metas of the 5 entity pairs in the support set of the task relation SubPartOf are shown in Table 3:
Table 3: entity-pair-specific relation metas in step C1 of example 2
Entity pair Entity-pair-specific relation meta (first 5 dimensions)
(Petersburg,Virgini) (0.001155,-0.045608,-0.031250,-0.005250,0.026172)
(Vacaville,California) (0.003821,0.002937,0.045261,-0.049472,0.015706)
(Prague,Czech) (0.007575,-0.003372,0.050863,-0.019459,0.029509)
(Cavaliers,NBA) (0.009471,-0.014569,-0.022956,-0.008901,-0.026102)
(L.A.Lakers,NBA) (0.001853,-0.009944,-0.056892,0.003789,0.000534)
Step D1: compute the task relation embedding;
Specifically, in this embodiment, the embedding of the task relation SubPartOf is obtained by formula (3), with the result shown in Table 4:
Table 4: embedding of the task relation in step D1 of example 2
Task relation Embedding of the task relation (first 5 dimensions)
SubPartOf (0.009682,0.035680,-0.06203,0.003524,-0.026782)
Step E1: attention network training;
Specifically, in this embodiment, the attention network of formula (4) yields the attention weights of the 5 entity-pair-specific relation metas, which are then normalized with the softmax function; the normalized weights are shown in Table 5:
Table 5: normalized weights in step E1 of example 2
Entity pair Weight
(Petersburg,Virgini) 0.3590885031
(Vacaville,California) 0.2668582673
(Prague,Czech) 0.1855784664
(Cavaliers,NBA) 0.1083937157
(L.A.Lakers,NBA) 0.0800810475
Step F1: generate the relation meta;
Specifically, in this embodiment, using the weights obtained in step E1, the 5 entity-pair-specific relation metas in Table 3 are weighted and summed by formula (6); the final relation meta embedding is shown in Table 6:
Table 6: relation meta embedding in step F1 of example 2
Task relation Relation meta embedding (first 5 dimensions)
SubPartOf (0.0040152,-0.018595,0.0032516,-0.019360,0.016279)
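Table 6 can be reproduced from Tables 3 and 5 by the weighted sum of formula (6); a small check on the first five dimensions:

```python
import numpy as np

# Entity-pair-specific relation metas from Table 3 (first 5 dimensions).
metas = np.array([
    [0.001155, -0.045608, -0.031250, -0.005250,  0.026172],  # (Petersburg, Virgini)
    [0.003821,  0.002937,  0.045261, -0.049472,  0.015706],  # (Vacaville, California)
    [0.007575, -0.003372,  0.050863, -0.019459,  0.029509],  # (Prague, Czech)
    [0.009471, -0.014569, -0.022956, -0.008901, -0.026102],  # (Cavaliers, NBA)
    [0.001853, -0.009944, -0.056892,  0.003789,  0.000534],  # (L.A.Lakers, NBA)
])
# Normalized attention weights from Table 5.
alpha = np.array([0.3590885031, 0.2668582673, 0.1855784664,
                  0.1083937157, 0.0800810475])

print(alpha @ metas)  # ~[0.004015 -0.018595 0.003252 -0.019360 0.016279], matching Table 6
```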
Example 3
To further verify the effectiveness of the method, this embodiment uses the datasets NELL-One and Wiki-One, constructed by selecting qualifying relations from NELL and Wikidata. The NELL-One dataset contains 68545 entities, 358 relations, 181109 triples, and 67 tasks; the Wiki-One dataset contains 4838244 entities, 822 relations, 5859240 triples, and 183 tasks. The relation metas obtained by the relational meta-learning method based on the contextualized attention network are embedded into MetaR for the experiments, and the experimental results are compared with those of MetaR without the contextualized attention network; the comparison is shown in Tables 7 and 8.
Table 7: comparative experimental results on the NELL-One dataset
Model MRR Hits@10 Hits@5 Hits@1
MetaR .209 .355 .280 .141
The method of the invention .237 .389 .311 .165
Table 8: comparative experimental results on the Wiki-One dataset
Model MRR Hits@10 Hits@5 Hits@1
MetaR .323 .418 .385 .270
The method of the invention .335 .437 .391 .286
The relation meta learning method used in MetaR is:
$$x_i^0 = h_i \oplus t_i, \qquad x_i^l = \sigma(W^l x_i^{l-1} + b^l), \quad l \in [1, L] \tag{7}$$
$$R_{T_r} = \frac{1}{K} \sum_{i=1}^{K} R_{(h_i,t_i)} \tag{8}$$
In formula (7), i indexes the i-th entity pair in the support set of the task relation r; $h_i \oplus t_i$ denotes the concatenation of the head- and tail-entity embeddings, which yields the input representation $x_i^0$; passing it through the L-layer fully connected neural network gives the entity-pair-specific relation meta $R_{(h_i,t_i)} = x_i^L$, where σ denotes the LeakyReLU activation function and $W^l$ and $b^l$ are the weight and bias of the l-th layer. In formula (8), K is the number of entity pairs in the support set of the task relation r, and $R_{T_r}$ is the final relation meta of the task relation r.
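For comparison, a minimal sketch of this MetaR baseline learner, i.e. concatenation followed by an L-layer fully connected network with LeakyReLU and a plain average over the support set; the layer sizes and names are illustrative:

```python
import torch
import torch.nn as nn

class MetaRRelationMetaLearner(nn.Module):
    def __init__(self, d=100, n_layers=2):
        super().__init__()
        layers, width = [], 2 * d                # input is the spliced pair h_i (+) t_i
        for _ in range(n_layers - 1):
            layers += [nn.Linear(width, width), nn.LeakyReLU()]
        layers += [nn.Linear(width, d)]          # map down to the relation-meta size
        self.net = nn.Sequential(*layers)

    def forward(self, h_emb, t_emb):
        """h_emb, t_emb: (K, d) support-set entity embeddings."""
        x0 = torch.cat([h_emb, t_emb], dim=-1)   # formula (7): concatenate head and tail
        pair_metas = self.net(x0)                # R_(h_i,t_i), one per support pair
        return pair_metas.mean(dim=0)            # formula (8): plain (unweighted) average

learner = MetaRRelationMetaLearner()
h, t = torch.randn(5, 100), torch.randn(5, 100)
print(learner(h, t).shape)  # torch.Size([100])
```

The contrast with the attention aggregator above is exactly the weighting: MetaR averages all support pairs equally, while the present method weights them by formulas (4) and (5).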
As can be seen from Tables 7 and 8, the method of the invention achieves better results on knowledge graph completion, improving over the MetaR model on both Hits@n and MRR, where Hits@n is defined in definition 8 and MRR in definition 9. With the improved relation meta learning method, the experimental results improve by about 3 percentage points on the NELL-One dataset and by about 1 percentage point on the Wiki-One dataset, which demonstrates the effectiveness of the method.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are merely illustrative of the principles of the present invention, but that various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined by the appended claims and their equivalents.

Claims (1)

1. A relational meta-learning method based on a contextualized attention network, characterized by comprising the following steps:
Step 1: pre-train on the few-shot relation dataset with the TransE model to obtain entity embeddings;
the entity embedding means representing the entities in the knowledge graph by low-dimensional vectors;
the few-shot relation dataset is constructed by selecting qualifying relations from NELL and Wikidata, where NELL is a system that continuously collects structured knowledge by reading websites and Wikidata is a project that structures the information in Wikipedia;
the TransE model is a classical model in the knowledge graph completion field;
Step 2: compute contextualized embeddings of the entity pairs in the support set from the entity embeddings obtained in step 1;
Step 2.1: given an entity pair $(h_i, t_i)$ of the task relation r, with $(h_i, t_i) \in S_r$, where $S_r$ denotes the support set of the task relation r and i indexes the i-th entity pair of the task relation r; each entity pair, together with its task relation r, forms a sequence $X = (x_1, x_2, x_3)$, in which the first element is the head entity h, the middle element is the task relation r, and the last element is the tail entity t;
Step 2.2: for each element $x_j$ in X, where j indexes the j-th element of X, compute its input representation $x_j^0$, as follows:
$$x_j^0 = x_j + p_j \tag{1}$$
where $x_j \in \mathbb{R}^d$ is the element embedding, d is the dimension of the entity embeddings, and $p_j$ is one of three position embeddings (the sequence has length 3); the element embeddings of the head and tail positions are the entity embeddings obtained in step 1, and the element embedding of the middle position is the relation embedding of the task relation r;
Step 2.3: feed the $x_j^0$ into an L-layer Transformer encoder to encode the sequence X, as follows:
$$h_j^l = \mathrm{Transformer}(h_j^{l-1}), \quad l \in [1, L] \tag{2}$$
where $h_j^l$ is the hidden state of $x_j$ after l layers, with $h_j^0 = x_j^0$;
the Transformer encoder uses a multi-head self-attention mechanism, and the element $x_2$ representing the task relation r is masked out; the final hidden state $h_2^L$ serves as the contextualized embedding of the entity pair $(h_i, t_i)$ and represents the entity-pair-specific relation meta $R_{(h_i,t_i)}$;
And step 3: will support k entity pairs (h) in the set i ,t i ) Specific relation element
Figure FDA00039538507200000113
Obtaining a relation element by means of attention mechanism polymerization;
step 3.1: embedding z of computing task relationships r r The formula is as follows:
Figure FDA0003953850720000021
wherein z is r ∈R d Is the embedding of the task relationship r, k is the number of entity pairs supporting the centralized task relationship r,
Figure FDA0003953850720000022
is the tail entity embedding of the ith entity pair,
Figure FDA0003953850720000023
is the head entity embedding of the ith entity pair;
step 3.2: the attention weight is obtained by the attention network, and the formula is as follows:
Figure FDA0003953850720000024
wherein alpha is i ' is the attention weight to support the ith entity pair in focus;
Figure FDA0003953850720000025
representing a vector transpose; v. of a ∈R d 、W a ∈R d*d 、U a ∈R d*d Are all global attention parameters; z is a radical of r Is the embedding of the task relation r obtained in step 3.1,
Figure FDA0003953850720000026
is an entity pair (h) i ,t i ) A specific relationship element;
step 3.3: normalization was performed using the softmax function, as follows:
Figure FDA0003953850720000027
wherein alpha is i Is the weight, α, of the ith entity pair in the support set i ' is the attention weight obtained in step 3.2 and the softmax function is a normalized exponential function;
step 3.4: calculating the final relation element in the current task relation r
Figure FDA0003953850720000028
The calculation formula is as the formula (6):
Figure FDA0003953850720000029
where k is the number of entity pairs supporting the centralized task relationship r, α i Is the weight of the ith entity pair in the support set obtained in step 3.3,
Figure FDA00039538507200000210
is an entity pair (h) i ,t i ) A specific relationship element.
CN202110094919.1A 2021-01-25 2021-01-25 Relational meta-learning method based on a contextualized attention network Active CN112836007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110094919.1A CN112836007B (en) 2021-01-25 2021-01-25 Relational meta-learning method based on a contextualized attention network

Publications (2)

Publication Number Publication Date
CN112836007A (en) 2021-05-25
CN112836007B (en) 2023-01-17

Family

ID=75931403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110094919.1A Active CN112836007B (en) 2021-01-25 2021-01-25 Relational meta-learning method based on a contextualized attention network

Country Status (1)

Country Link
CN (1) CN112836007B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115051843A * 2022-06-06 2022-09-13 North China Electric Power University KGE-based blockchain threat intelligence knowledge graph reasoning method
CN115712734B * 2022-11-21 2023-10-03 Zhejiang Lab Sparse knowledge graph embedding method and device based on meta-learning


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11138392B2 (en) * 2018-07-26 2021-10-05 Google Llc Machine translation using neural network models

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829057A * 2019-01-11 2019-05-31 Sun Yat-sen University Knowledge graph entity semantic space embedding method based on graph second-order similarity
CN111552817A * 2020-04-14 2020-08-18 State Grid East Inner Mongolia Electric Power Co., Ltd. Knowledge graph completion method for electric power science and technology achievements
CN111581395A * 2020-05-06 2020-08-25 Xi'an Jiaotong University Model fusion triple representation learning system and method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Translating embeddings for modeling multi-relational data; Bordes A. et al.; Proc. of the 26th Int'l Conf. on Neural Information Processing Systems (NIPS); 2013-12-31; 2787-2795 *
Entity relation extraction from electronic medical records based on a multi-channel self-attention mechanism; Ning Shangming et al.; Chinese Journal of Computers; 2020-05-31; vol. 43, no. 05; 916-929 *

Also Published As

Publication number Publication date
CN112836007A (en) 2021-05-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant