CN112836007A - Relation meta learning method based on a contextualized attention network - Google Patents

Relation meta learning method based on a contextualized attention network

Info

Publication number
CN112836007A
CN112836007A (application CN202110094919.1A)
Authority
CN
China
Prior art keywords
entity
embedding
task
relationship
attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110094919.1A
Other languages
Chinese (zh)
Other versions
CN112836007B (en)
Inventor
史树敏
周月阳
黄河燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202110094919.1A priority Critical patent/CN112836007B/en
Publication of CN112836007A publication Critical patent/CN112836007A/en
Application granted granted Critical
Publication of CN112836007B publication Critical patent/CN112836007B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/31Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367Ontology

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a relation meta learning method based on a contextualized attention network, belonging to the technical field of knowledge graph completion and meta-learning frameworks. Motivated by the observation that entities and relations in a knowledge graph carry different meanings in different contexts, the method first obtains contextualized embeddings of entity pairs through a Transformer encoder, then computes attention weights for the different entity pairs through an attention network and aggregates them by weighted summation into a relation meta, and finally feeds the relation meta into a meta-learning framework for training. The method reduces the dependence of knowledge graph completion on large-scale datasets, accounts for the context and unequal importance of different entity pairs, and improves the performance of few-shot knowledge graph completion.

Description

Relation meta learning method based on a contextualized attention network
Technical Field
The invention relates to a few-shot knowledge graph completion method, and in particular to a relation meta learning method based on a contextualized attention network, belonging to the technical field of knowledge graph completion and meta-learning frameworks.
Background
In recent years, knowledge graphs have been applied in fields such as question answering and information extraction. Although a knowledge graph can effectively represent structured information, problems such as data quality and data sparsity arise in practice. Knowledge graph completion techniques address these problems; their goal is to enrich an incomplete graph by assessing the likelihood of missing triples.
One common approach is to learn low-dimensional representations of entities and relations via vector embedding. These typical embedding-based knowledge graph completion algorithms can be divided into fact-based models and additional-information models. Fact-based models learn embeddings using only the facts extracted from triples; they can be further divided into translational distance models, which treat a relation as a translation measured by the distance between the two entities, and semantic matching models, which match the latent semantics in the representations of entities and relations. Additional-information models incorporate information beyond the facts, such as entity types, relation paths, and textual descriptions.
The above approaches require a large number of entity pairs for each relation. In practical datasets, however, the frequency distribution of relations is typically long-tailed: most relations have only a small number of entity pairs. Handling relations with a limited (small) number of entity pairs is therefore an important and challenging technical problem. Such few-shot scenarios render the above models infeasible, because they assume that every relation has enough training instances.
In recent years, using a meta-learning framework to address the few-shot scenario of knowledge graph completion has become a popular research direction. For example, the paper "Meta relational learning for few-shot link prediction in knowledge graphs" (Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4216-4225) proposes MetaR, a model that handles few-shot knowledge graph completion with a meta-learning framework: it concatenates the embeddings of the entity pairs in the support set, obtains the relation meta by averaging the outputs of an L-layer fully connected neural network, and feeds the relation meta into an embedding learner for training. In deriving the relation meta, however, MetaR considers neither the context of the different entity pairs nor their unequal importance, so the learned relation meta does not sufficiently capture the relation-specific information in the entity pairs, and the results suffer.
Disclosure of Invention
The invention aims to address the drawback of the prior art that information in the input entity pairs is not sufficiently captured, leading to poor results, and to better exploit a meta-learning framework for the few-shot scenario of knowledge graph completion; to this end, the invention creatively provides a relation meta learning method based on a contextualized attention network.
The invention uses the following definitions:
Definition 1: knowledge graph: a multi-relational graph composed of entities (nodes) and relations (edges of different types).
Definition 2: meta-learning: a method that trains a model over a large number of tasks so that it learns faster on a new task with a small amount of data.
Definition 3: support set: the training set corresponding to each task in meta-learning.
Definition 4: query set: the validation set corresponding to each task in meta-learning.
Definition 5: contextualization: the property that an entity, relation, or word may have different meanings in different contexts.
Definition 6: attention: the phenomenon by which humans, to make reasonable use of limited visual information processing resources, select a specific portion of the visual field and focus on it.
Artificial intelligence exploits this phenomenon to give neural networks the ability to select specific inputs. In this method, attention means that an entity pair in the support set receives a higher weight if it is more relevant to the relation in the query set.
Definition 7: relation meta: the relation between the head and tail entities in the support set and the query set.
Definition 8: Hits@n: a model performance metric common in the field of knowledge graph completion; a higher Hits@n indicates better model performance.
Definition 9: MRR (mean reciprocal rank): a model performance metric common in the field of knowledge graph completion; a larger MRR indicates better model performance. A computational sketch of both metrics follows below.
A relation meta learning method based on a contextualized attention network comprises the following steps:
step 1: and pre-training the relation data set with few samples by using a TransE model to obtain entity embedding.
The entity embedding means that entities in the knowledge graph are represented by low-dimensional vectors;
the few-sample relational data set is constructed by selecting the relation meeting the conditions in NELL and Wikida, NELL is a system for continuously collecting structured knowledge by reading websites, and Wikida is a project for structuring information in a Wikipedia;
the TransE model is a classical model in the field of knowledge graph completion.
Step 2: and calculating contextual embedding of entity pairs in the support set through the entity embedding obtained in the step 1.
Specifically, step 2 comprises the steps of:
step 2.1: given one entity pair (h) in the task relationship ri,ti),(hi,ti)∈Sr,SrA support set representing the task relationship r. Each entity pair is added with its task relation r as a sequence X ═ X (X)1,x2,x3) Wherein, the first element is a head entity h, the middle is a task relation r, and the last element is a tail entity t;
step 2.2: for each element X in XjCalculating its input representation
Figure BDA0002913736120000031
The formula is as follows:
Figure BDA0002913736120000032
wherein the content of the first and second substances,
Figure BDA0002913736120000033
is element embedding, d is the dimension of entity embedding, R is the entity set;
Figure BDA0002913736120000034
is a position embedding with the length of 3,
Figure BDA0002913736120000035
and
Figure BDA0002913736120000036
all are entity embeddings obtained by step 1;
step 2.3: will be provided with
Figure BDA0002913736120000037
The sequence X is encoded by a Transformer encoder input to the L layer, and the formula is as follows:
Figure BDA0002913736120000038
wherein the content of the first and second substances,
Figure BDA0002913736120000039
is x after l layersjHidden state of (2);
the Transformer encoder uses a multi-headed self-attention mechanism to represent the element x of the task relationship r2Masked off, final hidden state
Figure BDA00029137361200000310
As an entity pair (h)i,ti) Representing entity pairs (h) by contextualized embeddingi,ti) Specific relationship elements
Figure BDA00029137361200000311
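A minimal PyTorch sketch of steps 2.1-2.3 follows; the layer count, sequence construction, and masking of the relation slot track the description above, while the learned mask vector and module wiring are illustrative assumptions rather than the patent's exact implementation:

```python
import torch
import torch.nn as nn

d, L, n_heads = 100, 3, 4  # dimensions used in the embodiments

class PairEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.pos = nn.Embedding(3, d)             # position embeddings for the length-3 sequence
        self.mask = nn.Parameter(torch.zeros(d))  # learned vector standing in for the masked relation
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=L)

    def forward(self, e_h, e_t):
        """e_h, e_t: (batch, d) pre-trained entity embeddings from step 1."""
        batch = e_h.size(0)
        x2 = self.mask.expand(batch, -1)          # the relation element x2 is masked
        seq = torch.stack([e_h, x2, e_t], dim=1)  # sequence X = (h, r, t), shape (batch, 3, d)
        seq = seq + self.pos(torch.arange(3))     # formula (1): element + position embedding
        h = self.encoder(seq)                     # formula (2): L Transformer layers
        return h[:, 1]                            # final hidden state of x2 = pair-specific relation meta

enc = PairEncoder()
r_meta = enc(torch.randn(5, d), torch.randn(5, d))  # 5 support pairs -> (5, 100)
```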
And step 3: will support k entity pairs (h) in the seti,ti) Specific relationship elements
Figure BDA00029137361200000312
The embedding of the relationship elements is obtained by attention mechanism aggregation.
Specifically, step 3 includes the steps of:
step 3.1: embedding z of computing task relationships rrThe formula is as follows:
Figure BDA00029137361200000313
wherein z isr∈RdIs the embedding of the task relationship r, k is the number of entity pairs supporting the centralized task relationship r,
Figure BDA00029137361200000314
is the tail entity embedding of the ith entity pair,
Figure BDA00029137361200000315
is the head entity embedding of the ith entity pair;
step 3.2: the attention weight is obtained by the attention network, and the formula is as follows:
Figure BDA0002913736120000041
wherein, alpha'iIs the attention weight that supports the ith entity pair in focus;
Figure BDA0002913736120000042
representing a vector transpose; v. ofa∈Rd、Wa∈Rd*d、Ua∈Rd*dAre all global attention parameters; z is a radical ofrIs the embedding of the task relation r obtained in step 3.1,
Figure BDA0002913736120000047
is an entity pair (h)i,ti) A specific relationship element;
step 3.3: normalization was performed using the softmax function, as follows:
Figure BDA0002913736120000043
wherein alpha isiIs a weight, α ', supporting the ith entity pair in the set'iAttention weight obtained in step 3.2, and the softmax function is a normalized exponential function;
step 3.4: calculating the final relation element in the current task relation r
Figure BDA0002913736120000044
The calculation formula is as the formula (6):
Figure BDA0002913736120000045
where k is the number of entity pairs supporting the centralized task relationship r, αiIs the weight of the ith entity pair in the support set obtained in step 3.3,
Figure BDA0002913736120000046
is an entity pair (h)i,ti) A specific relationship element;
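Steps 3.1-3.4 amount to additive attention followed by a weighted sum; the following is a minimal PyTorch sketch under the formulas above (the random parameter values are illustrative only):

```python
import torch

def aggregate_relation_meta(e_h, e_t, R_pair, v_a, W_a, U_a):
    """Steps 3.1-3.4: attention-weighted aggregation of the pair-specific relation metas.
    e_h, e_t: (k, d) head/tail entity embeddings of the k support pairs
    R_pair:   (k, d) pair-specific relation metas from step 2
    v_a: (d,); W_a, U_a: (d, d) global attention parameters
    """
    z_r = (e_t - e_h).mean(dim=0)                            # formula (3): task-relation embedding
    scores = torch.tanh(z_r @ W_a.T + R_pair @ U_a.T) @ v_a  # formula (4): additive attention
    alpha = torch.softmax(scores, dim=0)                     # formula (5): normalized weights
    return (alpha.unsqueeze(1) * R_pair).sum(dim=0)          # formula (6): weighted sum

k, d = 5, 100
R_r = aggregate_relation_meta(
    torch.randn(k, d), torch.randn(k, d), torch.randn(k, d),
    torch.randn(d), torch.randn(d, d), torch.randn(d, d),
)
```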
Through steps 1 to 3, the final relation meta $R_r$ of the current task relation r is obtained, and the relation meta learning based on the contextualized attention network is complete.
Advantageous Effects
Compared with the prior art, the method of the invention has the following advantages:
The method reduces the dependence of knowledge graph completion on large-scale datasets, accounts for the context and unequal importance of different entity pairs, and improves the performance of few-shot knowledge graph completion.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Detailed Description
The method of the present invention is further described with reference to the accompanying drawings and examples.
Example 1
As shown in FIG. 1, the method comprises the following steps:
Step A: pre-train entity embeddings.
Specifically, in this embodiment, 100-dimensional entity embeddings are obtained by pre-training on the few-shot relational dataset, as in step 1.
Step B: compute the representation of the input sequence, as in steps 2.1 and 2.2.
Step C: train the Transformer encoder, as in step 2.3.
Step D: compute the embedding of the task relation, as in step 3.1.
Step E: train the attention network, as in steps 3.2 and 3.3.
Step F: generate the relation meta, as in step 3.4.
Example 2
The specific operation of the method is described in detail below, taking as an example the task relation SubPartOf in the NELL dataset and its 5-shot support set of entity pairs: (Petersburg, Virgini), (Vacaville, California), (Prague, Czech), (Cavaliers, NBA), (L.A.Lakers, NBA).
As shown in FIG. 1, the relation meta learning method based on a contextualized attention network comprises the following steps:
step A1: embedding a pre-training entity;
specifically, in this embodiment, a less-sample relational data set NELL is pre-trained by a TransE model to obtain 100-dimensional embedding of all entities in the data set, and all entities involved in a support set entity pair of 5 samples are embedded as shown in table 1:
Table 1. Entity embeddings in step A1 of example 2
Entity    Entity embedding (first 5 dimensions)
Petersburg (-0.077614,0.004932,-0.050307,-0.037010,-0.091474)
Virgini (-0.003813,-0.050697,0.017735,0.014269,-0.025330)
Vacaville (0.007892,-0.070081,-0.074737,0.063790,-0.019323)
California (0.025901,0.065208,-0.028254,0.069672,-0.046585)
Prague (0.037457,0.051717,0.081376,0.006337,0.081004)
Czech (-0.047006,-0.021410,-0.022436,-0.042963,0.081536)
Cavaliers (0.002829,-0.074528,0.017209,0.025479,0.004750)
L.A.Lakers (-0.005583,0.019989,-0.009808,0.009675,0.113000)
NBA (-0.038429,-0.013440,0.023420,-0.015192,0.037002)
Step B1: compute the representation of the input sequence. Taking the entity pair (Petersburg, Virgini) as an example, the computation proceeds as follows:
For the head entity Petersburg and the tail entity Virgini, the task relation SubPartOf is inserted to form the sequence X = (Petersburg, SubPartOf, Virgini). From the entity embeddings obtained in step A1, the input representations of Petersburg and Virgini are computed by formula (1), as shown in Table 2:
Table 2. Input-sequence representations in step B1 of example 2
Entity    Input representation (first 5 dimensions)
Petersburg (0.922386,0.004932,-0.050307,-0.037010,-0.091474)
Virgini (-0.003813,-0.050697,1.017735,0.014269,-0.025330)
Step C1: train the Transformer encoder.
Specifically, for the entity pair (Petersburg, Virgini), the input representations of Petersburg and Virgini are fed into a 3-layer Transformer encoder to encode the sequence X, with the task relation SubPartOf masked; the multi-head self-attention mechanism of the Transformer encoder uses 4 heads. The hidden state of SubPartOf after the 3 layers is the contextualized embedding of the entity pair (Petersburg, Virgini) and represents its pair-specific relation meta. The pair-specific relation metas of the 5 entity pairs in the support set of the task relation SubPartOf are shown in Table 3:
Table 3. Pair-specific relation metas in step C1 of example 2
Entity pair    Pair-specific relation meta (first 5 dimensions)
(Petersburg,Virgini) (0.001155,-0.045608,-0.031250,-0.005250,0.026172)
(Vacaville,California) (0.003821,0.002937,0.045261,-0.049472,0.015706)
(Prague,Czech) (0.007575,-0.003372,0.050863,-0.019459,0.029509)
(Cavaliers,NBA) (0.009471,-0.014569,-0.022956,-0.008901,-0.026102)
(L.A.Lakers,NBA) (0.001853,-0.009944,-0.056892,0.003789,0.000534)
Step D1: compute the embedding of the task relation.
Specifically, in this embodiment, the embedding of the task relation SubPartOf is computed by formula (3), with the result shown in Table 4:
Table 4. Embedding of the task relation in step D1 of example 2
Task relation    Embedding of the task relation (first 5 dimensions)
SubPartOf (0.009682,0.035680,-0.06203,0.003524,-0.026782)
Step E1: train the attention network.
Specifically, in this embodiment, the attention network of formula (4) yields the attention weights of the 5 pair-specific relation metas, which are then normalized with the softmax function of formula (5); the normalized weights are shown in Table 5:
Table 5. Normalized weights in step E1 of example 2
Entity pair    Weight
(Petersburg,Virgini) 0.3590885031
(Vacaville,California) 0.2668582673
(Prague,Czech) 0.1855784664
(Cavaliers,NBA) 0.1083937157
(L.A.Lakers,NBA) 0.0800810475
Step F1: generate the relation meta.
Specifically, in this embodiment, the 5 pair-specific relation metas in Table 3 are weighted and summed by formula (6) using the weights obtained in step E1; the final relation meta is shown in Table 6:
Table 6. Relation meta in step F1 of example 2
Task relation    Relation meta (first 5 dimensions)
SubPartOf (0.0040152,-0.018595,0.0032516,-0.019360,0.016279)
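The weighted summation of step F1 can be checked directly against Tables 3 and 5; the following snippet reproduces the first five dimensions of the relation meta in Table 6 from the values given above:

```python
import numpy as np

# Pair-specific relation metas (first 5 dims, Table 3) and weights (Table 5)
R_pair = np.array([
    [0.001155, -0.045608, -0.031250, -0.005250,  0.026172],  # (Petersburg, Virgini)
    [0.003821,  0.002937,  0.045261, -0.049472,  0.015706],  # (Vacaville, California)
    [0.007575, -0.003372,  0.050863, -0.019459,  0.029509],  # (Prague, Czech)
    [0.009471, -0.014569, -0.022956, -0.008901, -0.026102],  # (Cavaliers, NBA)
    [0.001853, -0.009944, -0.056892,  0.003789,  0.000534],  # (L.A.Lakers, NBA)
])
alpha = np.array([0.3590885031, 0.2668582673, 0.1855784664,
                  0.1083937157, 0.0800810475])

# Formula (6): weighted sum over the 5 support pairs
print(alpha @ R_pair)  # ≈ [0.0040152, -0.018595, 0.0032516, -0.019360, 0.016279]
```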
Example 3
To further verify the effectiveness of the method, this embodiment uses the datasets NELL-One and Wiki-One, which are constructed by selecting qualifying relations from NELL and Wikidata. The NELL-One dataset contains 68,545 entities, 358 relations, 181,109 triples, and 67 tasks; the Wiki-One dataset contains 4,838,244 entities, 822 relations, 5,859,240 triples, and 183 tasks. The relation metas obtained with the relation meta learning method based on the contextualized attention network are plugged into MetaR for the experiments, and the results are compared with those of MetaR without the contextualized attention network; the comparison is shown in Tables 7 and 8.
Table 7. Comparative experimental results on the NELL-One dataset
Model    MRR    Hits@10    Hits@5    Hits@1
MetaR .209 .355 .280 .141
The method of the invention .237 .389 .311 .165
Table 8. Comparative experimental results on the Wiki-One dataset
Model    MRR    Hits@10    Hits@5    Hits@1
MetaR .323 .418 .385 .270
The method of the invention .335 .437 .391 .286
The relation meta learning method used in MetaR is:
$x^0 = e_{h_i} \oplus e_{t_i}, \quad x^l = \sigma(W^l x^{l-1} + b^l), \quad R_{(h_i,t_i)} = W^L x^{L-1} + b^L \quad (7)$
$R_r = \frac{1}{K} \sum_{i=1}^{K} R_{(h_i,t_i)} \quad (8)$
In formula (7), i denotes the i-th entity pair in the support set of the task relation r; $e_{h_i} \oplus e_{t_i}$ denotes the concatenation of the head- and tail-entity embeddings, which yields the input representation $x^0$; the relation meta $R_{(h_i,t_i)}$ specific to the entity pair $(h_i, t_i)$ is obtained by passing $x^0$ through an L-layer fully connected neural network, where $\sigma$ denotes the LeakyReLU activation function and $W^l$ and $b^l$ are the weight and bias of the l-th layer. In formula (8), K denotes the number of entity pairs in the support set of the task relation r, and $R_r$ is the final relation meta of the task relation r.
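For contrast with step 3 above, here is a minimal PyTorch sketch of MetaR's relation meta learner as described by formulas (7) and (8); the layer sizes are illustrative assumptions. Note that it weights all K pairs equally and uses no context, which is exactly what the contextualized attention network replaces:

```python
import torch
import torch.nn as nn

class MetaRRelationMetaLearner(nn.Module):
    """MetaR's relation meta learner (formulas (7)-(8)): concatenate each support
    pair's embeddings, pass them through L fully connected layers, then average."""
    def __init__(self, d=100, L=2):
        super().__init__()
        dims = [2 * d] + [d] * L
        self.layers = nn.ModuleList(nn.Linear(dims[i], dims[i + 1]) for i in range(L))
        self.act = nn.LeakyReLU()

    def forward(self, e_h, e_t):           # e_h, e_t: (K, d) support-pair embeddings
        x = torch.cat([e_h, e_t], dim=1)   # formula (7): x^0 = e_h concatenated with e_t
        for layer in self.layers[:-1]:
            x = self.act(layer(x))         # sigma = LeakyReLU on the hidden layers
        x = self.layers[-1](x)             # final layer has no activation
        return x.mean(dim=0)               # formula (8): equal-weight average over K pairs

learner = MetaRRelationMetaLearner()
R_r = learner(torch.randn(5, 100), torch.randn(5, 100))  # -> (100,)
```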
As Tables 7 and 8 show, the method of the invention achieves better results on knowledge graph completion, improving over the MetaR model on both Hits@n (definition 8) and MRR (definition 9). With the improved relation meta learning method, the experimental results improve by about 3 percentage points on the NELL-One dataset and about 1 percentage point on the Wiki-One dataset, demonstrating the effectiveness of the method.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which merely illustrate its principles, and that various changes and modifications may be made without departing from the spirit and scope of the invention as defined by the appended claims and their equivalents.

Claims (1)

1. A relation meta learning method based on a contextualized attention network, characterized by comprising the following steps:
step 1: pre-training on a few-shot relational dataset with the TransE model to obtain entity embeddings;
wherein entity embedding means representing the entities in the knowledge graph as low-dimensional vectors;
the few-shot relational dataset is constructed by selecting qualifying relations from NELL and Wikidata, NELL being a system that continuously gathers structured knowledge by reading the web, and Wikidata being a project that structures the information in Wikipedia;
the TransE model is a classical model in the field of knowledge graph completion;
step 2: computing contextualized embeddings of the entity pairs in the support set from the entity embeddings obtained in step 1;
step 2.1: given an entity pair $(h_i, t_i)$ of the task relation r, with $(h_i, t_i) \in S_r$, where $S_r$ denotes the support set of the task relation r, inserting the task relation r between the two entities to form a sequence $X = (x_1, x_2, x_3)$, wherein the first element is the head entity h, the middle element is the task relation r, and the last element is the tail entity t;
step 2.2: for each element $x_j$ in X, computing its input representation $h^0_{x_j}$ as:
$h^0_{x_j} = e_{x_j} + p_j \quad (1)$
wherein $e_{x_j} \in \mathbb{R}^d$ is the element embedding, d is the dimension of the entity embeddings, $p_j \in \mathbb{R}^d$ is a position embedding over the length-3 sequence, and the embeddings of $x_1$ and $x_3$ are the entity embeddings obtained in step 1;
step 2.3: feeding the $h^0_{x_j}$ into an L-layer Transformer encoder to encode the sequence X:
$h^l_{x_j} = \mathrm{Transformer}(h^{l-1}_{x_j}), \quad l \in [1, L] \quad (2)$
wherein $h^l_{x_j}$ is the hidden state of $x_j$ after l layers;
the Transformer encoder uses a multi-head self-attention mechanism, the element $x_2$ representing the task relation r is masked, and the final hidden state $h^L_{x_2}$ serves as the contextualized embedding of the entity pair $(h_i, t_i)$ and represents the relation meta $R_{(h_i,t_i)}$ specific to that entity pair;
And step 3: will support k entity pairs (h) in the seti,ti) Specific relationship elements
Figure FDA00029137361100000113
Obtaining a relation element by means of attention mechanism polymerization;
step 3.1: embedding z of computing task relationships rrThe formula is as follows:
Figure FDA00029137361100000111
wherein z isr∈RdIs the embedding of the task relationship r, k is the number of entity pairs supporting the centralized task relationship r,
Figure FDA0002913736110000021
is the tail entity embedding of the ith entity pair,
Figure FDA0002913736110000022
is the head entity embedding of the ith entity pair;
step 3.2: the attention weight is obtained by the attention network, and the formula is as follows:
Figure FDA0002913736110000023
wherein, alpha'iIs the attention weight that supports the ith entity pair in focus;
Figure FDA0002913736110000024
representing a vector transpose; v. ofa∈Rd、Wa∈Rd*d、Ua∈Rd*dAre all global attention parameters; z is a radical ofrIs the embedding of the task relation r obtained in step 3.1,
Figure FDA0002913736110000025
is an entity pair (h)i,ti) A specific relationship element;
step 3.3: normalization was performed using the softmax function, as follows:
Figure FDA0002913736110000026
wherein alpha isiIs a weight, α ', supporting the ith entity pair in the set'iAttention weight obtained in step 3.2, and the softmax function is a normalized exponential function;
step 3.4: calculating the final relation element in the current task relation r
Figure FDA0002913736110000027
The calculation formula is as the formula (6):
Figure FDA0002913736110000028
where k is the number of entity pairs supporting the centralized task relationship r, αiIs the weight of the ith entity pair in the support set obtained in step 3.3,
Figure FDA0002913736110000029
is an entity pair (h)i,ti) A specific relationship element.
CN202110094919.1A 2021-01-25 2021-01-25 Relation meta learning method based on a contextualized attention network Active CN112836007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110094919.1A CN112836007B (en) Relation meta learning method based on a contextualized attention network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110094919.1A CN112836007B (en) Relation meta learning method based on a contextualized attention network

Publications (2)

Publication Number Publication Date
CN112836007A true CN112836007A (en) 2021-05-25
CN112836007B CN112836007B (en) 2023-01-17

Family

ID=75931403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110094919.1A Active CN112836007B (en) Relation meta learning method based on a contextualized attention network

Country Status (1)

Country Link
CN (1) CN112836007B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115051843A (en) * 2022-06-06 2022-09-13 华北电力大学 KGE-based block chain threat information knowledge graph reasoning method
CN115712734A (en) * 2022-11-21 2023-02-24 之江实验室 Sparse knowledge graph embedding method and device based on meta-learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829057A (en) * 2019-01-11 2019-05-31 中山大学 A kind of knowledge mapping Entity Semantics spatial embedding method based on figure second order similitude
US20200034436A1 (en) * 2018-07-26 2020-01-30 Google Llc Machine translation using neural network models
CN111552817A (en) * 2020-04-14 2020-08-18 国网内蒙古东部电力有限公司 Electric power scientific and technological achievement knowledge map completion method
CN111581395A (en) * 2020-05-06 2020-08-25 西安交通大学 Model fusion triple representation learning system and method based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200034436A1 (en) * 2018-07-26 2020-01-30 Google Llc Machine translation using neural network models
CN109829057A (en) * 2019-01-11 2019-05-31 中山大学 A kind of knowledge mapping Entity Semantics spatial embedding method based on figure second order similitude
CN111552817A (en) * 2020-04-14 2020-08-18 国网内蒙古东部电力有限公司 Electric power scientific and technological achievement knowledge map completion method
CN111581395A (en) * 2020-05-06 2020-08-25 西安交通大学 Model fusion triple representation learning system and method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BORDES A et al.: "Translating embeddings for modeling multi-relational data", Proc. of the 26th Int'l Conf. on Neural Information Processing Systems (NIPS)
NING Shangming et al.: "Entity relation extraction from electronic medical records based on a multi-channel self-attention mechanism", Chinese Journal of Computers

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115051843A (en) * 2022-06-06 2022-09-13 华北电力大学 KGE-based block chain threat information knowledge graph reasoning method
CN115712734A (en) * 2022-11-21 2023-02-24 之江实验室 Sparse knowledge graph embedding method and device based on meta-learning
CN115712734B (en) * 2022-11-21 2023-10-03 之江实验室 Sparse knowledge graph embedding method and device based on meta learning

Also Published As

Publication number Publication date
CN112836007B (en) 2023-01-17

Similar Documents

Publication Publication Date Title
Liu et al. Connecting image denoising and high-level vision tasks via deep learning
Chen et al. A deep learning framework for time series classification using Relative Position Matrix and Convolutional Neural Network
CN109271522B (en) Comment emotion classification method and system based on deep hybrid model transfer learning
CN109389151B (en) Knowledge graph processing method and device based on semi-supervised embedded representation model
CN112528928B (en) Commodity identification method based on self-attention depth network
CN110837846A (en) Image recognition model construction method, image recognition method and device
CN114693397B (en) Attention neural network-based multi-view multi-mode commodity recommendation method
CN111931505A (en) Cross-language entity alignment method based on subgraph embedding
CN112836007B (en) Relation meta learning method based on a contextualized attention network
CN110210027B (en) Fine-grained emotion analysis method, device, equipment and medium based on ensemble learning
CN111460222B (en) Short video multi-label classification method based on multi-view low-rank decomposition
CN111563373B (en) Attribute-level emotion classification method for focused attribute-related text
CN110889282A (en) Text emotion analysis method based on deep learning
CN109325875A (en) Implicit group based on the hidden feature of online social user finds method
CN113449853A (en) Graph convolution neural network model and training method thereof
Aygun et al. Exploiting convolution filter patterns for transfer learning
CN114036298B (en) Node classification method based on graph convolution neural network and word vector
Xu et al. Semi-supervised self-growing generative adversarial networks for image recognition
CN116541592A (en) Vector generation method, information recommendation method, device, equipment and medium
CN112861882B (en) Image-text matching method and system based on frequency self-adaption
CN115344794A (en) Scenic spot recommendation method based on knowledge map semantic embedding
CN114882279A (en) Multi-label image classification method based on direct-push type semi-supervised deep learning
Zhang et al. Research On Face Image Clustering Based On Integrating Som And Spectral Clustering Algorithm
Yang et al. Robust feature mining transformer for occluded person re-identification
CN114625871B (en) Ternary grouping method based on attention position joint coding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant