CN114564596A - Cross-language knowledge graph link prediction method based on graph attention mechanism - Google Patents

Cross-language knowledge graph link prediction method based on graph attention mechanism

Info

Publication number
CN114564596A
CN114564596A
Authority
CN
China
Prior art keywords
knowledge
graph
entity
module
language
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210201390.3A
Other languages
Chinese (zh)
Inventor
张世洁
高永彬
方志军
余文俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Engineering Science
Original Assignee
Shanghai University of Engineering Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Engineering Science filed Critical Shanghai University of Engineering Science
Priority to CN202210201390.3A priority Critical patent/CN114564596A/en
Publication of CN114564596A publication Critical patent/CN114564596A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367Ontology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/251Fusion techniques of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/189Automatic justification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Business, Economics & Management (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Strategic Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Operations Research (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Machine Translation (AREA)

Abstract

A cross-language knowledge graph link prediction method based on a graph attention mechanism comprises the following steps: setting up a graph attention mechanism module, a knowledge module, an alignment module, an integrated reasoning module and a link prediction module; uniformly mapping the various types of entities and relations in the cross-language knowledge graph into one vector space, and learning, through local neighbor attention, the weight of each entity's connections to its related neighbor entities; jointly learning the weight on each relation through global attention, and fusing different neighbor information according to these weights to obtain the entity embedding representation; computing the loss while performing end-to-end optimization. High precision: data information obtained from multiple graphs is fused, and alignment and prediction are carried out on the fused result, giving higher accuracy than prediction using a single graph. The method helps link and fuse knowledge graphs carrying the individual knowledge of many countries and peoples around the world, realizing worldwide knowledge sharing.

Description

Cross-language knowledge graph link prediction method based on graph attention mechanism
Technical Field
The invention relates to the technical field of information, in particular to a graph prediction method, and more particularly to a cross-language knowledge graph link prediction method based on a graph attention mechanism.
Background
The construction of web-scale knowledge graphs (KGs) is growing day by day. Knowledge graphs can accurately reflect real-world facts and express abstract knowledge such as concepts and hierarchies. In recent years, knowledge graphs have been applied in many fields, and a great deal of research has been conducted around them. The vision of the knowledge-graph research field is to construct a structured knowledge base serving all aspects of artificial intelligence. Knowledge graphs such as DBpedia, Wikidata, Freebase, YAGO and Probase are used to solve practical problems and are widely applied to intelligent question answering, intelligent search, knowledge reasoning, fusion, completion and the like.
People have put a lot of effort into knowledge graph embedding models, which encode entities as low-dimensional vectors and capture relations as algebraic operations on entity vectors. These models provide useful tools for completion tasks; representative examples include translation models and bilinear models, all of which perform well in entity prediction. However, most knowledge graphs are constructed from single-language data sources, are described in a single language, and serve users of that language; research on cross-language knowledge graphs is still at an early stage. Because different languages have their own advantages and limitations in the data quality and coverage of a particular knowledge graph, users of different languages may hold different knowledge graphs in their own languages. Completing each knowledge graph independently may not be optimal, so cross-language graph fusion is being studied by more and more researchers.
Knowledge graphs are often incomplete, because updating a graph must continually keep pace with real-world information. Knowledge-graph fusion is therefore a well-motivated problem that mainly studies predicting facts unknown to a knowledge graph. Combining predictions from multiple knowledge graphs remains a significant technical challenge: on the one hand, reliable alignment information connecting different knowledge graphs is lacking; on the other hand, knowledge transfer between different embeddings is hindered.
Disclosure of Invention
The invention aims to provide a cross-language knowledge graph link prediction method based on a graph attention mechanism, to solve the prior-art technical problem of low prediction precision when using a single graph.
The invention relates to a cross-language knowledge graph link prediction method based on a graph attention mechanism, which comprises the following steps:
the method comprises the following steps: setting a graph attention mechanism module, a knowledge module, an alignment module, an integrated reasoning module and a link prediction module;
the graph attention mechanism module is used for realizing word embedding of the cross-language graphs: it learns the attention weight of each piece of neighbor information, performs a weighted average for each node, trains a word vector for each node, and provides semantic representations for subsequent tasks;
the knowledge module is used for embedding entities and relations in the cross-language knowledge graph, and using translation vectors or rotation vectors to model relations in a uniform embedding space to represent the confidence coefficient of the fact described by each specific language knowledge graph;
the alignment module is used for the task of finding matching entity pairs across different languages in a multi-language scenario; assuming that a plurality of entities represent the same object, it constructs alignment relations among these entities while fusing and aggregating the information they contain;
the integrated reasoning module is used for performing fact prediction over a plurality of knowledge graphs, integrating cross-language graph query and knowledge transfer, and mainly completing the task of target-graph prediction by utilizing knowledge of the source knowledge graphs; it improves learning accuracy by combining a plurality of models on the same task;
the link prediction module is used for finding the highest-ranked candidate in the candidate entity set through a scoring function; in the tail-entity prediction problem, the highest-scoring entity among all candidate entities is selected as the tail-entity prediction result;
uniformly mapping the various types of entities and relations in the cross-language knowledge graph into one vector space, and learning, through local neighbor attention, the weight of each entity's connections to its related neighbor entities;
step two: the weight on each relation is jointly learned through global attention, and entity embedding representation is carried out by fusing different neighbor information according to the weight;
step three: compute the loss while performing end-to-end optimization.
Further, the relation is converted by TransE into a translation between the head entity and the tail entity in Euclidean space, with the formula: fTransE(h, r, t) = -||h + r - t||2; the relation is modeled by RotatE as a rotation in complex space, the tail entity being regarded as the head entity rotated by the relation in complex vector space, with the formula: fRotatE(h, r, t) = -||h ∘ r - t||2.
Furthermore, the same entities in the multi-language knowledge graphs are identified during alignment using Jaccard similarity; entity alignment is adaptively learned with a seed-supervised learning method, and new knowledge is formed by continually adding seed entities during training and by connecting and fusing knowledge graphs of different languages.
Further, the candidate entity set informed by the source knowledge graph assists the target graph in selecting the highest-scoring entity from the candidates as the final prediction result.
Further, an ensemble learning algorithm is used for cross-language graph querying and transfer to the target graph, comprising querying the semantic information of the target graph using the alignment and performing knowledge transfer on the query results.
Compared with the prior art, the invention has positive and obvious effect.
1. High precision: data information obtained from multiple graphs is fused, and alignment and prediction are carried out on the fused result; compared with prediction using a single graph, the accuracy is higher.
2. Through the interaction of local and global attention mechanisms, the rich structural and instance features in the multi-language graphs are used to calculate the influence, i.e. the importance, of the different neighbors related to each entity and relation.
3. The method helps link and fuse knowledge graphs carrying the individual knowledge of many countries and peoples around the world, realizing worldwide knowledge sharing. It facilitates cross-language knowledge services and enables barrier-free cross-language information retrieval and natural language processing.
Drawings
FIG. 1 is a flowchart of the steps of the cross-language knowledge graph link prediction method based on the graph attention mechanism according to the present invention.
FIG. 2 is a schematic diagram of the alignment process in the cross-language knowledge graph link prediction method based on the graph attention mechanism according to the present invention.
Detailed Description
The present invention will be further described with reference to the drawings and examples, but the present invention is not limited to the examples, and all similar structures and similar variations using the present invention shall fall within the scope of the present invention. The use of the directions of up, down, front, rear, left, right, etc. in the present invention is only for convenience of description and does not limit the technical solution of the present invention.
Example 1
As shown in FIG. 1 and FIG. 2, the invention provides a cross-language knowledge graph link prediction method based on a graph attention mechanism, comprising the following steps:
step 1: a Freebase-based cross-language knowledge graph data set is used; it comprises five data sets in different languages, each containing an entity set, a relation set and the graph's triple set, which serve as the graph convolution input;
step 2: uniformly map the entities and relations of the multilingual graphs into one vector space by initializing a relation vector and an entity vector: each dimension of a vector is sampled uniformly at random from [-6/√k, 6/√k] (the standard TransE initialization), where k is the dimension of the low-dimensional vector. After all vectors are initialized, normalization is performed on the positive samples (h, r, t) in the data set, where h and t denote the head and tail entities of a triple and r denotes the relation between them. Tail entities are randomly replaced to form negative samples, which together form the training data set;
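A minimal Python sketch of the initialization and negative sampling described in step 2 (the function names and plain-list vectors are illustrative, not from the patent; the uniform [-6/√k, 6/√k] range follows the standard TransE initialization assumed above):

```python
import math
import random

def init_vector(k):
    # sample each of the k dimensions uniformly from [-6/sqrt(k), 6/sqrt(k)]
    bound = 6.0 / math.sqrt(k)
    return [random.uniform(-bound, bound) for _ in range(k)]

def normalize(v):
    # L2-normalize an embedding vector (the per-sample normalization step)
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def corrupt_tail(triple, entities):
    # build a negative sample by randomly replacing the tail entity
    h, r, t = triple
    t_neg = random.choice([e for e in entities if e != t])
    return (h, r, t_neg)
```

In practice these vectors would be tensors in a deep-learning framework; plain lists are used only to keep the sketch self-contained.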
step 3: input the node feature set F = {h1, h2, …, hN}, hi ∈ R^d, and randomly initialize the feature of each of the N nodes;
step 4: to obtain sufficient expressive power to transform the input features into higher-level features, at least one learnable linear transformation is required; therefore a learnable weight matrix W is trained, applied between the input features and the output neighbor information;
step 5: apply a shared attention mechanism a to each node, obtaining the attention coefficient e_ij of formula 1:
e_ij = a(W·h_i, W·h_j),
where W is the learned weight matrix and h_i, h_j are the word vectors of entities i and j. The coefficient e_ij expresses the importance of node j to node i; in this form the structural information of the graph is ignored (the model allows every pair of nodes in the graph to attend to each other, not only k-hop neighbors).
Step 6: when the mechanism is introduced into the graph structure, the mechanism is introduced through masked attribution. This means that j is a neighbor node of i and in order to make the cross-correlation coefficient easier to calculate and easy to compare, a softmax function is introduced to normalize all i's neighbor nodes j to equation 2:
Figure BDA0003529458950000044
wherein alpha isijRepresenting the attention weight of the graph, the attention correlation coefficient eij,eikRepresenting the attention correlation coefficient between node i and node k (j), k belonging to the neighbor node of the neighboring node i, NiBelonging to the local neighborhood map of the inode.
The attention mechanism alpha is a single-layer feedforward neural network, and alpha is determined by weight vectorsijAnd a nonlinear function of LeakyRelu is added; thus obtaining the complete alphaijGraph attention weight function 3:
Figure BDA0003529458950000045
the method comprises model weight alpha, T transposition operation, | | | connects two vectors, and k belongs to an adjacent node NiW, denotes a learning weight,
Figure BDA0003529458950000046
Figure BDA0003529458950000047
a word vector representing the entities i and k,
Figure BDA0003529458950000048
Figure BDA0003529458950000049
respectively obtaining corresponding vector products, and obtaining final output characteristics through the vector products;
step 7: the normalized attention coefficients between different nodes (normalization helps prevent overfitting) are used to compute the output feature of each node, which is stored for the prediction module;
as shown in fig. 2, after the first graph maps all word vectors into one uniform vector space and the word vector of each entity in the knowledge graph is obtained, the knowledge model and the alignment model are jointly modeled, the loss functions of the two models are jointly optimized, and cross-language knowledge graph link prediction is finally realized.
The knowledge model is trained as shown in fig. 2. The steps are as follows:
s1: use the graph attention mechanism of FIG. 1 to train a word vector for each triple; for the knowledge model, input the training triples (h, r, t) randomly in batches;
s2: randomly generating a vector of any dimension to simulate the triplet;
s3: perturb the training data: randomly replace the tail entities of correct samples to generate negative-sample triples, which together with the positive-sample triples form the new training data {(h, r, t), (h′, r, t′)}; the training set, the hyper-parameter γ and the learning rate λ are thereby determined;
s4: enter the loop: minibatch training (training one batch of data at a time) speeds up training, and negative sampling is performed on each batch; T_batch is initially an empty list, to which tuple pairs (correct triple, corrupted triple) are then appended;
s5: after T_batch is built, train with gradient descent: adjust the parameters, compute the loss, and update the previously randomly generated vectors by gradient descent. Loss function 4 is:
L = Σ_{KG∈G} Σ_{(h,r,t)∈KG} Σ_{(h′,r,t′)} max(0, γ - f(h, r, t) + f(h′, r, t′)),
where G denotes the loss over the K language-specific knowledge graphs and γ is a learnable margin hyper-parameter. (h′, r, t′) is a negative sample generated by randomly replacing the head or tail entity of a true triple, and (h, r, t) is the word-vector representation of a correct triple.
The f () function may adopt two functions, one is translation between a head entity and a tail entity in euclidean space through relationship conversion, the other is derived from the head entity through rotation of the head entity in complex vector space through relationship, RotatE models the relationship as rotation in complex space, and the specific function 5 is:
Figure BDA0003529458950000061
wherein, the (h, r, t) three-element word vector has two f functions respectively, TranseE is the translation operation between h (head entity) and t (tail entity) in r (relation), RotatE is the rotation operation between h (head entity) and t (tail entity) in r (relation), degree represents Hadama product in complex vector space, | The horizontal phase2Is represented by2And (4) regularizing.
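A hedged Python sketch of the two scoring functions and the margin loss of functions 4–5 (illustrative names, not from the patent; h, r, t are plain lists of floats for TransE and of complex numbers for RotatE, where each RotatE relation phase has unit modulus):

```python
import math

def f_transe(h, r, t):
    # TransE score: negative L2 distance -||h + r - t||_2 (higher = more plausible)
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

def f_rotate(h, r, t):
    # RotatE score: h, r, t are complex vectors; the relation rotates the head
    # entity element-wise (Hadamard product in complex space): -||h ∘ r - t||_2
    return -math.sqrt(sum(abs(hi * ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

def margin_loss(pos_score, neg_score, gamma=1.0):
    # margin ranking loss term from function 4: max(0, γ - f(pos) + f(neg))
    return max(0.0, gamma - pos_score + neg_score)
```

A correct triple scores near 0 (small distance), so its loss term vanishes once it beats the corrupted triple by the margin γ.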
The alignment model is trained as shown in figure 2. The steps are as follows:
s1: in any two language knowledge graphs, the known seed entity alignment attributes are determined as the respective reference entity pairs.
S2: screen target entity pairs from the two knowledge graphs according to each graph's reference seed entity pairs.
S3: judge whether the number of currently screened target entity pairs is 0; if so, execute S6; if not, execute S4.
S4: screen new target attribute pairs from the two knowledge graphs according to the screened target entity pairs.
S5: take the screened new target attribute pairs as reference attribute pairs and return to step S2.
S6: form an entity pair set from all the screened target entity pairs.
In step S2 (alignment model), the method for calculating the similarity between the data to be processed and each standard data in the candidate data set includes the following steps:
s201: determining the Jacard coefficient of each piece of standard data and the data to be processed aiming at each piece of standard data in the candidate data set, and determining the matching degree of the standard data and the data to be processed;
s202: determining the similarity between the standard data and the data to be processed based on the Jacard coefficient and the matching degree of the standard data and the data to be processed;
s203: the similarity satisfies formula 6:
J(A, B) = |A ∩ B| / |A ∪ B|,
where J(A, B) denotes the Jaccard coefficient similarity and A, B denote the word-vector feature sets of two entities in different graphs.
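Formula 6 can be sketched directly over entity feature sets (illustrative; the patent leaves the exact construction of the sets A and B unspecified, so plain Python sets stand in for them here):

```python
def jaccard(a, b):
    # J(A, B) = |A ∩ B| / |A ∪ B| over the two entities' feature/token sets
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty sets are treated as identical
    return len(a & b) / len(a | b)
```

Entity pairs whose Jaccard similarity exceeds a chosen threshold are treated as matches during alignment.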
Step S3 (alignment model) self-learning and presentation learning semi-supervised entity alignment algorithm specifically includes the following sub-steps:
s301: inputting: two libraries to be aligned are made into a knowledge base, and an important mechanism is as follows: a self-learning mechanism.
S302: initially triples are empty and as learning progresses, learned triples are added continuously. The entity alignment module measures the degree of matching or not of the entities through the Jacard similarity represented by the entities.
S303: the self-learning mechanism is used as feedback from the entity alignment module to the knowledge graph representation learning module, a bidirectional matching alignment entity pair adding relation triple of top-beta (the first beta entity pairs are selected, and a threshold value set by a model) is selected for next round of training until no new alignment entity pair is generated, iteration is stopped, and a final entity alignment pair is generated;
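An illustrative sketch of the self-learning (bootstrapping) alignment loop of S301–S303, assuming a similarity function `sim` and a greedy top-β selection per round (names are hypothetical, and the bidirectional-matching detail is simplified to one-to-one greedy matching):

```python
def bootstrap_align(seed_pairs, kg1, kg2, sim, beta, threshold=0.5):
    # iteratively expand the seed alignments: each round, the highest-similarity
    # unmatched pairs above `threshold` are promoted (up to `beta` per round),
    # until no new pair is found -- the self-learning feedback loop
    aligned = set(seed_pairs)
    used1 = {p[0] for p in aligned}
    used2 = {p[1] for p in aligned}
    while True:
        candidates = []
        for e1 in kg1:
            if e1 in used1:
                continue
            for e2 in kg2:
                if e2 in used2:
                    continue
                s = sim(e1, e2)
                if s >= threshold:
                    candidates.append((s, e1, e2))
        candidates.sort(reverse=True)  # best similarity first
        new_pairs = []
        for s, e1, e2 in candidates[:beta]:
            if e1 not in used1 and e2 not in used2:
                new_pairs.append((e1, e2))
                used1.add(e1)
                used2.add(e2)
        if not new_pairs:
            return aligned
        aligned.update(new_pairs)
```

In the patent's setting, `sim` would be the Jaccard similarity over entity representations, and the newly aligned pairs feed back into representation learning each round.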
the integrated inference model is trained as shown in fig. 2. The steps are as follows:
s1: based on the knowledge base of the alignment model and the word vectors of the knowledge model, first convert the target query (e.g. a tail entity) into the other source graphs via the alignment for matching;
s2: during matching, obtain the candidate results of the source graphs' predicted entities;
s3: after the candidate results are obtained, convert them back into the target knowledge graph through the alignment;
s4: rank the candidate results by weighted training with formula 7 to obtain the final target candidate ranking S(e). Formula 7:
S(e) = Σ_{i=1}^{m} w_i(e) · N_i(e),
where m denotes the number of entity-alignment candidate models, e denotes an entity of the target knowledge graph, w_i(e) denotes the weight of the entity-specific model, and N_i(e) denotes the learned indicator coefficient: when the i-th knowledge-graph embedding model f ranks e in the Top-K, N_i(e) is 1, otherwise N_i(e) is 0. For the computation of w, a specific weight is learned for each entity using boosting.
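Formula 7's weighted vote can be sketched as follows (illustrative names; `weight_fn` stands in for the boosting-learned per-entity weight w_i(e), and each `ranking` is the ordered candidate list from one knowledge graph's embedding model):

```python
def ensemble_score(entity, models, top_k=10):
    # Formula-7-style weighted vote: S(e) = sum_i w_i(e) * N_i(e),
    # where N_i(e) = 1 if model i ranks e within its Top-K, else 0
    score = 0.0
    for weight_fn, ranking in models:
        n_i = 1.0 if entity in ranking[:top_k] else 0.0
        score += weight_fn(entity) * n_i
    return score
```

The entity with the highest S(e) over all candidates is returned as the final link prediction.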
And finally outputting a prediction result on the basis of the steps, and forming a system for fusing data in the multi-language knowledge space.
The code is made according to the method, and the formula is operated by a computer.

Claims (5)

1. A cross-language knowledge graph link prediction method based on a graph attention mechanism, characterized by comprising the following steps:
the method comprises the following steps: setting a graph attention mechanism module, a knowledge module, an alignment module, an integrated reasoning module and a link prediction module;
the graph attention mechanism module is used for realizing word embedding of the cross-language graphs: it learns the attention weight of each piece of neighbor information, performs a weighted average for each node, trains a word vector for each node, and provides semantic representations for subsequent tasks;
the knowledge module is used for embedding entities and relations in the cross-language knowledge graph, and using translation vectors or rotation vectors to model relations in a uniform embedding space to represent the confidence coefficient of the fact described by each specific language knowledge graph;
the alignment module is used for the task of finding matching entity pairs across different languages in a multi-language scenario; assuming that a plurality of entities represent the same object, it constructs alignment relations among these entities while fusing and aggregating the information they contain;
the integrated reasoning module is used for performing fact prediction over a plurality of knowledge graphs, integrating cross-language graph query and knowledge transfer, and mainly completing the task of target-graph prediction by utilizing knowledge of the source knowledge graphs; the integrated reasoning module improves learning accuracy by combining a plurality of models on the same task;
the link prediction module is used for finding the highest-ranked candidate in the candidate entity set through a scoring function; in the tail-entity prediction problem, the highest-scoring entity among all candidate entities is selected as the tail-entity prediction result;
uniformly mapping the various types of entities and relations in the cross-language knowledge graph into one vector space, and learning, through local neighbor attention, the weight of each entity's connections to its related neighbor entities;
step two: the weight on each relation is jointly learned through global attention, and entity embedding representation is carried out by fusing different neighbor information according to the weight;
step three: compute the loss while performing end-to-end optimization.
2. The method of claim 1 for cross-language knowledge graph link prediction based on a graph attention mechanism, characterized in that the relation is converted by the TransE algorithm into a translation between the head entity and the tail entity in Euclidean space, with the formula: fTransE(h, r, t) = -||h + r - t||2; and the relation is modeled by RotatE as a rotation in complex space, the tail entity being regarded as the head entity rotated by the relation in complex vector space, with the formula: fRotatE(h, r, t) = -||h ∘ r - t||2.
3. The method of claim 1 for cross-language knowledge graph link prediction based on a graph attention mechanism, characterized in that the same entities in the multi-language knowledge graphs are identified during alignment using Jaccard similarity, entity alignment is adaptively learned by a seed-supervised learning method, and new knowledge is formed by continually adding seed entities during training and connecting and fusing knowledge graphs of different languages.
4. The graph attention mechanism based cross-language knowledge graph link prediction method of claim 1, characterized in that the candidate entity set informed by the source knowledge graph assists the target graph in picking the highest-scoring entity from the candidates as the final prediction result.
5. The method of claim 1, wherein querying and transferring the target graph across the linguistic graphs using an ensemble learning algorithm comprises querying semantic information of the target graph using alignment and knowledge transferring results after the querying.
CN202210201390.3A 2022-03-03 2022-03-03 Cross-language knowledge graph link prediction method based on graph attention mechanism Pending CN114564596A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210201390.3A CN114564596A (en) 2022-03-03 2022-03-03 Cross-language knowledge graph link prediction method based on graph attention machine mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210201390.3A CN114564596A (en) 2022-03-03 2022-03-03 Cross-language knowledge graph link prediction method based on graph attention machine mechanism

Publications (1)

Publication Number Publication Date
CN114564596A true CN114564596A (en) 2022-05-31

Family

ID=81718640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210201390.3A Pending CN114564596A (en) 2022-03-03 2022-03-03 Cross-language knowledge graph link prediction method based on graph attention machine mechanism

Country Status (1)

Country Link
CN (1) CN114564596A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115051843A (en) * 2022-06-06 2022-09-13 华北电力大学 KGE-based block chain threat information knowledge graph reasoning method
CN115238100A (en) * 2022-09-21 2022-10-25 科大讯飞(苏州)科技有限公司 Entity alignment method, device, equipment and computer readable storage medium
CN115827883A (en) * 2022-06-24 2023-03-21 南瑞集团有限公司 Self-supervision graph alignment multi-language knowledge graph completion method and system
CN116069956A (en) * 2023-03-29 2023-05-05 之江实验室 Drug knowledge graph entity alignment method and device based on mixed attention mechanism
CN116186295A (en) * 2023-04-28 2023-05-30 湖南工商大学 Attention-based knowledge graph link prediction method, attention-based knowledge graph link prediction device, attention-based knowledge graph link prediction equipment and attention-based knowledge graph link prediction medium

Similar Documents

Publication Publication Date Title
CN114564596A (en) Cross-language knowledge graph link prediction method based on graph attention machine mechanism
JP7041281B2 (en) Address information feature extraction method based on deep neural network model
CN110083705B (en) Multi-hop attention depth model, method, storage medium and terminal for target emotion classification
CN112966127A (en) Cross-modal retrieval method based on multilayer semantic alignment
CN110674323B (en) Unsupervised cross-modal Hash retrieval method and system based on virtual label regression
Chitty-Venkata et al. Neural architecture search for transformers: A survey
WO2024032096A1 (en) Reactant molecule prediction method and apparatus, training method and apparatus, and electronic device
CN111079409B (en) Emotion classification method utilizing context and aspect memory information
CN111898636B (en) Data processing method and device
CN112308081B (en) Image target prediction method based on attention mechanism
CN113065587B (en) Scene graph generation method based on hyper-relation learning network
CN113254716B (en) Video clip retrieval method and device, electronic equipment and readable storage medium
CN114969367B (en) Cross-language entity alignment method based on multi-aspect subtask interaction
CN114639483A (en) Electronic medical record retrieval method and device based on graph neural network
Nannan et al. Adaptive online time series prediction based on a novel dynamic fuzzy cognitive map
CN113157919A (en) Sentence text aspect level emotion classification method and system
CN113255366A (en) Aspect-level text emotion analysis method based on heterogeneous graph neural network
CN116992151A (en) Online course recommendation method based on double-tower graph convolution neural network
CN115101145A (en) Medicine virtual screening method based on adaptive meta-learning
Chen et al. Multi-semantic hypergraph neural network for effective few-shot learning
CN113722439A (en) Cross-domain emotion classification method and system based on antagonism type alignment network
Raiaan et al. A systematic review of hyperparameter optimization techniques in Convolutional Neural Networks
CN110717116A (en) Method, system, device and storage medium for predicting link of relational network
WO2023174064A1 (en) Automatic search method, automatic-search performance prediction model training method and apparatus
Zhang et al. Tree-shaped multiobjective evolutionary CNN for hyperspectral image classification

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination