CN114625886A - Entity query method and system based on knowledge graph small sample relation learning model - Google Patents
Entity query method and system based on knowledge graph small sample relation learning model
- Publication number
- CN114625886A (application CN202210242159.9A)
- Authority
- CN
- China
- Prior art keywords
- information interaction
- entity
- sample
- queried
- relation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3347—Query execution using vector based model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The invention belongs to the technical field of knowledge graph data processing, and provides an entity query method and system based on a knowledge graph small sample relation learning model. The method comprises: acquiring information interaction data to be queried and the relation-entity pairs contained therein, and encoding them to obtain a vector to be queried; performing feature encoding on the head-tail entity pairs contained in the vector to be queried based on a knowledge graph to obtain the corresponding triple representations; and performing attention mechanism matching between the triple representation of the information interaction data to be queried and the triple representations of each group of pre-clustered information interaction reference small samples, to obtain the most similar group of information interaction reference small samples as the query result.
Description
Technical Field
The invention belongs to the technical field of knowledge graph data processing, and particularly relates to an entity query method and system based on a knowledge graph small sample relation learning model.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
At present, in unstructured environments such as homes and hospitals, a service robot cannot navigate efficiently or safely execute various service tasks, because the service robot does not possess human-like capabilities of autonomous learning, knowledge sharing and emotional interaction. Knowledge graphs (KGs) are a research focus for the human-robot interaction problems of service robots. To further extend the coverage of KGs, traditional KG completion methods require a large number of training instances (i.e., head-tail entity pairs) for each relation. In practice, long-tail relations are more common in KGs, and such newly added relations typically do not have many known training triples.
In practical applications, human-robot question-answering interaction samples for a service robot are scarce. Compared with traditional knowledge graph learning, small sample (few-shot) knowledge representation learning must consider not only the difference in the number of reference samples, but also how the semantic information and result information among the reference samples are utilized. None of the current algorithms considers learning the dynamic properties of entity triples, i.e., entities may play different roles in the task relations, and reference samples may contribute differently to query samples. For example, an existing entity embedding method that designs a module to enhance an entity with its local-graph neighbors does not fully utilize the supervision information. Current technical schemes address the problem by learning static representations of entities and relations, and simply represent relations by their entities, but ignore the characteristic relations among relations, namely that the characteristics of the relations in a reference set may make different contributions to a query.
The inventor finds that existing small sample knowledge representation learning methods have serious defects such as insufficient utilization of reference sample information and severe noise interference, and that previous research does not consider the dynamic attributes of samples in combination with fine-grained sample semantics. This reduces the accuracy of the human-robot interactive question answering of an actual service robot and degrades the user experience.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention provides an entity query method and system based on a knowledge graph small sample relation learning model, which can improve the accuracy of the human-robot interactive question answering of an actual service robot and provide a better user experience.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides an entity query method based on a knowledge graph small sample relation learning model, which comprises the following steps:
acquiring information interaction data to be queried and a relation-entity pair contained in the information interaction data, and coding to obtain a vector to be queried;
performing feature coding on head and tail entity pairs contained in a vector to be queried based on a knowledge graph to obtain corresponding triple representations;
and performing attention mechanism matching on the triple representation of the information interaction data to be queried and the triple representation of each group of information interaction reference small samples clustered in advance to obtain the information interaction reference small sample of the most similar group as a query result.
The second aspect of the present invention provides an entity query system based on a knowledge-graph small-sample relation learning model, which includes:
the query vector encoding module is used for acquiring information interaction data to be queried and relation-entity pairs contained in the information interaction data, and encoding to obtain a vector to be queried;
the triple representation module is used for carrying out feature coding on head and tail entity pairs contained in the vector to be queried based on a knowledge graph to obtain corresponding triple representation;
and the matching search module is used for performing attention mechanism matching between the triple representation of the information interaction data to be queried and the triple representations of each group of pre-clustered information interaction reference small samples, to obtain the most similar group of information interaction reference small samples as the query result.
A third aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps in the method for querying an entity based on a knowledge-graph small-sample relationship learning model as described above.
A fourth aspect of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the program to implement the steps in the method for querying an entity based on a knowledge-graph small-sample relation learning model as described above.
Compared with the prior art, the invention has the beneficial effects that:
the method effectively utilizes the unique advantages of the attention mechanism and the entity coupling method, distributes contribution weight in a finer granularity, optimizes the proportion of score elements, fully utilizes reference sample information, strengthens entity embedding, improves the accuracy of the man-machine interactive question answering of the actual server man, and ensures that the experience of the server man is better.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and together with the description serve to explain the invention, not to limit it.
FIG. 1 is a flow chart of an entity query method based on a knowledge-graph small-sample relation learning model according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a resource scheduling consumption model of a cloud service platform according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an LSTM structure in a loop processor according to an embodiment of the present invention;
FIG. 4 is a graph of a NELL data set relational frequency visualization simulation effect according to an embodiment of the present invention;
FIG. 5(a) is a schematic diagram of a feature clustering simulation result of a NELL data set relationship r according to an embodiment of the present invention;
fig. 5(b) is a schematic diagram of a characteristic clustering simulation result of the WiKi data set relation r according to the embodiment of the present invention;
FIG. 6 is a schematic diagram of the specific structure of the feature clustering encoding module according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of the Transformer feature encoding module based on head-tail entity pairs according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a detailed structure of a match scoring network module according to an embodiment of the present invention;
FIG. 9(a) is a graph of the result of an MRR experiment investigating the effect of the sample number K on the predicted result according to the embodiment of the present invention;
FIG. 9(b) is a graph illustrating the analysis of the results of the Hits @10 experiment investigating the effect of the sample number K on the predicted results according to the embodiment of the present invention;
FIG. 9(c) is a graph illustrating the analysis of the results of Hits @5 experiments investigating the effect of the number of samples K on the predicted results according to an embodiment of the present invention;
FIG. 9(d) is a graph showing the analysis of the results of Hits @1 experiments investigating the effect of the number of samples K on the predicted results according to the embodiment of the present invention;
FIG. 10(a) is a graph of an analysis of the results of an MRR experiment exploring the effect of the number of neighbors of a reference sample on the prediction results according to an embodiment of the present invention;
FIG. 10(b) is a graph of analysis of the results of the Hits @10 experiment exploring the effect of the number of neighbors of the reference sample on the predicted results according to an embodiment of the present invention;
FIG. 10(c) is a graph of analysis of the results of the Hits @5 experiment exploring the effect of the number of neighbors of the reference sample on the predicted results, in accordance with an embodiment of the present invention;
FIG. 10(d) is a graph of analysis of the results of the Hits @1 experiment exploring the effect of the number of neighbors of the reference sample on the predicted results, in accordance with an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
Example one
Referring to fig. 1, the present embodiment provides an entity query method based on a knowledge graph small sample relationship learning model, which includes:
s101: and acquiring information interaction data to be queried and a relation-entity pair contained in the information interaction data, and coding to obtain a vector to be queried.
As shown in fig. 2, in a knowledge graph an entity may be both the head entity of one triple and the tail entity of multiple triples, i.e., entities may play different roles in the task relations. The meaning an entity represents varies with the relation of the triple it appears in. Therefore, when predicting hidden relations, the model can use the relations of an entity's neighboring nodes (such as nodes 5/6/7/8 in FIG. 4) to learn the relations between relations.
For example: the triple to be queried is (Yao Ming, works for, Chinese men's basketball team), wherein (Yao Ming, Chinese men's basketball team) is the head-tail entity pair, Yao Ming and the Chinese men's basketball team are respectively the head and tail entities of the entity pair, and "works for" is the query relation vector to be processed by clustering encoding.
On the other hand, reference sets with different semantics may contribute differently to a query. For example, for the relation "work for" in a query, the relations "work for" and "work as" in the reference samples are more similar to the query sample than relations such as "subject to" and "famous for", and therefore exert a larger influence when the matching score is obtained; the model uses a fine-grained attention mechanism to highlight the contributions of strongly correlated samples.
In the specific implementation process, each group of information interaction reference small samples uses unsupervised clustering to generate a preset number of center points, clustering the vector representations of the relations in the reference small samples.
For example: n relation clusters in the reference samples are obtained through a K-means center clustering algorithm and cosine similarity calculation, and each relation class generates a public relation feature, so that learning and weight distribution can be carried out on the embedded data at a finer granularity.
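As a non-authoritative sketch of this clustering step (the function and variable names are illustrative assumptions, not the patent's implementation), K-means over L2-normalized relation vectors approximates cosine-similarity clustering, and each renormalized centroid serves as a cluster's public relation feature:

```python
# Hypothetical sketch of the relation-clustering step; names are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def cluster_reference_relations(relation_vecs: np.ndarray, n_clusters: int):
    """Group reference-sample relation vectors into n relation clusters."""
    unit_vecs = normalize(relation_vecs)  # on unit vectors, Euclidean K-means tracks cosine similarity
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(unit_vecs)
    centroids = normalize(km.cluster_centers_)  # one public relation feature per cluster
    return km.labels_, centroids
```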
In some embodiments, a feature clustering encoding module, for example as shown in fig. 6, is used to cluster each group of information interaction reference small samples. First, relation clustering is carried out on the given reference sample data to obtain n known fact groups [r_i]. The relation-entity pairs in each grouping are encoded to explore their relevance. The module uses the following function f that satisfies the above property:
A feed forward layer is then applied to encode the interactions in this tuple:
After the relation function f is obtained, when a query sample is input, the module separately encodes the relations and the neighboring head and tail entities of the reference sample and the query sample, obtaining:
where s and q are respectively the relational neighbor vector representations of the reference sample and the query sample; r_s^i, h_s^i, t_s^i are respectively the feature representations of the relation and of the head and tail entity neighbors in the reference sample, and r_q^i, h_q^i, t_q^i are respectively those of the query sample.
Correlation analysis is performed between the query and the respective relation groups [r_i]: when a query sample q_i is input, the clustering encoding module analyzes the cosine similarity between the query sample and each relation category, and gives the highest weight to the relation category with the highest similarity. From the relational neighbor vector representations of the reference sample and the query sample, weights are assigned through a cosine similarity function:
The greater the relevance of the query sample's relation to a certain grouping, the higher the weight α_i; in this way the query sample is classified, by this preliminary weighting, into the reference sample relation grouping of greatest relevance.
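A minimal sketch of this preliminary weighting, assuming the cosine similarities are softmax-normalized into weights (a detail not spelled out above):

```python
import torch
import torch.nn.functional as F

def initial_weights(query_rel: torch.Tensor, group_feats: torch.Tensor) -> torch.Tensor:
    """alpha_i: weight of each relation grouping for the query.
    query_rel: (dim,); group_feats: (n_groups, dim)."""
    sims = F.cosine_similarity(query_rel.unsqueeze(0), group_feats, dim=-1)  # (n_groups,)
    return F.softmax(sims, dim=0)  # assumed normalization into weights
```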
Then, the module performs fine-grained weight matching on the query sample according to the correlation between the task relation and the reference relations. First, a metric function ψ is defined, which computes their correlation interaction representation by a bilinear dot product:
ψ(r_s, r_q) = r_s^T W r_q + b
where r_s and r_q are respectively the vector representations of the reference and query samples, and W, b are learnable parameters.
Under the initial weight α_i, the model further weights the similarity between each individual sample in a group and the query sample, obtaining the secondary weight β_ij:
where m_n denotes the nth reference sample in the mth relation grouping.
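The bilinear metric and the within-group secondary weighting might be sketched as follows (a hedged sketch; normalizing the ψ scores with a softmax inside each grouping is an assumption):

```python
import torch
import torch.nn as nn

class BilinearMatch(nn.Module):
    """psi(r_s, r_q) = r_s^T W r_q + b, with learnable W and b."""
    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, r_refs: torch.Tensor, r_q: torch.Tensor) -> torch.Tensor:
        # r_refs: (n_samples_in_group, dim); r_q: (dim,)
        scores = r_refs @ self.W @ r_q + self.b  # psi for every reference sample in the group
        return torch.softmax(scores, dim=0)      # beta_ij within the grouping (assumed softmax)
```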
From the initial weight and the secondary weight between the query sample and each reference sample, the attention parameter of the feature clustering encoding model based on relation r is obtained through a dot product operation:
Then, the neighbor vectors of the relation are attention-weighted to obtain the output of the clustering encoding model, i.e., a pre-trained entity embedding; taking the head entity as an example:
where σ is the activation function, set to σ = tanh. The entity representation obtained in this way by the feature clustering encoding model based on relation r retains the individual attributes produced by the current embedding model, and describes the correlation between the reference sample and the query sample at a finer granularity according to the different roles that different reference samples play for the query sample. The above formula also applies to the candidate tail entity t.
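For illustration, the attention-weighted aggregation for the head entity could look like this sketch (the linear projection is an assumption standing in for the unspecified output transform):

```python
import torch
import torch.nn as nn

def head_entity_embedding(attn: torch.Tensor,
                          neighbor_vecs: torch.Tensor,
                          proj: nn.Linear) -> torch.Tensor:
    """Aggregate relation-neighbor vectors with attention weights
    (attn = alpha_i * beta_ij) and apply sigma = tanh.
    attn: (n_neighbors,); neighbor_vecs: (n_neighbors, dim)."""
    weighted_sum = (attn.unsqueeze(-1) * neighbor_vecs).sum(dim=0)  # (dim,)
    return torch.tanh(proj(weighted_sum))  # sigma = tanh, as set above
```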
S102: performing feature encoding on the head-tail entity pairs contained in the vector to be queried based on the knowledge graph to obtain the corresponding triple representations.
For the reference samples in each relation grouping [r_i], their relation grouping [r_i] is hidden, and each entity pair having the task relation, (h_ij, [r_i], t_ij), is used as a sequence X; the entity embedding is coupled with the entity's role-aware neighbor embedding. After all input representations are constructed, they are fed into a Transformer block stack, X is encoded to obtain the encoded triple representations, and the expected representations are sorted and filtered to obtain the matching triples.
Specifically, a Transformer feature encoding module based on head-tail entity pairs is adopted for the feature encoding; its operating mechanism is as follows:
the specific structure of the transform feature coding module is shown in fig. 7. The module gives a triple in task r, i.e. (h)ij,[ri],tij) E Dr, grouping for each relationship ri]Reference sample in (1), we hide their relationship grouping ri]Each entity pair (h) having a task relationshipij,[ri],tij) As a sequence X ═ (X1, X2, X3), where the first/last element is the head/tail entity and the middle is the hidden task relationship. We represent the reference sample head and tail entities as feature embedding after training by the clustering module. To enhance entity embedding, taking the header entity h as an example, the module simultaneously couples the entity embedding h with its role-aware neighbor embedding. h can be expressed as:
F_C(h) = σ[W_1 h_s^i + W_2 F(h_s^i)]
where σ is the activation function, set to σ = relu, and W_1, W_2 are learnable parameters.
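A hedged sketch of this coupling step, with W_1, W_2 realized as linear maps and the neighbor embedding F(h) assumed to come from the clustering module:

```python
import torch
import torch.nn as nn

class CoupledEntityEmbedding(nn.Module):
    """F_C(h) = relu(W1 h + W2 F(h)): couple the entity embedding with its
    role-aware neighbor embedding; W1, W2 learnable, sigma = relu."""
    def __init__(self, dim: int):
        super().__init__()
        self.w1 = nn.Linear(dim, dim, bias=False)
        self.w2 = nn.Linear(dim, dim, bias=False)

    def forward(self, h: torch.Tensor, neighbor_emb: torch.Tensor) -> torch.Tensor:
        # neighbor_emb plays the role of F(h) in the formula above
        return torch.relu(self.w1(h) + self.w2(neighbor_emb))
```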
After all input representations are constructed, the encoding module feeds the feature embeddings into the Transformer block stack and encodes X, obtaining:
Z_k^i = Transformer(Z_{k-1}^i), k = 1, 2, ..., L
where Z_k^i is the hidden state of the entity pair after the kth layer, and L is the number of hidden layers of the Transformer.
The last hidden state Z_L^2 serves as the entity pair's expected representation of the relation, and this relation embedding of the new triple is used for the match score. Such a representation encodes the semantic role of each entity, thereby facilitating identification of the fine-grained significance of the task relations associated with different entity pairs.
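A minimal sketch of this encoding step with a standard Transformer encoder stack (hyperparameters such as the head count are illustrative assumptions):

```python
import torch
import torch.nn as nn

class TripleTransformerEncoder(nn.Module):
    """Encode X = (head, hidden relation, tail) with L Transformer layers;
    Z_L^2, the final hidden state of the middle position, is the expected
    relation representation used for the match score."""
    def __init__(self, dim: int, n_layers: int, n_heads: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, dim) -> returns Z_L^2 of shape (batch, dim)
        z = self.encoder(x)
        return z[:, 1, :]
```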
In order to utilize the limited reference samples at a finer granularity and let the reference samples more relevant to the query play a greater role, the module performs sample sorting on the entity-pair hidden states Z_L^1/Z_L^3 finally output by the Transformer. The relation groupings carry different weights, and sample correlation is sorted according to the Euclidean distance formula:
g_h^i = Z_L^1(S_i), g_t^i = Z_L^3(S_i)
Sort1 = {MIN[F(h_q^i, t_q^i)], (h_q^i, r_q^i, t_q^i) ∈ G', r_q^i ∈ [r_i]}
where g_h^i/g_t^i are the entity-pair hidden states Z_L^1/Z_L^3 finally output when a reference sample passes through the Transformer. The smaller the distance between a reference sample entity pair and a grouped query sample entity pair, the greater the correlation between the query sample and that group of reference samples, and the greater the probability that they fall in the same class during clustering. Sort1 outputs the minimum distance between the reference sample entity pairs and the query sample entity pair, and the corresponding first n groups of the output ranking are taken as the reference embeddings for the next step.
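This ranking step could be sketched as follows (a hypothetical sketch; summing the head and tail Euclidean distances is one reading of the distance formula above):

```python
import torch

def top_n_reference_groups(q_head: torch.Tensor, q_tail: torch.Tensor,
                           ref_heads: torch.Tensor, ref_tails: torch.Tensor,
                           n: int):
    """Rank reference groups by Euclidean distance between the query
    entity-pair states and each group's Z_L^1/Z_L^3 states, keeping the
    n closest groups as the next-step reference embeddings.
    ref_heads/ref_tails: (n_groups, dim); q_head/q_tail: (dim,)."""
    dists = (torch.norm(ref_heads - q_head, dim=-1)
             + torch.norm(ref_tails - q_tail, dim=-1))
    order = torch.argsort(dists)  # smaller distance => stronger correlation
    return order[:n], dists[order[:n]]
```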
S103: performing attention mechanism matching between the triple representation of the information interaction data to be queried and the triple representations of each group of pre-clustered information interaction reference small samples, to obtain the most similar group of information interaction reference small samples as the query result.
In a specific implementation process, the similarity is characterized by the Euclidean distance. In the attention mechanism matching process, the weight distribution between the triple representation of the information interaction data to be queried and the triple representations of each group of information interaction reference small samples is carried out through a cosine similarity function.
In the process of obtaining the most similar group of information interaction reference small samples, since the relation groupings carry different weights, the relevance between the information interaction data to be queried and each group of information interaction reference small samples is sorted according to the Euclidean distance formula.
Multi-step matching is performed using the LSTM in the loop processor of fig. 3. The input of the LSTM hidden layer comprises the hidden layer state at the previous moment c_{t-1}, the output vector of the previous hidden layer h_{t-1}, and the sequence input at the current moment x_t. The forget gate of the LSTM controls the memory of the previous memory cell state, determining how much of the information in the cell state c_{t-1} at the previous moment can be transferred to the current moment c_t; the input gate determines how much of the information in the current sequence input x_t can be saved into the current c_t; the output gate obtains the output h_t at the current moment from the new state c_t. The update of the LSTM can be expressed as:
f_t = σ(W_xf x_t + W_hf h_{t-1} + b_f)
i_t = σ(W_xi x_t + W_hi h_{t-1} + b_i)
c̃_t = tanh(W_xc x_t + W_hc h_{t-1} + b_c)
c_t = f_t · c_{t-1} + i_t · c̃_t
o_t = σ(W_xo x_t + W_ho h_{t-1} + b_o)
h_t = o_t · tanh(c_t)
where c_t stores the cell state information at the current moment, c̃_t is the candidate state accumulated at the current moment, W denotes the weight coefficient matrix corresponding to each gate, b denotes the bias term, and σ and tanh denote the sigmoid and tanh activation functions, respectively.
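Written out as code, one LSTM update performing exactly these equations (parameter names are illustrative):

```python
import torch

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM update as in the equations above; the dicts W and b hold
    the parameters of the forget, input, candidate and output gates."""
    f_t = torch.sigmoid(x_t @ W['xf'] + h_prev @ W['hf'] + b['f'])  # forget gate
    i_t = torch.sigmoid(x_t @ W['xi'] + h_prev @ W['hi'] + b['i'])  # input gate
    c_hat = torch.tanh(x_t @ W['xc'] + h_prev @ W['hc'] + b['c'])   # candidate state
    c_t = f_t * c_prev + i_t * c_hat                                # new cell state
    o_t = torch.sigmoid(x_t @ W['xo'] + h_prev @ W['ho'] + b['o'])  # output gate
    h_t = o_t * torch.tanh(c_t)                                     # new hidden state
    return h_t, c_t
```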
In a specific implementation, a match scoring network module is adopted for the matching query; its operating mechanism is as follows:
the concrete structure of the matching score network module is shown in fig. 8. Resulting in the embedding of the reconstructed new relational query sample with the most similarly grouped reference samples. For each query triplet (h) by the match scoring network modulei,rnew,ti) And reference set (h, [ r ]i]And t) matching and predicting the score result.
To measure the similarity between the two vectors, we use a loop processor f(m) to perform multi-step matching. The expression of the t-th processing step is as follows:
μ_1 = (h_i, r_new, t_i), μ_2 = (h, [r_i], t)
g'_t, c_t = RNN_match(μ_1, [g_{t-1}, μ_2], c_{t-1})
g_t = g'_t + μ_1
where RNN_match is an LSTM cell with input μ_1, hidden state g_t, and cell state c_t. After the T-th processing step, the last hidden state g_T is the refined embedding of the query triple: μ_1 = g_T. The match score module uses the inner product between μ_1 and μ_2 as the similarity score for the subsequent ranking optimization process.
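A sketch of this multi-step matching processor; how μ_2 conditions the recurrence is an assumption here (added into the hidden state), since the exact wiring is not spelled out above:

```python
import torch
import torch.nn as nn

class MatchScorer(nn.Module):
    """T-step refinement of the query embedding mu_1 against the reference
    embedding mu_2 with an LSTM cell (RNN_match); final score = inner product."""
    def __init__(self, dim: int, steps: int):
        super().__init__()
        self.cell = nn.LSTMCell(dim, dim)
        self.steps = steps

    def forward(self, mu1: torch.Tensor, mu2: torch.Tensor) -> torch.Tensor:
        # mu1, mu2: (batch, dim)
        g = torch.zeros_like(mu1)
        c = torch.zeros_like(mu1)
        for _ in range(self.steps):
            g_prime, c = self.cell(mu1, (g + mu2, c))  # assumption: condition on mu_2 via the hidden state
            g = g_prime + mu1                          # g_t = g'_t + mu_1
        return (g * mu2).sum(dim=-1)                   # inner-product similarity score
```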
Simulation verification was performed on the present embodiment.
The model generates a large amount of raw data during training; this raw data contains many missing values and much noise, which seriously degrades data quality and hampers the mining of effective information, and the quality of the data can be improved by methods such as data cutting. Data preprocessing segments the training data in the original public NELL data set, and the relations in the training set are divided into three bands (low, middle and high) according to their occurrence frequency, with corresponding magnitudes of [0, 200), [200, 400) and [400, +∞), respectively. Visualization analysis by relation occurrence frequency is shown in FIG. 4: the relation frequency in the data set follows a long-tail distribution and meets the requirement of model training.
Referring to the feature clustering encoding grouping simulation results for the sample triple relation r shown in fig. 5(a)-5(b), K-means center clustering is carried out on the public NELL data set and the WiKi data set respectively, and visualization analysis yields the optimal clustering results for different numbers K of center points. Taking K = 6 and K = 7 as examples, it can be observed that the cluster numbers N of the reference sample relation classes in the NELL and WiKi data sets are N = 3 and N = 5, respectively, with a high degree of clustering completion.
Experiments are carried out on the public NELL and WiKi data sets; relations without too many triples are selected as one-shot task relations, the latest dump is chosen, and inverse relations are deleted. The data sets select relations with fewer than 500 but more than 50 triples as one-shot tasks.
The FANC model and several baseline models are trained under the same pre-training parameters; for all implemented small sample learning methods, the entity embeddings are initialized with TransE. The entity neighbors are randomly sampled and fixed before model training. The model is trained with the relations and their entity pairs in the public NELL and WiKi training data, and tuned and evaluated with the relations in the validation data and test data, respectively. The experiments use the top-k hit rate (Hits@k) and the mean reciprocal rank (MRR) to evaluate the performance of the different methods; k is set to 1, 5 and 10.
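For reference, Hits@k and MRR can be computed from the 1-based rank of the correct entity for each query, for instance as in this small sketch:

```python
import numpy as np

def mrr_and_hits(ranks: np.ndarray, ks=(1, 5, 10)) -> dict:
    """ranks[i] is the 1-based rank of the correct tail entity for query i."""
    metrics = {'MRR': float(np.mean(1.0 / ranks))}
    for k in ks:
        metrics[f'Hits@{k}'] = float(np.mean(ranks <= k))
    return metrics

# Example: mrr_and_hits(np.array([1, 3, 12])) -> MRR ~ 0.47, Hits@10 ~ 0.67
```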
Taking the NELL data set as an example, the performance of all models on it is as follows:
The experimental results clearly show that, compared with traditional KG embedding methods, the FANC model performs better on both data sets, and that the small sample relation learning network model based on the attention clustering knowledge graph is more suitable for solving the small sample problem.
The results of experiments investigating the effect of the sample number K on the prediction results are shown in fig. 9(a)-9(d), taking the NELL data set as an example. For different K, the FANC model outperforms all baseline models, showing the effectiveness of the new model in small sample scenarios. While the effect of the traditional models stagnates once the sample number K grows to a certain degree, the FANC model still improves, which shows that in a small sample scenario a larger reference set does not always yield better performance: noise is introduced, because the small sample scenario makes performance sensitive to the available references. However, the Transformer encoding module in the new model preferentially selects the n groups of samples with the highest correlation, which significantly reduces the interference of irrelevant reference data and the introduction of noise, optimizing and improving the model prediction results.
The results of experiments investigating the effect of the number of neighbors of the reference sample on the prediction results are shown in fig. 10(a)-10(d), taking the NELL data set as an example. For each entity, traditional models encode more neighbors, sometimes yielding worse performance, because for some entity pairs there exist irrelevant local connections that provide noisy information to the model. The FANC model models triples with the clustering neighbor encoder, clusters them by relation to identify task-oriented roles, and assigns weights according to their different contributions. The reference set and the query set can capture the fine-grained semantics of the neighbors, and neighbor vectors with large correlation receive large weights, yielding a more expressive representation and effectively avoiding noise interference.
The small sample relation learning network flow framework based on the attention clustering knowledge graph provided by this embodiment optimizes the feature triples at a finer granularity through the relation clustering algorithm and the coupled sorting algorithm, improving the accuracy of the model prediction results and the convergence speed of the algorithm. The efficiency and low cost of the algorithm are verified through simulation experiments.
The invention has the advantages that:
(1) The invention provides a brand-new flow framework for small sample knowledge graph representation learning, used for similarity metric learning between query samples and reference samples, whose core consists of two encoding functions, f(·) and F(·). Thus, for any query relation r_q, as long as there is a known fact group [r_i], the model can pass through the entity and relation encoding modules into the match scoring network and predict the matching score between the check triple (h_q, r_q, t_q) and (h, [r_i], t), optimizing the accuracy of the prediction results.
(2) The invention provides a brand-new feature clustering encoding model based on relation r. The entity representation obtained by the clustering model retains the individual attributes produced by the current embedding model, and describes the correlation between the reference sample and the query sample at a finer granularity according to the different roles that different reference samples play for the query sample.
(3) The invention builds a new expected sorting module into the Transformer encoding model, which, based on the correlation between reference sample entity pairs and the grouped query sample entity pairs, preferentially selects the n groups of samples with the highest correlation, significantly reducing the interference of irrelevant reference data and the introduction of noise, optimizing and improving the model prediction results.
(4) The invention effectively utilizes the unique advantages of the attention mechanism and the entity coupling method, distributes contribution weights at a finer granularity, optimizes the proportion of score elements, fully utilizes the reference sample information, and strengthens entity embedding.
Example two
The embodiment provides an entity query system based on a knowledge graph small sample relation learning model, which specifically comprises the following modules:
the query vector encoding module is used for acquiring information interaction data to be queried and relation-entity pairs contained in the information interaction data, and encoding to obtain a vector to be queried;
the triple representation module is used for carrying out feature coding on head and tail entity pairs contained in the vector to be queried based on a knowledge graph to obtain corresponding triple representation;
and the matching search module is used for performing attention mechanism matching between the triple representation of the information interaction data to be queried and the triple representations of each group of pre-clustered information interaction reference small samples, to obtain the most similar group of information interaction reference small samples as the query result.
It should be noted that, each module in the present embodiment corresponds to each step in the first embodiment one to one, and the specific implementation process is the same, which is not described herein again.
EXAMPLE III
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the method for querying an entity based on a knowledge-graph small-sample relationship learning model as described above.
Example four
The embodiment provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the program to implement the steps in the entity query method based on the knowledge-graph small-sample relation learning model as described above.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. An entity query method based on a knowledge graph small sample relation learning model is characterized by comprising the following steps:
acquiring information interaction data to be queried and a relation-entity pair contained in the information interaction data, and coding to obtain a vector to be queried;
performing feature coding on head and tail entity pairs contained in a vector to be queried based on a knowledge graph to obtain corresponding triple representations;
and performing attention mechanism matching between the triple representation of the information interaction data to be queried and the triple representations of each group of pre-clustered information interaction reference small samples, to obtain the most similar group of information interaction reference small samples as the query result.
2. The entity query method based on the knowledge-graph small-sample relation learning model as claimed in claim 1, wherein each group of information interaction reference small samples adopts unsupervised clustering to generate a preset number of central points, and the vector representations of each relation in the reference small samples are clustered.
3. The method for entity query based on knowledge-graph small-sample relation learning model according to claim 1, characterized in that the similarity is characterized by Euclidean distance.
4. The entity query method based on the knowledge-graph small-sample relation learning model as claimed in claim 1, wherein in the process of performing attention mechanism matching, weight distribution between the triple representation of the information interaction data to be queried and the triple representation of each group of information interaction reference small samples is performed through a cosine similarity function.
5. The entity query method based on the knowledge-graph small-sample relation learning model as claimed in claim 1, wherein in the process of obtaining the most similar group of information interaction reference small samples, since the relation groupings carry different weights, the relevance between the information interaction data to be queried and each group of information interaction reference small samples is sorted according to the Euclidean distance formula.
6. An entity query system based on a knowledge graph small sample relation learning model is characterized by comprising:
the query vector encoding module is used for acquiring information interaction data to be queried and relation-entity pairs contained in the information interaction data, and encoding to obtain a vector to be queried;
the triple representation module is used for carrying out feature coding on head and tail entity pairs contained in the vector to be queried based on a knowledge graph to obtain corresponding triple representation;
and the matching search module is used for performing attention mechanism matching between the triple representation of the information interaction data to be queried and the triple representations of each group of pre-clustered information interaction reference small samples, to obtain the most similar group of information interaction reference small samples as the query result.
7. The system of claim 6, wherein in the matching and searching module, in the process of performing attention mechanism matching, weight distribution between the triple representation of the information interaction data to be queried and the triple representation of each set of information interaction reference small samples is performed through a cosine similarity function.
8. The system of claim 6, wherein in the matching search module, in the process of obtaining the most similar group of information interaction reference small samples, since the relation groupings carry different weights, the relevance between the information interaction data to be queried and each group of information interaction reference small samples is sorted according to the Euclidean distance formula.
9. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the steps of the method for querying an entity based on a knowledge-graph small-sample relation learning model according to any one of claims 1 to 5.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps in the method for entity query based on a knowledge-graph small-sample relationship learning model according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210242159.9A CN114625886A (en) | 2022-03-11 | 2022-03-11 | Entity query method and system based on knowledge graph small sample relation learning model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210242159.9A CN114625886A (en) | 2022-03-11 | 2022-03-11 | Entity query method and system based on knowledge graph small sample relation learning model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114625886A true CN114625886A (en) | 2022-06-14 |
Family
ID=81902065
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210242159.9A Pending CN114625886A (en) | 2022-03-11 | 2022-03-11 | Entity query method and system based on knowledge graph small sample relation learning model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114625886A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115422321A (en) * | 2022-07-26 | 2022-12-02 | 亿达信息技术有限公司 | Knowledge graph complex logic reasoning method and component and knowledge graph query and retrieval method |
CN115422321B (en) * | 2022-07-26 | 2024-03-26 | 亿达信息技术有限公司 | Knowledge graph complex logic reasoning method, component and knowledge graph query and retrieval method |
Legal Events

Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |