CN115526316A - Knowledge representation and prediction method combined with graph neural network - Google Patents

Knowledge representation and prediction method combined with graph neural network Download PDF

Info

Publication number
CN115526316A
CN115526316A (application CN202211076579.0A)
Authority
CN
China
Prior art keywords
entity
knowledge
neural network
representation
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211076579.0A
Other languages
Chinese (zh)
Inventor
孙明
闫科
田玲
伍艳萍
李耘辉
王兆坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202211076579.0A
Publication of CN115526316A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/02 Knowledge representation; Symbolic representation
    • G06N5/022 Knowledge engineering; Knowledge acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/02 Knowledge representation; Symbolic representation
    • G06N5/022 Knowledge engineering; Knowledge acquisition
    • G06N5/025 Extracting rules from data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a knowledge representation and prediction method combining a graph neural network, which comprises the following steps: S1: acquiring source data and constructing a negative sample set from it; S2: constructing an entity set; S3: obtaining long-range entity association information using a graph neural network; S4: constructing and training a knowledge representation model to obtain entity representation vectors and relation representation vectors; S5: constructing a knowledge prediction model and performing knowledge prediction with it. Based on the theoretical foundation of graph neural networks, the invention provides a knowledge representation and prediction method combining a graph neural network, aimed at knowledge representation and prediction in knowledge graphs. The graph neural network is used to fuse the long-range association information of entities and to learn representation vectors for them, and finally a knowledge prediction service is provided.

Description

Knowledge representation and prediction method combined with graph neural network
Technical Field
The invention belongs to the technical field of knowledge graphs, and particularly relates to a knowledge representation and prediction method combined with a graph neural network.
Background
After large-scale entities are connected to form a graph, reading-comprehension capability can be provided to intelligent machines; where requirements demand big data and massive knowledge for support, this improves the efficiency of intelligent data processing and promotes the development of intelligent applications. Existing knowledge graphs contain seemingly large numbers of entities and relations but suffer from missing knowledge, so entity representation learning urgently needs to be completed and knowledge prediction performed on the basis of entity representations. Existing knowledge representation methods mainly adopt models based on graph neural networks, which can use entity context information to learn entity features.
In recent years, models combining graph neural networks have achieved certain results. The R-GCN model defines a transformation for each relation and models the knowledge graph at fine granularity from the perspective of relation direction. Chao et al. proposed the weighted graph convolutional network (WGCN), which effectively uses the type information of relations by assigning different weights to different relations; it models the fusion of neighborhood information in the knowledge graph through the WGCN and learns entity embedding representations accordingly, then feeds the entity and relation embeddings into a Conv-TransE-based scoring module to compute triple scores, thereby completing the triple completion task. CompGCN distinguishes the directions of edges, applies different attention networks to edges in different directions, and combines and fuses entity neighborhood information using different entity relations. These models improve the expressive power of entities by capturing entity context information, but they consider only first-order neighbor information when learning the representation of a target entity, ignoring the semantic information that long-range entities contribute to the target entity, which leads to insufficient use of global semantic information.
Disclosure of Invention
The invention aims to solve the problems that the prior art lacks knowledge mining capability and makes insufficient use of association information in knowledge representation, and provides a knowledge representation and prediction method combining a graph neural network.
The technical scheme of the invention is as follows: a knowledge representation and prediction method combining a graph neural network comprises the following steps:
s1: acquiring source data, and constructing a negative sample set according to the source data;
s2: constructing an entity set;
s3: obtaining long-range entity association information by using a graph neural network according to the negative sample set and the entity set;
s4: constructing and training a knowledge representation model according to the negative sample set, the entity set and the long-range entity association information to obtain entity representation vectors and relation representation vectors;
s5: constructing a knowledge prediction model according to the entity representation vectors and the relation representation vectors, and performing knowledge prediction with the knowledge prediction model.
Further, in step S1, a specific method for constructing the negative sample set is as follows: preprocessing source data, taking the positive example triples obtained after preprocessing as a positive sample set, randomly replacing head entities of the positive example triples to obtain negative example triples, and constructing a negative sample set according to the negative example triples.
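A minimal sketch of this head-corruption negative sampling in Python; the uniform choice of replacement entity and the fixed seed are illustrative assumptions, not details from the patent:

```python
import random

def build_negative_samples(positive_triples, entities, seed=0):
    """For each positive (head, relation, tail), corrupt the head entity
    with a different, uniformly sampled entity (assumed corruption scheme)."""
    rng = random.Random(seed)
    negatives = []
    for h, r, t in positive_triples:
        candidates = [e for e in entities if e != h]
        negatives.append((rng.choice(candidates), r, t))
    return negatives

positives = [("paris", "capital_of", "france"), ("rome", "capital_of", "italy")]
entities = ["paris", "rome", "berlin", "france", "italy"]
negs = build_negative_samples(positives, entities)
```

Each negative triple keeps the relation and tail of its positive counterpart but swaps in a different head entity.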
Further, in step S2, a specific method for constructing the entity set is as follows: calculating the semantic distance between each entity and the rest entities, and acquiring the entities with the semantic distances smaller than a set distance threshold value to obtain semantic space neighborhoods of the entities; and in the semantic space neighborhood, acquiring an entity set with topological hop counts of 3 and 4 of each entity and the rest entities.
Further, in step S2, the semantic distance d(v_i, v_j) is calculated as:

d(v_i, v_j) = ||x_i^(0) - x_j^(0)||_2

wherein v_i is entity i, v_j is entity j, and x_i^(0) and x_j^(0) are the initial representation vectors of entities i and j, respectively.
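A small sketch of the semantic-neighborhood construction, under the assumption that the semantic distance is the Euclidean distance between initial representation vectors (the patent's exact metric is not shown in the text):

```python
import numpy as np

def semantic_neighborhood(X, i, rho):
    """Indices of entities whose Euclidean distance to entity i is below rho.
    X holds one initial representation vector per row."""
    d = np.linalg.norm(X - X[i], axis=1)
    return {j for j in range(len(X)) if j != i and d[j] < rho}

X = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 4.0]])
nbrs = semantic_neighborhood(X, 0, rho=0.5)
# entity 1 is 0.1 away (inside the threshold); entity 2 is 5.0 away
```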
Further, in step S3, the specific method for acquiring the long-range entity association information is as follows: acquiring the intersection of the negative sample set and the entity set, and fusing and updating this intersection with a graph neural network to obtain the long-range entity association information.
Further, in step S3, the long-range entity association information includes the feature representation h_u obtained by linearly transforming the initial feature representation, the attention value d_u computed through the activation function, the weight coefficient α_uv of entity u with respect to target entity v, and the target entity feature representation h′_v obtained by fusing all sampled entities. The calculation formulas are, respectively:

h_u = W_1 · x_u
d_u = LeakyReLU(W_2 · h_u)
α_uv = softmax(d_u)
h′_v = σ(Σ_{u∈N(v)} α_uv · h_u)

wherein x_u is the initial feature representation of the long-range sampled entity, W_1 and W_2 are the linear transformation matrices of the corresponding steps, LeakyReLU(·) is an activation function, softmax(·) is the normalization function over N(v), v is the target entity to be represented, N(v) is the set of long-range associated entities obtained by sampling for target entity v, and σ(·) is an activation function.
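The four formulas above can be sketched with NumPy. Dimensions, the random initialization, and the choice of tanh for the unspecified activation σ are all assumptions for illustration:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_long_range(x_samples, W1, W2):
    """Attention fusion over the sampled long-range entities N(v):
    h_u = W1 x_u, d_u = LeakyReLU(W2 h_u), alpha = softmax(d),
    h'_v = tanh(sum_u alpha_u h_u). tanh stands in for the unspecified sigma."""
    H = x_samples @ W1.T                  # h_u for each sampled entity
    d = leaky_relu(H @ W2.T).squeeze(-1)  # one attention value per entity
    alpha = softmax(d)                    # weights over N(v), summing to 1
    return np.tanh(alpha @ H)             # fused target representation h'_v

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))   # 4 sampled long-range entities, dimension 8
W1 = rng.normal(size=(8, 8))
W2 = rng.normal(size=(1, 8))
h_v = fuse_long_range(x, W1, W2)
```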
Further, in step S4, the loss function L(θ) of the knowledge representation model is expressed as:

L(θ) = Σ_{s∈S} Σ_{s′∈S′} max(f(s) - f(s′) + γ, 0)

wherein S is the positive sample set, s is a positive example triple in the positive sample set, S′ is the negative sample set, s′ is a negative example triple in the negative sample set, f(·) is the score function that predicts whether a triple holds, and γ is the soft margin parameter between positive and negative samples during training.
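A direct transcription of this margin loss; the score convention (lower score is better for true triples, matching the max(f(s) - f(s′) + γ, 0) form) and the example values are assumptions:

```python
def margin_loss(pos_scores, neg_scores, gamma):
    """L = sum over (s, s') pairs of max(f(s) - f(s') + gamma, 0),
    where pos_scores are f(s) for positives and neg_scores f(s') for negatives."""
    return sum(max(fs - fs2 + gamma, 0.0)
               for fs in pos_scores
               for fs2 in neg_scores)

# toy scores: positives score lower (better) than negatives
loss = margin_loss([0.2, 0.5], [1.5, 2.0], gamma=2.0)
# pairwise hinge terms: 0.7 + 0.2 + 1.0 + 0.5 = 2.4
```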
Further, in step S5, a specific method for constructing the knowledge prediction model is as follows: and inputting the entity expression vector and the relation expression vector into a convolutional neural network, acquiring interactive features between the entity and the relation, and calculating scores of the interactive features by using a classifier to complete the construction of a knowledge prediction model.
Further, in step S5, the loss function L′(θ) of the knowledge prediction model is expressed as:

L′(θ) = Σ_{(h,r,t)∈S∪S′} log(1 + exp(-y_{(h,r,t)} · f(h,r,t))) + λ||W||_2^2

wherein h is the head entity of a triple sample, r is the relation, t is the tail entity, S is the positive sample set, S′ is the negative sample set, y_{(h,r,t)} is the label of the training data (1 for positive sample triples, -1 for negative sample triples), f(·) is the score function that predicts whether a triple holds, λ is the regularization parameter, and W is the convolution kernel parameter.
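A hedged sketch of a soft-margin loss of this shape; the log(1 + exp(-y·f)) data term follows the ±1 labels described in the text, while the squared-norm regularizer on W is an assumption about the exact form:

```python
import math

def soft_margin_loss(scores, labels, weights, lam=1e-3):
    """Soft-margin loss: sum_i log(1 + exp(-y_i * f_i)) + lam * ||W||^2.
    scores are model scores f(h,r,t); labels are +1/-1; weights flatten W."""
    data = sum(math.log1p(math.exp(-y * f)) for f, y in zip(scores, labels))
    reg = lam * sum(w * w for w in weights)
    return data + reg

# toy values: a confident positive (score 2.0) and a confident negative (-1.5)
loss = soft_margin_loss([2.0, -1.5], [1, -1], weights=[0.1, -0.2])
```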
The invention has the beneficial effects that: based on the theoretical foundation of graph neural networks, the invention provides a knowledge representation and prediction method combining a graph neural network, aimed at knowledge representation and prediction in knowledge graphs. The graph neural network is used to fuse the long-range association information of entities and to learn representation vectors for them, and finally a knowledge prediction service is provided.
Drawings
FIG. 1 is a flow chart of the knowledge representation and prediction method;
FIG. 2 illustrates the long-range associated entity sampling of the present invention;
FIG. 3 illustrates the graph neural network information fusion process of the present invention;
FIG. 4 is a schematic diagram of the knowledge prediction model of the present invention.
Detailed Description
The embodiments of the present invention will be further described with reference to the accompanying drawings.
As shown in FIG. 1, the invention provides a knowledge representation and prediction method combining a graph neural network, which comprises the following steps:
s1: acquiring source data, and constructing a negative sample set according to the source data;
s2: constructing an entity set;
s3: obtaining long-range entity association information by using a graph neural network according to the negative sample set and the entity set;
s4: constructing and training a knowledge representation model according to the negative sample set, the entity set and the long-range entity association information to obtain entity representation vectors and relation representation vectors;
s5: constructing a knowledge prediction model according to the entity representation vectors and the relation representation vectors, and performing knowledge prediction with the knowledge prediction model.
In the embodiment of the present invention, in step S1, a specific method for constructing a negative sample set is as follows: preprocessing source data, taking the positive example triples obtained after preprocessing as a positive sample set, randomly replacing head entities of the positive example triples to obtain negative example triples, and constructing a negative sample set according to the negative example triples. The positive example triplet label is 1 and the negative example triplet label is 0.
In the embodiment of the present invention, in step S2, the specific method for constructing the entity set is as follows: calculating the semantic distance between each entity and the remaining entities, and acquiring the entities whose semantic distance is smaller than a set distance threshold to obtain the semantic space neighborhood of each entity; then, within the semantic space neighborhood, acquiring the set of entities whose topological hop count to each entity is 3 or 4. That is, using the topological hop count as the criterion, the set of entities whose hop distance to the target entity on the topological structure equals the preset hop count is acquired.
In the embodiment of the present invention, in step S2, the semantic distance d(v_i, v_j) is calculated as:

d(v_i, v_j) = ||x_i^(0) - x_j^(0)||_2

wherein v_i is entity i, v_j is entity j, and x_i^(0) and x_j^(0) are the initial representation vectors of entities i and j, respectively.
In the embodiment of the present invention, in step S3, the specific method for acquiring the long-range entity association information is: acquiring the intersection of the negative sample set and the entity set, and fusing and updating this intersection with a graph neural network to obtain the long-range entity association information.
The purpose of long-range association information fusion is to fuse information from entities that are topologically far from the target entity but associated with it. Given a distance parameter ρ, the set of entities whose semantic distance to the target entity is smaller than ρ is obtained: N_s(v_i) = {v_j | v_j ∈ E, d(v_i, v_j) < ρ}. To obtain the long-range association information of entities, the entities far from the target entity must be found based on the topological structure of the graph. First, the first-order neighborhood of each entity is obtained and represented by an adjacency matrix A, where a_ij = 1 if the triple (v_i, r, v_j) exists in the dataset and a_ij = 0 otherwise.

According to the properties of Boolean operations, the adjacency matrix of nodes reachable within k hops is computed; this reachability matrix describes the reachability of each node in the directed graph over paths of a given length:

A^(k) = A^(k-1) · A

wherein a_ij^(k) is the number of paths of length k from entity i to entity j. If a_ij^(k) > 0, entity j is reachable from entity i in k hops. Taking hop counts k = 3 and 4 yields the long-range associated entity set of the target entity:

N_T(v_i) = {v_j | a_ij^(3) > 0 or a_ij^(4) > 0}
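The k-hop reachability step can be sketched via integer powers of the adjacency matrix, whose entries count paths of length k; the chain graph below is an illustrative assumption:

```python
import numpy as np

def k_hop_reachable(A, ks=(3, 4)):
    """Pairs (i, j) such that j is reachable from i in exactly k hops,
    for each k in ks, using path counts from powers of the adjacency matrix."""
    n = A.shape[0]
    reach = {k: set() for k in ks}
    P = np.eye(n, dtype=int)
    for k in range(1, max(ks) + 1):
        P = P @ A  # P[i, j] now counts length-k paths from i to j
        if k in ks:
            reach[k] = {(i, j) for i in range(n) for j in range(n) if P[i, j] > 0}
    return reach

# chain graph 0 -> 1 -> 2 -> 3 -> 4
A = np.zeros((5, 5), dtype=int)
for i in range(4):
    A[i, i + 1] = 1
r = k_hop_reachable(A)
# 3-hop pairs: (0, 3) and (1, 4); 4-hop pair: (0, 4)
```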
Taking the intersection of the two sets gives the set of entities that are far from the target entity but closely semantically associated with it: N(v_i) = N_s(v_i) ∩ N_T(v_i). A graph neural network is then used to fuse the entity information in this set and update the hidden features of the target entity: first, information from all entities in the set is aggregated in a self-attention manner, the relative attention weight of each entity to be fused is obtained through LeakyReLU and softmax, and the entity information is then weighted and summed to obtain the entity embedding.
In the embodiment of the present invention, in step S3, the long-range entity association information includes the feature representation h_u obtained by linearly transforming the initial feature representation, the attention value d_u computed through the activation function, the weight coefficient α_uv of entity u with respect to target entity v, and the target entity feature representation h′_v obtained by fusing all sampled entities. The calculation formulas are, respectively:

h_u = W_1 · x_u
d_u = LeakyReLU(W_2 · h_u)
α_uv = softmax(d_u)
h′_v = σ(Σ_{u∈N(v)} α_uv · h_u)

wherein x_u is the initial feature representation of the long-range sampled entity, W_1 and W_2 are the linear transformation matrices of the corresponding steps, LeakyReLU(·) is an activation function, softmax(·) is the normalization function over N(v), v is the target entity to be represented, N(v) is the set of long-range associated entities obtained by sampling for target entity v, and σ(·) is an activation function.
In the embodiment of the present invention, as shown in FIG. 4, in step S4, the loss function L(θ) of the knowledge representation model is expressed as:

L(θ) = Σ_{s∈S} Σ_{s′∈S′} max(f(s) - f(s′) + γ, 0)

wherein S is the positive sample set, s is a positive example triple in the positive sample set (comprising a head entity, a relation and a tail entity), S′ is the negative sample set, s′ is a negative example triple in the negative sample set (likewise comprising a head entity, a relation and a tail entity), f(·) is the score function that predicts whether a triple holds, and γ is the soft margin parameter between positive and negative samples during training.
In the embodiment of the present invention, in step S5, a specific method for constructing the knowledge prediction model is as follows: and inputting the entity expression vector and the relation expression vector into a convolutional neural network, acquiring the interactive characteristics between the entity and the relation, and calculating the score of the interactive characteristics by using a classifier to complete the construction of a knowledge prediction model.
After the final representation vectors of entities and relations are obtained, knowledge prediction is performed. First, a knowledge prediction model is trained. The model is built on a convolutional neural network: entity and relation representations are input into the model in triple form, the interaction features between entities and relations are captured, and a classifier finally computes the probability that the triple holds; this probability is the score of the triple. The model is then trained with a soft-margin loss function according to the labels of the sample dataset:
and giving a triple of missing head entities or tail entities, filling all the entities in the knowledge graph into the triple, and obtaining the establishment probability score of each filled triple, wherein the triple with the highest score is the correct triple, thereby completing knowledge prediction.
In the embodiment of the present invention, in step S5, the loss function L′(θ) of the knowledge prediction model is expressed as:

L′(θ) = Σ_{(h,r,t)∈S∪S′} log(1 + exp(-y_{(h,r,t)} · f(h,r,t))) + λ||W||_2^2

f(h,r,t) = σ(vec(σ([h; r; t] * Ω)) · W)

wherein h is the head entity of a triple sample, r is the relation, t is the tail entity, S is the positive sample set, S′ is the negative sample set, y_{(h,r,t)} is the label of the training data (1 for positive sample triples, -1 for negative sample triples), f(·) is the score function that predicts whether a triple holds, λ is the regularization parameter, W is the convolution kernel parameter, σ(·) is an activation function, and Ω is the 3×3 convolution kernel parameter.
In the embodiment of the invention, the knowledge representation and prediction method combined with a graph neural network is divided into a training part and a testing part. The training part is further divided into training of the representation model and of the triple scoring model, and mainly yields the trained knowledge representation vectors and the knowledge prediction model; the testing part performs knowledge prediction directly with the trained models.
The training part is implemented as follows: training is divided into two parts, knowledge representation and triple scoring; the knowledge representation part involves the long-range association information fusion, as shown in FIG. 2. The triple scoring model is shown in FIG. 3.
The whole process comprises model training and testing, and the specific implementation process is as follows:
1) Knowledge representation model training (network parameters of knowledge representation model)
The training data uses a batch size of 128, the semantic distance threshold in step S2 is 0.5, and the long-range hop counts in step S3 are k = 3 and 4. The loss function value of the model is calculated in each training round, and the model parameters are adjusted with the Adam optimization algorithm based on this value. The model is trained for up to 3000 rounds with a representation vector dimension of 200; training terminates when the current loss value exceeds the previous one or the preset number of rounds is reached.
2) Knowledge prediction model training (network parameters of triple scoring model)
The training data uses a batch size of 128. The loss function value of the model is calculated in each training round, and the model parameters are adjusted with the Adam optimization algorithm based on this value. The model is trained for up to 150 rounds; training terminates when the current loss value exceeds the previous one or the preset number of rounds is reached.
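The stopping rule described for both training stages can be sketched as follows; the loss trajectory is illustrative, and "revert to the previous epoch" is an assumption about what happens when the loss rises:

```python
def train_with_early_stop(loss_per_epoch, max_epochs):
    """Stop when the current epoch's loss exceeds the previous epoch's loss,
    or when the epoch budget is exhausted; return (epochs kept, best loss)."""
    prev = float("inf")
    for epoch, loss in enumerate(loss_per_epoch[:max_epochs], start=1):
        if loss > prev:
            return epoch - 1, prev   # loss went up: keep the previous epoch
        prev = loss
    return min(len(loss_per_epoch), max_epochs), prev

# loss rises at epoch 4, so training stops after epoch 3
epochs_run, best = train_with_early_stop([1.0, 0.8, 0.7, 0.75, 0.5],
                                         max_epochs=3000)
```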
The working principle and process of the invention are as follows: first, long-range associated entities of each entity are sampled to construct an information-fusion neighborhood for the graph neural network; then the graph neural network model fuses the neighborhood information to learn representation vectors for entities; finally, the learned entity and relation representations are input into a knowledge scorer built from a convolutional network. For a triple with a missing entity, all entities are input as candidates, the filled triples are scored and ranked in descending order, and the entity in the highest-scoring triple is the final predicted entity, completing the knowledge prediction task.
The invention has the beneficial effects that: based on the theoretical foundation of graph neural networks, the invention provides a knowledge representation and prediction method combining a graph neural network, aimed at knowledge representation and prediction in knowledge graphs. The graph neural network is used to fuse the long-range association information of entities and to learn representation vectors for them, and finally a knowledge prediction service is provided.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention, and that the invention should not be construed as limited to the specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from its spirit, and these changes and combinations remain within the scope of the invention.

Claims (9)

1. A knowledge representation and prediction method in conjunction with a graph neural network, comprising the steps of:
s1: acquiring source data, and constructing a negative sample set according to the source data;
s2: constructing an entity set;
s3: obtaining long-range entity association information by using a graph neural network according to the negative sample set and the entity set;
s4: constructing and training a knowledge representation model according to the negative sample set, the entity set and the long-range entity association information to obtain entity representation vectors and relation representation vectors;
s5: constructing a knowledge prediction model according to the entity representation vectors and the relation representation vectors, and performing knowledge prediction with the knowledge prediction model.
2. The knowledge representation and prediction method in combination with a graph neural network according to claim 1, wherein in step S1, the specific method for constructing the negative sample set is: preprocessing the source data, taking the positive example triples obtained after preprocessing as the positive sample set, randomly replacing the head entities of the positive example triples to obtain negative example triples, and constructing the negative sample set from the negative example triples.
3. The knowledge representation and prediction method in combination with a graph neural network according to claim 1, wherein in step S2, the specific method for constructing the entity set is: calculating the semantic distance between each entity and the remaining entities, and acquiring the entities whose semantic distance is smaller than a set distance threshold to obtain the semantic space neighborhood of each entity; and, within the semantic space neighborhood, acquiring the set of entities whose topological hop count to each entity is 3 or 4.
4. The knowledge representation and prediction method in combination with a graph neural network according to claim 3, wherein in step S2, the semantic distance d(v_i, v_j) is calculated as:

d(v_i, v_j) = ||x_i^(0) - x_j^(0)||_2

wherein v_i is entity i, v_j is entity j, and x_i^(0) and x_j^(0) are the initial representation vectors of entities i and j, respectively.
5. The knowledge representation and prediction method in combination with a graph neural network according to claim 1, wherein in step S3, the specific method for obtaining the long-range entity association information is: acquiring the intersection of the negative sample set and the entity set, and fusing and updating this intersection with a graph neural network to obtain the long-range entity association information.
6. The knowledge representation and prediction method in combination with a graph neural network according to claim 1, wherein in step S3, the long-range entity association information includes the feature representation h_u obtained by linearly transforming the initial feature representation, the attention value d_u computed through the activation function, the weight coefficient α_uv of entity u with respect to target entity v, and the target entity feature representation h′_v obtained by fusing all sampled entities, the calculation formulas being, respectively:

h_u = W_1 · x_u
d_u = LeakyReLU(W_2 · h_u)
α_uv = softmax(d_u)
h′_v = σ(Σ_{u∈N(v)} α_uv · h_u)

wherein x_u is the initial feature representation of the long-range sampled entity, W_1 and W_2 are the linear transformation matrices of the corresponding steps, LeakyReLU(·) is an activation function, softmax(·) is the normalization function over N(v), v is the target entity to be represented, N(v) is the set of long-range associated entities obtained by sampling for target entity v, and σ(·) is an activation function.
7. The knowledge representation and prediction method in combination with a graph neural network according to claim 1, wherein in step S4, the loss function L(θ) of the knowledge representation model is expressed as:

L(θ) = Σ_{s∈S} Σ_{s′∈S′} max(f(s) - f(s′) + γ, 0)

wherein S is the positive sample set, s is a positive example triple in the positive sample set, S′ is the negative sample set, s′ is a negative example triple in the negative sample set, f(·) is the score function that predicts whether a triple holds, and γ is the soft margin parameter between positive and negative samples during training.
8. The knowledge representation and prediction method in combination with a graph neural network according to claim 1, wherein in step S5, the specific method for constructing the knowledge prediction model is: inputting the entity representation vectors and relation representation vectors into a convolutional neural network, acquiring the interaction features between entities and relations, and calculating scores of the interaction features with a classifier to complete construction of the knowledge prediction model.
9. The knowledge representation and prediction method in combination with a graph neural network according to claim 1, wherein in step S5, the loss function L′(θ) of the knowledge prediction model is expressed as:

L′(θ) = Σ_{(h,r,t)∈S∪S′} log(1 + exp(-y_{(h,r,t)} · f(h,r,t))) + λ||W||_2^2

wherein h is the head entity of a triple sample, r is the relation, t is the tail entity, S is the positive sample set, S′ is the negative sample set, y_{(h,r,t)} is the label of the training data (1 for positive sample triples, -1 for negative sample triples), f(·) is the score function that predicts whether a triple holds, λ is the regularization parameter, and W is the convolution kernel parameter.
CN202211076579.0A 2022-09-05 2022-09-05 Knowledge representation and prediction method combined with graph neural network Pending CN115526316A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211076579.0A CN115526316A (en) 2022-09-05 2022-09-05 Knowledge representation and prediction method combined with graph neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211076579.0A CN115526316A (en) 2022-09-05 2022-09-05 Knowledge representation and prediction method combined with graph neural network

Publications (1)

Publication Number Publication Date
CN115526316A true CN115526316A (en) 2022-12-27

Family

ID=84698028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211076579.0A Pending CN115526316A (en) 2022-09-05 2022-09-05 Knowledge representation and prediction method combined with graph neural network

Country Status (1)

Country Link
CN (1) CN115526316A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116302481A (en) * 2023-01-06 2023-06-23 上海交通大学 Resource allocation method and system based on sparse knowledge graph link prediction
CN116302481B (en) * 2023-01-06 2024-05-14 上海交通大学 Resource allocation method and system based on sparse knowledge graph link prediction
CN116737745A (en) * 2023-08-16 2023-09-12 杭州州力数据科技有限公司 Method and device for updating entity vector representation in supply chain network diagram
CN116737745B (en) * 2023-08-16 2023-10-31 杭州州力数据科技有限公司 Method and device for updating entity vector representation in supply chain network diagram
CN117932089A (en) * 2024-03-25 2024-04-26 南京中医药大学 Knowledge graph-based data analysis method

Similar Documents

Publication Publication Date Title
Ruiz et al. Learning to simulate
US20230196117A1 (en) Training method for semi-supervised learning model, image processing method, and device
CN115526316A (en) Knowledge representation and prediction method combined with graph neural network
CN109636049B (en) Congestion index prediction method combining road network topological structure and semantic association
CN110781262B (en) Semantic map construction method based on visual SLAM
CN110569901A (en) Channel selection-based countermeasure elimination weak supervision target detection method
CN114386694A (en) Drug molecule property prediction method, device and equipment based on comparative learning
CN112200266B (en) Network training method and device based on graph structure data and node classification method
CN114283316A (en) Image identification method and device, electronic equipment and storage medium
CN112766280A (en) Remote sensing image road extraction method based on graph convolution
CN113987236B (en) Unsupervised training method and unsupervised training device for visual retrieval model based on graph convolution network
CN111000492B (en) Intelligent sweeper behavior decision method based on knowledge graph and intelligent sweeper
CN112559764A (en) Content recommendation method based on domain knowledge graph
CN112949929B (en) Knowledge tracking method and system based on collaborative embedded enhanced topic representation
CN114943017A (en) Cross-modal retrieval method based on similarity zero sample hash
CN115270007A (en) POI recommendation method and system based on mixed graph neural network
CN116992151A (en) Online course recommendation method based on double-tower graph convolution neural network
CN115051925A (en) Time-space sequence prediction method based on transfer learning
CN114202035A (en) Multi-feature fusion large-scale network community detection algorithm
CN112529025A (en) Data processing method and device
CN111079900B (en) Image processing method and device based on self-adaptive connection neural network
WO2023143570A1 (en) Connection relationship prediction method and related device
Zhang et al. Road topology extraction from satellite imagery by joint learning of nodes and their connectivity
CN117037483A (en) Traffic flow prediction method based on multi-head attention mechanism
CN113360772B (en) Interpretable recommendation model training method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination