CN111428744A - Heterogeneous information network representation learning method and system for reserving type sequence information
- Publication number: CN111428744A
- Application number: CN201910022868.4A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/24 — Pattern recognition; classification techniques
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
- G06Q50/01 — ICT specially adapted for social networking
Abstract
The invention discloses a heterogeneous information network representation learning method and system that preserve type sequence information. The invention provides a type-sequence-preserving representation learning model for heterogeneous information networks: a novel encoder-decoder framework that uses a type-aware GRU as its gating unit together with a decay function, and that can map a heterogeneous information network into a low-dimensional space. The method extends representation learning on heterogeneous information networks to the sequence level by combining the type-aware GRU with sequence information, and retains representative sequence information using the decay function. Mapping the heterogeneous information network into a low-dimensional space facilitates various network analysis applications, such as classification, clustering and visualization of heterogeneous information network data, e.g. social networks.
Description
Technical Field
The invention belongs to the field of information technology, relates to a deep learning framework, and in particular relates to a heterogeneous information network representation learning method.
Background
A social network is a heterogeneous information network comprising multiple types of nodes, such as users and microblogs, and multiple types of relationships, such as comments. Because the various types of information in a heterogeneous information network carry rich semantics, representation learning for heterogeneous information networks is of great significance for downstream applications on the data, such as semantic classification, clustering, prediction and visualization.
Existing heterogeneous information network representation learning methods lose rich intrinsic semantic information, in particular type sequence information. A type sequence is a sequence composed of node types and relation types, and can express rich semantic relations among nodes. For example, a type sequence such as Actor-Movie represents the types of movies in which an actor stars. Based on this type sequence, existing methods generate an unordered node set as the context for the node Actor1, so the sequence information is lost during the context-extraction phase. A better context-extraction result is a set of sequences. Existing heterogeneous information network representation learning methods lack a sequence-preserving mechanism, so they cannot fully incorporate type sequence information. Heterogeneous information network representations that retain type sequence information are therefore well worth studying.
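The difference between an unordered context set and a sequence-preserving context can be sketched in a few lines of Python. The graph fragment, node names and relation labels below are illustrative only, not taken from the patent:

```python
# Illustrative sketch: the same walk viewed as an unordered context set
# versus an order-preserving type sequence.
# Each type-sequence element is (node, node_type, relation_type_to_previous).
walk = [("Actor1", "Actor", None),
        ("Movie1", "Movie", "acts_in"),
        ("Director1", "Director", "directed_by")]

# Existing methods keep only the node set -- order and types are lost.
context_set = {node for node, _, _ in walk}

# A sequence-preserving context keeps the full ordered element list.
type_sequence = walk[1:]  # elements reachable from the start node, in order

print(sorted(context_set))
print([n for n, _, _ in type_sequence])
```

The set answers "which nodes co-occur with Actor1?", while the sequence additionally answers "in what order, and via which relations?" — the information the patent argues must be preserved.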
Heterogeneous information network representation learning that retains type sequence information is a challenging task, with two aspects: 1) how to retain type sequence information: existing heterogeneous information network representation learning methods cannot preserve complex type sequence information, so an effective model is needed; 2) which type sequence information to retain: the theoretically best approach is to retain all type sequence information in the heterogeneous information network, but the cost grows exponentially as sequence length increases. It is therefore well worth exploring which type sequence information to retain.
Many different methods have been introduced in network representation learning research. Classical methods project the data matrix into a low-dimensional space, e.g. IsoMap, MDS and graph factorization, but suffer from high computational complexity. Researchers have therefore proposed various neural-network-based representation learning methods, such as DeepWalk (Perozzi, B., Al-Rfou, R., & Skiena, S. (2014). DeepWalk: Online learning of social representations), which converts the network structure into linear node sequences via fixed-length random walks, and LINE (Tang, J., Qu, M., Wang, M., Zhang, M., Yan, J., & Mei, Q. (2015). LINE: Large-scale information network embedding), which learns embeddings that preserve local network proximity.
Disclosure of Invention
In view of the foregoing problems, an object of the present invention is to provide a heterogeneous information network representation learning method, which provides a heterogeneous information network representation learning model with a reserved type sequence to solve the representation learning problem of a heterogeneous information network, thereby facilitating various downstream analysis applications such as classification, clustering, visualization, and the like of heterogeneous information network data of a social network and the like.
In recent years the encoder-decoder framework has demonstrated its superiority in handling sequence-to-sequence (seq2seq) prediction problems. The encoder reads the input sequence and creates a low-dimensional vector representation of it; given that representation, the decoder generates an output sequence. For example, Skip-Thought (Kiros, R., Zhu, Y., Salakhutdinov, R. R., Zemel, R., Urtasun, R., Torralba, A., & Fidler, S. (2015). Skip-thought vectors. In Advances in Neural Information Processing Systems (pp. 3294-3302)) uses an encoder network to encode an input sentence and a decoder network to predict the surrounding sentences, thereby obtaining a representation of the input sentence. Inspired by this, in order to learn a representation of each node in the heterogeneous information network while retaining type sequence information, the invention uses an encoder-decoder framework to learn the representation. Each node is treated as an input sequence of length 1, and the set of type sequences of the current node is treated as the set of output sequences, so the learned low-dimensional vector is a representation of the current node that preserves its type sequence information. RNNs (Recurrent Neural Networks) are the preferred architecture for sequence problems, and the GRU (Gated Recurrent Unit) is a common gating mechanism. For the first challenge (how to retain type sequence information), the invention designs an RNN encoder-decoder architecture with a type-aware GRU as its gating mechanism, fusing type information and sequence information. For the second challenge (which type sequence information to retain), the invention observes that the representativeness of a type sequence is highly correlated with its distance to the current node.
Intuitively, the closer, the better. The invention designs a nearest-priority strategy and, on that basis, a decay function over hop-count distance to retain representative type sequence information.
The heterogeneous information network representation learning model is a novel RNN encoder-decoder framework that uses type-aware GRUs as gating units and includes a decay function; it maps the heterogeneous information network into a low-dimensional space. The method extends representation learning on heterogeneous information networks to the sequence level by combining the type-aware GRU with sequence information and retaining representative sequence information via the decay function. The goal of the model is to find node representations that are useful for predicting the surrounding type sequences.
Specifically, the invention adopts the following technical scheme:
a heterogeneous information network representation learning method for retaining type sequence information comprises the following steps:
1) generating a type sequence set corresponding to each node in the heterogeneous information network;
2) each node in the heterogeneous information network learns the hidden state through a Type-aware GRU in an encoder and a Type vector of the node, and the hidden state is used as a representation result of the node, so that the current node is mapped to a low-dimensional vector representation result through the encoder;
3) taking the hidden state obtained by the encoder as input, predicting the current element of a Type sequence by using a Type-aware GRU in the decoder in combination with the previous node Type and relationship Type information, and reconstructing the Type sequence of each node by the decoder;
4) combining the type-aware GRUs of each time step by using a decay function, and retaining representative type sequence information by adopting a nearest-priority strategy;
5) optimizing parameters in the encoder and the decoder through continuous iterative computation steps 2) to 4), thereby obtaining a heterogeneous information network representation learning model;
6) inputting the heterogeneous information network to be learned, and obtaining a representation result of each node in the heterogeneous information network to be learned, namely a hidden state of an encoder output of each node through the heterogeneous information network representation learning model in the step 5).
Further, the encoder and decoder are RNN networks using type-aware GRUs as gating units.
Further, in step 2), each node is mapped to a node vector and a type vector by a word2vec method in an encoder.
Further, the type-aware GRU fuses type and sequence information into the hidden state; it introduces additional matrices T as parameters for the type vector, whereby the update gate, the reset gate and the hidden state are calculated in combination with the type information.
Further, in step 3), the first type-aware GRU in the decoder takes the hidden state output by the encoder as input and computes the first hidden state of the decoder; each subsequent hidden state of the decoder is computed from the node vector and type vector of the corresponding type sequence element together with the hidden state of the previous time step.
Further, step 4) combines the conditional probabilities of each time step by using a decay function, and minimizes a loss function, so that the difference between the predicted type sequence and the real type sequence is minimized; and 5) continuously and iteratively calculating the parameters in the steps 2) -4), thereby obtaining the heterogeneous information network representation learning model.
The invention also provides a heterogeneous information network representation learning system corresponding to the method, which comprises an encoder module and a decoder module, wherein the encoder module and the decoder module both contain Type-aware GRU;
the Type-aware GRU in the encoder is combined with the Type vector of the node to learn the hidden state as the representation result of the node, so that the current node is mapped to a low-dimensional vector representation result;
the Type-aware GRU in the decoder predicts the current element of the Type sequence by combining the previous node Type and relationship Type information, thereby reconstructing the Type sequence of each node;
the decoder combines the type-aware GRUs at each time step by using a decay function, and retains representative type sequence information by adopting a nearest-priority strategy.
Compared with the prior art, the invention has the following positive effects:
the invention provides a representation learning method of a heterogeneous information network by adopting a representation learning mode of a reserved type sequence, and the heterogeneous information network is mapped into a low-dimensional space. The method can support the mapping of the heterogeneous information network to the low-dimensional space, thereby facilitating various network analysis applications such as classification, clustering and visualization of heterogeneous information network data of a social network and the like.
Drawings
Fig. 1 is a schematic structural diagram of a heterogeneous information network representation learning model of a reserved type sequence according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a Type-aware GRU according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments and accompanying drawings.
First, the meanings of the variables and parameters related to the present invention are explained, as shown in tables 1 and 2.
TABLE 1 variables and their meanings
TABLE 2 parameters and their Effect
1.Type-aware GRU
Fig. 1 is a schematic diagram of the structure of the type-sequence-preserving heterogeneous information network representation learning model of the present invention. The invention proposes the type-aware GRU, a novel gating unit capable of perceiving type information. Fig. 2 is a schematic diagram of the type-aware GRU. The variables and parameters in Figs. 1 and 2 have the meanings given in Tables 1 and 2; arrows above parameters denote vectors. The type-aware GRU fuses type and sequence information into a hidden state that represents the type sequence processed so far. A type sequence consists of type sequence elements, where each element comprises a node, the node's type, and the relation type linking it to the previous node.
The current hidden state is computed from the node vector and type vector of the corresponding type sequence element together with the hidden state at the previous time step, where the type vector combines the node type and the relation type. The type information also affects the update gate and the reset gate: the invention introduces matrices T as parameters for the type vector, so that the update gate, the reset gate and the hidden state are all computed in combination with the type information. The type-aware GRU is defined as follows:

$u_l = \sigma(W_u x_{l-1} + U_u h_{l-1} + T_u t_{l-1})$

$r_l = \sigma(W_r x_{l-1} + U_r h_{l-1} + T_r t_{l-1})$

$\tilde{h}_l = \tanh(W_h x_{l-1} + U_h (r_l \odot h_{l-1}) + T_h t_{l-1})$

$h_l = (1 - u_l) \odot h_{l-1} + u_l \odot \tilde{h}_l$

where $r_l$ is the reset gate of step l (Z in Fig. 2), $u_l$ is the update gate of step l (u in Fig. 2), $h_l$ is the hidden state of step l, $x_{l-1}$ and $t_{l-1}$ are the node vector and type vector of the corresponding type sequence element, and $\sigma$ denotes the sigmoid function.
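The type-aware GRU step described above can be sketched numerically. This is a minimal NumPy sketch under the assumption of standard GRU parameter shapes; the matrix labels W_*, U_*, T_* are my naming, not the patent's:

```python
# Minimal sketch of one type-aware GRU step: the usual GRU gates gain an
# extra term T*t for the type vector (node type + relation type).
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def type_aware_gru_step(x, t, h_prev, P):
    """x = node vector, t = type vector, h_prev = previous hidden state.
    P holds parameter matrices W_* (node input), U_* (hidden), T_* (type)."""
    u = sigmoid(P["W_u"] @ x + P["U_u"] @ h_prev + P["T_u"] @ t)    # update gate
    r = sigmoid(P["W_r"] @ x + P["U_r"] @ h_prev + P["T_r"] @ t)    # reset gate
    h_tilde = np.tanh(P["W_h"] @ x + P["U_h"] @ (r * h_prev) + P["T_h"] @ t)
    return (1.0 - u) * h_prev + u * h_tilde                         # new hidden state

# Tiny smoke test with random parameters.
rng = np.random.default_rng(0)
d_x, d_t, d_h = 4, 3, 5
P = {f"{m}_{g}": rng.normal(size=(d_h, dim))
     for g in "urh"
     for m, dim in (("W", d_x), ("U", d_h), ("T", d_t))}
h = type_aware_gru_step(rng.normal(size=d_x), rng.normal(size=d_t),
                        np.zeros(d_h), P)
print(h.shape)
```

Because the new hidden state is a convex combination of the previous state and a tanh candidate, its entries stay in [-1, 1] when starting from a zero state — a quick sanity check on the gating arithmetic.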
2. Attenuation function
The invention designs a nearest-priority strategy: the closer an element in the type sequence is to the current node, the higher its weight. To implement this strategy, the decay function introduces a decay factor over the hop-count distance to reduce the weight of the latter part of the sequence.
Let α be the decay factor that controls the decay rate. Combining the conditional probability of each time step with the decay function, the probability of a type sequence $S = (s_1, \dots, s_L)$ given node $v_i$ is defined as:

$p(S \mid v_i) = \prod_{l=1}^{L} p(s_l \mid s_1, \dots, s_{l-1}, v_i)^{e^{-\alpha(l-1)}}$
where $e^{-\alpha(l-1)}$ is the decay function. When l = 1, $e^{-\alpha(l-1)} = 1$, so it does not affect the result. As l increases, the influence on the conditional probability decreases exponentially.
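In log space, this decay amounts to weighting each step's log conditional probability by $e^{-\alpha(l-1)}$. A minimal sketch, assuming that interpretation (the per-step log-probability values below are toy numbers):

```python
# Decay-weighted log-probability of a type sequence: step l (1-indexed)
# contributes exp(-alpha*(l-1)) * log p(s_l | s_<l, v_i).
import math

def decayed_log_prob(step_log_probs, alpha):
    """Weight step l's log-probability by exp(-alpha*(l-1))."""
    return sum(math.exp(-alpha * idx) * lp           # idx = l - 1
               for idx, lp in enumerate(step_log_probs))

log_ps = [-0.1, -0.5, -2.0, -4.0]   # hypothetical per-step log p(s_l | ...)
print(decayed_log_prob(log_ps, alpha=0.0))           # no decay: plain sum
print(decayed_log_prob(log_ps, alpha=1.0) > sum(log_ps))
```

With alpha = 0 the weighting vanishes and the plain sum of log-probabilities is recovered; with alpha > 0, distant (later) steps are down-weighted, so poorly predicted far-away elements hurt the objective less.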
3. The method of the invention comprises
1) And generating a corresponding type sequence set for each node in the heterogeneous information network.
Taking each node in the heterogeneous information network as a starting point, perform multiple random walks (with length limit l) in the network to obtain multiple paths of length l starting from the current node; these paths form the type sequence set.
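Step 1) can be sketched as fixed-length random walks over a toy heterogeneous graph. The graph layout, node names and relation labels are illustrative assumptions, not data from the patent:

```python
# Minimal sketch of type-sequence generation by fixed-length random walks.
import random

# adjacency: node -> list of (neighbor, relation_type)
GRAPH = {
    "Actor1": [("Movie1", "acts_in")],
    "Movie1": [("Actor1", "acted_by"), ("Director1", "directed_by")],
    "Director1": [("Movie1", "directed")],
}
NODE_TYPE = {"Actor1": "Actor", "Movie1": "Movie", "Director1": "Director"}

def random_walk(start, length, rng):
    """One walk of `length` steps; each element records the node, its type,
    and the relation type linking it to the previous node."""
    seq, cur = [], start
    for _ in range(length):
        nxt, rel = rng.choice(GRAPH[cur])
        seq.append((nxt, NODE_TYPE[nxt], rel))
        cur = nxt
    return seq

def type_sequence_set(start, length, walks, rng):
    return [random_walk(start, length, rng) for _ in range(walks)]

rng = random.Random(42)
seqs = type_sequence_set("Actor1", length=3, walks=5, rng=rng)
print(len(seqs), len(seqs[0]))
```

Each walk yields one type sequence for its start node; repeating the walk many times per node produces the type sequence set used as the decoder's targets.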
Further, another, optimized method for generating the type sequence set can be adopted. When $e^{-\alpha(l-1)} \le 0.01$, the influence on the conditional probability is negligible; that is, type sequence elements whose hop-count distance is at least $l_{max} = \lceil 1 + \ln(100)/\alpha \rceil$ have a negligible effect on the conditional probability. A generated path can therefore be split into sub-paths of length $l_{max}$, one starting at each node of the path, and each sub-path can be used as a type sequence for its start node. From this, the type sequence set of every node can be derived.
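The threshold and the sub-path splitting above can be sketched directly; the 0.01 cutoff comes from the text, while the closed form for $l_{max}$ is just the solved inequality $e^{-\alpha(l-1)} \le 0.01$:

```python
# Sketch of the optimized generation: cut long walks into bounded sub-paths
# once the decay weight e^{-alpha(l-1)} drops below the 0.01 threshold.
import math

def max_useful_length(alpha, eps=0.01):
    """Smallest hop count l with e^{-alpha(l-1)} <= eps,
    i.e. l_max = ceil(1 + ln(1/eps)/alpha)."""
    return math.ceil(1.0 + math.log(1.0 / eps) / alpha)

def sub_paths(path, max_len):
    """Each element i of the path starts a sub-path of at most max_len."""
    return [path[i:i + max_len] for i in range(len(path))]

l_max = max_useful_length(alpha=1.0)   # ln(100) ~ 4.605, so l_max = 6
walk = ["a", "b", "c", "d", "e", "f", "g", "h"]
pieces = sub_paths(walk, l_max)
print(l_max, len(pieces), pieces[0])
```

This way one long walk supplies a bounded-length type sequence for every node it passes through, instead of restarting a fresh walk at each node.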
2) Node vi's node information and type information are projected by the word2vec method to obtain the node vector xe and the type vector te. The hidden state of the encoder, i.e. the low-dimensional representation of the current node vi, is then calculated by the type-aware GRU in the encoder.
3) The representation of node vi is used as the input of the decoder, and the first hidden state of the decoder is calculated by the first type-aware GRU in the decoder. Each subsequent hidden state of the decoder is calculated from the node vector and type vector of the corresponding type sequence element and the hidden state of the previous time step, where the node vectors and type vectors of the type sequence elements are computed as in step 2). Given the current hidden state, the element output by the decoder at the current step is obtained with a softmax function.
4) The conditional probabilities of the time steps are combined using the decay function, and a loss function is minimized so that the difference between the predicted type sequence and the true type sequence is minimized. Many loss functions are in common use, such as the log loss and the cross-entropy loss.
5) Steps 2) to 4) are iterated to optimize the parameters, so that the type sequence predicted by the model from the current node and its type information approaches the corresponding type sequence in the set generated in step 1), i.e. the loss function is minimized, yielding the heterogeneous information network representation learning model. The decay factor α must be set manually.
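The training objective of steps 4)-5) — a decay-weighted negative log-likelihood of the true type sequence under the decoder's per-step softmax — can be sketched as follows. The vocabulary size, logits and targets are toy values, and treating the decay weights as multipliers on each step's cross-entropy term is my reading of the text:

```python
# Sketch of the decay-weighted training loss: sum over steps l of
# e^{-alpha(l-1)} * (-log softmax(logits_l)[target_l]).
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def decayed_nll(step_logits, targets, alpha):
    loss = 0.0
    for idx, (logits, tgt) in enumerate(zip(step_logits, targets)):
        p = softmax(logits)[tgt]
        loss += math.exp(-alpha * idx) * -math.log(p)   # idx = l - 1
    return loss

# Two decoder steps over a 3-element type-sequence vocabulary.
logits = [[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]]
loss = decayed_nll(logits, targets=[0, 1], alpha=0.5)
print(loss > 0.0)
```

Gradient descent on this quantity with respect to the encoder and decoder parameters is what step 5)'s iteration performs; increasing alpha makes the later, more distant sequence elements matter less to the fit.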
6) Inputting the heterogeneous information network to be learned, and obtaining a representation result of each node in the heterogeneous information network to be learned, namely a hidden state of an encoder output of each node through the heterogeneous information network representation learning model in the step 5).
4. The invention further provides a heterogeneous information network representation learning system, which comprises an encoder module and a decoder module, wherein the encoder module and the decoder module both contain Type-aware GRUs;
the Type-aware GRU in the encoder is combined with the Type vector of the node to learn the hidden state as the representation result of the node, so that the current node is mapped to a low-dimensional vector representation result;
the Type-aware GRU in the decoder predicts the current element of the Type sequence by combining the previous node Type and relationship Type information, thereby reconstructing the Type sequence of each node;
the decoder combines the type-aware GRUs at each time step by using a decay function, and retains representative type sequence information by adopting a recent priority strategy.
While the foregoing disclosure shows illustrative embodiments of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. In accordance with the structures of the embodiments of the invention described herein, the constituent elements of the claims can be replaced with any functionally equivalent elements. Therefore, the scope of the present invention should be determined by the contents of the appended claims.
Claims (10)
1. A heterogeneous information network representation learning method for retaining type sequence information is characterized by comprising the following steps:
1) generating a type sequence set corresponding to each node in the heterogeneous information network;
2) each node in the heterogeneous information network learns a hidden state through the type-aware GRU in the encoder combined with the node's type vector, as the representation result of the node, so that the current node is mapped to a low-dimensional vector representation;
3) taking the hidden state obtained by the encoder as input, predicting the current element of the type sequence using the type-aware GRU in the decoder combined with the previous node type and relation type information, thereby reconstructing the type sequence of each node;
4) combining the type-aware GRUs of each time step using a decay function, and retaining representative type sequence information by adopting a nearest-priority strategy;
5) optimizing parameters in the encoder and the decoder through continuous iterative computation steps 2) -4), thereby obtaining a heterogeneous information network representation learning model;
6) inputting the heterogeneous information network to be learned, and obtaining a representation result of each node in the heterogeneous information network to be learned, namely a hidden state of an encoder output of each node through the heterogeneous information network representation learning model in the step 5).
2. The method of claim 1, wherein the encoder and decoder are RNN networks with type-aware GRUs as gating units.
3. The method of claim 1, wherein the type-aware GRU fuses type and sequence information into the hidden state, and introduces matrices T as parameters for the type vector, so that the update gate, the reset gate and the hidden state are calculated in combination with the type information; the type-aware GRU is defined as follows:

$u_l = \sigma(W_u x_{l-1} + U_u h_{l-1} + T_u t_{l-1})$

$r_l = \sigma(W_r x_{l-1} + U_r h_{l-1} + T_r t_{l-1})$

$\tilde{h}_l = \tanh(W_h x_{l-1} + U_h (r_l \odot h_{l-1}) + T_h t_{l-1})$

$h_l = (1 - u_l) \odot h_{l-1} + u_l \odot \tilde{h}_l$

where $r_l$ is the reset gate of step l, $u_l$ is the update gate of step l, $h_l$ is the hidden state of step l, and $\sigma$ denotes the sigmoid function; $W_u$, $W_r$, $W_h$ are the parameters of the corresponding node vector in the update gate, reset gate and hidden state calculations; $U_u$, $U_r$, $U_h$ are the parameters of the corresponding previous-step hidden state; $x_{l-1}$ is the node vector of the (l-1)-th element of the type sequence; $t_{l-1}$ is the type vector of the (l-1)-th element of the type sequence; and $h_{l-1}$ is the hidden state of time step l-1.
4. The method of claim 1, wherein the nearest-priority policy is that the closer an element in the type sequence is to the current node, the higher its weight; the decay function introduces a decay factor α over the hop-count distance and combines the conditional probability of each time step with the decay function; the probability of the type sequence $S = (s_1, \dots, s_L)$ of the i-th node $v_i$ is defined as:

$p(S \mid v_i) = \prod_{l=1}^{L} p(s_l \mid s_1, \dots, s_{l-1}, v_i)^{e^{-\alpha(l-1)}}$
5. The method according to claim 4, wherein step 1) takes each node in the heterogeneous information network as a starting point to perform multiple random walks in the heterogeneous information network, and multiple paths with the length of l taking the current node as the starting point are obtained and used as the type sequence set.
7. The method of claim 1, wherein each node in the heterogeneous information network of step 2) is mapped to a node vector and a type vector by a word2vec method in an encoder.
8. The method of claim 1, wherein in step 3) the first type-aware GRU in the decoder takes the hidden state output by the encoder as input and computes the first hidden state of the decoder, and each subsequent hidden state of the decoder is computed from the node vector and type vector of the corresponding type sequence element and the hidden state of the previous time step.
9. The method according to claim 1, characterized in that step 4) combines the conditional probabilities for each time step with a decay function, minimizing a loss function, minimizing the difference between the predicted type sequence and its true type sequence; and 5) continuously and iteratively calculating the parameters in the steps 2) -4), thereby obtaining the heterogeneous information network representation learning model.
10. A heterogeneous information network representation learning system using the method of claim 1, comprising an encoder module and a decoder module, each of the encoder module and the decoder module including a type-aware GRU;
the type-aware GRU in the encoder learns hidden states in combination with the type vectors of the nodes as a representation of the nodes, thereby mapping the current node to a low-dimensional vector representation;
a type-aware GRU in the decoder predicts a current element of a type sequence in combination with previous node type and relationship type information, thereby reconstructing its type sequence for each node;
the decoder combines the type-aware GRUs at each time step using a decay function, retaining representative type sequence information using a nearest-priority strategy.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910022868.4A CN111428744A (en) | 2019-01-10 | 2019-01-10 | Heterogeneous information network representation learning method and system for reserving type sequence information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111428744A true CN111428744A (en) | 2020-07-17 |
Family
ID=71546022
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN112182511A (en) * | 2020-11-27 | 2021-01-05 | 中国人民解放军国防科技大学 | Complex semantic enhanced heterogeneous information network representation learning method and device
CN112182511B (en) * | 2020-11-27 | 2021-02-19 | 中国人民解放军国防科技大学 | Complex semantic enhanced heterogeneous information network representation learning method and device
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200717 |