CN113688600A - Information propagation prediction method based on topic perception attention network - Google Patents
- Publication number: CN113688600A (application CN202111049168.8A)
- Authority: CN (China)
- Prior art keywords: user, topic, propagation, information, context
- Prior art date: 2021-09-08
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F40/126 — Handling natural language data; text processing; use of codes for handling textual entities; character encoding
- G06F40/194 — Handling natural language data; text processing; calculation of difference between files
- G06F40/30 — Handling natural language data; semantic analysis
- G06N3/04 — Neural networks; architecture, e.g. interconnection topology
- G06N3/08 — Neural networks; learning methods
- Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses an information propagation prediction method based on a topic-aware attention network, which integrates a topic context and a propagation history context into a user representation for prediction. The topic context enables the model to capture propagation patterns under a specific topic, while the propagation history context is further decomposed into user-dependent and position-dependent modeling. The encoded user contexts are then used to construct user representations under multiple topics, which are further integrated through a time-decay aggregation module to obtain a cascade representation. All of these modules are driven by information propagation features. The method therefore fits real-world diffusion data better and predicts more accurately. In addition, whereas traditional topic-aware models require a predefined topic distribution, the proposed method learns the topics automatically.
Description
Technical Field
The invention relates to the technical field of networks, and in particular to an information propagation prediction method based on a topic-aware attention network.
Background
Social networking platforms such as Twitter and Sina Weibo attract millions of users, and a large amount of information spreads among them every day. The process of information dissemination is also known as a cascade, and modeling its propagation patterns and user behavior is widely used in many fields, such as popularity prediction, epidemiology, and personalized recommendation. Next-user prediction, a popular micro-cascade prediction task, has been extensively studied in recent years. The problem is defined as: given the sequence of infected users of an information item, ordered over time, predict the next infected user (by convention, researchers use "infection", "activation", or "influence" to describe an interaction between a user and an information item).
Conventional micro-cascade prediction methods include independent cascade (IC) model based methods and embedding-based methods. The independent cascade model assigns an independent diffusion probability to each user pair; many cascade diffusion models are built on this basic assumption and extend it by considering additional information, such as continuous timestamps and user attributes. Some studies have also explored the impact of topic information on cascade modeling. TIC was the first to study the information propagation prediction task from a topic-aware perspective, its main idea being to set a topic-specific probability for each user pair.
As research advanced, researchers proposed embedding-based approaches to cascade prediction, which improve the expressive power of the model with representation learning techniques: users are embedded into a continuous latent space, and the propagation probability between each pair of users is computed from their embeddings rather than estimated directly as a real-valued parameter. However, neither the IC-based nor the embedding-based approaches consider modeling the cascade history sequence. Recent work has shown that these models are less effective than deep learning models that take the cascade sequence into account.
With the success of deep learning, recurrent neural networks (RNNs) have shown a strong ability to model information propagation. Topo-LSTM extends the standard LSTM model, building hidden states from directed acyclic graphs (DAGs) extracted from the social graph. CYAN-RNN and DeepDiffuse combine a recurrent neural network with an attention mechanism to account for the propagation structure. RecCTIC proposes a Bayesian topological RNN model to capture tree dependencies. Diffusion-LSTM uses image information to assist prediction and builds a Tree-LSTM model to infer propagation paths. FOREST extends the GRU model and designs an additional structural context extraction strategy to exploit the underlying social graph information.
Recently, attention networks have been proposed to better capture the propagation dependencies in cascade sequences. HiDAN builds a hierarchical attention network that captures the non-sequential structure in a cascade with an attention mechanism and mines the true dependency relationships from the cascade; it also designs a time-decay module based on users' timestamp information and jointly models user dependency and time decay, greatly improving the expressive power and interpretability of the model.
With the development of deep learning, some works model an information propagation cascade as an infection sequence and adopt recurrent neural networks, achieving good results. Although a cascade is usually represented as a sequence of users ordered by infection timestamp, the real propagation process is usually not strictly sequential, depending on an unobserved user connection graph. Therefore, other studies have employed attention mechanisms to capture non-sequential, long-range propagation dependencies.
However, existing neural-network-based approaches assume that the propagation behavior and patterns of all information items are homogeneous. This assumption may not hold in the real world. In a real information propagation scenario, a user may exhibit different behavior patterns for information items of different topics. Intuitively, users' interests are often diverse, and their propagation behavior may vary with the topic of the information item. For example, a user may follow different people and forward different information under different topics, and thus have topic-specific dependencies. Existing neural-network-based methods rarely exploit the information text, do not consider modeling topic-aware propagation patterns and user behavior, and cannot model the propagation patterns and dependencies under a specific topic, which limits the expressive power of the model. By contrast, traditional non-neural approaches have demonstrated the influence of topics on users.
Next-user prediction, a popular micro-cascade prediction task, has been extensively studied in recent years. Traditional modeling typically ignores the textual content of the propagated information item, and therefore learns dependencies mixed across different topics. In contrast, topic-aware modeling aims to explicitly decouple the propagation dependencies under each topic, enabling more accurate predictions. In fact, traditional non-neural methods based on the independent cascade model have demonstrated the advantage of topic-aware modeling, which models the behavior of information items from different topics separately. But these early methods were built on strong independence assumptions, a strategy that limits generalization performance and has been shown to be suboptimal compared with recent deep-learning-based methods. To our knowledge, no previous research has proposed a neural-network-based topic-aware model to mine propagation dependencies under different topics.
Disclosure of Invention
In view of the above, the present invention aims to provide an information propagation prediction method based on a topic-aware attention network. Starting from the formalized propagation prediction problem, we introduce our embedding strategy to encode user, position, and text information into vectors. We then present a topic-aware attention layer aimed at capturing the historical propagation dependencies and time-decay effects of different topics. Finally, our model obtains a multi-topic cascade representation through the topic-aware attention layer and then predicts the next infected user.
In order to achieve the above purpose, the invention provides the following technical scheme:
The invention provides an information propagation prediction method based on a topic-aware attention network, comprising:
S1, integrating a topic context and a propagation history context into a user representation for prediction;
S2, using the topic context to support modeling of propagation patterns for a specific topic, and further decomposing the propagation history context into user-dependence modeling and position-dependence modeling;
S3, constructing user representations under multiple topics with the encoded user contexts;
S4, further integrating the user representations through a time-decay aggregation module to obtain a multi-topic cascade representation and then predicting the next infected user;
wherein each module is driven by information propagation features.
Further, the specific method of step S1 is:
Given a user set U, a cascade set V, and a propagation information set M, the propagation sequence of the i-th information item in M is defined as a cascade $c_i = \{(u_1^i, t_1^i), (u_2^i, t_2^i), \ldots\}$, where each tuple $(u_j^i, t_j^i)$ indicates that user $u_j^i$ forwarded the item at time $t_j^i$, and the sequence is ordered by infection time. The propagation prediction task is defined as: given the cascade $c_i$ and the previously infected user sequence $(u_1^i, \ldots, u_n^i)$, predict the next infected user $u_{n+1}^i$, where $n = 1, 2, \ldots, |c_i| - 1$.
Further, in step S2, the propagation pattern modeling is to encode semantic information of the propagation information text by using the pre-trained language model BERT.
Further, the propagation pattern modeling in step S2 converts the BERT-encoded text embedding $x_i$ into the propagation text embedding $y_i$ through a fully connected layer:

$y_i = W_x x_i + b_x$ (1)

where $W_x$ and $b_x$ are a weight matrix and a bias vector, respectively.
Further, the user-dependence modeling in step S2 encodes users with an embedding matrix $E \in \mathbb{R}^{|U| \times K \times d}$, where $|U|$ represents the number of users, and K and d represent the number of topics and the embedding dimension, respectively.
Further, for each user $u_j^i$ in the cascade sequence $c_i$, the user embedding is $U_j \in \mathbb{R}^{K \times d}$, where $u_j^k$ is the user's embedding under the k-th topic.
Further, the position-dependence modeling in step S2 sets a learnable position embedding $pos_j$ for each position j, where $pos_j$ is shared among all cascades.
Further, the encoding method in step S3 is:

topic context: for each topic k, compute the cosine similarity between the user embedding $u_j^k$ and the propagation text embedding $y_i$, and normalize it with a softmax function:

$\alpha_j^k = \operatorname{softmax}_k\big(\cos(u_j^k, y_i)\big)$ (2)

where $k = 1, 2, \ldots, K$, and $\alpha_j^k$ represents the weight of user $u_j^i$ under the k-th topic; the user embedding aggregated with the topic context is represented as $\tilde{u}_j^k = \alpha_j^k u_j^k$;

propagation history context: the complete attention score and weight between user $u_j^i$ and user $u_l^i$ are used to describe the propagation history context and are calculated by formulas (3)-(5);

full context-aware multi-topic user representation: the representation of user $u_j^i$ under the k-th topic is a weighted sum of the previously infected users' representations, given by formula (6).
Further, the modeling method of the time-decay aggregation module in step S4 is: convert the continuous time decay into discrete time intervals according to formula (7), where the split points $t_l$ divide the time range $[0, T_{max}]$ into L sub-intervals $\{[0, t_1), \ldots, [t_{L-1}, T_{max})\}$, $T_{max}$ is the maximum timestamp in the data set, and each time interval under each topic has a corresponding learnable weight.
Further, the multi-topic cascade representation in step S4 is obtained as follows: the complete aggregation weight is calculated according to formula (8); for each topic k, the weighted sum of the user representations is calculated, and a feed-forward neural network with a ReLU activation function is adopted to give the model nonlinearity; the output of the topic-aware attention layer is the cascade representation.
Further, the method for predicting the next infected user in step S4 is:

given a cascade sequence, the probability of the next infected user is parameterized by measuring the similarity between the user embedding and the cascade embedding, and the interaction probability between the cascade and a user is expressed by formula (9),

where Θ represents all parameters to be learned;

the training objective for predicting the infected user is defined by formula (10);

K topic prototype embeddings $m_k$ are set, and the user embeddings under the k-th topic are encouraged to be similar to the corresponding topic prototype $m_k$, with the goal of maximizing formula (11);

this term is taken as an additional training objective and summed over all users according to formula (12).
Compared with the prior art, the invention has the beneficial effects that:
The information propagation prediction method based on the topic-aware attention network provided by the invention combines the advantages of topic-specific propagation modeling and deep learning, and designs a novel and effective topic-aware attention mechanism that integrates a topic context and a propagation history context into a user representation for prediction. The topic context enables the model to capture propagation patterns under a specific topic, while the propagation history context is further decomposed into user-dependent and position-dependent modeling. The encoded user contexts are then used to construct user representations under multiple topics, which are further integrated through a time-decay aggregation module to obtain a cascade representation. All of these modules are driven by information propagation features. The method therefore fits real-world diffusion data better and predicts more accurately. In addition, whereas traditional topic-aware models require a predefined topic distribution, the proposed method learns the topics automatically.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them.
Fig. 1 is an architecture diagram of the topic-aware attention network according to an embodiment of the present invention.
Detailed Description
The invention provides an information propagation prediction method based on a topic-aware attention network, comprising:
S1, integrating a topic context and a propagation history context into a user representation for prediction;
S2, using the topic context to support modeling of propagation patterns for a specific topic, and further decomposing the propagation history context into user-dependence modeling and position-dependence modeling;
S3, constructing user representations under multiple topics with the encoded user contexts;
S4, further integrating the user representations through a time-decay aggregation module to obtain a multi-topic cascade representation and then predicting the next infected user;
wherein each module is driven by information propagation features.
For a better understanding of the present solution, the method of the present invention is described in detail below with reference to the accompanying drawings.
In this section we start from the formalized propagation prediction problem and introduce our embedding strategy to encode user, position, and text information into vectors. We then present a topic-aware attention layer aimed at capturing the historical propagation dependencies and time-decay effects of different topics. Finally, our model obtains a multi-topic cascade representation through the topic-aware attention layer and then predicts the next infected user. The complete structure of our proposed TAN is shown in Fig. 1.
1. Problem definition
Given a user set U, a cascade set V, and a propagation information set M, the propagation sequence of the i-th information item in M can be defined as a cascade $c_i = \{(u_1^i, t_1^i), (u_2^i, t_2^i), \ldots\}$, where each tuple $(u_j^i, t_j^i)$ indicates that user $u_j^i$ forwarded the item at time $t_j^i$, and the sequence is ordered by infection time. Following the previous task setup, the propagation prediction task is defined as: given the cascade $c_i$ and the previously infected user sequence $(u_1^i, \ldots, u_n^i)$, predict the next infected user $u_{n+1}^i$, where $n = 1, 2, \ldots, |c_i| - 1$.
2. Embedding layer
1) User embedding:
To capture user interests and dependencies under different topics, we encode users with an embedding matrix $E \in \mathbb{R}^{|U| \times K \times d}$, where $|U|$ represents the number of users, and K and d represent the number of topics and the embedding dimension, respectively. For each user $u_j^i$ in the cascade sequence $c_i$, his user embedding is $U_j \in \mathbb{R}^{K \times d}$, where $u_j^k$ is the user's embedding under the k-th topic.
2) Position embedding:
To exploit the cascade infection order information, we set a learnable position embedding $pos_j$ for each position j, where $pos_j$ is shared among all cascades.
3) Text embedding:
We encode the semantic information of the propagation information text with the pre-trained language model BERT. To measure topic similarity between user embeddings and the text embedding for a particular topic, we convert the BERT-encoded text embedding $x_i$ into $y_i$ through a fully connected layer:

$y_i = W_x x_i + b_x$ (1)

where $W_x$ and $b_x$ are a weight matrix and a bias vector, respectively.
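The projection in formula (1) is a plain affine map. A minimal NumPy sketch follows; the dimensions (BERT's 768 output size, a model dimension of 64) and the function name are illustrative assumptions, not values fixed by the invention:

```python
import numpy as np

def propagation_text_embedding(x_i, W_x, b_x):
    """Affine projection of a BERT text embedding x_i into the
    propagation text embedding y_i = W_x x_i + b_x."""
    return W_x @ x_i + b_x

rng = np.random.default_rng(0)
x_i = rng.normal(size=768)        # BERT-base output dimension (assumed)
W_x = rng.normal(size=(64, 768))  # model dimension d = 64 is illustrative
b_x = np.zeros(64)
y_i = propagation_text_embedding(x_i, W_x, b_x)
print(y_i.shape)  # (64,)
```

In a trained model, $W_x$ and $b_x$ would be learned jointly with the rest of the network rather than sampled at random.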
3. Topic perception attention layer
In this section, we will further encode various context information into user representations, which are then aggregated in conjunction with time-decaying weights to generate a cascaded representation for each topic.
3.1 user representation enhancement
We incorporate the topic context and the propagation history context into the multi-topic user representation. The propagation history context can be further decomposed into user dependencies and position dependencies. Inspired by the multi-head attention mechanism, we treat each topic as a specific head and perform attention separately within each topic to extract user and position dependencies.
1) Topic context
Based on the propagation text embedding $y_i$, we propose to strengthen a user's embedding under the k-th topic when it has higher similarity to the text embedding. Specifically, for each topic k we compute the cosine similarity between $u_j^k$ and $y_i$ and normalize it with a softmax function:

$\alpha_j^k = \operatorname{softmax}_k\big(\cos(u_j^k, y_i)\big)$ (2)

where $k = 1, 2, \ldots, K$, and $\alpha_j^k$ represents the weight of user $u_j^i$ under the k-th topic. The user embedding aggregated with the topic context can then be represented as $\tilde{u}_j^k = \alpha_j^k u_j^k$. We can see that the larger the cosine similarity between the user embedding $u_j^k$ under the k-th topic and $y_i$, the larger the assigned weight, and the more the user's embedding under that topic is strengthened.
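The topic-context weighting just described (cosine similarity followed by a softmax over topics) can be sketched as follows; the sizes K=4, d=8 and the variable names are hypothetical stand-ins for the patent's learned embeddings:

```python
import numpy as np

def topic_context_weights(U_j, y_i):
    """Softmax-normalized cosine similarity between a user's per-topic
    embeddings U_j (K, d) and the propagation text embedding y_i (d,)."""
    cos = (U_j @ y_i) / (np.linalg.norm(U_j, axis=1) * np.linalg.norm(y_i) + 1e-12)
    e = np.exp(cos - cos.max())   # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(1)
K, d = 4, 8                       # illustrative sizes
U_j = rng.normal(size=(K, d))
y_i = rng.normal(size=d)
alpha = topic_context_weights(U_j, y_i)
u_tilde = alpha[:, None] * U_j    # topic-context-strengthened embeddings
print(alpha.shape, u_tilde.shape)
```

The weights sum to one across topics, so a topic whose user embedding aligns more closely with the text receives a proportionally strengthened embedding.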
2) Propagating historical context
Intuitively, a user is typically infected due to the propagated text, while only a few previously infected users in the propagation sequence contribute. Thus, the goal of the propagation history context is to extract and characterize the users associated with the infection of user $u_j^i$. Specifically, we employ an attention mechanism to model user dependencies and assign larger attention weights to those users who likely caused the infection. Formally, in a cascade sequence, the dependence attention weight with a previous user $u_l^i$ is calculated by formula (3).
Intuitively, we should also focus on the source user and the most recently infected users. Note that such dependencies are independent of any particular user, so we propose to model position dependencies under each topic. Instead of directly adding predefined position embeddings to user embeddings, as in past work, we compute the position-dependent score with a method similar to user-dependence modeling. In this way, our approach can better capture user-independent position dependencies for better prediction performance.
The complete attention score and weight between user $u_j^i$ and user $u_l^i$ can be used to describe the propagation history context and are calculated by formulas (4) and (5).
3) Complete context-aware multi-topic user representation
To fully exploit the topic and propagation history contexts, we represent user $u_j^i$ under the k-th topic as a weighted sum of the previously infected users' representations, given by formula (6).
Note that we can also stack multiple layers of the above operations to obtain a more accurate representation. In this case, the topic context weights $\alpha_j^k$ and the position-dependent scores are shared between the different layers.
3.2 obtaining a cascading representation based on time-decaying aggregation
After extracting the user representations under multiple topics, we need to aggregate them to obtain a cascade representation under multiple topics. We assume that a user's influence decays over time, and jointly consider the time decay and the propagation dependency weights of formula (4).
1) Time-decay impact modeling
Specifically, inspired by DeepHawkes, we employ a non-parametric time-decay modeling strategy for each topic. Formally, given a cascade sequence of historical infections, the continuous time decay is first converted into discrete time intervals according to formula (7), where the split points $t_l$ divide the time range $[0, T_{max}]$ into L sub-intervals $\{[0, t_1), \ldots, [t_{L-1}, T_{max})\}$ and $T_{max}$ is the maximum timestamp in the data set. For each topic, each time interval has a corresponding learnable weight.
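The discretization just described amounts to looking up which sub-interval a time gap falls into and fetching that interval's learnable weight. A sketch, where the split points and the per-(topic, interval) weight table are illustrative assumptions:

```python
import numpy as np

def time_interval_index(delta_t, boundaries):
    """Map a continuous time gap to the index of its discrete sub-interval
    [0, t1), ..., [t_{L-1}, Tmax)."""
    return int(np.searchsorted(boundaries, delta_t, side="right"))

# Hypothetical split points t1..t_{L-1} of a range [0, Tmax) with L = 4
boundaries = [10.0, 30.0, 60.0]
# One learnable decay weight per (topic, interval); random stand-ins here
lambdas = np.random.default_rng(2).normal(size=(4, 4))  # (K topics, L intervals)

print(time_interval_index(5.0, boundaries))   # 0 -> falls in [0, 10)
print(time_interval_index(45.0, boundaries))  # 2 -> falls in [30, 60)
```

Because the weights are indexed rather than computed from a parametric curve, the model can learn an arbitrary (non-monotonic) decay shape per topic.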
2) Computing cascading representations under multiple topics
The complete aggregation weight adds this time-decay term to formula (4), giving formula (8). The weights are then normalized over $j = 1, 2, \ldots, n$ with a softmax function. Finally, for each topic k, we calculate the weighted sum of the user representations, and a feed-forward neural network with a ReLU activation function is used to give the model nonlinearity. The output of the topic-aware attention layer is the cascade representation.
3.3 training goals and model details
Given a cascade sequence, we parameterize the probability of the next infected user by measuring the similarity between the user embedding and the cascade embedding. The interaction probability between the cascade and a user can be expressed by formula (9), where Θ represents all parameters to be learned.

We then define the training objective for predicting the infected user by formula (10).

In addition, we want each topic subspace to reflect different semantics, and the embeddings of different users under the same topic should be as similar as possible. Therefore, we set K topic prototype embeddings $m_k$ and encourage the user embeddings under the k-th topic to be similar to the corresponding topic prototype $m_k$. Formally, we aim to maximize formula (11), and we take this term as an additional training objective and sum it over all users in formula (12).
The complete training objective function combines the prediction objective with the topic prototype objective, weighted by a balance coefficient η. We optimize the parameters using gradient descent with the Adam optimizer. To keep the training process stable, we also apply layer normalization and dropout to the user embeddings.
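The next-user probability described above can be read as a softmax over all users of a similarity score between the cascade embedding and each user embedding. A hedged sketch with a dot-product score (uniform random embeddings and hypothetical sizes; the patent's exact similarity measure is not reproduced here):

```python
import numpy as np

def next_user_log_prob(s, user_emb, target):
    """Log-probability that `target` is the next infected user: score each
    user embedding against the cascade embedding s with a dot product,
    then softmax over all users."""
    logits = user_emb @ s
    logits = logits - logits.max()   # numerical stability
    return float(logits[target] - np.log(np.exp(logits).sum()))

rng = np.random.default_rng(4)
n_users, d = 100, 8                  # hypothetical sizes
user_emb = rng.normal(size=(n_users, d))
s = rng.normal(size=d)
lp = next_user_log_prob(s, user_emb, target=7)
print(lp < 0.0)  # True: a log-probability over 100 users is negative
```

Maximizing this log-probability over observed cascades corresponds to the prediction part of the training objective; the prototype term would be added on top with the balance coefficient η.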
In the present invention we propose a model, TAN, to capture topic-specific propagation dependencies. Specifically, we jointly model the textual content of the propagated item and the user propagation history sequence, and propose a topic-aware attention mechanism to capture historical propagation dependencies and time-decay effects under different topics. Compared with traditional topic-aware models, TAN can learn topics automatically and benefits from deep learning. Compared with current neural-network-based models, TAN not only effectively models topic-specific propagation patterns but also better captures user dependence and position dependence. Meanwhile, extracting topic information improves the interpretability of the model: the model can both make a prediction and give the reason for it, namely which topic makes an information item more likely to be forwarded by a user, and, from the attention weights, which users influenced the forwarding.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: it is to be understood that modifications may be made to the technical solutions described in the foregoing embodiments, or equivalents may be substituted for some of the technical features thereof, but such modifications or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. An information propagation prediction method based on a topic-aware attention network, characterized by comprising the following steps:
S1, integrating a topic context and a propagation history context into a user representation for prediction;
S2, using the topic context to support modeling of propagation patterns for a specific topic, and further decomposing the propagation history context into user-dependence modeling and position-dependence modeling;
S3, constructing user representations under multiple topics with the encoded user contexts;
S4, further integrating the user representations through a time-decay aggregation module to obtain a multi-topic cascade representation and then predicting the next infected user;
wherein each module is driven by information propagation features.
2. The information propagation prediction method based on the topic-aware attention network according to claim 1, wherein the specific method of step S1 is:
given a user set U, a cascade set V, and a propagation information set M, the propagation sequence of the i-th information item in M is defined as a cascade $c_i = \{(u_1^i, t_1^i), (u_2^i, t_2^i), \ldots\}$, where each tuple $(u_j^i, t_j^i)$ indicates that user $u_j^i$ forwarded the item at time $t_j^i$, and the sequence is ordered by infection time; the propagation prediction task is defined as: given the cascade $c_i$ and the previously infected user sequence $(u_1^i, \ldots, u_n^i)$, predict the next infected user $u_{n+1}^i$, where $n = 1, 2, \ldots, |c_i| - 1$.
3. The information propagation prediction method based on the topic-aware attention network according to claim 2, wherein the propagation pattern modeling in step S2 encodes the semantic information of the propagation information text with the pre-trained language model BERT; specifically, the BERT-encoded text embedding $x_i$ is converted into the propagation text embedding $y_i$ through a fully connected layer:

$y_i = W_x x_i + b_x$ (1)

where $W_x$ and $b_x$ are a weight matrix and a bias vector, respectively.
4. The information propagation prediction method based on the topic-aware attention network according to claim 3, wherein the user-dependence modeling in step S2 encodes users with an embedding matrix $E \in \mathbb{R}^{|U| \times K \times d}$, where $|U|$ represents the number of users, and K and d represent the number of topics and the embedding dimension, respectively.
6. The information propagation prediction method based on the topic-aware attention network according to claim 5, wherein the position-dependence modeling in step S2 sets a learnable position embedding $pos_j$ for each position j, where $pos_j$ is shared among all cascades.
7. The information propagation prediction method based on the topic-aware attention network according to claim 6, wherein the encoding method in step S3 is:

topic context: for each topic k, compute the cosine similarity between the user embedding $u_j^k$ and the propagation text embedding $y_i$, and normalize it with a softmax function:

$\alpha_j^k = \operatorname{softmax}_k\big(\cos(u_j^k, y_i)\big)$ (2)

where $k = 1, 2, \ldots, K$, and $\alpha_j^k$ represents the weight of user $u_j^i$ under the k-th topic; the user embedding aggregated with the topic context is represented as $\tilde{u}_j^k = \alpha_j^k u_j^k$;

propagation history context: the complete attention score and weight between user $u_j^i$ and user $u_l^i$ are used to describe the propagation history context and are calculated by formulas (3)-(5);

full context-aware multi-topic user representation: the representation of user $u_j^i$ under the k-th topic is a weighted sum of the previously infected users' representations, given by formula (6).
8. The information propagation prediction method based on the topic-aware attention network according to claim 1, wherein the modeling method of the time-decay aggregation module in step S4 is: convert the continuous time decay into discrete time intervals according to formula (7).
9. The information propagation prediction method based on the topic-aware attention network according to claim 8, wherein the multi-topic cascade representation in step S4 is obtained by calculating the complete aggregation weight according to formula (8).
10. The information dissemination prediction method based on the topic awareness network as claimed in claim 1, wherein the method for predicting the next infected user in step S4 is:
given a cascading sequenceBy measuring user embeddingAnd cascade embeddingTo parameterize the next infected userProbability, cascade and user ofThe interaction probability of (a) is expressed as:
wherein Θ represents all parameters that need to be learned;
the training objective for predicting an infected user is defined by equation (10):
K topic prototype embeddings are set, and the user embedding under the k-th topic is encouraged to be similar to the corresponding topic prototype mk; the goal is to maximize:
this term is taken as an additional training target and summed over all users:
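Claims 9 and 10 together can be sketched as follows: score every user against the aggregated cascade embedding and normalize to get the next-infected-user distribution, then add a prototype term that pulls each user's topic-k embedding toward the prototype mk. Both the dot-product scoring and the log-softmax form of the prototype term are assumptions consistent with the claim wording, not the patent's exact equations (9)-(10), and all vectors are random placeholders.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(2)
num_users, d, K = 6, 8, 3

user_emb = rng.normal(size=(num_users, d))   # one vector per user (simplified)
h_c = rng.normal(size=d)                     # cascade embedding after aggregation

# Claim 10: score every user against the cascade embedding and normalize,
# giving the probability that each user is the next to be infected.
p_next = softmax(user_emb @ h_c)
pred = int(p_next.argmax())

# Prototype term (sketch): encourage the user embedding under topic k to be
# similar to the k-th topic prototype m_k, summed over all users.
topic_user_emb = rng.normal(size=(num_users, K, d))
prototypes = rng.normal(size=(K, d))         # m_1 ... m_K
reg = 0.0
for u in range(num_users):
    for k in range(K):
        scores = softmax(prototypes @ topic_user_emb[u, k])
        reg += np.log(scores[k] + 1e-12)     # maximized when m_k matches best

print(p_next.sum())   # probabilities sum to 1
print(pred)
```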
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111049168.8A CN113688600B (en) | 2021-09-08 | 2021-09-08 | Information propagation prediction method based on topic perception attention network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113688600A true CN113688600A (en) | 2021-11-23 |
CN113688600B CN113688600B (en) | 2023-07-28 |
Family
ID=78585637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111049168.8A Active CN113688600B (en) | 2021-09-08 | 2021-09-08 | Information propagation prediction method based on topic perception attention network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113688600B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114334159A (en) * | 2022-03-16 | 2022-04-12 | 四川大学华西医院 | Postoperative risk prediction natural language data enhancement model and method |
CN114519606A (en) * | 2022-01-29 | 2022-05-20 | 北京京东尚科信息技术有限公司 | Information propagation effect prediction method and device |
CN116955846A (en) * | 2023-07-20 | 2023-10-27 | 重庆理工大学 | Cascade information propagation prediction method integrating theme characteristics and cross attention |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180329884A1 (en) * | 2017-05-12 | 2018-11-15 | Rsvp Technologies Inc. | Neural contextual conversation learning |
AU2019100854A4 (en) * | 2019-08-02 | 2019-09-05 | Xi’an University of Technology | Long-term trend prediction method based on network hotspot single-peak topic propagation model |
CN112182423A (en) * | 2020-10-14 | 2021-01-05 | 重庆邮电大学 | Information propagation evolution trend prediction method based on attention mechanism |
CN112380427A (en) * | 2020-10-27 | 2021-02-19 | 中国科学院信息工程研究所 | User interest prediction method based on iterative graph attention network and electronic device |
Non-Patent Citations (1)
Title |
---|
ZHANG, Zhiyang; ZHANG, Fengli; CHEN, Xueqin; WANG, Ruijin: "An Information Cascade Prediction Model Based on Hierarchical Attention", Computer Science (计算机科学) *
Also Published As
Publication number | Publication date |
---|---|
CN113688600B (en) | 2023-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113688600A (en) | Information propagation prediction method based on topic perception attention network | |
Ji et al. | Learning private neural language modeling with attentive aggregation | |
Xiong et al. | Dcn+: Mixed objective and deep residual coattention for question answering | |
CN108876044B (en) | Online content popularity prediction method based on knowledge-enhanced neural network | |
Han et al. | Maximum information exploitation using broad learning system for large-scale chaotic time-series prediction | |
CN106294618A (en) | Searching method and device | |
Chen et al. | A survey on heterogeneous one-class collaborative filtering | |
Xiao et al. | User behavior prediction of social hotspots based on multimessage interaction and neural network | |
Chen et al. | Tracking dynamics of opinion behaviors with a content-based sequential opinion influence model | |
Hu et al. | Self-attention-based temporary curiosity in reinforcement learning exploration | |
CN115409155A (en) | Information cascade prediction system and method based on Transformer enhanced Hooke process | |
CN116051175A (en) | Click rate prediction model and prediction method based on depth multi-interest network | |
CN117435715B (en) | Question answering method for improving time sequence knowledge graph based on auxiliary supervision signals | |
CN113330462A (en) | Neural network training using soft nearest neighbor loss | |
Sun et al. | Tcsa-net: a temporal-context-based self-attention network for next location prediction | |
Xu et al. | Knowledge graph-based reinforcement federated learning for Chinese question and answering | |
Deng et al. | Risk management of investment projects based on artificial neural network | |
Liu et al. | Model design and parameter optimization of CNN for side-channel cryptanalysis | |
An et al. | Multiuser behavior recognition module based on dc-dmn | |
Zhao et al. | Preference-aware Group Task Assignment in Spatial Crowdsourcing: Effectiveness and Efficiency | |
Yin et al. | Time-Aware Smart City Services based on QoS Prediction: A Contrastive Learning Approach | |
Xue et al. | Deep reinforcement learning based ontology meta-matching technique | |
Feng et al. | A New Method of Microblog Rumor Detection Based on Transformer Model | |
CN113190733B (en) | Network event popularity prediction method and system based on multiple platforms | |
CN116485501B (en) | Graph neural network session recommendation method based on graph embedding and attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||