CN115129888A - Active content caching method based on network edge knowledge graph - Google Patents
Active content caching method based on network edge knowledge graph
- Publication number
- CN115129888A (application CN202210667981.XA)
- Authority
- CN
- China
- Prior art keywords
- user
- content
- vector
- attention
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9536—Search customisation based on social or collaborative filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/957—Browsing optimisation, e.g. caching or content distillation
- G06F16/9574—Browsing optimisation, e.g. caching or content distillation of access to content, e.g. by caching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention provides an active content caching method based on a network edge knowledge graph, belonging to the technical field of wireless communication. The proposed caching method learns content popularity and proactively caches user-preferred content in the edge server. The method fuses knowledge graph information with sequence information, considering both the influence of a user's historically requested content segments on future preferences and the influence of friends with social relationships to the user on the user's latent preferences. The invention effectively improves the accuracy and diversity of cached content, raises user satisfaction, reduces traffic load, increases the hit rate, and better motivates participants to send content requests from mobile terminal devices to the edge server.
Description
Technical Field
The invention belongs to the technical field of wireless communication and relates to a user-preferred content caching method based on a network edge knowledge graph.
Background
With the rapid development of mobile internet technology, data volumes keep expanding and user demand keeps growing. Conventionally, data are uploaded to a centralized central node and downloaded on demand. However, the storage space and computing resources of the central node are usually limited, and storing massive data places a heavy burden on it. Meanwhile, the distance between edge nodes and the central node introduces delay when data are uploaded and downloaded, lengthening transmission time and reducing transmission efficiency. An efficient edge caching policy is therefore needed to store part of the content in edge servers. Since the storage capacity of an edge server is limited, the content a user is likely to prefer must be predicted in advance. Most passive content caching algorithms (e.g., FIFO, LRU, LFU) do not account for the future popularity of content and therefore achieve a low hit rate. In contrast, learning-based active caching algorithms can learn content popularity and proactively cache user-preferred content in edge servers.
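As a point of contrast with the learning-based approach, a passive policy such as LRU can be stated in a few lines. The following is a minimal illustrative sketch (the class and method names are our own, not part of the invention):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal passive LRU cache: evicts the least recently used item when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None  # cache miss
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")      # "a" becomes most recently used
cache.put("c", 3)   # capacity exceeded: evicts "b", not "a"
```

Note how eviction is driven purely by recency of access, with no notion of future popularity; this is the limitation the learning-based method addresses.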
Active content caching at the network edge offloads data and alleviates data traffic congestion. However, a policy-control problem arises in choosing the desired cache contents, i.e., deciding which content to store in the edge node. In addition, most current algorithms focus on improving the accuracy of the selected content, and pay less attention to improving its diversity. Accuracy plays a decisive role in system quality, but diversity is also important: the richer the diversity of the content cached by the edge server, the higher the satisfaction it brings to users.
To address these problems, the invention provides an active content caching method based on a network edge knowledge graph. As a typical application of large-scale data in a specific field, a knowledge graph can discover the internal relations among pieces of knowledge through analysis of the related data, thereby inferring new knowledge and providing indispensable information for the related functions of the system. The invention uses a deep neural network to model the complex interaction data in the knowledge graph and predicts diverse, accurate content with a high degree of future user preference, thereby caching edge data in advance.
The invention thus solves the problem of actively and accurately identifying user-preferred content in massive data and caching it at the edge server in advance, effectively improving user satisfaction, reducing traffic load, and increasing the hit rate.
Disclosure of Invention
The invention aims to solve the technical problem of how to accurately predict diversified user preference contents by establishing a user preference prediction model based on complex interactive data in a network edge knowledge graph and caching the user preference contents to an edge server in advance. Therefore, an active content caching method based on the network edge knowledge graph is provided.
The technical scheme adopted by the invention aiming at the technical problems is as follows:
an active content caching method based on a network edge knowledge graph comprises the following steps:
the method comprises the following steps: the mobile terminal device used by the user is responsible for initiating an access request to the edge server, and the edge server collects information through the network side, wherein the information comprises the access request initiated by the user to the edge server and the social network service information provided by the user, and the social network service information comprises a large amount of data related to the social contact of the user.
Step two: the user targeted by the preference prediction task enters the coverage range of the edge server, and the edge server then stores the collected user-social data according to the task requirements.
Step three: and (3) preprocessing the data related to the social contact of the user, which is collected in the second step, by the manager based on the knowledge graph, then establishing a user preference prediction model by combining a deep learning algorithm, inputting the preprocessed data into the model for training, and storing model parameters after the training is finished.
Step four: and caching the diversity accurate data predicted by the user preference prediction model in a database of the edge server to finish the access request of the user.
The third step comprises the following specific steps:
3.1: Construct a knowledge sub-graph from the user-social data. Entities in the sub-graph comprise the user, the user's friends, and historically accessed content; the relationship between the user and the user's friends is a friend relation, and the relationship between the user (and the user's friends) and historically accessed content is an access relation. The knowledge graph is pre-trained with the TransR embedding method to obtain low-dimensional dense embedding vectors. Specifically, for a given triplet (h, r, t), entities in the entity space are projected into the space of relation r through a matrix M_r; the mappings of the head entity vector h and the tail entity vector t are, respectively:
h_r = M_r h
t_r = M_r t
The TransR loss (scoring) function is:
f_r(h, t) = ||h_r + r - t_r||_{l1/l2}
where l1/l2 denotes the l1 or l2 norm, used to measure the closeness of h_r and t_r.
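The projection and scoring above can be checked numerically. The following sketch assumes plain NumPy and hypothetical toy dimensions (entity dimension 3, relation dimension 2); all variable names are illustrative:

```python
import numpy as np

def transr_score(h, r, t, M_r, norm=2):
    """TransR scoring: project entities into relation space, then ||h_r + r - t_r||."""
    h_r = M_r @ h  # head entity projected into the space of relation r
    t_r = M_r @ t  # tail entity projected into the space of relation r
    return np.linalg.norm(h_r + r - t_r, ord=norm)

# Toy example: entity dim 3, relation dim 2 (hypothetical values).
rng = np.random.default_rng(0)
M_r = rng.normal(size=(2, 3))
h, t = rng.normal(size=3), rng.normal(size=3)
r = M_r @ t - M_r @ h           # relation chosen so the triple fits perfectly
score = transr_score(h, r, t, M_r)
```

A score near zero indicates a plausible triple; training pushes valid triples toward low scores.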
To further mine the graph-structure information in the knowledge graph, a simplified GraphSAGE convolutional neural network performs the graph convolution operation on the pre-trained knowledge graph. The method drops the step that limits the number of sampled neighbors, and aggregates the first-order and second-order neighbors of a target entity with a mean aggregation function to generate the target entity's convolution embedding vector, and thereby the convolution embedding vectors of all entities in the knowledge graph. The convolution formula is:
h^k ← σ(W · MEAN(h^{k-1}))
where MEAN(x) computes the average of x; W is the layer's weight parameter; k is the order (hop) of the knowledge graph; σ is a nonlinear activation function.
After these operations, the embedding vector p_i of the user entity in the knowledge sub-graph is obtained, together with the embedding vectors of the user's friend entities {p_(i,1), p_(i,2), ..., p_(i,m)} and the embedding vectors of the historically requested content entities {q_(i,1), q_(i,2), ..., q_(i,n)}.
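A minimal sketch of one simplified mean-aggregation step, assuming a toy two-dimensional embedding table, an identity weight matrix, and ReLU as the nonlinearity σ (all names and values are hypothetical):

```python
import numpy as np

def mean_aggregate(node, neighbors, embeddings, W):
    """One simplified GraphSAGE step: h_v <- ReLU(W · mean({h_u : u in {v} ∪ N(v)}))."""
    stacked = np.stack([embeddings[n] for n in [node] + neighbors])
    return np.maximum(0.0, W @ stacked.mean(axis=0))  # mean aggregation + ReLU

# Toy graph (hypothetical): user "u" with two friend neighbors.
embeddings = {"u":  np.array([1.0, 0.0]),
              "f1": np.array([0.0, 1.0]),
              "f2": np.array([1.0, 1.0])}
W = np.eye(2)  # identity weights for a transparent example
h_u = mean_aggregate("u", ["f1", "f2"], embeddings, W)
# mean of the three vectors is [2/3, 2/3], and ReLU leaves it unchanged
```

Repeating this step once more with the updated embeddings would pull in second-order neighbor information, as the method describes.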
3.2: Construct the user-friend module. This module applies a friend-level attention mechanism to assign appropriate weights to the user's different friends and automatically capture each friend's influence on the user's preference. Its inputs are the user embedding vector p_i and the user's friend embedding vectors {p_(i,1), p_(i,2), ..., p_(i,m)}. The friend-level attention score is produced by a two-layer network with first-layer parameters W_1, W_2, b_1, a second-layer parameter, and a ReLU activation. The friend-level attention scores are normalized by Softmax into friend-level attention weights α_(i,m); intuitively, different friends contribute differently to the user's preference. With the friend-level attention weights, the user's final embedding vector P_i is computed as:
P_i = p_i + Σ_m α_(i,m) p_(i,m)
An additive strategy thus integrates the influence of the user's different friends into the user's preference, yielding the final user embedding vector P_i.
3.3: Construct the content sequence module. Because the content a user requested over a past period may affect the content the user will request in the future, this module processes the user's accessed content sequence with a convolutional neural network and a self-attention mechanism, respectively, to capture the dependencies among different contents. Its input is the embedding vectors {q_(i,1), q_(i,2), ..., q_(i,n)} of the content sequence the user requested over a past period, arranged in time order. From the content-sequence embedding vectors, the embedding vectors of L consecutive content segments within a time t_0 are extracted; the embedding vectors of the user's next D consecutive content segments are predicted by sliding a window of size L + D over the content-sequence embedding vectors, so each window produces a training instance.
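The sliding-window construction of training instances can be written out directly (the content IDs and the helper name are illustrative):

```python
def sliding_instances(seq, L, D):
    """Slide an (L+D)-sized window over the request sequence: the first L items
    of each window are the model input, the next D items are the target."""
    instances = []
    for start in range(len(seq) - L - D + 1):
        window = seq[start:start + L + D]
        instances.append((window[:L], window[L:]))
    return instances

# Toy request sequence of 6 content IDs, L=3 inputs, D=1 target per instance.
pairs = sliding_instances([10, 11, 12, 13, 14, 15], L=3, D=1)
# → [([10, 11, 12], [13]), ([11, 12, 13], [14]), ([12, 13, 14], [15])]
```

Each pair becomes one training instance: the L-item prefix is encoded by the module, and the D-item suffix is the prediction target.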
The module's convolution layer has N convolution kernels; each kernel performs a top-to-bottom convolution over the matrix formed by the embedding vectors of the L consecutive content segments, with c_G denoting the result of the G-th convolution operation. A max-pooling operation is then applied to each c_G, producing the output:
o = {max(c_1), max(c_2), ..., max(c_N)}
the self-attention layer of the module aims to obtain the dependency relationship between contents, and for a matrix formed by embedded vectors of L continuous content segments, the attention score of the embedded vector of each content in the matrix to the embedded vectors of the rest contents is calculated by adopting a feedforward neural network based on a tanh activation function:
wherein, W 3 、W 4 、b 2 As a parameter of the first layer, the layer,is the second layer parameter. The above attention scores were normalized by Softmax to obtain the final attention weight:
using this attention weight, a new fusion representation is obtained, as shown in the following equation:
combining the convolution layer and the results obtained from the attention layer using a join strategy to obtain a final representation of L consecutive content segments:
wherein, W 5 、b 3 Is a first layer parameter;is the RELU function. Z n Containing information that the user requested the content in the short term.
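An illustrative sketch of the module's two branches. The convolution branch follows the N-kernel, max-pooled design above; the self-attention scores are deliberately simplified to tanh of raw dot products, since the text specifies the parameterized tanh scorer only through its parameter list. All names and dimensions are hypothetical:

```python
import numpy as np

def content_sequence_features(E, kernels):
    """Convolve the L x d content-embedding matrix top-to-bottom with each kernel,
    max-pool per kernel, fuse with a tanh self-attention pass, and concatenate."""
    L, d = E.shape
    # Convolution branch: each kernel of height hk slides over the L rows.
    pooled = []
    for K in kernels:
        hk = K.shape[0]
        c = np.array([np.sum(E[j:j + hk] * K) for j in range(L - hk + 1)])
        pooled.append(c.max())                    # max pooling per kernel
    o = np.array(pooled)
    # Self-attention branch: simplified tanh pairwise scores, softmax per row.
    scores = np.tanh(E @ E.T)
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    fused = (weights @ E).mean(axis=0)            # fused representation
    return np.concatenate([o, fused])             # short-term representation Z

rng = np.random.default_rng(2)
E = rng.normal(size=(5, 3))                        # L=5 contents, dim d=3
kernels = [rng.normal(size=(2, 3)) for _ in range(4)]  # N=4 kernels of height 2
z = content_sequence_features(E, kernels)          # shape (N + d,) = (7,)
```

The concatenation mirrors the module's join strategy: convolution captures local patterns in the request order, while self-attention captures pairwise content dependencies.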
3.4: Construct the user-content interaction module. Based on the interaction information from the user-friend module and the content sequence module, this module obtains the user's long-term content-request information, processes the user's short-term content-request information with a user-level attention mechanism, and automatically acquires the user's attention weight for each piece of content, reflecting the user's general preferences. Concretely: first, a dot-product operation computes the similarity between the user's final embedding vector P_i and the final representation Z_n of the L consecutive content segments:
S = (P_i)^T Z_n
From the similarity matrix S, the user-level attention vectors are obtained, where S_j denotes the j-th row vector of S. After the attention vectors of the L consecutive content segments are obtained, a concatenation strategy joins them with the final user vector P_i, followed by an affine transformation with parameters W', b' and the sigmoid activation σ(x) = 1/(1 + e^(-x)). The result Q_(i,τ) is the probability that, within time τ, the user requests the content represented by the corresponding content embedding vector.
Binary cross-entropy is used as the loss function of the user preference prediction model, computed over TIME_i^N, the time step to be predicted for the user, and the set of embedding vectors of the D content segments to be predicted.
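The binary cross-entropy loss itself is standard; a minimal sketch over toy predicted probabilities and (hypothetical) request labels:

```python
import numpy as np

def bce_loss(q_pred, y_true, eps=1e-12):
    """Binary cross-entropy: -mean(y·log(q) + (1-y)·log(1-q)), with clipping
    to keep the logarithms finite."""
    q = np.clip(q_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(q) + (1.0 - y_true) * np.log(1.0 - q)))

# Toy case: two of three predicted contents were actually requested.
loss = bce_loss(np.array([0.9, 0.8, 0.1]), np.array([1.0, 1.0, 0.0]))
# loss = -(ln 0.9 + ln 0.8 + ln 0.9) / 3 ≈ 0.1446
```

Confident correct predictions (0.9 for a requested item, 0.1 for an unrequested one) yield a small loss; training drives Q_(i,τ) toward the observed request labels.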
Finally, the contents are ranked by the magnitude of Q_(i,τ): a larger Q_(i,τ) value means more popular content and a stronger user preference for it. The top X contents with the largest Q_(i,τ) values are selected as the edge server's cache contents, where X is determined by the edge server's cache size.
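The final ranking and top-X selection can be sketched as follows; scoring each candidate with a sigmoid of the dot product between P_i and the candidate's embedding is an illustrative simplification of the module's affine transformation, and all names are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def select_cache_contents(P_i, candidate_embeddings, X):
    """Score candidates by sigmoid(P_i · q), rank by predicted preference,
    and return the indices of the X highest-scoring contents to cache."""
    probs = sigmoid(candidate_embeddings @ P_i)  # Q_(i,τ) per candidate
    order = np.argsort(-probs)                   # most preferred first
    return order[:X].tolist(), probs

P_i = np.array([1.0, 0.0])                       # toy final user vector
candidates = np.array([[2.0, 0.0],               # strongly aligned -> high score
                       [-1.0, 0.0],              # opposed -> low score
                       [0.5, 3.0]])              # moderately aligned
top, probs = select_cache_contents(P_i, candidates, X=2)
# top == [0, 2]: the two contents with the largest predicted preference
```

In deployment, X would be set from the edge server's cache capacity, as the method states.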
The invention has the beneficial effects that:
compared with other content caching strategies, the caching method provided by the invention can learn the popularity of the content and actively cache the content preferred by the user in the edge server. The invention integrates knowledge map information and sequence information, not only considers the influence of content segments of historical requests of users on future preference, but also considers the influence of friends having social relations with the users on potential preference of the users. The invention can effectively improve the accuracy and diversity of the cache content, effectively improve the user satisfaction, reduce the traffic load, increase the hit rate and better stimulate the participants to use the mobile terminal equipment to send the content request to the edge server.
Drawings
FIG. 1 is a system workflow scenario diagram of the present invention.
Fig. 2 is a timing diagram of the system workflow of the present invention.
FIG. 3 is a schematic diagram of the data preprocessing flow of the present invention.
FIG. 4 is a schematic diagram of the model training process of the present invention.
FIG. 5(a) is a user-side workflow diagram of the present invention.
FIG. 5(b) is an edge server side workflow diagram of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a system workflow scene diagram of the present invention. The active content caching system of the invention consists of users, mobile terminal devices, an edge server, and a database. A user sends a request to the edge server, at which point the edge server acquires the user-related data. The data are preprocessed with knowledge graph technology and fed into a deep learning model for prediction; content popularity is judged from the predicted probabilities, and content meeting the conditions is cached in the database as backup.
Fig. 2 is a sequence diagram of a system work flow of the present invention, and the present invention provides an active content caching policy based on a network edge knowledge graph, which specifically includes the following steps:
the method comprises the following steps: the mobile terminal device used by the user is responsible for initiating an access request to the edge server, and the edge server collects information through the network side, wherein the information comprises the access request initiated by the user to the edge server and the social network service information provided by the user, and the social network service information comprises a large amount of data related to the social contact of the user.
Step two: the user targeted by the preference prediction task enters the coverage range of the edge server, and the edge server then stores the collected user-social data according to the task requirements.
Step three: and (3) preprocessing the data related to the social contact of the user, which is collected in the second step, by the manager based on the knowledge graph, then establishing a user preference prediction model by combining a deep learning algorithm, inputting the preprocessed data into the model for training, and storing model parameters after the training is finished.
Step four: and caching the diversity accurate data predicted by the user preference prediction model in a database of the edge server to finish the access request of the user.
FIG. 3 is a schematic diagram of a data preprocessing flow of the present invention, wherein the data preprocessing specifically comprises the following steps:
(1) and constructing a knowledge sub-map by using data related to the social contact of the user, wherein entities in the knowledge sub-map comprise the user, friends of the user and historical access content, the relationship between the user and the friends of the user is a friend relationship, and the relationship between the user and the friends of the user and the historical access content is an access relationship. And (4) pre-training the knowledge graph by adopting a TransR embedding method to obtain a low-dimensional dense embedding vector.
(2) To further mine the graph-structure information in the knowledge graph, a simplified GraphSAGE convolutional neural network performs the graph convolution operation on the pre-trained knowledge graph. The method drops the step that limits the number of sampled neighbors, and aggregates the first-order and second-order neighbors of a target entity with a mean aggregation function to generate the target entity's convolution embedding vector, and thereby the convolution embedding vectors of all entities in the knowledge graph.
(3) And taking the operation result as a data set for model training, further dividing the data set into a training set, a verification set and a prediction set, sending the training set and the verification set into a deep learning model for training, and detecting the training effect of the model by using the prediction set.
Before the detailed steps of establishing the user preference prediction model are explained, the following overview is given:
The user-social data are first preprocessed on the basis of the knowledge graph to obtain embedding vectors, which are then fed into the user preference prediction model for training and prediction; the model's prediction capability is analyzed, and its structure and parameters are adjusted continually according to the loss, improving the feasibility of the invention. The invention finally adopts an attention mechanism and a convolutional neural network as the main structures, optimizes the parameters to further improve prediction accuracy, and stores the trained parameters.
Fig. 4 is a schematic diagram of a model training process according to the present invention, which includes a user-friend module, a content sequence module, and a user-content interaction module, and specifically includes the following steps:
(1) and constructing a user-friend module. The module adopts a friend-level attention mechanism and aims to distribute proper weights to different friends of a user and automatically acquire the influence of the different friends on the preference of the user. And (3) integrating the influence of different friends of the user into the preference of the user by adopting an addition strategy to obtain a final embedded vector of the user.
(2) And constructing a content sequence module. Aiming at the problem that the request content of a user in the past period may affect the future request content of the user, the module respectively processes the content sequence accessed by the user by adopting a convolutional neural network and a self-attention mechanism, and aims to acquire the dependency among different contents. And combining the convolution layer and the result obtained from the attention layer by adopting a connection strategy to obtain a final representation of the continuous content segment in the content sequence.
(3) A user-content interaction module is constructed. The module acquires long-term content request information of a user based on interaction information of the user-friend module and the content sequence module, processes the short-term content request information of the user by adopting a user-level attention mechanism, and automatically acquires the attention weight of the user to each piece of content, so that the general preference of the user is reflected. And after obtaining the user attention vectors of the continuous content segments in the content sequence, connecting the user attention vectors with the final user vector by adopting a connection strategy, and carrying out affine transformation to obtain the probability that the user requests a certain content within a certain period of time.
FIG. 5(a) is a user-side workflow diagram of the present invention. The overall system workflow based on the user angle is further explained.
FIG. 5(b) is an edge server side workflow diagram of the present invention. The work flow of data preprocessing and model training at the edge server side is further explained.
Claims (3)
1. An active content caching method based on a network edge knowledge graph is characterized by comprising the following steps:
the method comprises the following steps: the method comprises the steps that mobile terminal equipment used by a user is responsible for initiating an access request to an edge server, the edge server collects information through a network side, the information comprises the access request initiated by the user to the edge server and social network service information provided by the user, and the social network service information comprises a large amount of data related to the social contact of the user;
step two: the user targeted by the preference prediction task enters the coverage range of the edge server, and the edge server then stores the collected user-social data according to the task requirements;
step three: the manager performs data preprocessing on the data related to the social contact of the user, which are collected in the second step, based on a knowledge graph, then establishes a user preference prediction model by combining a deep learning algorithm, inputs the preprocessed data into the model for training, and stores model parameters after the training is completed;
step four: and caching the diversity accurate data predicted by the user preference prediction model in a database of the edge server to finish the access request of the user.
2. The active content caching method based on the network edge knowledge graph according to claim 1, wherein in the third step, the specific steps of data preprocessing are as follows:
(1) construct a knowledge sub-graph from the user-social data, wherein the entities in the sub-graph comprise the user, the user's friends, and historically accessed content; the relationship between the user and the user's friends is a friend relation, and the relationship between the user (and the user's friends) and historically accessed content is an access relation; pre-train the knowledge graph with the TransR embedding method to obtain low-dimensional dense embedding vectors; specifically, for a given triplet (h, r, t), entities in the entity space are projected into the space of relation r through a matrix M_r, and the mappings of the head entity vector h and the tail entity vector t are, respectively:
h_r = M_r h
t_r = M_r t
the TransR loss function is:
f_r(h, t) = ||h_r + r - t_r||_{l1/l2}
where l1/l2 denotes the l1 or l2 norm, used to measure the closeness of h_r and t_r;
(2) to further mine the graph-structure information in the knowledge graph, perform the graph convolution operation on the pre-trained knowledge graph with a simplified GraphSAGE convolutional neural network; the method drops the step that limits the number of sampled neighbors, and aggregates the first-order and second-order neighbors of a target entity with a mean aggregation function to generate the target entity's convolution embedding vector, and thereby the convolution embedding vectors of all entities in the knowledge graph; the convolution formula is:
h^k ← σ(W · MEAN(h^{k-1}))
where MEAN(x) computes the average of x; W is the layer's weight parameter; k is the order (hop) of the knowledge graph; σ is a nonlinear activation function;
(3) after these operations, the embedding vector p_i of the user entity in the knowledge sub-graph is obtained, together with the embedding vectors of the user's friend entities {p_(i,1), p_(i,2), ..., p_(i,m)} and the embedding vectors of the historically requested content entities {q_(i,1), q_(i,2), ..., q_(i,n)}.
3. The active content caching method based on the network edge knowledge graph according to claim 1 or 2, wherein in the third step, the specific steps of establishing the user preference prediction model are as follows:
(1) construct the user-friend module: the module applies a friend-level attention mechanism to assign appropriate weights to the user's different friends and automatically capture each friend's influence on the user's preference; its inputs are the user embedding vector p_i and the user's friend embedding vectors {p_(i,1), p_(i,2), ..., p_(i,m)}; the friend-level attention score is produced by a two-layer network with first-layer parameters W_1, W_2, b_1, a second-layer parameter, and a ReLU activation; the friend-level attention scores are normalized by Softmax into friend-level attention weights α_(i,m), reflecting that different friends contribute differently to the user's preference; with these weights, the user's final embedding vector P_i is computed as:
P_i = p_i + Σ_m α_(i,m) p_(i,m)
an additive strategy thus integrates the influence of the user's different friends into the user's preference, yielding the final user embedding vector P_i;
(2) construct the content sequence module: because the content a user requested over a past period may affect the content the user will request in the future, the module processes the user's accessed content sequence with a convolutional neural network and a self-attention mechanism, respectively, to capture the dependencies among different contents; its input is the embedding vectors {q_(i,1), q_(i,2), ..., q_(i,n)} of the content sequence the user requested over a past period, arranged in time order; from the content-sequence embedding vectors, the embedding vectors of L consecutive content segments within a time t_0 are extracted, and the embedding vectors of the user's next D consecutive content segments are predicted by sliding a window of size L + D over the content-sequence embedding vectors, so each window produces a training instance;
the module's convolution layer has N convolution kernels; each kernel performs a top-to-bottom convolution over the matrix formed by the embedding vectors of the L consecutive content segments, with c_G denoting the result of the G-th convolution operation; a max-pooling operation is then applied to each c_G, producing the output:
o = {max(c_1), max(c_2), ..., max(c_N)}
the module's self-attention layer captures the dependency relationships between contents: for the matrix formed by the embedding vectors of the L consecutive content segments, a feedforward neural network with a tanh activation (first-layer parameters W_3, W_4, b_2 and a second-layer parameter) computes the attention score of each content's embedding vector against the embedding vectors of the remaining contents; these attention scores are normalized by Softmax into the final attention weights, with which a new fused representation is obtained; a concatenation strategy combines the outputs of the convolution layer and the self-attention layer (with parameters W_5, b_3 and a ReLU activation) into the final representation Z_n of the L consecutive content segments, which carries the information of the user's short-term content requests;
(3) constructing a user-content interaction module: the module acquires long-term content request information of a user based on interaction information of a user-friend module and a content sequence module, processes the short-term content request information of the user by adopting a user-level attention mechanism, and automatically acquires the attention weight of the user to each piece of content, so that the general preference of the user is reflected; the concrete implementation is as follows: firstly, calculating final embedded vector P of user by using dot product operation i With final representation Z of L successive content segments L Similarity of (2):
S = (P_i)^T Z_n
from the similarity matrix S, the user-level attention vector is obtained:
where S_j denotes the j-th row vector of the similarity matrix S. After the attention vectors of the L consecutive content segments are obtained, a concatenation strategy joins them with the final user vector P_i, followed by an affine transformation:
where σ(x) = 1/(1 + e^(-x)) is the sigmoid function; W′ and b′ are the layer parameters; Q^(i,τ) is a vector giving the probability that, within time τ, the user requests the content represented by each content embedding vector;
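A minimal numpy sketch of this user-level attention and prediction step, treating the similarity as per-content dot products; the names Wq and bq stand in for W′ and b′, and all shapes are assumptions:

```python
import numpy as np

def predict_preferences(P_i, Z, Wq, bq):
    """P_i: (m,) final user embedding; Z: (L, m) content representations.
    Wq: (D, 2m), bq: (D,) affine parameters mapping to D request scores."""
    # dot-product similarity of each content representation to the user
    S = Z @ P_i
    # Softmax over similarities gives the user-level attention weights
    w = np.exp(S - S.max())
    w /= w.sum()
    attn = w @ Z                       # user-level attention vector
    x = np.concatenate([attn, P_i])    # join strategy with the user vector
    # sigmoid affine transformation -> Q^(i,tau): request probabilities
    return 1.0 / (1.0 + np.exp(-(Wq @ x + bq)))
```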
binary cross-entropy is used as the loss function of the user preference prediction model, as follows:
where TIME_i^N denotes the time step for predicting the user, and the other term denotes the set of embedding vectors of the D content segments to be predicted;
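The binary cross-entropy objective can be sketched as follows (q are the predicted request probabilities, y the 0/1 request labels; averaging over the D contents of one time step is an assumption about the reduction):

```python
import numpy as np

def bce_loss(q, y):
    """Binary cross-entropy between predicted probabilities q in (0, 1)
    and binary request labels y, averaged over the D predicted contents."""
    return float(-np.mean(y * np.log(q) + (1 - y) * np.log(1 - q)))
```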
finally, the contents are ranked by popularity according to the value of Q^(i,τ): a larger Q^(i,τ) indicates a more popular content and a stronger user preference for it. The X contents with the largest Q^(i,τ) values are selected as the cache contents of the edge server, where X is determined by the cache size of the edge server.
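The final ranking-and-selection step above amounts to a top-X cut over the predicted probabilities, for example:

```python
import numpy as np

def select_cache(q, content_ids, X):
    """Rank contents by predicted request probability Q^(i,tau) and keep
    the X most popular as the edge server's cache contents."""
    order = np.argsort(-q)  # indices sorted by descending probability
    return [content_ids[j] for j in order[:X]]
```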
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210667981.XA CN115129888A (en) | 2022-06-14 | 2022-06-14 | Active content caching method based on network edge knowledge graph |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115129888A true CN115129888A (en) | 2022-09-30 |
Family
ID=83377343
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210667981.XA Pending CN115129888A (en) | 2022-06-14 | 2022-06-14 | Active content caching method based on network edge knowledge graph |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115129888A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116208970A (en) * | 2023-04-18 | 2023-06-02 | 山东科技大学 | Air-ground collaborative offloading and content acquisition method based on knowledge-graph awareness |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yu et al. | Federated learning based proactive content caching in edge computing | |
Thar et al. | DeepMEC: Mobile edge caching using deep learning | |
CN109635204A (en) | Online recommender system based on collaborative filtering and long short-term memory network | |
CN111262940A (en) | Vehicle-mounted edge computing application caching method, device and system | |
CN113434212A (en) | Cache-assisted cooperative task offloading and resource allocation method based on meta-reinforcement learning | |
CN113901327A (en) | Target recommendation model training method, recommendation device and electronic equipment | |
CN115293358A (en) | Internet of things-oriented clustered federated multi-task learning method and device | |
CN113873534B (en) | Active content caching method for blockchain-assisted federated learning in fog computing | |
CN114553963B (en) | Multi-edge node collaborative caching method based on deep neural network in mobile edge computing | |
CN111314862B (en) | Caching method with recommendation under deep reinforcement learning in fog wireless access network | |
CN112752308B (en) | Mobile prediction wireless edge caching method based on deep reinforcement learning | |
CN114039870B (en) | Deep learning-based real-time bandwidth prediction method for video stream application in cellular network | |
CN115374853A (en) | Asynchronous federated learning method and system based on T-step aggregation algorithm | |
CN113918829A (en) | Content caching and recommending method based on federated learning in a fog computing network | |
CN116361009A (en) | MEC computing unloading, resource allocation and cache joint optimization method | |
CN115129888A (en) | Active content caching method based on network edge knowledge graph | |
CN114154060A (en) | Content recommendation system and method fusing information age and dynamic graph neural network | |
CN116828052A (en) | Intelligent data collaborative caching method based on edge calculation | |
CN113962417A (en) | Video processing method and device, electronic equipment and storage medium | |
CN116367231A (en) | Edge computing Internet of vehicles resource management joint optimization method based on DDPG algorithm | |
CN114490447A (en) | Intelligent caching method for multitask optimization | |
CN115563519A (en) | Federated contrastive clustering learning method and system for non-independent identically distributed data | |
CN111901394A (en) | Mobile edge caching method and system jointly considering user preference and activity | |
Dedeoglu et al. | Continual learning of generative models with limited data: From wasserstein-1 barycenter to adaptive coalescence | |
CN114170560B (en) | Multi-device edge video analysis system based on deep reinforcement learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||