CN110784881B - Method, equipment and medium for actively caching multi-level edge nodes of Internet of things terminal


Info

Publication number
CN110784881B
Authority
CN
China
Prior art keywords
node
entropy value
content
transition probability
caching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911008975.8A
Other languages
Chinese (zh)
Other versions
CN110784881A (en)
Inventor
高强
张国翊
田志峰
郭少勇
张伟贤
陈建民
保剑
黄哲
黄儒雅
陈嘉
周瑾瑜
邵苏杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Power Supply Co ltd
Beijing University of Posts and Telecommunications
Original Assignee
Shenzhen Power Supply Co ltd
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Power Supply Co ltd and Beijing University of Posts and Telecommunications
Priority to CN201911008975.8A
Publication of CN110784881A
Application granted
Publication of CN110784881B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/02 Arrangements for optimising operational condition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/10 Flow control between communication endpoints
    • H04W28/14 Flow control between communication endpoints using intermediate storage
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses an active caching method for multi-level edge nodes of an internet of things terminal, comprising the following steps: step S1, predicting the movement probability of a node using a first-order Markov model; step S2, calculating an entropy value from the movement probabilities to measure the uncertainty of the mobility prediction, and judging the position at which content is prefetched and cached in the network according to the entropy value; and step S3, caching the content at the node in advance according to the judged next-hop position of the node. By implementing the invention, uncertainty is reduced, redundancy is eliminated, a high cache hit rate is ensured, and a small increase in delay is traded for the performance gains of active caching.

Description

Method, equipment and medium for actively caching multi-level edge nodes of Internet of things terminal
Technical Field
The invention belongs to the field of the Internet of things, and relates to an active caching method, device and medium for multi-level edge nodes of an Internet of things terminal.
Background
While the current architecture of the internet of things is very successful, it faces challenges in coping with the growing number of applications and services. With this growth, the internet is no longer merely a pipe between two communication endpoints, but a platform required to support mobility, security and, most importantly, content distribution. To address these challenges, academia has proposed future internet architectures such as information-centric networking (ICN). ICN assumes that the internet is used mainly for content distribution: a user interested in a particular content object sends packets specifying the name of that object, without knowing where the object is located or specifying the IP address of the destination that owns it. By decoupling the distributed content from established connections, the network gains flexibility, and the in-network caching characteristic of ICN makes content highly reusable.
Active caching is a method for improving content propagation efficiency. Unlike reactive caching policies, which cache previously requested content, active caching prefetches the content a mobile user is expected to be interested in. Proactively retrieving and buffering content can reduce the latency of predictable content requests while also mitigating backhaul traffic. It is also the main approach for mitigating the delay cost of handover in several settings, such as LTE and WiFi access, and in future internet architectures.
To achieve the above objective, active caching relies on the predictability of movement patterns to predict the next location of a mobile device and to decide which caching node should cache a prefetched content object. Previously proposed active caching strategies, however, either assume perfect mobility prediction or cache redundantly at multiple edge nodes to cope with prediction uncertainty.
To understand the state of the art, existing papers and patents were searched, compared and analyzed, and the following technical information with the highest relevance to the invention was screened out:
Prior art scheme 1, patent CN107454562A, is a D2D mobile content distribution method for the ICN architecture, in the technical field of internet mobile communication. It discloses an information-centric-network mobile content distribution method in which content data are cached in the caching device closest to the requesting end, requests are made based on position information, the data are returned along the original path after reaching the caching device, and during the return it is judged whether to cache each data packet. However, the scheme does not limit cache redundancy, so the same request initiated at different geographic positions may cause a great deal of cache redundancy in the network and reduce the overall service performance of the network.
Prior art scheme 2, patent CN108200206A, is an ICN seamless mobility system based on an SDN architecture, in the technical field of communication. By arranging POF controllers and related elements in the control domain of the ICN core network, it achieves seamless movement and lets a user recover, as required, data lost during the handover process. Although this scheme takes data recovery after movement into account, it caches only after the move to the new access point, which causes a certain delay, and the coordination cost of the controller is significant.
Prior art scheme 3, patent CN105357246A, is a caching method and system based on an information-centric network, in the technical field of information communication, and in particular a caching decision method based on content popularity. The method and system divide the nodes in the network into common cache nodes, a resource manager and backup cache nodes, and make caching decisions through coordination among the three. Although intra-domain coordination enables better caching decisions, the resource manager must maintain global information on the content cached at all nodes in the domain, the coordination overhead is large, and the performance still needs improvement.
Disclosure of Invention
The technical problem to be solved by the embodiments of the invention is to provide an active caching method, device and medium for multi-level edge nodes of an internet of things terminal that adopt an active caching strategy while eliminating the uncertainty and the high delay caused by cache redundancy.
The invention provides an active caching method for a multi-level edge node of an internet of things terminal, which comprises the following steps:
s1, predicting the movement probability of a node by adopting a first-order Markov model;
step S2, calculating an entropy value through movement probability to measure the uncertainty of mobility prediction, and judging the position of the content prefetched and cached in the network according to the entropy value;
and step S3, caching the content to the node in advance according to the judged next hop position of the node.
Further, in step S1, the Markov model is specifically expressed by the following formula:

P_{ij} = X(L_i, L_j) / Z(L_i)

where X(L_i, L_j) is the number of times the vehicle moves from L_i to L_j, and Z(L_i) is the total number of times the vehicle is at L_i; X and Z can be retrieved from the movement trajectories of the training data set.

Thus,

P(L_{t+1} = L_j | L_t = L_i) = p_{ij}, L_j \in N_i

where L = {L_1, L_2, …, L_n} represents the set of states and p_{ij} is the transition probability.
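For illustration, the counting behind this formula can be sketched in a few lines of Python. This is a minimal reading of the formula above, not code from the patent; the names `transition_probabilities` and `trajectories` are assumptions:

```python
from collections import defaultdict

def transition_probabilities(trajectories):
    """Estimate first-order Markov transition probabilities P_ij = X/Z.

    trajectories: iterable of node-ID sequences, e.g. [["L1", "L2", "L5"], ...],
    each sequence being one vehicle's movement trace in the training set.
    Returns p such that p[i][j] approximates P(next node = j | current node = i).
    """
    x = defaultdict(lambda: defaultdict(int))  # X(L_i, L_j): moves from i to j
    z = defaultdict(int)                       # Z(L_i): times observed at L_i
    for trace in trajectories:
        for current, nxt in zip(trace, trace[1:]):
            x[current][nxt] += 1
            z[current] += 1
    return {i: {j: n / z[i] for j, n in js.items()} for i, js in x.items()}
```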
Further, in step S2, the specific process of measuring the uncertainty of the mobility prediction by calculating the entropy value from the movement probabilities is:
Step S21, calculating an entropy value h from the transition probabilities, in the initial state w, from node L_i to its adjacent node set N_i;
Step S22, judging whether the entropy value h is smaller than a predetermined threshold h*: if the entropy value h is greater than or equal to the threshold h*, performing step S23; if the entropy value h is smaller than the threshold h*, performing step S25;
Step S23, accumulating the transition probabilities of the nodes in N_i connected to the same upper node as the transition probability of that upper node;
Step S24, recalculating the entropy value h on the aggregated transition probability set and returning to step S22;
Step S25, selecting the node with the highest transition probability as the prefetching node to cache the content.
Further, in step S21, the initial state w is specifically the set of transition probabilities from each node in the network to all its potential next-hop nodes.
Further, in step S21, the entropy value h is calculated from the transition probabilities according to the following formula:

H[P] = -\sum_{i=1}^{n} p_i \log_2 p_i

where P = {P_1, P_2, …, P_n} and \sum_{i=1}^{n} p_i = 1.
Further, in step S2, judging the position at which content is prefetched and cached in the network according to the entropy value specifically means that child nodes in the network tree structure vertically aggregate their transition probabilities upward to parent nodes until the entropy threshold is met, where only lower nodes connected to the same upper node can be aggregated upward.
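As an illustration of steps S21 to S25, a minimal Python sketch follows. It assumes the network tree is given as a `parent` map from each node to its upper node, that entropy is taken base 2, and that the threshold h* is 0.5, the value the description below associates with 90% prediction accuracy; all identifiers are illustrative, not from the patent:

```python
import math

def entropy(probs):
    """H[P] = -sum(p * log2(p)) over a set of transition probabilities."""
    return sum(-p * math.log2(p) for p in probs.values() if p > 0)

def select_prefetch_node(probs, parent, h_star=0.5):
    """Steps S21-S25: vertically aggregate transition probabilities up the
    network tree until the entropy falls below the threshold h*, then pick
    the node with the highest transition probability as the prefetch cache.

    probs:  {node_id: transition probability} over the potential next hops
    parent: {node_id: upper-level node_id}; a node missing from the map is
            treated as the source content node, where aggregation must stop.
    """
    while entropy(probs) >= h_star:              # S22: still too uncertain
        if any(n not in parent for n in probs):  # reached the source node
            break
        aggregated = {}
        for node, p in probs.items():            # S23: probabilities of nodes
            up = parent[node]                    # sharing the same upper node
            aggregated[up] = aggregated.get(up, 0.0) + p  # are accumulated
        probs = aggregated                       # S24: recompute on the new set
    return max(probs, key=probs.get)             # S25: highest probability wins
```

For example, with `probs = {"L9": 0.5, "L10": 0.4, "L11": 0.05, "L12": 0.05}` and a hypothetical `parent` mapping L9 and L10 to E4 and L11 and L12 to E5, one aggregation yields {E4: 0.9, E5: 0.1} with entropy of about 0.47, and E4 is returned.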
In another aspect, the invention provides a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps of the method:
predicting the movement probability of the node by adopting a first-order Markov model;
calculating an entropy value through the movement probability to measure the uncertainty of mobility prediction, and judging the position of the content prefetched and cached in the network according to the entropy value;
and caching the content to the node in advance according to the judged next hop position of the node.
In another aspect, the invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps of the method:
predicting the movement probability of the node by adopting a first-order Markov model;
calculating an entropy value through the movement probability to measure the uncertainty of mobility prediction, and judging the position of the content prefetched and cached in the network according to the entropy value;
and caching the content to the node in advance according to the judged next hop position of the node.
The embodiment of the invention has the following beneficial effects:
according to the method, the device and the medium for actively caching the multi-level edge node of the internet of things terminal, provided by the embodiment of the invention, by combining the ICN architecture characteristics, the uncertainty of mobility prediction is measured by utilizing the characteristic that any position in an ICN network can be cached, so that strategic decisions are made on the prefetching and caching positions in the network, redundant caching is eliminated, the uncertainty is reduced to the greatest extent, redundancy is eliminated, the cache hit rate is ensured, and the performance of exchanging active caching is improved by a small amount of delay increase.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings required in the description of the embodiments or of the prior art are briefly described below. Obviously, the drawings described below show only some embodiments of the invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of an embodiment of an active caching method for multi-level edge nodes of an internet of things terminal.
Fig. 2 is a flow chart of an active caching method for multi-level edge nodes of an internet of things terminal.
Fig. 3 is a schematic view of vertical aggregation in the active caching method of the multi-level edge node of the internet of things terminal.
Fig. 4 is a schematic diagram of delay gain in an embodiment of the present invention.
Fig. 5 is a schematic diagram of server load in an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of an embodiment of an active caching method for a multi-level edge node of an internet of things terminal, where the method includes the following steps:
s1, predicting the movement probability of a node by adopting a first-order Markov model;
in a particular embodiment, the markov predictor represents the movement pattern as a markov chain, which uses a historical construct of movement trajectories and then predicts the next location based on the current location. The first order markov model considers the transmission range of vehicles through several RSUs as they drive around in an on-board network; the goal of the mobility prediction model is to predict the next RSU with which the vehicle will be associated in order to connect to the internet backbone;eventually, this information will be used to prefetch content and meet future requests of the vehicle with lower latency and higher efficiency when actually connected to the next RSU. Thus, the markov model forms a set of states l= { L 1 ,L 2 ,…,L n Each state represents an RSU, and the transition probability is P ij Representing at L when the vehicle is currently connected to the RSU i Probability of being connected to RSU; l (L) i Neighbor forming vehicle of a particular RSU at which it can connect to the next state N i The set, and thus the next state, depends only on the current state.
The transition probability is defined as:

P_{ij} = X(L_i, L_j) / Z(L_i)

where X(L_i, L_j) is the number of times the vehicle moves from L_i to L_j, and Z(L_i) is the total number of times the vehicle is at L_i; X and Z can be retrieved from the movement trajectories of the training data set. Thus:

P(L_{t+1} = L_j | L_t = L_i) = p_{ij}, L_j \in N_i

where L = {L_1, L_2, …, L_n} represents the set of states and p_{ij} is the transition probability.
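Continuing the estimation sketch given after the corresponding formula in the summary above, prediction itself is then only a lookup of the learned distribution over the adjacent node set N_i of the current RSU (names again illustrative):

```python
def predict_next(p, current):
    """Return the most likely next RSU and the full distribution over N_i."""
    distribution = p.get(current, {})
    if not distribution:
        return None, {}  # RSU never observed in the training trajectories
    return max(distribution, key=distribution.get), distribution
```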
In particular, Markov-based movement prediction shows that resolving the prediction uncertainty would require prefetching and caching at multiple potential next RSUs, which results in cache redundancy and in prefetch traffic added on multiple links toward the data publisher.
Previous work on mobility prediction for prefetching relies on horizontal aggregation of probabilities, i.e., redundant caching at multiple edge nodes, to resolve the prediction uncertainty and ensure that the mobile user actually retrieves the prefetched content. With the in-network caching feature of the ICN architecture, nodes at all levels are able to cache content, so probabilities can instead be aggregated vertically, at higher levels of the network: higher-level nodes cover a larger area and increase the chance that the vehicle's future requests will be satisfied by prefetched content on those nodes, without increasing redundancy or congesting the links to the data publisher. In the particular case of the predictor used here, the output of the Markov predictor is a probability distribution over many potential next states; an entropy metric is used to quantify the uncertainty of the prediction and to make the caching decision.
Step S2, calculating an entropy value from the movement probabilities to measure the uncertainty of the mobility prediction, and judging the position at which content is prefetched and cached in the network according to the entropy value. The closer the events are to equally likely, i.e., the more nearly uniform their distribution, the larger the entropy; on the other hand, if some events are close to deterministic, the entropy is small. In support of using entropy as an uncertainty measure, a least-squares fit of the two variables prediction accuracy and entropy shows that if 90% prediction accuracy is desired, the entropy value must not exceed 0.5;
in a specific embodiment, the specific process of measuring the uncertainty of the mobility prediction by calculating the entropy value through the mobility probability is as follows:
step S21, according to the node L in the initial state w i To its adjacent node set N i Calculating an entropy value h;
more specifically, the initial state w specifically includes a set of transition probabilities that each node in the network reaches all the potential next-hop nodes;
the entropy value h is calculated using the transition probability according to the following formula:
Figure BDA0002243613400000051
wherein p= { P 1 ,P 2 ,…,P n },
Figure BDA0002243613400000052
As shown in fig. 2, more specifically, judging the position at which content is prefetched and cached in the network according to the entropy value means vertically aggregating the transition probabilities upward from child nodes to parent nodes in the network tree structure until the entropy threshold is met; only lower nodes connected to the same upper node can be aggregated upward.
Step S22, judging whether the entropy value h is smaller than the predetermined threshold h*: if the entropy value h is greater than or equal to the threshold h*, step S23 is performed; if the entropy value h is smaller than the threshold h*, step S25 is performed;
Step S23, accumulating the transition probabilities of the nodes in N_i connected to the same upper node as the transition probability of that upper node;
Step S24, recalculating the entropy value h on the aggregated transition probability set and returning to step S22;
Step S25, selecting the node with the highest transition probability as the prefetching node to cache the content.
Step S3, caching the content at the node in advance according to the judged next-hop position of the node.
As shown in FIG. 3, in particular embodiments, L_2 and L_3 may be aggregated because they are both connected to the same core router E_2. On the other hand, the transition probabilities of L_2, L_3 and L_5 cannot be aggregated together at the second level, but aggregation can continue upward, aggregating at the third-level node M.
When the mobile node is at node L_i and a potential next-hop node is being selected for prefetching and caching content: first, if the entropy calculated on the current transition probabilities is below a threshold h* of approximately 0.5, the next-hop prediction currently has high certainty, and the neighbor of node L_i with the highest transition probability is chosen to prefetch the content. On the other hand, if the entropy value is above the threshold, the movement of the current node has high uncertainty, so content prefetching at an upper node of the network is considered. In that case, all potential next-hop nodes of L_i are first aggregated upward: all transition probabilities connected to the same upper node are accumulated as the transition probability of that upper node, and the entropy value is then recalculated over the set of aggregated upper nodes. If the entropy is now below the threshold, the node with the highest transition probability in the current node set is selected to prefetch the content; otherwise, if the entropy is still above the threshold, the current node set is taken as the lower node set and the same operation is repeated at a higher level of the network, until the calculated entropy falls below the threshold or a source content node in the network is reached.
When the mobile node is at L_6, the transition probabilities to its adjacent node set are w = (0.5, 0.4, 0.05, 0.05), whose entropy is 1.46; it can be seen that caching content at the node with the highest transition probability, L_9, or at node L_10 still carries almost one-half prediction uncertainty. Following the process proposed in the method, L_9 and L_10 are aggregated upward, since both nodes are connected to node E_4; the updated transition probabilities become w = (0.9, 0.05, 0.05) and the entropy decreases to 0.57, so caching the content at node E_4 has higher reliability. On the other hand, when the node is at L_1, the transition probabilities w = (0.25, 0.25, 0.25, 0.25) have entropy 2. After vertically aggregating nodes L_2 and L_3, which are connected to the same upper node E_2, the entropy decreases to 1.5, which clearly still does not meet the method's requirement on prediction accuracy; continuing the upward aggregation, all second-level nodes are connected to the same third-level node M, so the entropy becomes 0. By caching the content at the third-level node M, the content can therefore be found wherever the node moves, and a small increase in delay buys a large saving in cache space.
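The entropy values quoted in this example can be checked directly against the formula H[P] = -\sum p_i \log_2 p_i; the short script below (illustrative only) reproduces them, rounded to two decimals:

```python
import math

def entropy(ps):
    return sum(-p * math.log2(p) for p in ps if p > 0)

print(round(entropy([0.5, 0.4, 0.05, 0.05]), 2))    # 1.46  at L6, before aggregation
print(round(entropy([0.9, 0.05, 0.05]), 2))         # 0.57  after aggregating at E4
print(round(entropy([0.25, 0.25, 0.25, 0.25]), 2))  # 2.0   at L1, before aggregation
print(round(entropy([0.5, 0.25, 0.25]), 2))         # 1.5   after aggregating L2, L3 at E2
print(round(entropy([1.0]), 2))                     # 0.0   after aggregating at M
```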
The transition probabilities are calculated with a Markov movement prediction model built from a San Francisco taxi movement model. The San Francisco taxi movement data set contains the longitude and latitude coordinates of about 500 taxis collected in the San Francisco area over several weeks. The map is divided into regions of equal size, each region representing a network-internal node, and the coordinates are associated with the regions; the data set is format-converted so that network nodes replace the longitude and latitude coordinates. The data set is then divided into a training set and a test set: the training set is used to construct the Markov predictor, which after successful construction yields the transition probability sets of all nodes, and the test set is used to verify the correctness of the transition probability calculation and the validity of the proposed strategy.
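A minimal sketch of this preprocessing is given below, assuming each trace row carries (latitude, longitude, timestamp) as in the public San Francisco cab dataset; the grid-cell size, grid origin and 80/20 split are illustrative assumptions, not values stated in the patent:

```python
def coord_to_region(lat, lon, lat0, lon0, cell_deg=0.005):
    """Map a GPS coordinate to a grid-cell ID; each cell is one network node."""
    return f"R{int((lat - lat0) / cell_deg)}_{int((lon - lon0) / cell_deg)}"

def trace_to_nodes(trace, lat0, lon0):
    """Convert one taxi's (lat, lon, t) trace into a node-ID sequence,
    dropping consecutive duplicates so each step is a region transition."""
    nodes = []
    for lat, lon, _t in trace:
        node = coord_to_region(lat, lon, lat0, lon0)
        if not nodes or nodes[-1] != node:
            nodes.append(node)
    return nodes

def split_traces(traces, train_fraction=0.8):
    """Split the converted traces into a training set (used to build the
    Markov predictor) and a held-out test set (used to validate it)."""
    cut = int(len(traces) * train_fraction)
    return traces[:cut], traces[cut:]
```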
The multi-level edge node active caching method for internet of things terminals provided by the invention is compared experimentally with baseline strategies in ndnSIM; the scenario compared is a vehicle handing over between RSUs while downloading a 50 MB file. There are 230 RSUs in total in the experiment, each RSU representing an edge node with transition probabilities in the Markov predictor. The transitions from all RSUs to their neighbor nodes are therefore simulated, and the average over all such scenarios is reported.
To focus on the performance of active caching, no reactive caches are installed on the nodes during the experiment, and the nodes are given a sufficiently large cache size so as not to affect the performance results related to the policy's choice of prefetch node. Furthermore, the experiment only reports the results of prefetching content, to compare the prediction performance gain with the selection of the prefetch node. The results presented are for the case where the core network is an M-ary tree structure with M = 6. Entropy-based caching is compared with edge caching of different degrees of redundancy and with no active caching. In the following experimental results, R denotes the cache redundancy degree, i.e., the number of copies of the same resource in the network.
As shown in fig. 4, one of the goals of prefetching is to reduce the latency of data retrieval to enhance the user experience. Delay is measured as the time between sending the first interest packet for a content object and receiving the corresponding data, and the figure shows the percentage delay gain compared with no active caching. As expected, all active caching strategies yield a positive delay gain relative to no active caching, since not all interest packets need to reach the server to retrieve the content; entropy-based caching achieves a delay gain of 60% compared with no active caching, exceeding edge caching without redundancy (R = 1).
As shown in fig. 5, one of the main goals of ICN and active caching is to reduce the burden on the original content publisher and thereby reduce backhaul traffic; server load is measured as the average number of requests reaching the server, including prefetch requests. The entropy-based policy is shown to place the least load on the server, because it eliminates redundancy and its high accuracy allows data to be recovered from intermediate caches. Edge caching, on the other hand, results in retransmissions to the server or in multiple prefetches, producing a large number of prefetch requests to the server.
In another aspect, the invention provides a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps of the method:
predicting the movement probability of the node by adopting a first-order Markov model;
calculating an entropy value through the movement probability to measure the uncertainty of mobility prediction, and judging the position of the content prefetched and cached in the network according to the entropy value;
and caching the content to the node in advance according to the judged next hop position of the node.
In another aspect, the invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps of the method:
predicting the movement probability of the node by adopting a first-order Markov model;
calculating an entropy value through the movement probability to measure the uncertainty of mobility prediction, and judging the position of the content prefetched and cached in the network according to the entropy value;
and caching the content to the node in advance according to the judged next hop position of the node.
Further details are given in the foregoing description of the drawings and are not repeated here.
The embodiment of the invention has the following beneficial effects:
according to the method, the device and the medium for actively caching the multi-level edge nodes of the internet of things terminal, the network is divided into a core area and an edge area according to the architecture of the internet of things;
aiming at the structural characteristics in the edge network of the Internet of things, the transition probability is calculated based on a Markov predictor, the entropy value quantization uncertainty is calculated, the flexibility of caching of the ICN at any position in the network is utilized, and the prefetching nodes are positioned, so that cache redundancy is eliminated, the uncertainty is reduced to the greatest extent, the redundancy is eliminated, the cache hit rate is ensured, and the performance of exchanging active cache is improved by a small amount of delay.
The above disclosure is only a preferred embodiment of the present invention, which of course cannot limit the scope of the invention; equivalent changes made according to the claims of the present invention therefore still fall within the scope of the invention.

Claims (7)

1. A method for actively caching at multi-level edge nodes of an internet of things terminal, characterized by comprising the following steps:
s1, predicting the movement probability of a node by adopting a first-order Markov model;
step S2, calculating an entropy value through movement probability to measure the uncertainty of mobility prediction, and judging the position of the content prefetched and cached in the network according to the entropy value;
the specific process of measuring the uncertainty of the mobility prediction by calculating the entropy value through the movement probability is as follows:
step S21, according to the node L in the initial state w i To its adjacent node set N i Calculating an entropy value h;
step S22, judging whether the entropy h is smaller than the predetermined threshold h * If the entropy value h is smaller than the predetermined threshold value h * Step S23 is performed if the entropy h is greater than the predetermined threshold h * Step S25 is performed;
step S23, accumulating N i The transition probability of the same upper node is used as the transition probability of the upper node;
step S24, re-calculating an entropy value h according to the aggregated transition probability set, and performing step S22;
step S25, selecting a node with highest transition probability as a prefetching node cache content;
and step S3, caching the content to the node in advance according to the judged next hop position of the node.
2. The method according to claim 1, wherein in step S1, the Markov model is specifically represented by the following formula:

P_{ij} = X(L_i, L_j) / Z(L_i)

where X(L_i, L_j) is the number of times the vehicle moves from L_i to L_j, and Z(L_i) is the total number of times the vehicle is at L_i; X, the number of times the vehicle moves from one node to another, and Z, the total number of times the vehicle is at a node, are retrieved from the movement tracks of the training data set;

thus,

P(L_{t+1} = L_j | L_t = L_i) = p_{ij}, L_j \in N_i

where L = {L_1, L_2, …, L_n} represents the set of states, p_{ij} is the transition probability, and N_i represents the set of nodes L_j adjacent to node L_i.
3. The method according to claim 1, characterized in that in step S21, the initial state w is specifically the set of transition probabilities from each node in the network to all its possible next-hop nodes.
4. The method according to claim 3, characterized in that in step S21, the entropy value h is calculated from the transition probabilities according to the following formula:

H[P] = -\sum_{i=1}^{n} p_i \log_2 p_i

where H[P] is the entropy value h corresponding to the transition probability set P = {P_1, P_2, …, P_n}, n is the number of transition probabilities in the set,

\sum_{i=1}^{n} p_i = 1,

and p_i is the i-th transition probability in the set.
5. The method according to claim 4, wherein in step S2, judging the position at which content is prefetched and cached in the network according to the entropy value specifically means that child nodes in the network tree structure vertically aggregate their transition probabilities upward to parent nodes until the entropy threshold is met, and only lower nodes connected to the same upper node can be aggregated upward.
6. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 5 when executing the computer program.
7. A computer readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
CN201911008975.8A 2019-10-23 2019-10-23 Method, equipment and medium for actively caching multi-level edge nodes of Internet of things terminal Active CN110784881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911008975.8A CN110784881B (en) 2019-10-23 2019-10-23 Method, equipment and medium for actively caching multi-level edge nodes of Internet of things terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911008975.8A CN110784881B (en) 2019-10-23 2019-10-23 Method, equipment and medium for actively caching multi-level edge nodes of Internet of things terminal

Publications (2)

Publication Number Publication Date
CN110784881A CN110784881A (en) 2020-02-11
CN110784881B (en) 2023-05-02

Family

ID=69386281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911008975.8A Active CN110784881B (en) 2019-10-23 2019-10-23 Method, equipment and medium for actively caching multi-level edge nodes of Internet of things terminal

Country Status (1)

Country Link
CN (1) CN110784881B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113422801B (en) * 2021-05-13 2022-12-06 河南师范大学 Edge network node content distribution method, system, device and computer equipment
CN113037872B (en) * 2021-05-20 2021-08-10 杭州雅观科技有限公司 Caching and prefetching method based on Internet of things multi-level edge nodes

Citations (4)

Publication number Priority date Publication date Assignee Title
CN107018493A (en) * 2017-04-20 2017-08-04 北京工业大学 A kind of geographical position Forecasting Methodology based on continuous sequential Markov model
CN108093056A (en) * 2017-12-25 2018-05-29 重庆邮电大学 Information centre's wireless network virtualization nodes buffer replacing method
CN109391681A (en) * 2018-09-14 2019-02-26 重庆邮电大学 V2X mobility prediction based on MEC unloads scheme with content caching
CN110312231A (en) * 2019-06-28 2019-10-08 重庆邮电大学 Content caching decision and resource allocation joint optimization method based on mobile edge calculations in a kind of car networking

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9471497B2 (en) * 2014-01-24 2016-10-18 Netapp, Inc. Methods for combining access history and sequentiality for intelligent prefetching and devices thereof

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN107018493A (en) * 2017-04-20 2017-08-04 北京工业大学 A kind of geographical position Forecasting Methodology based on continuous sequential Markov model
CN108093056A (en) * 2017-12-25 2018-05-29 重庆邮电大学 Information centre's wireless network virtualization nodes buffer replacing method
CN109391681A (en) * 2018-09-14 2019-02-26 重庆邮电大学 V2X mobility prediction based on MEC unloads scheme with content caching
CN110312231A (en) * 2019-06-28 2019-10-08 重庆邮电大学 Content caching decision and resource allocation joint optimization method based on mobile edge calculations in a kind of car networking

Non-Patent Citations (1)

Title
Multi-level alert aggregation and correlation method based on self-expanding time windows; Li Hongcheng et al.; 《工程科学与技术》; 2017-01-20 (Issue 01); full text *

Also Published As

Publication number Publication date
CN110784881A (en) 2020-02-11

Similar Documents

Publication Publication Date Title
Abani et al. Proactive caching with mobility prediction under uncertainty in information-centric networks
Zhong et al. A deep reinforcement learning-based framework for content caching
US10305968B2 (en) Reputation-based strategy for forwarding and responding to interests over a content centric network
US10917328B2 (en) Routing updates in ICN based networks
KR101943530B1 (en) Systems and methods for placing virtual serving gateways for mobility management
Mahmood et al. Mobility-aware edge caching for connected cars
EP2985970B1 (en) Probabilistic lazy-forwarding technique without validation in a content centric network
CN110784881B (en) Method, equipment and medium for actively caching multi-level edge nodes of Internet of things terminal
CN109218747A (en) Video traffic classification caching method in super-intensive heterogeneous network based on user mobility
CN113382059B (en) Collaborative caching method based on federal reinforcement learning in fog wireless access network
CN111935246A (en) User generated content uploading method and system based on cloud edge collaboration
CN110913430A (en) Active cooperative caching method and cache management device for files in wireless network
CN113411826A (en) Edge network equipment caching method based on attention mechanism reinforcement learning
Yu et al. Mobility-aware proactive edge caching for large files in the internet of vehicles
CN115361710A (en) Content placement method in edge cache
CN115314944A (en) Internet of vehicles cooperative caching method based on mobile vehicle social relation perception
CN113114762B (en) Data caching method and system
CN112911614B (en) Cooperative coding caching method based on dynamic request D2D network
CN103052114A (en) Data cache placement system and data caching method
Mishra et al. An efficient content replacement policy to retain essential content in information-centric networking based internet of things network
CN111901833A (en) Unreliable channel transmission-oriented joint service scheduling and content caching method
CN113473408B (en) User association method and system for realizing video transmission in Internet of vehicles
KR20160097421A (en) Bio-inspired Algorithm based P2P Content Caching Method for Wireless Mesh Networks and System thereof
Li et al. A smart cache content update policy based on deep reinforcement learning
Yang et al. Efficient Vehicular Edge Computing: A Novel Approach With Asynchronous Federated and Deep Reinforcement Learning for Content Caching in VEC

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant