CN110784881A - Method, device and medium for actively caching multi-level edge nodes of Internet of things terminal - Google Patents

Method, device and medium for actively caching multi-level edge nodes of Internet of things terminal

Info

Publication number
CN110784881A
CN110784881A (application CN201911008975.8A)
Authority
CN
China
Prior art keywords
node
caching
entropy
content
entropy value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911008975.8A
Other languages
Chinese (zh)
Other versions
CN110784881B (en)
Inventor
高强
张国翊
田志峰
郭少勇
张伟贤
陈建民
保剑
黄哲
黄儒雅
陈嘉
周瑾瑜
邵苏杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Shenzhen Power Supply Bureau Co Ltd
Original Assignee
Beijing University of Posts and Telecommunications
Shenzhen Power Supply Bureau Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications, Shenzhen Power Supply Bureau Co Ltd filed Critical Beijing University of Posts and Telecommunications
Priority to CN201911008975.8A priority Critical patent/CN110784881B/en
Publication of CN110784881A publication Critical patent/CN110784881A/en
Application granted granted Critical
Publication of CN110784881B publication Critical patent/CN110784881B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/02 Arrangements for optimising operational condition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/10 Flow control between communication endpoints
    • H04W28/14 Flow control between communication endpoints using intermediate storage
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses an active caching method for multi-level edge nodes of an Internet of things terminal, which comprises the following steps: step S1, predicting the movement probability of a node by adopting a first-order Markov model; step S2, calculating an entropy value from the movement probability to measure the uncertainty of the mobility prediction, and determining the prefetching and caching positions of content in the network according to the entropy value; and step S3, caching the content at the node in advance according to the determined next-hop position of the node. The invention reduces uncertainty, eliminates redundancy, ensures a high cache hit rate, and trades a small amount of delay for improved active-caching performance.

Description

Method, device and medium for actively caching multi-level edge nodes of Internet of things terminal
Technical Field
The invention belongs to the field of the Internet of things, and relates to an active caching method, device, and medium for multi-level edge nodes of an Internet of things terminal.
Background
While the current Internet of things architecture is very successful, it faces challenges in dealing with the increasing number of applications and services. With this growth, the Internet is no longer merely a tool connecting two communication endpoints, but one that must support mobility, security, and, most importantly, content distribution. To address these challenges, academia has proposed future Internet architectures such as Information Centric Networking (ICN). ICN treats the Internet primarily as a vehicle for content dissemination: a user interested in a particular content object sends packets specifying the name of the content object, without knowing where the content object is and without specifying the IP address of the destination that owns it. Content distribution is thereby decoupled from established connections, giving the network greater flexibility, and the in-network caching characteristic of ICN makes content highly reusable.
Active caching is a method to improve the efficiency of content dissemination. Unlike reactive caching strategies, which cache previously requested content, active caching prefetches ahead of time the content that mobile users are anticipated to be interested in. Actively acquiring and caching content can reduce the delay in serving predictable content requests while also mitigating backhaul traffic. It is also the main solution for mitigating the latency costs caused by handovers in several settings, such as LTE and Wi-Fi access and future Internet architectures.
To achieve the above, active caching relies on the predictability of movement patterns to predict the next location of the mobile device and decide which cache node should cache the prefetched content object. Furthermore, previously proposed active caching strategies either assume perfect mobility prediction or cache redundantly at multiple edge nodes to cope with prediction uncertainty.
To survey the state of the prior art, existing papers and patents were searched, compared, and analyzed, and the following technical information highly relevant to the invention was screened out:
Prior art scheme 1 is the patent "A D2D mobile content distribution method for an ICN architecture" (patent number CN107454562A), which belongs to the technical field of Internet mobile communication and specifically relates to a D2D mobile content distribution method for an Information Centric Networking (ICN) architecture. The method caches content data in the cache device closest to the requesting end, issues requests based on position information, returns the data along the original path after the content data reaches the cache device, and decides during the return whether to cache each data packet. However, the scheme does not limit cache redundancy: because the same request may be initiated at different geographic positions, a large amount of cache redundancy is generated in the network, which degrades the overall service performance of the network.
Prior art scheme 2 is the patent "An ICN seamless mobility system based on an SDN architecture" (patent number CN108200206A), which belongs to the technical field of communication and specifically relates to an ICN seamless mobility system based on an SDN architecture. The proposed system deploys a POF controller and related components in the control domain of the ICN core network, can achieve seamless mobility, and allows a user to recover, as needed, data lost during the mobility handover process. Although the scheme considers data recovery after a move, the recovery is caching performed after moving to the new access point, which introduces a certain delay, and the coordination cost of the controller is high.
Prior art scheme 3 is the patent "Caching method and system based on an information-centric network" (patent number CN105357246A), which belongs to the technical field of information communication and specifically relates to a caching decision method based on content popularity. The proposed caching method and system divide the nodes in the network into ordinary caching nodes, a resource manager, and backup caching nodes, and make caching decisions through coordination among them. Although coordination within the same domain can yield better caching decisions, the resource manager must maintain global information about the cached contents of all nodes in the domain, the extra coordination overhead is high, and the performance needs improvement.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide a method, a device, and a medium for active caching at multi-level edge nodes of an Internet of things terminal, addressing the problem that existing active caching strategies, when eliminating cache redundancy, produce a large amount of uncertainty and high delay.
The invention provides an active caching method for multi-level edge nodes of an Internet of things terminal, which comprises the following steps:
step S1, predicting the movement probability of the node by adopting a first-order Markov model;
step S2, calculating the entropy value through the movement probability to measure the uncertainty of the mobility prediction, and judging the pre-fetching and caching positions of the content in the network according to the entropy value;
and step S3, caching the content to the node in advance according to the judged next hop position of the node.
Further, in step S1, the Markov model is specifically given by the following formula:

p_ij = X(L_i, L_j) / Z(L_i)

where X(L_i, L_j) is the number of times the vehicle moves from L_i to L_j, and Z(L_i) is the number of times the vehicle visits L_i; both X and Z can be retrieved from the movement trajectories of the training data set. Therefore,

P = (p_ij), i, j = 1, 2, …, n

where L = {L_1, L_2, …, L_n} represents the set of states and p_ij is the transition probability.
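The counting estimate p_ij = X(L_i, L_j) / Z(L_i) can be sketched as follows. This is a minimal illustration, not code from the patent; the function name, the RSU labels, and the toy trajectory are hypothetical.

```python
from collections import Counter

def transition_probabilities(trajectory):
    """Estimate first-order Markov transition probabilities p_ij from a
    movement trajectory: p_ij = X(L_i, L_j) / Z(L_i), where X counts
    moves from L_i to L_j and Z counts visits to L_i (with a successor)."""
    X = Counter(zip(trajectory, trajectory[1:]))  # moves L_i -> L_j
    Z = Counter(trajectory[:-1])                  # visits to L_i
    return {(i, j): X[(i, j)] / Z[i] for (i, j) in X}

# Toy example: a vehicle passing RSUs L1 -> L2 -> L3, sometimes L1 -> L3.
p = transition_probabilities(["L1", "L2", "L3", "L1", "L3", "L1", "L2"])
# p[("L1", "L2")] is 2/3: of the three departures from L1, two went to L2.
```

In practice the trajectory would come from the training split of a mobility data set, as described later in the embodiment.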
Further, in step S2, the specific process of measuring the uncertainty of the mobility prediction by calculating the entropy from the movement probability is as follows:
step S21, starting from the initial state w, calculating an entropy value h from the transition probabilities from node L_i to its neighboring node set N_i;
step S22, judging whether the entropy value h is less than the set threshold h*: if the entropy value h is less than the set threshold h*, proceeding to step S25; if the entropy value h is not less than the set threshold h*, proceeding to step S23;
step S23, accumulating the transition probabilities of the nodes in N_i that are connected to the same upper-level node, as the transition probability of that upper-level node;
step S24, recalculating the entropy value h from the aggregated transition probability set, and returning to step S22;
step S25, selecting the node with the highest transition probability as the prefetch node and caching the content there.
Further, in step S21, the initial state w specifically includes a transition probability set of each node in the network reaching all potential next-hop nodes.
Further, in step S21, the entropy value h is calculated from the transition probabilities according to the following formula:

h(P) = − Σ_{i=1}^{n} p_i · log2 p_i

where P = {p_1, p_2, …, p_n} and Σ_{i=1}^{n} p_i = 1.
Further, in step S2, determining the prefetching and caching position of the content in the network according to the entropy value means vertically aggregating transition probabilities upward from child nodes to their parent node in the network tree structure until the entropy threshold is satisfied; the lower-level nodes that can be aggregated upward must be connected to the same upper-level node.
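The upward-aggregation step (accumulating the probabilities of children that share an upper-level node, as in step S23) can be sketched as below. The topology, node names, and probability values are hypothetical, chosen only to illustrate the accumulation.

```python
def aggregate_upward(probs, parent_of):
    """Step S23 sketched: accumulate the transition probabilities of
    lower-level nodes connected to the same upper-level node, giving the
    transition probability of each upper-level node."""
    agg = {}
    for node, p in probs.items():
        parent = parent_of[node]
        agg[parent] = agg.get(parent, 0.0) + p
    return agg

# Hypothetical topology: L2 and L3 share parent E2, while L5 hangs under E3.
probs = {"L2": 0.25, "L3": 0.25, "L5": 0.5}
parent_of = {"L2": "E2", "L3": "E2", "L5": "E3"}
agg = aggregate_upward(probs, parent_of)  # {"E2": 0.5, "E3": 0.5}
```

Because the aggregated distribution has fewer, larger entries, its entropy is never larger than that of the original distribution, which is why repeating this step eventually drives the entropy below the threshold.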
In another aspect, the present invention provides a computer device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the following steps of the method when executing the computer program:
predicting the movement probability of the nodes by adopting a first-order Markov model;
calculating the uncertainty of mobility prediction measured by the entropy value through the mobility probability, and judging the pre-fetching and caching positions of the content in the network according to the entropy value;
and caching the content to the node in advance according to the judged next hop position of the node.
In another aspect, the present invention also provides a computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method of:
predicting the movement probability of the nodes by adopting a first-order Markov model;
calculating the uncertainty of mobility prediction measured by the entropy value through the mobility probability, and judging the pre-fetching and caching positions of the content in the network according to the entropy value;
and caching the content to the node in advance according to the judged next hop position of the node.
The embodiment of the invention has the following beneficial effects:
the method, the device and the medium for the active caching of the multilevel edge node of the internet of things terminal provided by the embodiment of the invention combine the characteristics of an ICN architecture, measure the uncertainty of mobility prediction by using entropy through utilizing the characteristic that any position in the ICN network can be cached, so as to make strategic decisions on the positions of prefetching and caching in the network, thereby eliminating redundant caching, reducing the uncertainty to the maximum extent, eliminating redundancy and ensuring the hit rate of high caching, and increasing the performance of replacing the active caching by a small amount of delay.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is within the scope of the present invention for those skilled in the art to obtain other drawings based on the drawings without inventive exercise.
Fig. 1 is a main flow diagram of an embodiment of an active caching method for a multi-level edge node of an internet of things terminal according to the present invention.
Fig. 2 is a schematic flow chart of an active caching method for a multi-level edge node of an internet of things terminal according to the present invention.
Fig. 3 is a schematic view of vertical aggregation in the multi-level edge node active caching method for the internet of things terminal according to the present invention.
Fig. 4 is a schematic diagram of delay gain conditions in an embodiment of the present invention.
Fig. 5 is a schematic diagram of a server load situation in an embodiment provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
Fig. 1 shows a main flow diagram of an embodiment of an active caching method for a multi-level edge node of an internet of things terminal according to the present invention, where in this embodiment, the method includes the following steps:
step S1, predicting the movement probability of the node by adopting a first-order Markov model;
in particular embodiments, the Markov predictor represents the movement pattern as a Markov chain that is constructed using a history of movement trajectories and then predicts the next location based on the current location. The first order markov model considers the transmission range in the on-board network for vehicles to pass through several RSUs as they drive around; the goal of the mobility prediction model is to predict the next RSU with which the vehicle will be associated in order to connect to the internet backbone; eventually, this information will be used to prefetch content and satisfy future requests of the vehicle with lower latency and greater efficiency when actually connecting to the next RSU. Thus, the Markov model forms a set of states L ═ L 1,L 2,…,L nEach state represents an RSU with a transition probability P ijAt L when the vehicle is currently connected to the RSU iThe probability of being connected to the RSU; l is iThe neighbours of a particular RSU form a state N where the vehicle can connect to the next one iSet, so the next state depends only on the current state.
The transition probability is defined as:

p_ij = X(L_i, L_j) / Z(L_i)

where X(L_i, L_j) is the number of times the vehicle moves from L_i to L_j, and Z(L_i) is the number of times the vehicle visits L_i; both X and Z can be retrieved from the movement trajectories of the training data set. Therefore:

P = (p_ij), i, j = 1, 2, …, n

where L = {L_1, L_2, …, L_n} represents the set of states and p_ij is the transition probability.
In particular, Markov-based mobility prediction shows that, to resolve prediction uncertainty, multiple potential next RSUs would require prefetching and caching, which results in cache redundancy and increased prefetch traffic on multiple links to the data publisher.
To resolve prediction uncertainty, previous work on prefetching with mobility prediction relies on horizontal aggregation of probabilities, i.e., redundant caching at multiple edge nodes, to ensure that mobile users actually retrieve the prefetched content. With the in-network caching feature of the ICN architecture, nodes at all levels are able to cache content, so probabilities can instead be aggregated vertically and content cached at higher levels of the network: a higher-level node covers a larger area, increasing the chance that the vehicle's future requests will be satisfied by content prefetched at that node, without increasing redundancy or congesting links to the data publisher. In the specific case of the predictor used here, the output of the Markov predictor is a probability distribution over many potential next states, and an entropy metric is used to quantify the uncertainty of the prediction and make caching decisions.
Step S2, calculating an entropy value from the movement probability to measure the uncertainty of the mobility prediction, and determining the prefetching and caching positions of the content in the network according to the entropy value. The closer the events are to equally probable, the larger the entropy; on the other hand, if some events are close to certain, the entropy is small. To support the use of entropy as an uncertainty measure, a least-squares fit of the two variables prediction accuracy and entropy shows that the entropy value must not exceed 0.5 if a prediction accuracy of 90% is to be achieved;
in a specific embodiment, the specific process of measuring uncertainty of mobility prediction by calculating entropy through mobile probability includes:
step S21, starting from the initial state w, calculating an entropy value h from the transition probabilities from node L_i to its neighboring node set N_i;
more specifically, the initial state w comprises the set of transition probabilities from each node in the network to all of its potential next-hop nodes;
the entropy value h is calculated from the transition probabilities according to the following formula:

h(P) = − Σ_{i=1}^{n} p_i · log2 p_i

where P = {p_1, p_2, …, p_n} and Σ_{i=1}^{n} p_i = 1.
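The entropy computation can be sketched directly from the formula. The base-2 logarithm is an assumption consistent with the numerical examples given later in this description; the function name is hypothetical.

```python
import math

def entropy(probs):
    """h(P) = -sum_i p_i * log2(p_i) over a transition-probability
    distribution P; zero-probability terms are skipped, since
    p * log2(p) -> 0 as p -> 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A near-certain prediction has low entropy; a uniform one has high entropy.
low = entropy([0.9, 0.05, 0.05])          # ~0.57
high = entropy([0.25, 0.25, 0.25, 0.25])  # exactly 2.0
```

Under the accuracy criterion stated above, only a distribution with entropy at or below roughly 0.5 would be considered certain enough to prefetch at a single edge node.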
As shown in fig. 2, more specifically, determining the prefetching and caching position of the content in the network according to the entropy value means vertically aggregating transition probabilities upward from child nodes to their parent node in the network tree structure until the entropy threshold is met; the lower-level nodes that can be aggregated upward must be connected to the same upper-level node.
Step S22, judging whether the entropy value h is less than the established threshold h*: if the entropy value h is less than the threshold h*, go to step S25; if the entropy value h is not less than the threshold h*, go to step S23;
step S23, accumulating the transition probabilities of the nodes in N_i that are connected to the same upper-level node, as the transition probability of that upper-level node;
step S24, recalculating the entropy value h from the aggregated transition probability set, and returning to step S22;
step S25, selecting the node with the highest transition probability as the prefetch node and caching the content there.
And step S3, caching the content to the node in advance according to the judged next hop position of the node.
As shown in FIG. 3, in particular embodiments, the transition probabilities of L_2 and L_3 can be aggregated because both are connected to the same core router E_2; on the other hand, the transition probabilities of L_2, L_3, and L_5 cannot all be aggregated at the second level, but can continue to be aggregated upward, with the aggregation occurring at the third-level node M.
When the mobile node is at node L_i, the process of selecting a potential next-hop node at which to prefetch and cache content is as follows. First, if the entropy calculated from the current transition probabilities is below the threshold h* ≈ 0.5, the current next-hop prediction has high certainty, and the neighbor of node L_i with the highest transition probability is selected to prefetch the content. On the other hand, if the entropy is above the threshold, the movement of the current node has a high degree of uncertainty, so content prefetching at upper-level nodes of the network is considered. In this case, all potential next-hop nodes of L_i are first aggregated upward: the transition probabilities of all nodes connected to the same upper-level node are accumulated as the transition probability of that upper-level node. The entropy of the aggregated set of upper-level nodes is then recalculated. If it is below the threshold, the node with the highest transition probability in the current node set is selected to prefetch the content; otherwise, the current node set is treated as a lower-level node set and the same operation is repeated at a higher level of the network, until the calculated entropy falls below the threshold or the source content node in the network is reached.
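The selection process just described can be sketched as a loop over levels of the tree. This is a simplified sketch under stated assumptions: the threshold value 0.5, the two-level topology, and all node names and probabilities are hypothetical, and the real procedure would also need the per-level transition statistics maintained by the network.

```python
import math

def entropy(probs):
    """Entropy over a dict mapping candidate nodes to probabilities."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def select_prefetch_node(probs, parent_of, threshold=0.5):
    """If the entropy of the current candidates is below the threshold,
    pick the most probable node; otherwise accumulate the probabilities
    of candidates sharing the same upper-level parent and retry there,
    stopping if no higher level exists (the source content node)."""
    while entropy(probs) >= threshold:
        agg = {}
        for node, p in probs.items():
            parent = parent_of.get(node)
            if parent is None:               # reached the source content node
                return max(probs, key=probs.get)
            agg[parent] = agg.get(parent, 0.0) + p
        probs = agg
    return max(probs, key=probs.get)

# Hypothetical two-level topology: L9, L10 under E4; L11, L12 under E5.
parent_of = {"L9": "E4", "L10": "E4", "L11": "E5", "L12": "E5"}
probs = {"L9": 0.5, "L10": 0.4, "L11": 0.05, "L12": 0.05}
node = select_prefetch_node(probs, parent_of)  # high entropy -> aggregate -> E4
```

In this sketch the edge-level entropy (about 1.46) exceeds the threshold, so the probabilities are aggregated to E4 and E5, after which the entropy (about 0.47) permits caching at E4.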
For example, when the mobile node is at L_6 with transition probabilities w = (0.5, 0.4, 0.05, 0.05), the entropy over the neighboring node set N_6 is 1.46: caching at the node with the highest transition probability, L_9, or at node L_10 would therefore carry prediction uncertainty of nearly one-half probability. With the process proposed in this method, the transition probabilities of nodes L_9 and L_10 are aggregated upward, since both nodes are connected to node E_4; the updated transition probabilities become w = (0.9, 0.05, 0.05) and the entropy falls to 0.57, so content cached at node E_4 has higher reliability. On the other hand, when the node is at L_1 with transition probabilities w = (0.25, 0.25, 0.25, 0.25), the entropy is 2. After vertically aggregating the transition probabilities of nodes L_2 and L_3, which are connected to the same upper-level node E_2, the entropy falls to 1.5, which clearly still cannot meet the prediction-accuracy requirement. After continued upward aggregation, the entropy becomes 0, because all second-level nodes are connected to the same third-level node M; caching content at the third-level node M ensures that the content can be retrieved wherever the node moves, trading a small amount of delay for a large saving in cache space.
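The entropy values quoted in this example can be checked numerically. A small sketch, using the base-2 entropy assumed earlier; the variable names are illustrative only.

```python
import math

def entropy(probs):
    """h(P) = -sum_i p_i * log2(p_i), skipping zero terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Node L6: entropy before and after aggregating L9 and L10 under E4.
h6 = entropy((0.5, 0.4, 0.05, 0.05))   # ~1.46: too uncertain for edge caching
h6_agg = entropy((0.9, 0.05, 0.05))    # ~0.57: cache at E4 instead

# Node L1: uniform distribution, then L2 and L3 aggregated under E2.
h1 = entropy((0.25, 0.25, 0.25, 0.25))  # 2.0
h1_agg = entropy((0.5, 0.25, 0.25))     # 1.5: still above threshold, keep going
```

Running these confirms the figures 1.46, 0.57, 2, and 1.5 stated in the text.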
The transition probabilities are computed by building a Markov mobility prediction model from a San Francisco taxi mobility data set, which contains the latitude-longitude coordinates of about 500 taxis collected over several weeks in San Francisco. The map is divided into regions of equal size, each region representing an in-network node, and each coordinate is associated with a region; the data set is then format-converted so that latitude-longitude coordinates are replaced by network nodes. The data set is split into a training set and a test set. The Markov predictor is built from the training set, after which the transition probability sets of all nodes can be obtained; the test set can then be used to verify the correctness of the transition probability computation and the effectiveness of the proposed strategy.
In experiments, the active caching method for multi-level edge nodes of the Internet of things terminal provided by the invention is compared with existing strategies in ndnSIM; the scenario compared is a vehicle handing over between RSUs while downloading a 50 MB file. There were 230 RSUs in total in the experiment, each RSU representing an edge node with transition probabilities in the Markov predictor. Transitions from every RSU to its neighbor nodes were therefore simulated, and the average over all such scenarios is reported.
To focus on the performance of active caching, no reactive cache is installed on the nodes in the experiment, and each node is given a cache large enough not to affect the performance results associated with locating the prefetch node. Furthermore, the experiment reports only results for prefetched content, in order to attribute the performance gain to the prediction and the selection of the prefetch node. The results presented are for a core network with an M-ary tree structure with M = 6. Entropy-based caching is compared with edge caching at different degrees of redundancy and with no active caching. In the following experimental results, R denotes the cache redundancy degree, i.e., the number of copies of the same resource in the network.
As shown in fig. 4, one of the goals of prefetching is to reduce the latency of data retrieval and thereby enhance the user experience. Latency is measured as the time between sending the first interest packet for a content object and receiving the corresponding data, and the figure shows the percentage latency gain compared to no active caching. As expected, all active caching strategies yield a positive latency gain relative to no active caching, since not all interest packets need to reach the server to retrieve the content; entropy-based caching achieves a latency gain of up to 60% over no active caching, exceeding edge caching without redundancy (R = 1).
As shown in fig. 5, one of the main goals of ICN and of active caching is to reduce the burden on the original content publisher and thereby reduce backhaul traffic. Server load is measured as the average number of requests reaching the server, including prefetch requests. The figure shows that the entropy-based policy places the least load on the server, because it eliminates redundancy and its high accuracy allows data to be retrieved from intermediate caches; edge caching, on the other hand, causes requests to be re-sent to the server or triggers multiple prefetches, resulting in a large number of prefetch requests to the server.
In another aspect, the present invention provides a computer device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the following steps of the method when executing the computer program:
predicting the movement probability of the nodes by adopting a first-order Markov model;
calculating the uncertainty of mobility prediction measured by the entropy value through the mobility probability, and judging the pre-fetching and caching positions of the content in the network according to the entropy value;
and caching the content to the node in advance according to the judged next hop position of the node.
In another aspect, the present invention also provides a computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method of:
predicting the movement probability of the nodes by adopting a first-order Markov model;
calculating the uncertainty of mobility prediction measured by the entropy value through the mobility probability, and judging the pre-fetching and caching positions of the content in the network according to the entropy value;
and caching the content to the node in advance according to the judged next hop position of the node.
For further details, reference may be made to the preceding description of the drawings, which are not described in detail herein.
The embodiment of the invention has the following beneficial effects:
According to the method, the device, and the medium for active caching at multi-level edge nodes of an Internet of things terminal provided by the embodiments of the invention, the network is divided into a core area and an edge area according to the Internet of things architecture;
for the structural characteristics of the Internet of things edge network, transition probabilities are computed with a Markov predictor, uncertainty is quantified by computing entropy, and ICN's flexibility to cache at any position in the network is exploited to locate the prefetch node, thereby eliminating cache redundancy, reducing uncertainty to the greatest extent, ensuring a high cache hit rate, and trading a small amount of delay for improved active-caching performance.
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (8)

1. A method for actively caching multi-level edge nodes of an Internet of things terminal is characterized by comprising the following steps:
step S1, predicting the movement probability of the node by adopting a first-order Markov model;
step S2, calculating the entropy value through the movement probability to measure the uncertainty of the mobility prediction, and judging the pre-fetching and caching positions of the content in the network according to the entropy value;
and step S3, caching the content at the node in advance according to the predicted next-hop position of the node.
2. The method of claim 1, wherein in step S1 the Markov model is embodied by the following formula:
p_ij = P(X_(t+1) = L_j | X_t = L_i),
wherein X(L_i, L_j) is the number of times the vehicle moves from L_i to L_j, and Z(L_i) is the total number of movements of the vehicle out of L_i; both X and Z can be retrieved from the movement trajectories of the training data set.
Therefore,
p_ij = X(L_i, L_j) / Z(L_i),
wherein L = {L_1, L_2, ..., L_n} represents the set of states and p_ij is the transition probability from state L_i to state L_j.
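The transition-probability estimate p_ij = X(L_i, L_j) / Z(L_i) described in claim 2 can be sketched in a few lines of Python; the function name and data layout below are illustrative, not taken from the patent:

```python
from collections import defaultdict

def transition_probabilities(trajectory):
    """Estimate first-order Markov transition probabilities p_ij from an
    observed movement trajectory (a sequence of location labels).

    X[(Li, Lj)] counts moves from Li to Lj; Z[Li] counts all departures
    from Li, so p_ij = X(Li, Lj) / Z(Li)."""
    X = defaultdict(int)   # X(Li, Lj): moves from Li to Lj
    Z = defaultdict(int)   # Z(Li): total moves out of Li
    for li, lj in zip(trajectory, trajectory[1:]):
        X[(li, lj)] += 1
        Z[li] += 1
    return {(li, lj): cnt / Z[li] for (li, lj), cnt in X.items()}
```

For example, on the trajectory A, B, A, C, A, B the vehicle leaves A three times (twice toward B, once toward C), giving p(A,B) = 2/3 and p(A,C) = 1/3.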
3. The method according to claim 2, wherein in step S2 the specific process of measuring the uncertainty of the mobility prediction by calculating the entropy from the movement probabilities is as follows:
step S21, in the initial state w, calculating an entropy value h from the transition probabilities of node L_i to its set of neighboring nodes N_i;
step S22, judging whether the entropy value h is below the established threshold h*: if h is greater than or equal to the threshold h*, proceeding to step S23; if h is less than the threshold h*, proceeding to step S25;
step S23, accumulating the transition probabilities of the nodes in N_i that are connected to the same superior node, and using the sum as the transition probability of that superior node;
step S24, recalculating the entropy value h from the aggregated transition-probability set, and returning to step S22;
step S25, selecting the node with the highest transition probability as the prefetch node and caching the content there.
4. The method according to claim 3, wherein in step S21, the initial state w is the set of transition probabilities from each node in the network to all of its potential next-hop nodes.
5. The method according to claim 4, wherein in step S21 the entropy value h is calculated from the transition probabilities according to the following formula:
h = -Σ_(i=1..n) P_i log2(P_i),
wherein P = {P_1, P_2, ..., P_n} is the set of transition probabilities.
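The entropy formula of claim 5 is the standard Shannon entropy over the transition-probability set; a minimal Python sketch (the function name is illustrative):

```python
import math

def entropy(probs):
    """Shannon entropy h = -sum(P_i * log2(P_i)) over a set of transition
    probabilities P = {P_1, ..., P_n}. Zero-probability terms are skipped,
    following the convention lim p->0 of p*log(p) = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

A uniform two-way split yields the maximum-uncertainty value of 1 bit, while a single certain next hop yields 0 bits, which is what makes h a usable threshold for deciding how far up the tree to prefetch.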
6. The method according to claim 5, wherein in step S2 the decision on where in the network to prefetch and cache the content is made according to the entropy value; specifically, the transition probabilities of child nodes in the network tree structure are aggregated vertically upward to their parent node until the entropy threshold is satisfied, and only lower-level nodes connected to the same upper-level node may be aggregated upward.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 6 are implemented when the computer program is executed by the processor.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN201911008975.8A 2019-10-23 2019-10-23 Method, equipment and medium for actively caching multi-level edge nodes of Internet of things terminal Active CN110784881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911008975.8A CN110784881B (en) 2019-10-23 2019-10-23 Method, equipment and medium for actively caching multi-level edge nodes of Internet of things terminal


Publications (2)

Publication Number Publication Date
CN110784881A true CN110784881A (en) 2020-02-11
CN110784881B CN110784881B (en) 2023-05-02

Family

ID=69386281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911008975.8A Active CN110784881B (en) 2019-10-23 2019-10-23 Method, equipment and medium for actively caching multi-level edge nodes of Internet of things terminal

Country Status (1)

Country Link
CN (1) CN110784881B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113037872A (en) * 2021-05-20 2021-06-25 杭州雅观科技有限公司 Caching and prefetching method based on Internet of things multi-level edge nodes
CN113422801A (en) * 2021-05-13 2021-09-21 河南师范大学 Edge network node content distribution method, system, device and computer equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150212943A1 (en) * 2014-01-24 2015-07-30 Netapp, Inc. Methods for combining access history and sequentiality for intelligent prefetching and devices thereof
CN107018493A (en) * 2017-04-20 2017-08-04 北京工业大学 A kind of geographical position Forecasting Methodology based on continuous sequential Markov model
CN108093056A (en) * 2017-12-25 2018-05-29 重庆邮电大学 Information centre's wireless network virtualization nodes buffer replacing method
CN109391681A (en) * 2018-09-14 2019-02-26 重庆邮电大学 V2X mobility prediction based on MEC unloads scheme with content caching
CN110312231A (en) * 2019-06-28 2019-10-08 重庆邮电大学 Content caching decision and resource allocation joint optimization method based on mobile edge calculations in a kind of car networking


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Hongcheng et al., "Multi-level aggregation and correlation method for alerts based on a self-expanding time window", Engineering Science and Technology *


Also Published As

Publication number Publication date
CN110784881B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
Zhong et al. A deep reinforcement learning-based framework for content caching
Abani et al. Proactive caching with mobility prediction under uncertainty in information-centric networks
KR101943530B1 (en) Systems and methods for placing virtual serving gateways for mobility management
US10305968B2 (en) Reputation-based strategy for forwarding and responding to interests over a content centric network
Zhang et al. Smart proactive caching: Empower the video delivery for autonomous vehicles in ICN-based networks
Mahmood et al. Mobility-aware edge caching for connected cars
CN109218747B (en) Video service classification caching method based on user mobility in super-dense heterogeneous network
CN104901980B (en) The equiblibrium mass distribution caching method of numerical nomenclature network based on popularity
US10917328B2 (en) Routing updates in ICN based networks
CN102647357B (en) A kind of contents processing method for routing and device
CN102075562A (en) Cooperative caching method and device
CN110784881B (en) Method, equipment and medium for actively caching multi-level edge nodes of Internet of things terminal
CN115314944A (en) Internet of vehicles cooperative caching method based on mobile vehicle social relation perception
Liu et al. Mobility-aware video prefetch caching and replacement strategies in mobile-edge computing networks
CN110913430B (en) Active cooperative caching method and cache management device for files in wireless network
CN112911614A (en) Cooperative coding caching method based on dynamic request D2D network
CN108390936A (en) A kind of probability cache algorithm based on caching distributed awareness
CN117221403A (en) Content caching method based on user movement and federal caching decision
CN110113418B (en) Collaborative cache updating method for vehicle-associated information center network
Vasilakos et al. Mobility-based proactive multicast for seamless mobility support in cellular network environments
Zhang et al. A cooperative content distribution system for vehicles
KR20170103286A (en) Method for providing of content and caching, recording medium recording program therfor
KR102235622B1 (en) Method and Apparatus for Cooperative Edge Caching in IoT Environment
CN115361710A (en) Content placement method in edge cache
CN113473408A (en) User association method and system for realizing video transmission in Internet of vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant