AU2020103384A4 - Method for Constructing Energy-efficient Network Content Distribution Mechanism Based on Edge Intelligent Caches - Google Patents


Info

Publication number
AU2020103384A4
Authority
AU
Australia
Prior art keywords
network
power consumption
content
model
constructing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020103384A
Inventor
Chao FANG
Lin Aung Htin
Gongtian Li
Peishan Li
Changtong Liu
Zhuwei Wang
Yihui Yang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to AU2020103384A priority Critical patent/AU2020103384A4/en
Application granted granted Critical
Publication of AU2020103384A4 publication Critical patent/AU2020103384A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/104 Peer-to-peer [P2P] networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L 41/0833 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability for reduction of network energy consumption
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/2866 Architectures; Arrangements
    • H04L 67/289 Intermediate processing functionally located close to the data consumer application, e.g. in same machine, in same home or in same sub-network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/10 Flow control between communication endpoints
    • H04W 28/14 Flow control between communication endpoints using intermediate storage
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 40/00 Communication routing or communication path finding
    • H04W 40/02 Communication route or path selection, e.g. power-based or shortest path routing
    • H04W 40/04 Communication route or path selection based on wireless node resources
    • H04W 40/10 Communication route or path selection based on available power or energy
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/12 Shortest path evaluation
    • H04L 45/122 Shortest path evaluation by minimising distances, e.g. by selecting a route with minimum of number of hops
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/12 Shortest path evaluation
    • H04L 45/125 Shortest path evaluation based on throughput or bandwidth
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks


Abstract

The invention discloses a method for constructing an energy-efficient network content distribution mechanism based on edge intelligent caches, comprising the following steps: constructing a system model according to the change of the network service mode, and deploying content at the network edge according to the system model; constructing a network power consumption model of the internet service provider (ISP) and the content provider (CP) according to the content deployed at the network edge, and constraining and simplifying the network power consumption model; and optimizing and solving the network power consumption model based on reinforcement learning. The invention addresses the fact that prior research has focused on predicting content popularity, cache hit rate and the like, and has lacked a machine-learning-based optimal routing scheme for the energy consumption problem when the ISP and the CP work cooperatively. By exploring the behavior change of network participants in an edge cache environment, the invention establishes a centralized system model for the power consumption of the ISP and the CP, carries out a distributed online and offline solution using machine learning, and compares the simulation results with theoretical values.

Description

[Figure 1: network architecture with the Content Provider, the Internet Service Provider and the Mobile Subscriber]
Method for Constructing Energy-efficient Network Content Distribution Mechanism Based on Edge Intelligent Caches
TECHNICAL FIELD
The invention relates to the technical field of network communication, in particular to a method for constructing an energy-efficient network content distribution mechanism based on edge intelligent caches.
BACKGROUND
According to forecasts, 5.3 billion people will use the Internet by 2023. This has driven the rapid development of the Internet industry and the diversification of Internet demand, and has also led to a growth in equipment, a rapid increase in network traffic, a heavy traffic burden on mobile communication networks, and a serious energy consumption problem. Although the motivations of the ISP and the CP differ, in the existing network system neither can achieve its own goal separately. Only through cooperation can they obtain accurate network and request information, enabling the CP to select the optimal content server to realize load balancing and reduce delay, and enabling the ISP to minimize transmission traffic and reduce congestion. As a result, the ISP and the CP want to cooperate to provide better content delivery services for users.
An in-depth investigation of the Internet shows that, owing to the end-to-end communication characteristic of the traditional Internet, it lacks internal support for content delivery services, so a new scheme is urgently needed to eliminate the redundant transmission of network contents. If popular contents are cached on the access side of the network, traffic problems can be effectively reduced. Researchers have made many preliminary attempts at such an architecture. P2P broke the client-server mode of traditional IP networks, but its nodes are difficult to maintain. The content delivery network (CDN) established a cache layer, but as an application-layer architecture it lacks awareness of the underlying network, making it difficult to improve the efficiency of content distribution. To solve this problem fundamentally, researchers proposed a network-layer content retrieval architecture, the information centric network (ICN), which lets users pay attention only to the content itself; its most popular implementation, the content centric network (CCN), is the foundation of the invention. The above problems are effectively alleviated by caching popular content in the network. However, extensive cache deployment is not always required in ICN, and nearest-route schemes based on content replication may bring more consumption. In view of the above, research on ICN points out the urgency of improving cache utilization. Many researchers have paid attention to cache deployment and collaboration within the ISP, but cooperation with the CP to control energy consumption is lacking. Researchers have also discussed a variety of factors affecting energy consumption, which provides guidance, but systematic research is still lacking.
Furthermore, existing research basically targets the prediction of content popularity, cache hit rate and the like, and lacks a machine-learning-based optimal routing scheme to solve the cooperative energy consumption problem of the ISP and the CP.
Therefore, how to provide an energy-efficient network content distribution mechanism based on edge intelligent caches that minimizes the power consumption of the whole network, by optimizing the allocation of cache resources and the transmission along an optimal routing path, is an urgent problem to be solved by those skilled in the art.
SUMMARY
Therefore, the invention provides a method for constructing an energy-efficient network content distribution mechanism based on edge intelligent caches, which solves the problem of large energy consumption in related technical problems.
In order to achieve the above purpose, the invention adopts the following technical scheme:
According to one aspect of the invention, the invention provides a method for constructing an energy-efficient network content distribution mechanism based on edge intelligent caches, which comprises the following steps:
Constructing a system model according to the change of the network service mode, and deploying content at the edge of the network according to the system model;
Constructing a network power consumption model of the ISP and the CP according to the deployment content of the network edge, and restricting and simplifying the network power consumption model;
And optimizing and solving the network power consumption model based on reinforcement learning.
Further, constructing the system model comprises the following steps:
Constructing a content popularity model according to the Zipf distribution;
And constructing a network model, wherein the network model comprises a network service and architecture model and a network topology model.
Furthermore, the content popularity model is constructed as follows: assuming the number of content types is F, the video contents are numbered from 1 to F; within a given fixed time, assuming the total number of base station requests is R, the popularity distribution of the content numbered k is:

R_k = R·k^(−α) / Σ_{j=1}^{F} j^(−α),  k = 1, 2, ..., F

Among which, the Zipf skewness coefficient α characterizes the content popularity.
Furthermore, the constructed ISP power consumption model comprises a base station power consumption model, a transmission power consumption model and a cache power consumption model; the power consumption model of the ISP is obtained from these three models and is expressed as follows:

P_ISP = Σ_{i=1}^{N} [ P_0 + Δp·γ(2^(λ_i/B) − 1)·Σ_{k=1}^{F} (1 − X_ik)·q_i^k·s^k
        + Σ_{k=1}^{F} (1 − X_ik)·q_i^k·H̄·(P_n + P_l)
        + P_r·Σ_{k=1}^{F} X_ik·q_i^k + ω_ca·Σ_{k=1}^{F} X_ik·s^k ]

Among which, P_0 is the inherent power consumption; Δp is a slope parameter used to indicate the influence of the BS traffic load on its power consumption; g is the channel gain; B represents the system bandwidth; N_0 is the noise power density; η is a constant associated with the bit error rate requirement; γ = η·N_0·B/g; q_i^k is the number of requests for content k at base station i, and s^k is the size of content k; P_n is the network node power; P_l is the network link power that the request passes through; H̄ represents the average number of hops between base station i and the content source; X_ik is a Boolean variable; P_r is the average retrieval power consumption of each content request served by the cache; ω_ca is the power efficiency parameter of the cache hardware.
Furthermore, the constructed CP power consumption model includes the static power consumption P_s and the power consumed in processing the requests that are not satisfied in the edge caches; the CP power consumption model is expressed as follows:

P_CP = P_s + Σ_{i=1}^{N} Σ_{k=1}^{F} (1 − X_ik)·q_i^k·P_D

Among which, P_D is the average retrieval power for each user request at the source server.
Furthermore, the constraints of the network power consumption model of the ISP and the CP are expressed as follows:

Σ_{k=1}^{F} X_ik·s^k ≤ C_i, ∀i ∈ N;
X_ik ∈ {0, 1}, ∀i ∈ N, k ∈ F;

Among which, C_i is the maximum cache capacity of cache i.
Furthermore, the simplification comprises: for each base station i, caching the contents in descending order of the content popularity ranking to achieve optimal caching performance, each base station having an average service capacity, represented by λ̄, and the same number of content requests.
Based on the above two conditions, the number of user requests that are not satisfied in the edge caches is simplified as:

(R/N)·Σ_{i=1}^{N} ( Σ_{k=N_i+1}^{F} k^(−α) / Σ_{k=1}^{F} k^(−α) )

According to the simplification and the constraints, the network power model is rewritten as follows:

min N·P_0 + Σ_{i=1}^{N} [ Δp·γ(2^(λ̄/B) − 1)·(R/N)·s̄·( Σ_{k=N_i+1}^{F} k^(−α) / Σ_{k=1}^{F} k^(−α) )
    + H̄·(P_n + P_l)·(R/N)·( Σ_{k=N_i+1}^{F} k^(−α) / Σ_{k=1}^{F} k^(−α) )
    + P_r·(R/N)·( Σ_{k=1}^{N_i} k^(−α) / Σ_{k=1}^{F} k^(−α) ) + ω_ca·N_i·s̄ ]
    + P_s + P_D·(R/N)·Σ_{i=1}^{N} ( Σ_{k=N_i+1}^{F} k^(−α) / Σ_{k=1}^{F} k^(−α) )

s.t. N_i·s̄ ≤ C_i, ∀i ∈ N
X_ik ∈ {0, 1}, ∀i ∈ N, k ∈ F

Among which, γ = η·N_0·B/g; g is the channel gain; B is the system bandwidth; N_0 is the noise power density; η is the constant associated with the bit error rate requirement; s̄ is the average content size.
Furthermore, the network power consumption model is further optimized using an enhanced Q-Learning algorithm.
Furthermore, the training of the matrix Q by the Q-Learning algorithm proceeds as follows:
Give the discount parameter γ and set the environment rewards in the matrix R;
Initialize the matrix Q; loop over the episodes:
(1) Randomly select a state as the initial state;
(2) While the target state has not been reached: randomly select a behavior a in the current state; executing behavior a yields the next state;
(3) Obtain the maximum Q value of the next state over all possible actions;
(4) Update according to Q(s, a) = R(s, a) + γ·max_a' Q(s', a');
Obtain the trained matrix Q.
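The training loop above can be sketched in Python as a non-limiting illustration; the reward matrix, the discount parameter γ = 0.8, the episode count and all names are assumed example values, not specified by the invention:

```python
import random

def train_q(rewards, gamma=0.8, episodes=1000, goal=5, seed=0):
    """Train the matrix Q against a reward matrix (-1 marks a missing edge)
    using the update Q(s, a) = R(s, a) + gamma * max_a' Q(s', a')."""
    rng = random.Random(seed)
    n = len(rewards)
    Q = [[0.0] * n for _ in range(n)]
    for _ in range(episodes):
        s = rng.randrange(n)                      # (1) random initial state
        while s != goal:                          # (2) until the target state
            a = rng.choice([a for a in range(n) if rewards[s][a] >= 0])
            nxt = a                               # taking action a moves to node a
            Q[s][a] = rewards[s][a] + gamma * max(Q[nxt])  # (3) + (4)
            s = nxt
    return Q

# Hypothetical 6-node topology; reaching node 5 (the content source) pays 100.
R_ENV = [[-1, -1, -1, -1,  0, -1],
         [-1, -1, -1,  0, -1, 100],
         [-1, -1, -1,  0, -1, -1],
         [-1,  0,  0, -1,  0, -1],
         [ 0, -1, -1,  0, -1, 100],
         [-1,  0, -1, -1,  0, 100]]
Q = train_q(R_ENV)
```

Because each episode terminates on reaching the target state, the Q values converge to the fixed point of the update rule, e.g. Q(1,5) converges to 100 and Q(3,1) to 0.8·100 = 80 in this example.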
Furthermore, a specific implementation of the Q-Learning algorithm using the matrix Q is as follows: set the current state as the initial state;
Find the action with the highest Q value from the current state;
Enter the next state;
Repeat the last two steps until the current state equals the target state.
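The usage steps above amount to greedily following the highest-Q action. A minimal sketch, where the Q matrix holds hypothetical converged values for a 6-node example with node 5 as the target (all values are illustrative, not from the patent):

```python
def greedy_path(Q, start, goal, max_steps=50):
    """Follow the highest-Q action from each state until the goal is reached."""
    path, s = [start], start
    while s != goal and len(path) <= max_steps:
        s = max(range(len(Q[s])), key=lambda a: Q[s][a])  # best next state
        path.append(s)
    return path

# Hypothetical converged Q values (target state: 5).
Q_EX = [[0,  0,  0,    0,  80, 0],
        [0,  0,  0,    64, 0,  100],
        [0,  0,  0,    64, 0,  0],
        [0,  80, 51.2, 0,  80, 0],
        [64, 0,  0,    64, 0,  100],
        [0,  0,  0,    0,  0,  0]]

route = greedy_path(Q_EX, 2, 5)   # e.g. 2 -> 3 -> 1 -> 5
```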
According to the above technical scheme, compared with the prior art, the method for constructing the energy-efficient network content distribution mechanism based on edge intelligent caches disclosed by the invention has the following advantages. Edge caches are reasonably deployed under the cooperation of the ISP and the CP so as to optimize resource allocation; the pressure on mobile core network links is greatly relieved; the large amount of redundant content transmission in the network and the traffic load within and between network domains are effectively reduced; and the energy efficiency of the whole network is improved, producing a positive social and economic influence in line with the concept of green energy conservation. The invention further optimizes the system model by utilizing reinforcement learning, which has wide application prospects, and designs a system that adapts to user requests and acquires cached content nearby via the optimal routing path, further reducing the transmission energy consumption of the system and jointly improving the network service quality; it has broad research prospects and reference value for researchers in related fields. The invention has practical value and solves the problem of network power consumption without considering heterogeneous wireless networks and bottom-layer protocols. At the same time, the analysis results verify many kinds of network factors that influence energy consumption and lay a foundation for future research.
BRIEF DESCRIPTION OF THE FIGURES
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, a brief description of the drawings needed in the embodiments or in the description of the prior art is given below. It is apparent that the drawings described below illustrate only embodiments of the present invention, and that those of ordinary skill in the art can obtain other drawings from them without any creative effort.
Figure 1 is a diagram of a network architecture model according to the present invention;
Figure 2 is a diagram illustrating a network topology model according to the present invention;
Figure 3 is a histogram of network power performance assessment for different strategies under different network topologies according to the present invention;
Figure 4 is a histogram of network power performance assessment for different strategies under different cache hardware according to the present invention;
Figure 5 is a graph of network power performance evaluation lines for different strategies under different cache sizes according to the present invention;
Figure 6 is a graph of network power performance assessment lines for different strategies under different content popularity according to the present invention;
Figure 7 is a histogram of network power performance assessment under different content categories for different strategies according to the present invention;
Figure 8 is a flowchart of the method of the present invention.
DESCRIPTION OF THE INVENTION
The technical solution in the embodiments of the present invention will be clearly and fully described below. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative labor fall within the scope of protection of the present invention.
The embodiment of the invention provides a method for constructing an energy-efficient network content distribution mechanism based on edge intelligent caches. Figure 8 is a flowchart of the method according to the embodiment of the invention; as shown in Figure 8, the flow comprises the following steps:
Step S101, constructing a system model according to the change of the network service mode, and deploying content at the edge of the network according to the system model;
Step S102, constructing a network power consumption model of the ISP and the CP according to the deployment content of the network edge, and restricting and simplifying the network power consumption model;
Step S103, optimizing and solving the network power consumption model based on reinforcement learning.
Through the above steps, the optimized network power consumption model addresses the fact that current research basically targets prediction angles such as content popularity and cache hit rate, and lacks a machine-learning-based optimal routing scheme for the cooperative energy consumption of the ISP and the CP. The embodiment provides a method for constructing an energy-efficient network content distribution mechanism based on edge intelligent caches; it establishes a centralized system model of the ISP and CP power consumption by studying the behavior change of network participants in an edge cache environment and carries out the theoretical calculation. Reinforcement learning is then used to optimize the network power consumption model, and an optimal routing path solution for adaptively searching cached content is designed. Finally, the designed joint power consumption model is simulated and analyzed under various strategies, and the factors influencing energy consumption are discussed comprehensively, so as to optimize the cache contents and minimize the power consumption.
Embodiment 1
In the present embodiment, in the above step S101, constructing the system model includes:
Step S1011, constructing a content popularity model according to the Zipf distribution;
Step S1012, constructing a network model, wherein the network model comprises a network service and architecture model and a network topology model.
Furthermore, step S1011 constructs a content popularity model.
The popularity model of network video content is designed according to Zipf's law. Assuming that the number of content types is F, the video contents are numbered from 1 to F. Within a given fixed time, with the total number of base station requests being R, the popularity distribution of the content numbered k is:

R_k = R·k^(−α) / Σ_{j=1}^{F} j^(−α),  k = 1, 2, ..., F   (1)

Among which, the Zipf skewness coefficient α represents the content popularity: the larger the value of α, the more concentrated the content requests, and the larger the request volume for popular data.
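As a non-limiting illustration, Eq. (1) can be evaluated as follows (the function name and the example parameter values are assumed for illustration only):

```python
def zipf_popularity(R, F, alpha):
    """Request count R_k for each content k = 1..F under Eq. (1):
    R_k = R * k^(-alpha) / sum_{j=1..F} j^(-alpha)."""
    norm = sum(j ** -alpha for j in range(1, F + 1))
    return [R * k ** -alpha / norm for k in range(1, F + 1)]

# Example: 1000 requests spread over 5 contents with skewness alpha = 0.8.
popularity = zipf_popularity(R=1000, F=5, alpha=0.8)
```

The per-content request counts sum back to R, and a larger α concentrates more requests on the most popular contents, consistent with the remark above.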
Step S1012, constructing a network model.
In this embodiment, the service model includes three participants: the ISP, the CP, and the mobile subscriber (MC). In the conventional Internet, the ISP is only responsible for transferring all content requests from the CP to the MC, which makes both the ISP and the CP consume more power. In edge-caching-based networks, however, the ISP can deploy in-network caches in access networks while providing network support and content delivery services. Obviously, adopting edge caches changes the existing network service model: power consumption can be reduced by meeting most content requests at the network edge, further improving the service quality and the end-user quality of experience.
As shown in Figure 1, the edge caches are placed in the ISP's base stations (BS), and the ISP and the CP provide collaborative content delivery services to improve the efficiency of data distribution. In the network model, X_ik is a Boolean variable that indicates whether the cache deployed on BS i holds content k: if BS i caches content k, X_ik takes 1, otherwise 0. According to the content popularity ranking, popular content can be cached in the BS to meet the needs of most end users. Although the introduced edge caches require additional caching capacity from the ISP, the traffic burden of the entire network can be significantly reduced. Therefore, the total power efficiency of the ISP and the CP can be significantly improved.
Three kinds of network topology models are established. The intra-domain nodes of the Transit-Stub topology have a high degree of correlation, but there are limitations in the distribution of geographical position. The Waxman topology, characterized by completely equal disorder, is only suitable for small networks and is usually used to analyze the influence of network topology on network performance. In the Power-Law topology the network distribution is not random but based on node degree: a small number of nodes connect to a large number of nodes, which not only meets the requirements of the Zipf distribution, but is also beneficial to maintenance and greatly reduces the influence of a single node on the topology structure. Therefore, this embodiment focuses on the Power-Law topology model of Figure 2 and takes the other two topology structures as a comparative analysis.
In this embodiment, in step S102 described above, a centralized strategy is used to formulate the optimal power consumption problem of the ISP and the CP. Without loss of generality, the model considers only one ISP and one CP to simplify the problem. Table 1 lists the core parameter symbols and definitions used in this embodiment.
Table 1
Symbol  Meaning
N       Number of base stations
M       Number of nodes
F       Number of different network contents
R       Number of mobile user requests
α       Zipf skewness coefficient
Δp      Slope factor
λ_i     Service rate
B       Bandwidth
γ       Gamma factor, γ = η·N_0·B/g
s̄       Average content size
N_i     Number of contents each base station caches according to content popularity
P_0     Static power consumption of each base station
P_r     Average data retrieval power consumption for each user request in edge caches
P_D     Average retrieval power consumption for each user request in a data center
P_n     Network node (such as router, switch) power
P_l     Power of each network link that a request crosses
ω_ca    Power efficiency parameter of the cache hardware technology
P_s     Static power consumption of the content provider
H̄       Average hop count to the nearest responding node
s^k     Size of content k obtained from the responding node
Step S1021, constructing an ISP power consumption model;
Among which, the power consumption of the ISP consists of base station power consumption, transmission power consumption and cache power consumption.
1) Base station power model:
The conventional power of BS i based on an M/G/1 processor is represented by:

P_BS,i = P_0 + Δp·P_tx,i   (2)

When a base station operates, P_0 is its inherent power consumption, Δp is a slope parameter indicating the effect of the BS traffic load on its power consumption, and P_tx,i is the transmission power consumption of BS i.
Assuming that the service capacity or service rate of BS i is λ_i bits per second, the transmission power consumption matches the amount of data traffic processed by BS i. Thus, the transmit power P_tx,i can be expressed as:

P_tx,i = γ(2^(λ_i/B) − 1)·Σ_{k=1}^{F} q_i^k·s^k,   γ = η·N_0·B/g   (3)

g is the channel gain; B represents the system bandwidth; N_0 is the noise power density; η is the constant associated with the bit error rate requirement; q_i^k is the number of requests for content k at base station i; s^k is the size of content k.
In conclusion, the power consumption of a base station without caches can be expressed by the formula:

P_BS,i = P_0 + Δp·γ(2^(λ_i/B) − 1)·Σ_{k=1}^{F} q_i^k·s^k   (4)

However, when users' requests arrive at BS i, some of them are served by the cached contents of BS i, and thus network power can be effectively reduced. With the unsatisfied content traffic of base station i represented by Σ_{k=1}^{F} (1 − X_ik)·q_i^k·s^k, the total power consumption of BS i with the cache may be defined as:

P_BS,i = P_0 + Δp·γ(2^(λ_i/B) − 1)·Σ_{k=1}^{F} (1 − X_ik)·q_i^k·s^k   (5)
2) Transmission power model:
The transmission power of requests within the ISP consists of the power P_n of the network nodes (such as routers and switches) and the power P_l of the network links through which the request passes. For ease of illustration, H̄ is used to represent the average number of hops between base station i and the content source, thus simplifying the transmission power model. An ICN network consists of N content routers, servers, etc. In a stable state, n (≤ N) copies of a content element are cached on n content routers. Contents that are not held in these caches must be accessed via the remaining N − n content routers through one or more hops; therefore, the average hop distance H̄ > 0 to the content is an important indicator of cache location efficiency. The hop count drives the balance between cache and transmission energy, and its exact form depends on the network topology and the replica placement algorithm. Therefore, the power consumed by transmitting content requests to base station i can be expressed as:

P_tr,i = Σ_{k=1}^{F} (1 − X_ik)·q_i^k·H̄·(P_n + P_l)   (6)
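Eq. (6) can be sketched as follows (an illustrative helper, assuming each of the H̄ hops crosses one node and one link; names and numbers are not from the patent):

```python
def transmission_power(q, x, H, Pn, Pl):
    """Eq. (6): power of forwarding cache misses over an average of H hops,
    each hop consuming node power Pn and link power Pl."""
    misses = sum((1 - xk) * qk for qk, xk in zip(q, x))  # unsatisfied requests
    return misses * H * (Pn + Pl)

# Illustrative values: 300 requests miss the cache, 3 hops to the source.
p_tr = transmission_power(q=[500, 300], x=[1, 0], H=3, Pn=2.0, Pl=1.0)
```

When every content is cached at the edge there are no misses and the transmission term vanishes, which is exactly the saving the edge caches provide.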
3) Cache power model:
Cache power consumption includes cache retrieval power consumption and content caching power consumption, which are related to the user requests and the cached data.
Assuming that the average retrieval power consumption of each content request served by the cache is P_r, the content caching power of a base station is proportional to the total amount of content held in its cache. The total cache power consumption can be expressed as:

P_ca,i = P_r·Σ_{k=1}^{F} X_ik·q_i^k + ω_ca·Σ_{k=1}^{F} X_ik·s^k,   Σ_{k=1}^{F} X_ik·s^k ≤ C_i   (7)

Among which, C_i is the maximum cache capacity of cache i. The cache hardware technology, such as Dynamic Random Access Memory (DRAM), high-speed Solid State Disk (SSD) or Static Random Access Memory (SRAM), determines ω_ca, i.e. the power efficiency parameter.
Therefore, the network power consumption model of the ISP under cooperation is:

P_ISP = Σ_{i=1}^{N} [ P_0 + Δp·γ(2^(λ_i/B) − 1)·Σ_{k=1}^{F} (1 − X_ik)·q_i^k·s^k
    + Σ_{k=1}^{F} (1 − X_ik)·q_i^k·H̄·(P_n + P_l)
    + P_r·Σ_{k=1}^{F} X_ik·q_i^k + ω_ca·Σ_{k=1}^{F} X_ik·s^k ]   (8)
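The three ISP terms of Eq. (8) can be combined in one sketch (a non-limiting illustration; the function name, argument order and all test values are assumed):

```python
def isp_power(P0, delta_p, gamma, B, lam, q, s, X, H, Pn, Pl, Pr, w_ca):
    """Eq. (8): total ISP power over all base stations.
    q[i][k]: requests for content k at BS i; X[i][k]: cache placement bit."""
    total = 0.0
    for i in range(len(q)):
        miss_bits = sum((1 - X[i][k]) * q[i][k] * s[k] for k in range(len(s)))
        misses = sum((1 - X[i][k]) * q[i][k] for k in range(len(s)))
        hits = sum(X[i][k] * q[i][k] for k in range(len(s)))
        stored = sum(X[i][k] * s[k] for k in range(len(s)))
        total += (P0 + delta_p * gamma * (2 ** (lam[i] / B) - 1) * miss_bits
                  + misses * H * (Pn + Pl)       # Eq. (6) transmission term
                  + Pr * hits + w_ca * stored)   # Eq. (7) cache term
    return total

# Illustrative scenario: two base stations, two contents.
q = [[10, 0], [0, 10]]
s = [1.0, 1.0]
lam = [2e7, 2e7]
kw = dict(P0=5.0, delta_p=1.0, gamma=1.0, B=2e7, lam=lam, q=q, s=s,
          H=2, Pn=1.0, Pl=1.0, Pr=0.1, w_ca=1.0)
p_full = isp_power(**kw, X=[[1, 1], [1, 1]])   # everything cached at the edge
p_none = isp_power(**kw, X=[[0, 0], [0, 0]])   # nothing cached
```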
Step S1022, constructing a CP power consumption model.
The power consumption of the CP consists of static power consumption Ps and power consumption for processing unsatisfied requests in edge caches, which can be written as:
$$P_{CP} = P_S + \sum_{i=1}^{N} \sum_{k=1}^{F} (1 - X_{ik})\,q_{ik}\,P_D \tag{9}$$
Among which, P_D is the average retrieval power consumption of each user request at the source server. After negotiation and cooperation between the CP and the ISP, an edge cache is deployed near each base station. The more content requests are satisfied by the caches, the less power is consumed processing unsatisfied content requests in the source server, thereby saving power.
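The CP power model above can be sketched over the whole request matrix (the nested-list representation of q and X is an illustrative choice, not from the patent):

```python
def cp_power(P_S, q, X, P_D):
    """CP power: static power P_S plus retrieval power P_D for every request
    that the edge caches do not satisfy (X[i][k] == 0).
    q[i][k] is the number of requests for content k at base station i."""
    N, F = len(q), len(q[0])
    misses = sum((1 - X[i][k]) * q[i][k] for i in range(N) for k in range(F))
    return P_S + misses * P_D
```

With q = [[4, 2], [1, 3]], X = [[1, 0], [0, 1]], P_S = 10 and P_D = 0.5, the misses are 2 + 1 = 3 and the result is 11.5.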
Step S1023, constructing a network power consumption model.
The goal of this embodiment is to minimize the power consumption of the entire network, composed of the ISP power consumption and the CP power consumption, so as to effectively allocate data in an edge cache environment. Therefore, the problem of maximizing network power efficiency can be expressed as:
$$\min_{X} \; \sum_{i=1}^{N} \left[ P_0 + \Delta_p\,\frac{2^{R_i/B}-1}{\gamma} + \sum_{k=1}^{F} (1 - X_{ik})\,q_{ik}\,s_k\,H_i\,(P_n + P_l) + \sum_{k=1}^{F} \left( X_{ik}\,q_{ik}\,P_r + \omega_{ca}\,X_{ik}\,s_k \right) \right] + P_S + \sum_{i=1}^{N} \sum_{k=1}^{F} (1 - X_{ik})\,q_{ik}\,P_D \tag{10}$$

$$\text{s.t.} \quad \sum_{k=1}^{F} X_{ik}\,s_k \le C_i, \; \forall i \in N; \qquad X_{ik} \in \{0,1\}, \; \forall i \in N, k \in F$$
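A brute-force sketch of the constrained 0/1 placement problem above, for intuition only: the patent solves it with reinforcement learning, and exhaustive enumeration is viable only for toy N and F. The function names, and the folding of the X-independent base-station and CP static terms into a single constant `P_base`, are my own conventions.

```python
from itertools import product

def network_power(X, q, s, H, P_n, P_l, P_r, w_ca, P_D, P_base):
    """Network power for a given placement X; terms that do not depend on X
    (base-station transmit power and CP static power) are lumped into P_base."""
    N, F = len(q), len(q[0])
    p = P_base
    for i in range(N):
        for k in range(F):
            miss = (1 - X[i][k]) * q[i][k]
            p += miss * s[k] * H[i] * (P_n + P_l)                  # ISP transmission
            p += miss * P_D                                        # CP retrieval
            p += X[i][k] * q[i][k] * P_r + w_ca * X[i][k] * s[k]   # cache
    return p

def best_placement(q, s, C, **params):
    """Enumerate every feasible 0/1 placement and keep the cheapest."""
    N, F = len(q), len(q[0])
    best_p, best_X = float("inf"), None
    for flat in product((0, 1), repeat=N * F):
        X = [list(flat[i * F:(i + 1) * F]) for i in range(N)]
        if all(sum(X[i][k] * s[k] for k in range(F)) <= C[i] for i in range(N)):
            p = network_power(X, q, s, **params)
            if p < best_p:
                best_p, best_X = p, X
    return best_p, best_X
```

On a one-station, two-content toy instance the solver correctly caches the popular content and rejects the infeasible cache-everything placement.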
Step S1024, defining constraint conditions for the model.
In the centralized model, the first constraint requires that the total size of the content cached in cache i be less than its capacity C_i; the second constraint is that the Boolean variable X_ik only accepts the values 0 or 1.
(1) The cache of each base station i stores its N_i most popular contents in descending order of content popularity ranking, so as to obtain the best cache performance.
(2) Each base station has the same average service capacity, serving R/N content requests on average.
Based on the above two assumptions, the number of user requests that are not satisfied in the edge caches can be simplified as:

$$R_{miss} = \frac{R}{N} \sum_{i=1}^{N} \sum_{k=N_i+1}^{F} k^{-\alpha} \Big/ \sum_{k=1}^{F} k^{-\alpha}$$
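The miss-count simplification above can be computed directly from the Zipf parameters (a sketch; the list `N_i` of per-station cache depths is an assumed input representation):

```python
def zipf_miss_requests(R, N, F, alpha, N_i):
    """Requests left unsatisfied at the edge when base station i caches its
    N_i[i] most popular contents and popularity follows Zipf(alpha)."""
    Z = sum(k ** -alpha for k in range(1, F + 1))       # Zipf normalizer
    def tail(n):                                        # mass of ranks n+1..F
        return sum(k ** -alpha for k in range(n + 1, F + 1))
    return (R / N) * sum(tail(n) / Z for n in N_i)
```

For R = 100 total requests, N = 2 stations, F = 4 contents, alpha = 1.0 and cache depths N_i = [1, 2], the miss ratios are 13/25 and 7/25, giving 50 · 0.8 = 40 unsatisfied requests.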
With the above-described simplification and constraints, the entire network power model can be rewritten as:
$$\min \; N P_0 + N \Delta_p\,\frac{2^{R\bar{s}/(NB)}-1}{\gamma} + \frac{R\,\bar{s}\,(P_n+P_l)}{N \sum_{k=1}^{F} k^{-\alpha}} \sum_{i=1}^{N} H_i \sum_{k=N_i+1}^{F} k^{-\alpha} + \frac{R\,P_r}{N \sum_{k=1}^{F} k^{-\alpha}} \sum_{i=1}^{N} \sum_{k=1}^{N_i} k^{-\alpha} + \omega_{ca}\,\bar{s} \sum_{i=1}^{N} N_i + P_S + \frac{R\,P_D}{N \sum_{k=1}^{F} k^{-\alpha}} \sum_{i=1}^{N} \sum_{k=N_i+1}^{F} k^{-\alpha} \tag{11}$$

$$\text{s.t.} \quad N_i\,\bar{s} \le C_i, \; \forall i \in N; \qquad X_{ik} \in \{0,1\}, \; \forall i \in N, k \in F$$
Among which, N, B, γ and H_i are parameters related to the real network environment; R, α, F and s_k are related to user requests and network data; P_0, Δ_p, P_n, P_l, P_r, ω_ca, P_S and P_D are the given parameters related to power consumption.
Therefore, it can be found that network power consumption mainly depends on the size of edge cache, popularity distribution of network content, network topology and the number of different content.
Furthermore, in the above step S103, the network power consumption model is solved centrally based on reinforcement learning. As shown in Table 2, the symbols and meanings of core parameters adopted by the reinforcement learning solution part are shown. The Q-Learning algorithm of reinforcement learning is used to further solve the network power consumption model.
Table 2
s: Current state, one of the (1, 2, 3, ..., 64) nodes
a: Possible action in the current state: transfer to any node
s': The new state after transitioning to a node
Q(s, a): Experience score obtained after transferring to a node
R(s, a): Instant reward from the environment (0 if connected, -1 if disconnected)
max Q(s', a'): The highest behavioral experience score available in the new state
β: The discount rate; a value near 0 considers only the immediate reward, while a value near 1 gives the largest weight to future rewards.
The intelligent agent learns through experience without supervision and explores continuously until it reaches the destination. Each exploration is called a scene, and each scene consists of the agent moving from the initial state to the target state. The agent proceeds to the next scene after reaching the target state. The specific learning process of the intelligent agent can be expressed as:
1. Setting the parameters, and setting the environment rewards in matrix R.
2. Initializing matrix Q.
3. Looping through the scenes:
(1) Randomly selecting a state as the initial state.
(2) Executing while the target state has not been reached:
① In the current state, randomly selecting a behavior a.
② Executing behavior a to obtain the next state.
(3) Obtaining the maximum Q value of the next state according to all possible actions.
(4) Updating the state according to Q(s, a) = R(s, a) + β · max Q(s', a') (12).
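The training loop above can be sketched as follows. One assumption beyond the text: a positive reward is placed on the edges entering the target state (as in the classic tabular Q-Learning example), because with only the 0/-1 rewards of the reward matrix every Q value would remain zero and no path preference could emerge.

```python
import random

def train_q(R, goal, beta=0.8, episodes=500, seed=0):
    """Tabular Q-Learning over reward matrix R (R[s][a] < 0 marks a missing
    link), applying update rule (12): Q(s, a) = R(s, a) + beta * max Q(s', .)."""
    rng = random.Random(seed)
    n = len(R)
    Q = [[0.0] * n for _ in range(n)]
    for _ in range(episodes):                 # each episode is one "scene"
        s = rng.randrange(n)                  # (1) random initial state
        while s != goal:                      # (2) walk until the target
            moves = [a for a in range(n) if R[s][a] >= 0]  # connected moves only
            a = rng.choice(moves)
            Q[s][a] = R[s][a] + beta * max(Q[a])  # (3)+(4) update rule (12)
            s = a
    return Q
```

On a 4-node line topology 0-1-2-3 with goal 3 and reward 100 on the edge (2, 3), the converged values are Q[2][3] = 100, Q[1][2] = 80 and Q[0][1] = 64, so the greedy action from every state points toward the target.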
Intelligent agents use the above algorithm to learn from experience; each scene is equivalent to a training course. The purpose of the training is to continuously explore the environment: before reaching the target state, rewards are obtained through matrix R. Once trained, the agent can use matrix Q by simply tracking the state sequence from the initial state to the target state: in each state, the Q-Learning algorithm selects the action with the highest recorded value in matrix Q to determine the next behavior. Continuous training makes the Q table (matrix Q) an increasingly accurate approximation, so that the agent neither wanders endlessly nor falls into a loop, and finds the shortest path to the target state.
The specific implementation of the Q-Learning algorithm using matrix Q:
1. Setting the current state to the initial state.
2. Finding the action with the highest Q value from the current state.
3. Entering the next state.
4. Repeating step 2 and step 3 until the current state equals the target state.
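The four table-lookup steps amount to a short greedy walk over the Q table (a sketch; the bound of one step per node is my own guard against an untrained table):

```python
def greedy_path(Q, start, goal):
    """Follow the highest-Q action from each state until the goal is reached."""
    path, s = [start], start
    for _ in range(len(Q)):       # guard: at most one step per node
        if s == goal:
            break
        s = max(range(len(Q)), key=lambda a: Q[s][a])
        path.append(s)
    return path
```

With a hand-filled table for a 4-node line topology whose values decay toward the goal, the walk from node 0 to node 3 recovers the shortest path [0, 1, 2, 3].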
Due to the 64-node topology, a 64×64 diagonally symmetric adjacency matrix is used to represent the connection state of the network nodes: in the initial reward matrix R, connected node pairs are set to 0 and unconnected pairs to -1. The Q table is likewise a 64×64 table.
The intelligent agent automatically finds the shortest content transmission path from any position according to the Q-Learning algorithm. In addition, when updating the Q table, it is necessary to judge whether two nodes are connected. If the reward value is negative, indicating that the nodes are not connected, -1 is stored directly at the corresponding policy position in the Q table without updating the Q value, so that it does not affect the selection of subsequent actions. Otherwise, the new Q value is computed through the update formula. This effectively prevents the agent from attempting transmission between unconnected nodes, which would cause principle errors.
In the power consumption model, the user's request contents may be sought from the neighboring source response nodes on the path of the previous requesting node instead of the source server, which reduces the retrieval power of the CP and the transmission power consumption of the ISP nodes and links. The Q-Learning algorithm is used to calculate the best routing path, and counting variables are used to calculate the corresponding hop count, which can be substituted into the formula to calculate the optimized power consumption.
For example, when calculating the power consumption of a requesting node, the total number of content requests served by the source server is the number of requests satisfied neither by the local cache nor by any neighboring node. Substituting the hop count and the total amount of content obtained from each adjacent response node into the node and link transmission terms of the power consumption formula yields the link and node power consumption generated by transmissions from adjacent response nodes; summing this with the link and node transmission power consumption of retrievals from the source server gives the total node and link transmission power consumption:
$$\sum_{i=1}^{N} \sum_{k=1}^{F} (1 - X_{ik})\,q_{ik}^{s}\,s_k\,H_i\,(P_n + P_l) + \sum_{i=1}^{N} \sum_{k=1}^{F} (1 - X_{ik})\,q_{ik}^{e}\,s_k\,(H_i^{e} - 1)(P_n + P_l)$$
The complex joint power consumption model optimized by reinforcement learning can be expressed as follows:

$$P_{net} = \sum_{i=1}^{N} \left[ P_0 + \Delta_p\,\frac{2^{R_i/B}-1}{\gamma} \right] + \sum_{i=1}^{N} \sum_{k=1}^{F} (1 - X_{ik})\,q_{ik}^{s}\,s_k\,H_i\,(P_n + P_l) + \sum_{i=1}^{N} \sum_{k=1}^{F} (1 - X_{ik})\,q_{ik}^{e}\,s_k\,(H_i^{e} - 1)(P_n + P_l) + \sum_{i=1}^{N} \sum_{k=1}^{F} \left( X_{ik}\,q_{ik}\,P_r + \omega_{ca}\,X_{ik}\,s_k \right) + P_S + \sum_{i=1}^{N} \sum_{k=1}^{F} (1 - X_{ik})\,q_{ik}^{s}\,P_D$$

where $q_{ik}^{s}$ denotes the requests ultimately served by the source server and $q_{ik}^{e}$ those served by adjacent response nodes at hop distance $H_i^{e}$.
The following is a performance analysis and comparison of the methods involved in this embodiment based on the simulation results.
(1) Simulation settings:
In the simulation, the network topology contains 64 network nodes, the average number of hops H_i is obtained under the different strategies, and the edge cache size of each base station is abstracted into the relative size of different network data volumes. The values of important variables related to network power are obtained from actual network scenarios (such as the ITU test environment).
In order to evaluate the system performance of the present embodiment, five specific strategies are designed, based on the LRU-based online cache strategy, the offline cache strategy and the ideal no-cache strategy, combined with the power consumption model under two platform environments (without considering the deployment of edge caches in the wireless network). As shown in Table 3, the system performance is discussed to obtain the optimal solution of the system power consumption model.
Table 3
Simulation strategy: Features
Optimal Offline Caching Strategy (OPT-Offline): All nodes cache the most popular content without updating or replacing; contents not satisfied in the caches are obtained from the source server. Uses OSPF shortest routing and the simplified joint power formulas.
Offline Caching Strategy Based on Reinforcement Learning (RL-Offline): Same caching policy; uses the Q-Learning algorithm to calculate the number of hops, on a data set generated by simulation.
Optimal Online Caching Strategy (OPT-Online): LRU replaces the online cache: if the content is not satisfied in the edge cache of the requesting node, it is sought from the nearest responding node, and if still not satisfied, it is obtained from the source server. Uses ideal OSPF shortest-hop routing.
Online Caching Strategy Based on Reinforcement Learning (RL-Online): Same caching policy; the Q-Learning algorithm is used to find the optimal routing path, which can introduce routing errors.
Optimal without Cache Strategy (OPT-without-Cache): No caching; the ISP and the CP ideally share all information and all requests are routed to the source server. Serves as the baseline for comparison with the cached strategies.
The replaced content under the no-cache strategy is no longer accessed. The ISP and the CP ideally share information. The power consumption mainly consists of the static power consumption, the transmission power consumption from the contents to the source server and the CP retrieval power consumption.
The offline caching strategy performs no replacement. Each caching base station i ranks and stores its N_i most popular contents according to content popularity. The contents not satisfied in the edge caches are obtained from the source server. The strategy is divided into an ideal variant using the simplified power consumption model and a reinforcement learning optimization variant using a data set that simulates the content requests.
The online caching strategy is based on the least-recently-used (LRU) algorithm and requires sequential traversal of the data. The ideal variant creates the cache structure, judges the cache and space state through identification so as to determine the replacement method and the number of cache hits, and uses the topology function and OSPF shortest-hop routing to find the nearest response node. The LRU implementation of the reinforcement learning variant uses a bidirectional linked list to modify element positions. The topology is embodied in the reward matrix, the Q-Learning algorithm is used to find the optimal path, and the number of contents and the corresponding hops obtained from neighboring response nodes or the source server are calculated. The error of the reinforcement learning online strategy comes from Q-table routing: for example, content that should have been obtained from a neighboring node is instead sought from the source server, or the nearest response node is not selected.
(2) Performance analysis:
Figure 3 shows the network power of different strategies under different network topology structures. When other conditions are the same, the overall network performance of the three schemes changes little regardless of the topology. This shows that the model designed in this embodiment is universal and can be widely applied to heterogeneous wireless network environments. Based on the comparison and analysis of the three topologies, it is concluded that the Power-Law topology model best matches the real interconnection state of networks, has a degree of universality and advantage, and is the most pertinent and adaptable to this embodiment.
Figure 4 shows the network power of different strategies under different cache hardware. Compared with the SRAM, the SSD and the DRAM differ little in power consumption. Although the SSD is suitable for high-performance storage, it is used less for long-term archiving and backup (which usually uses fixed disks). As a mainstream storage device, the DRAM has great advantages in write performance compared with the SSD, and owing to its high durability, the DRAM is more suitable for large-scale network environments that demand a long storage-hardware service life. Therefore, this embodiment uses the DRAM as the cache hardware technology for further simulation analysis.
Figure 5 shows the network power of different policies under different cache sizes. The caching-free strategy has the largest power consumption overall, the online strategy is second, and the offline strategy is smallest, which reflects the advantage of caching. Since the offline strategy performs no replacement and fetches unsatisfied contents directly from the source server, the Q-Learning algorithm introduces no routing error and the corresponding curves coincide. As the cache grows, more requests are met and the power consumption of the cached strategies decreases, but the increased cache power consumption gradually offsets the reduction in transmission power consumption, so the rate of change declines. As the offline cache strategy uses the best content placement, its gap over the online cache strategy widens. The increase in the number of contents obtained from neighboring nodes amplifies the impact of routing errors, so the gap between the online reinforcement learning strategy and the ideal strategy grows. It can be observed from the figure that the gap between the scheme designed by this embodiment and the ideal scheme is small, demonstrating its practicability.
Figure 6 shows the network power of different strategies under different content popularity. As the popularity of contents increases, the users' demand for the top-ranked contents increases, and the number of contents that can be cached at or obtained from the edge of the base station also increases, which greatly reduces the transmission power consumption and increases the rate of change. The difference between the two online caches arises because when the Zipf skewness coefficient is below 1, the requests are scattered, most contents are obtained from the source server, and the error of the Q-Learning algorithm is small. When the Zipf skewness coefficient is between 1 and 1.2, the requests gradually concentrate, most contents are acquired from nearby nodes, and the errors gradually become significant. When the Zipf skewness coefficient exceeds 1.2, the requests are so concentrated that the base station itself can satisfy a large number of them, reducing the influence of the errors. The overall gap between the scheme designed by this embodiment and the ideal scheme is small, demonstrating its practicability.
Figure 7 shows the network power of different strategies under different numbers of content types. An increase in content types reduces the number of cache hits, increases the transmission power consumption, and reduces the advantage of caching. The gap in the online caching strategies is reflected in the fact that when the number of content types is small, the probability of nearby acquisition rises and the Q-Learning error grows, but the probability of being satisfied in the local edge cache also rises, so the gap is not significant. As the variety of contents increases, more content must be obtained from the source server, the routing error of the Q-Learning algorithm decreases, and the gap narrows. The overall gap between the scheme designed by this embodiment and the ideal scheme is small, demonstrating its practicability.
Each embodiment in this specification is described in a progressive manner. Each embodiment focuses on its differences from other embodiments, and the same and similar parts between each embodiment can be referred to each other.
The foregoing description of the disclosed embodiments enables those skilled in the art to realize or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the invention. Accordingly, the invention is not to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for constructing an energy-efficient network content distribution mechanism based on edge intelligent caches, characterized by comprising the following steps:
Constructing a system model according to the change of the network service mode, and deploying content at the edge of the network according to the system model;
Constructing a network power consumption model of the ISP and the CP according to the deployment content of the network edge, and restricting and simplifying the network power consumption model;
And optimizing and solving the network power consumption model based on reinforcement learning.
2. The method for constructing an energy-efficient network content distribution mechanism based on edge intelligent caches, according to claim 1, is characterized in that constructing the system model comprises:
Constructing a content popularity model according to the Zipf distribution;
Constructing a network model, wherein the network model comprises a network service and architecture model and a network topology model.
3. The method for constructing the energy-efficient network content distribution mechanism based on the edge intelligent caches, according to claim 2, is characterized in that the content popularity model is constructed by assuming that the number of content types is F and the video contents are numbered from 1 to F; and in a given fixed time, with the total number of network requests being R, the popularity distribution of the content numbered k is:

$$R_k = R \cdot k^{-\alpha} \Big/ \sum_{k=1}^{F} k^{-\alpha}, \quad k = 1, 2, \ldots, F \tag{1}$$

Among which, the Zipf skewness coefficient α characterizes the content popularity.
4. The method for constructing the energy-efficient network content distribution mechanism based on the edge intelligent caches, according to claim 1, is characterized in that the constructed ISP power consumption model comprises a base station power consumption model, a transmission power consumption model and a cache power consumption model; the power consumption model of the ISP is obtained from these three constructed models and is expressed as follows:
$$P_{ISP} = \sum_{i=1}^{N} \left[ P_0 + \Delta_p\,\frac{2^{R_i/B}-1}{\gamma} + \sum_{k=1}^{F} (1 - X_{ik})\,q_{ik}\,s_k\,H_i\,(P_n + P_l) + \sum_{k=1}^{F} \left( X_{ik}\,q_{ik}\,P_r + \omega_{ca}\,X_{ik}\,s_k \right) \right]$$
Among which, P_0 is the inherent power consumption of the base station; Δ_p is a slope parameter indicating the effect of the BS traffic load on its power consumption; g is the channel gain; B represents the system bandwidth; N_0 is the power density related to noise; η is a constant related to the error rate requirement; q_ik is the number of requests for content k at base station i; s_k is the size of content k; P_n is the network node power; P_l is the power of the network links through which the request passes; H_i represents the average number of hops between base station i and the content source; X_ik is a Boolean variable; P_r is the average retrieval power consumption per content request arriving at the cache; ω_ca is a power efficiency parameter.
5. The method for constructing the energy-efficient network content distribution mechanism based on the edge intelligent caches, according to claim 4, is characterized in that the constructed CP power consumption model comprises the static power consumption P_S and the power consumed by processing the requests that are not satisfied in the edge caches; the CP power consumption model is expressed as follows:

$$P_{CP} = P_S + \sum_{i=1}^{N} \sum_{k=1}^{F} (1 - X_{ik})\,q_{ik}\,P_D$$

Among which, P_D is the average retrieval power consumption for each user request at the source server.
6. The method for constructing the energy-efficient network content distribution mechanism based on the edge intelligent caches, according to claim 5, is characterized in that the constraint conditions of the constructed network power consumption model of the ISP and the CP are expressed as follows:

$$\sum_{k=1}^{F} X_{ik}\,s_k \le C_i, \; \forall i \in N; \qquad X_{ik} \in \{0,1\}, \; \forall i \in N, k \in F;$$
Among which, Ci is the maximum cache capacity of cache i.
7. The method for constructing an energy-efficient network content distribution mechanism based on edge intelligent caches, according to claim 6, is characterized in that the simplification method comprises: each base station i caches content in descending order of content popularity ranking to obtain optimal caching performance; and each base station has the same average service capacity and the same number of content requests;
The number of user requests that are not satisfied in the edge caches under the above two conditions is simplified as:

$$R_{miss} = \frac{R}{N} \sum_{i=1}^{N} \sum_{k=N_i+1}^{F} k^{-\alpha} \Big/ \sum_{k=1}^{F} k^{-\alpha}$$
According to the simplification mode and the constraint conditions, the network power model is rewritten as follows:

$$\min \; N P_0 + N \Delta_p\,\frac{2^{R\bar{s}/(NB)}-1}{\gamma} + \frac{R\,\bar{s}\,(P_n+P_l)}{N \sum_{k=1}^{F} k^{-\alpha}} \sum_{i=1}^{N} H_i \sum_{k=N_i+1}^{F} k^{-\alpha} + \frac{R\,P_r}{N \sum_{k=1}^{F} k^{-\alpha}} \sum_{i=1}^{N} \sum_{k=1}^{N_i} k^{-\alpha} + \omega_{ca}\,\bar{s} \sum_{i=1}^{N} N_i + P_S + \frac{R\,P_D}{N \sum_{k=1}^{F} k^{-\alpha}} \sum_{i=1}^{N} \sum_{k=N_i+1}^{F} k^{-\alpha}$$

$$\text{s.t.} \quad N_i\,\bar{s} \le C_i, \; \forall i \in N; \qquad X_{ik} \in \{0,1\}, \; \forall i \in N, k \in F$$
Among which, γ = g/(N_0 B η); g is the channel gain; B is the system bandwidth; N_0 is the power density associated with noise; η is the constant associated with the error rate requirement.
8. The method for constructing an energy-efficient network content distribution mechanism based on edge intelligent caches, according to claim 1, is characterized in that the network power consumption model is further solved using an enhanced Q-Learning algorithm.
9. The method for constructing an energy-efficient network content distribution mechanism based on edge intelligent caches, according to claim 8, is characterized in that the training step of the matrix Q by the Q-Learning algorithm is expressed as:
Giving parameters, and setting environment reward in the matrix R;
Initializing the matrix Q; Looping through the scene:
(1) Selecting a state as an initial state randomly;
(2) Executing under the condition that the target state is not reached: randomly selecting a behavior a in the current state; executing behavior a to obtain the next state;
(3) Obtaining the maximum value Q of the next state according to all the possible actions;
(4) Updating the state according to Q(s, a) = R(s, a) + β(max Q(s', a'));
Obtaining a trained matrix Q.
10. The method for constructing an energy-efficient network content distribution mechanism based on edge intelligent caches, according to claim 9, is characterized in that the specific implementation of the Q-Learning algorithm using a matrix Q comprises the following steps of:
Setting the current state as an initial state;
Finding the action with the highest Q value from the current state;
Entering the next state;
Repeating the last two steps until the current state equals the target state.
-1/4- 11 Nov 2020
Figure 1 (system architecture: Content Provider, Internet, Internet Service Provider, Mobile subscriber)
Figure 2
Figure 3
Figure 4
Figure 5
Figure 6
Figure 7
S101: Constructing a system model according to the change of the network service mode, and deploying content at the edge of the network according to the system model.
S102: Constructing a network power consumption model of the ISP and the CP according to the content deployed at the network edge, and restricting and simplifying the model.
S103: Optimizing and solving the network power consumption model based on reinforcement learning.
Figure 8
AU2020103384A 2020-11-11 2020-11-11 Method for Constructing Energy-efficient Network Content Distribution Mechanism Based on Edge Intelligent Caches Ceased AU2020103384A4 (en)


Publications (1)

Publication Number Publication Date
AU2020103384A4 true AU2020103384A4 (en) 2021-01-28

Family ID=74192140


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949988A (en) * 2021-02-01 2021-06-11 浙江大学 Service flow construction method based on reinforcement learning
CN112949988B (en) * 2021-02-01 2024-01-05 浙江大学 Service flow construction method based on reinforcement learning
CN113012013A (en) * 2021-02-09 2021-06-22 北京工业大学 Cooperative edge caching method based on deep reinforcement learning in Internet of vehicles
CN113012013B (en) * 2021-02-09 2024-05-28 北京工业大学 Collaborative edge caching method based on deep reinforcement learning in Internet of vehicles
CN113709853A (en) * 2021-07-23 2021-11-26 北京工业大学 Network content transmission method and device oriented to cloud edge collaboration and storage medium
CN113709853B (en) * 2021-07-23 2022-11-15 北京工业大学 Network content transmission method and device oriented to cloud edge collaboration and storage medium
CN113873622B (en) * 2021-09-01 2023-10-27 武汉大学 Communication network energy saving method based on reconfigurable intelligent surface
CN113873622A (en) * 2021-09-01 2021-12-31 武汉大学 Communication network energy-saving method based on reconfigurable intelligent surface
WO2023168824A1 (en) * 2022-03-07 2023-09-14 北京工业大学 Mobile edge cache optimization method based on federated learning
CN115633380A (en) * 2022-11-16 2023-01-20 合肥工业大学智能制造技术研究院 Multi-edge service cache scheduling method and system considering dynamic topology
CN116916390A (en) * 2023-09-11 2023-10-20 军事科学院系统工程研究院系统总体研究所 Edge collaborative cache optimization method and device combining resource allocation
CN117319249A (en) * 2023-10-10 2023-12-29 黑龙江大学 Data optimization management system based on communication network information processing
CN117319249B (en) * 2023-10-10 2024-03-15 黑龙江大学 Data optimization management system based on communication network information processing


Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry