CN112804361A - Edge alliance game method for content cooperation cache - Google Patents
- Publication number
- CN112804361A (application number CN202110349244.0A)
- Authority
- CN
- China
- Prior art keywords
- base station
- cache
- content
- request
- buffer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
Abstract
The invention discloses an edge alliance game method oriented to content cooperative caching, which comprises the following steps: A. establishing a cache infrastructure provider alliance system model; B. setting the service positions of the alliance system model; C. setting a user request scheduling strategy; D. establishing a service request model; E. calculating the income of the alliance system; F. optimizing the content caching strategy and the workload scheduling strategy to maximize the profit of the alliance system; G. setting a profit allocation scheme for the alliance system; H. setting an alliance game scheme. The invention can overcome the defects of the prior art and improve the profit of individual participants in the cache service under a stable condition.
Description
Technical Field
The invention relates to a network edge cache arrangement method, and in particular to an edge alliance game method oriented to content cooperative caching.
Background
With the ever-increasing number of mobile devices, the internet is undergoing explosive growth in mobile data traffic. A Cisco report predicts that global mobile data will increase seven-fold from 2017 to 2022. Besides the sheer volume of traffic, the delay of traffic transmission is also important. Especially in emerging delay-sensitive applications and services, such as augmented reality, autonomous driving, and interactive online maps, the magnitude of traffic delay directly determines the quality of service (QoS). Therefore, efforts have been made to reduce the communication burden on the backbone network and to shorten the response time of delay-sensitive traffic. Deploying caches at the edge of the network (i.e., near the end users) has proven to be an effective and intelligent strategy.
The current mobile edge caching scheme mainly comprises four categories of macro base station caching, small base station caching, device-to-device communication network caching and mobile device local caching. The content cached in these infrastructures or devices can be obtained by all users directly from the network edge, thus avoiding the communication burden of the backbone network. In addition to the above four types, a new scheme for edge caching on vehicles with abundant IT resources has been recently proposed, which is a promising approach. By reusing vehicles as cache nodes, service providers may save the tremendous expense of building cache infrastructure (e.g., base stations) while ensuring desirable cache hit rates.
While caching content at the network edge can reduce communication latency, balancing cache usage presents challenges in a hybrid edge caching scheme. On the one hand, a fixed cache node such as a base station may have large storage capacity (in the form of an edge cloud), but its deployment cost is high and its service range (the capability of serving dynamic demands) is limited. On the other hand, mobile cache nodes such as vehicles or mobile devices have lower cost and can flexibly adapt to changes in user demand, but they cannot cache much data due to limited storage capacity. To address this, collaborative content caching using a hybrid caching scheme has been proposed: different cache nodes at the network edge cooperate with each other, which can reduce network cost and improve the overall cache utilization rate. However, several key challenges that limit collaborative content caching at the network edge remain to be solved.
Cache resources are spatially unbalanced. In the past, collaboration usually adopted a rental mode, in which Content Providers (CPs) sign bilateral contracts with Caching Infrastructure Providers (CIPs) and rent edge resources for content caching. This mode ignores interactions between CIPs (such as resource sharing and content loading), resulting in spatial imbalance and resource waste in the caching process.
User requests are temporally unbalanced. Over time, fluctuations in end-user requests lead to mismatches between caching resources and service requirements. Wireless traffic demand typically has a large peak-to-valley ratio throughout the day. Therefore, traditional solutions such as building more caching infrastructure to meet peak demand cause significant waste of resources during valley periods.
There is a lack of attractive incentive measures. It is impractical to expect a cache provider to collaborate with others unconditionally, regardless of how much profit it receives. In some cases, a seemingly attractive alliance may not benefit all members; for example, some members' profit may be impaired by "idle members" who gain more revenue than they contribute. Thus, cooperation will only form if each member receives sufficient benefit in view of the potential risk of monetary loss (e.g., from violating the quality of service negotiated with customers).
Disclosure of Invention
The invention aims to provide an edge alliance game method oriented to content cooperative caching, which can overcome the defects of the prior art and improve the profits of individual participants in the caching service under a stable condition.
The invention comprises the following steps:
A. establishing a cache infrastructure provider alliance system model;
B. setting the service positions of the alliance system model;
C. setting a user request scheduling strategy;
D. establishing a service request model;
E. calculating the income of the alliance system;
F. optimizing a content caching strategy and a workload scheduling strategy to maximize profit of the alliance system;
G. setting a profit allocation scheme of the alliance system;
H. setting an alliance game scheme.
Preferably, in step A,
the cache infrastructure providers comprise fixed base stations and mobile vehicles with storage units; different cache infrastructure providers form an alliance, provide caching services for content providers, and reasonably distribute revenue within the alliance; the choice of which cache infrastructure providers to cooperate with and how many storage resources to lease in each specific area determines the profit obtained; the whole process of alliance formation is dynamic, and each cache infrastructure provider decides whether to cooperate with other cache infrastructure providers at each time slot depending on the state of the system;
the set of cache infrastructure providers is denoted P = F ∪ M, where F represents the set of fixed caching infrastructure providers that own base stations and M represents the set of mobile caching infrastructure providers that own mobile caching vehicles; the alliance system model includes the set of base stations B and the set of caching vehicles V; base stations and caching vehicles are both referred to as edge nodes, and the set of all cache nodes is denoted N = B ∪ V; for each fixed provider p ∈ F and each mobile provider q ∈ M, B_p and V_q are defined as the sets of base stations and caching vehicles belonging to them; N_p represents the set of edge nodes belonging to caching infrastructure provider p, and V_k represents the set of caching vehicles within the coverage of base station k;
when a request arrives at a base station, if the base station caches the content required by the request, the base station directly sends the content to the user; otherwise, a caching vehicle cooperating with the base station within its coverage area can act as an assisting node to process the request; meanwhile, the base stations can cooperate with each other through high-speed links;
the base stations serve as static edge nodes whose number and positions are fixed; the number of available caching vehicles within the coverage of each base station is determined for each time slot: according to the trajectory data provided by each mobile caching infrastructure provider, the base station takes the minimum number of that provider's caching vehicles present in its area during the time period as the number of available vehicles;
all base stations in the alliance continuously collect information on arriving requests in each time period, including the arrival rate and the proportion of each type; the status of the caching vehicles, including the number of available vehicles and which caching infrastructure provider they belong to, is obtained by the mobile caching infrastructure providers in the alliance and sent to the base stations.
Preferably, in step B,
the fixed content library comprises the set C = {1, 2, ..., F}; the request probability and the size of content i are p_i and s_i respectively, and the probability that content i is requested follows a Zipf-like distribution, p_i = i^{-γ} / Σ_{j∈C} j^{-γ},
where the skew parameter γ takes a value between 0.5 and 1 and determines the peak and skewness of the distribution; for base station k, the user arrival rate is λ_k, and the arrival rate of requests for content j is λ_{kj} = λ_k p_j; c_i denotes the cache capacity of cache node i; since a base station or caching vehicle cannot cache all contents at the same time, each cache node selectively caches contents according to user requests; a binary variable x_{ij} indicates the content placement state, where x_{ij} = 1 indicates that content j is placed at cache node i; each cache node cannot cache contents beyond its storage capacity, i.e., Σ_{j∈C} x_{ij} s_j ≤ c_i.
Preferably, in step C,
α denotes the request scheduling policy; α_{ij}^{i} represents the ratio of requests for content j processed at the local base station i; for an assisting base station k ≠ i, α_{ij}^{k} represents the proportion of requests for content j offloaded from base station i to base station k; α_{ij}^{0} represents the proportion of the load allocated to the macrocell; for a caching vehicle v within the coverage of base station i, α_{ij}^{v} represents the proportion of requests for content j distributed from base station i to caching vehicle v; a base station can distribute its requests only to cooperating caching vehicles within its coverage area; for each base station i, a request for content j can be processed locally, at an assisting base station, at a caching vehicle in range, or at the macrocell, and the load ratios satisfy α_{ij}^{i} + Σ_{k≠i} α_{ij}^{k} + Σ_{v} α_{ij}^{v} + α_{ij}^{0} = 1.
r_{ij}^{k} is the profit gained by providing the content j requested at base station i from the cache of base station k, and r_{ij}^{v} is the profit gained by providing the content j requested at base station i from the cache of caching vehicle v; serving a request through the macrocell will typically introduce intolerable transmission delay for delay-critical services, so φ_j represents a fine for serving content j through the macrocell.
Preferably, in step D,
a request is regarded as a customer, and each cache node serves as a service desk providing service to customers; requests are assigned to a base station, a caching vehicle, or the macrocell according to the content placement at all cache nodes; when a request is scheduled to another base station, the content acquired from that base station is first transmitted to the requesting base station through a high-capacity link and then sent to the user by the requesting base station, which can be seen as a two-stage process; because in the second stage the content is transmitted through the transmission link of the local base station, the communication resources of the local base station are occupied, so serving a request at the local base station or at another base station is both regarded as an execution process of the local base station; λ_i and λ_v represent the arrival rates of requests at base station i and caching vehicle v; the content size is set to follow an exponential distribution, and the service rates of the corresponding base station i and caching vehicle v follow exponential distributions with means μ_i and μ_v respectively;
each base station and caching vehicle is set as an M/M/1 queue with no request priority and no waiting-queue limit; the average latency of each request at base station i is d_i = 1 / (μ_i − λ_i),
and the average delay of each request at the caching vehicle v within range of base station i is d_v = 1 / (μ_v − λ_v),
where λ_v is the total rate of requests scheduled to vehicle v; a delay threshold d_max is defined to ensure quality of service, satisfying d_i ≤ d_max and d_v ≤ d_max, which defines the constraint on transmission delay.
Preferably, in step F, the optimization problem is expressed as maximizing the total profit of the alliance system,
where the decision variables are the content placement x and the scheduling ratios α; the constraints are that the contents cached at a node cannot exceed its capacity, that requests arriving at a base station are all dispatched to the local base station, other base stations, caching vehicles within coverage, or the macrocell, that the average transmission delay of requests at each cache node does not exceed the delay threshold, that a request for content j cannot be dispatched to a cache node that does not cache content j, and that a base station can redirect requests only to caching vehicles within its coverage area.
Preferably, in step G,
let the marginal contribution of the i-th cache infrastructure provider to alliance S be u(S ∪ {i}) − u(S), where u is the utility (characteristic) function; the revenue allocated to the i-th cache infrastructure provider from alliance S is its Shapley value,
φ_i = (1/|Π|) Σ_{π∈Π} [ u(P_i^π ∪ {i}) − u(P_i^π) ],
where Π is the set of all orderings of S, and P_i^π is the set of players that precede i in the ordering π;
let the cost per cache unit of cache node k be c_k; the caching cost of the i-th caching infrastructure provider is then the sum, over its cache nodes, of c_k multiplied by the size of the contents cached there.
Preferably, in step H,
let the preference function of player i be ≻_i; player i prefers alliance S_1 to alliance S_2, written S_1 ≻_i S_2, if and only if the utility allocated to player i in S_1 exceeds that in S_2; the preference function thus equals the utility each player is allocated in the alliance;
a partition is Nash-stable when no participant has an incentive to unilaterally change its alliance into another alliance in the partition; by combining the utility values with the alliance partition, the alliance formation game is defined on the set N; the utility value of an alliance is independent of the other alliances and equals the optimal value of the optimization problem for that alliance;
the profit sharing method satisfies the following condition,
Preferably, in step H,
the decision process at a particular time t consists of several rounds, each comprising N steps in which each participant can make a decision; in round t, a random sequence is generated whose i-th element indicates the participant selected to make a decision at step i; at each step, the selected player chooses either to leave its current alliance and join a new one or to stay in the current alliance, as follows:
player i iteratively searches the current partition; a history set records the alliances that participant i joined before, and the search ignores the alliances in this history set; for each candidate alliance, participant i computes the profit of joining it; if the candidate alliance's profit exceeds the current one, the candidate is recorded as the best alliance; the process continues until all possible alliances, except those in the history set, have been examined; after the iteration, if the best alliance has been updated, the current alliance is appended to the history set and a new current partition is obtained;
at the end of round t, after all participants have made their decisions, a partition is obtained and recorded; in round t + 1, if the partition is unchanged, no player can deviate from its current alliance for better profit, and Nash stability is reached.
The invention has the beneficial effects that: the present invention proposes a framework that combines a wide variety of caching infrastructure providers to increase overall profits. The caching process is described by a model consisting of base stations and caching vehicles owned by caching infrastructure providers, covering content placement and request scheduling close to the user. On the basis of this model, a utility function of an alliance consisting of different cache infrastructure providers is defined, mainly considering the benefits obtained under different service request modes and the constraints of average delay and cache capacity. By solving the optimization problem, the optimal profit under a specific alliance state can be obtained. Each caching infrastructure provider decides to join a different alliance at each time slot based on its own profit. Compared with a cooperation scenario with only fixed cache nodes, the total profit of the alliance and the average personal profit of the participants increase by 53% and 42%, respectively.
Drawings
FIG. 1 is a diagram of an example scenario of an edge computing environment.
Fig. 2 is a schematic diagram of a cache federation architecture consisting of a macrocell, a base station, and a cache vehicle.
FIG. 3 is a graphical illustration of the dynamic user request arrival rate and the fluctuation in the number of caching vehicles.
FIG. 4 is a graph of total profit for all caching infrastructure providers under three different strategies.
Fig. 5(a) - (c) are plots of profit comparison for a fixed cache infrastructure provider for three different situations.
FIG. 6 is a graph comparing profits of mobile caching infrastructure providers.
FIG. 7 is a diagram illustrating the impact of user request heterogeneity on total revenue.
FIG. 8 is a graphical illustration of the impact of mobile revenue on total revenue.
FIG. 9 is a graph of the number of cache nodes versus algorithm runtime.
Detailed Description
Referring to fig. 1-2, an edge alliance game method oriented to content cooperative caching comprises the following steps:
A. establishing a cache infrastructure provider alliance system model;
B. setting the service positions of the alliance system model;
C. setting a user request scheduling strategy;
D. establishing a service request model;
E. calculating the income of the alliance system;
F. optimizing a content caching strategy and a workload scheduling strategy to maximize profit of the alliance system;
G. setting a profit allocation scheme of the alliance system;
H. setting an alliance game scheme.
In step A,
the cache infrastructure providers comprise fixed base stations and mobile vehicles with storage units; different cache infrastructure providers form an alliance, provide caching services for content providers, and reasonably distribute revenue within the alliance; the choice of which cache infrastructure providers to cooperate with and how many storage resources to lease in each specific area determines the profit obtained; the whole process of alliance formation is dynamic, and each cache infrastructure provider decides whether to cooperate with other cache infrastructure providers at each time slot depending on the state of the system;
the set of cache infrastructure providers is denoted P = F ∪ M, where F represents the set of fixed caching infrastructure providers that own base stations and M represents the set of mobile caching infrastructure providers that own mobile caching vehicles; the alliance system model includes the set of base stations B and the set of caching vehicles V; base stations and caching vehicles are both referred to as edge nodes, and the set of all cache nodes is denoted N = B ∪ V; for each fixed provider p ∈ F and each mobile provider q ∈ M, B_p and V_q are defined as the sets of base stations and caching vehicles belonging to them; N_p represents the set of edge nodes belonging to caching infrastructure provider p, and V_k represents the set of caching vehicles within the coverage of base station k;
when a request arrives at a base station, if the base station caches the content required by the request, the base station directly sends the content to the user; otherwise, a caching vehicle cooperating with the base station within its coverage area can act as an assisting node to process the request; meanwhile, the base stations can cooperate with each other through high-speed links;
the base stations serve as static edge nodes whose number and positions are fixed; the number of available caching vehicles within the coverage of each base station is determined for each time slot: according to the trajectory data provided by each mobile caching infrastructure provider, the base station takes the minimum number of that provider's caching vehicles present in its area during the time period as the number of available vehicles;
all base stations in the alliance continuously collect information on arriving requests in each time period, including the arrival rate and the proportion of each type; the status of the caching vehicles, including the number of available vehicles and which caching infrastructure provider they belong to, is obtained by the mobile caching infrastructure providers in the alliance and sent to the base stations.
In step B,
the fixed content library comprises the set C = {1, 2, ..., F}; the request probability and the size of content i are p_i and s_i respectively, and the probability that content i is requested follows a Zipf-like distribution, p_i = i^{-γ} / Σ_{j∈C} j^{-γ},
where the skew parameter γ takes a value between 0.5 and 1 and determines the peak and skewness of the distribution; for base station k, the user arrival rate is λ_k, and the arrival rate of requests for content j is λ_{kj} = λ_k p_j; c_i denotes the cache capacity of cache node i; since a base station or caching vehicle cannot cache all contents at the same time, each cache node selectively caches contents according to user requests; a binary variable x_{ij} indicates the content placement state, where x_{ij} = 1 indicates that content j is placed at cache node i; each cache node cannot cache contents beyond its storage capacity, i.e., Σ_{j∈C} x_{ij} s_j ≤ c_i.
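As an illustrative sketch of the Zipf-like popularity model described above (the function name and the example values 70 and 0.8 are assumptions chosen only for illustration, not limitations of the invention):

```python
def zipf_popularity(num_contents, gamma):
    """Zipf-like request probability for contents ranked 1..num_contents:
    p_i proportional to i^(-gamma), normalized so the probabilities sum to one."""
    weights = [i ** (-gamma) for i in range(1, num_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# Assumed example: a library of 70 contents with skew 0.8.
probs = zipf_popularity(70, 0.8)
assert abs(sum(probs) - 1.0) < 1e-9   # a valid probability distribution
assert probs[0] > probs[-1]           # higher-ranked content is more popular
```

The arrival rate of requests for content j at a base station is then simply the base station's user arrival rate multiplied by the corresponding probability.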
In step C,
α denotes the request scheduling policy; α_{ij}^{i} represents the ratio of requests for content j processed at the local base station i; for an assisting base station k ≠ i, α_{ij}^{k} represents the proportion of requests for content j offloaded from base station i to base station k; α_{ij}^{0} represents the proportion of the load allocated to the macrocell; for a caching vehicle v within the coverage of base station i, α_{ij}^{v} represents the proportion of requests for content j distributed from base station i to caching vehicle v; a base station can distribute its requests only to cooperating caching vehicles within its coverage area; for each base station i, a request for content j can be processed locally, at an assisting base station, at a caching vehicle in range, or at the macrocell, and the load ratios satisfy α_{ij}^{i} + Σ_{k≠i} α_{ij}^{k} + Σ_{v} α_{ij}^{v} + α_{ij}^{0} = 1.
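The load-ratio conditions above can be sketched as a feasibility check (an illustrative sketch; the function name and node labels such as "bs2" and "v1" are hypothetical):

```python
def feasible_split(local, to_bs, to_vehicles, to_macro, cached):
    """Check one (base station, content) load split: all ratios are
    non-negative, the ratios sum to one, and no traffic is routed to a
    node that does not cache the content (the x_ij = 0 case)."""
    ratios = [local, to_macro] + list(to_bs.values()) + list(to_vehicles.values())
    if any(r < 0 for r in ratios):
        return False
    if abs(sum(ratios) - 1.0) > 1e-9:
        return False
    for node, r in list(to_bs.items()) + list(to_vehicles.items()):
        if r > 0 and not cached.get(node, False):
            return False
    return True

# Assumed example: half served locally, the rest split among an assisting
# base station, a caching vehicle in range, and the macrocell.
ok = feasible_split(0.5, {"bs2": 0.3}, {"v1": 0.1}, 0.1,
                    {"bs2": True, "v1": True})
assert ok
```

A split that routes traffic to a node without the content, or whose ratios do not sum to one, would be rejected by this check.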
r_{ij}^{k} is the profit gained by providing the content j requested at base station i from the cache of base station k, and r_{ij}^{v} is the profit gained by providing the content j requested at base station i from the cache of caching vehicle v; serving a request through the macrocell will typically introduce intolerable transmission delay for delay-critical services, so φ_j represents a fine for serving content j through the macrocell.
In step D,
a request is regarded as a customer, and each cache node serves as a service desk providing service to customers; requests are assigned to a base station, a caching vehicle, or the macrocell according to the content placement at all cache nodes; when a request is scheduled to another base station, the content acquired from that base station is first transmitted to the requesting base station through a high-capacity link and then sent to the user by the requesting base station, which can be seen as a two-stage process; because in the second stage the content is transmitted through the transmission link of the local base station, the communication resources of the local base station are occupied, so serving a request at the local base station or at another base station is both regarded as an execution process of the local base station; λ_i and λ_v represent the arrival rates of requests at base station i and caching vehicle v; the content size is set to follow an exponential distribution, and the service rates of the corresponding base station i and caching vehicle v follow exponential distributions with means μ_i and μ_v respectively;
each base station and caching vehicle is set as an M/M/1 queue with no request priority and no waiting-queue limit; the average latency of each request at base station i is d_i = 1 / (μ_i − λ_i),
and the average delay of each request at the caching vehicle v within range of base station i is d_v = 1 / (μ_v − λ_v),
where λ_v is the total rate of requests scheduled to vehicle v; a delay threshold d_max is defined to ensure quality of service, satisfying d_i ≤ d_max and d_v ≤ d_max, which defines the constraint on transmission delay.
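The M/M/1 delay expression above can be sketched directly (illustrative; the example rates are assumptions consistent with the experimental settings of a 0.005 s per-request processing time and a 0.02 s delay threshold):

```python
def mm1_delay(arrival_rate, service_rate):
    """Average sojourn time of an M/M/1 queue, 1 / (mu - lambda);
    the queue is stable only while lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Assumed example: 0.005 s per request gives mu = 200 requests/s; at an
# arrival rate of 150 requests/s the delay just meets a 0.02 s threshold.
delay = mm1_delay(150.0, 200.0)
assert delay <= 0.02
```

The delay constraint in the optimization problem amounts to requiring this value to stay below the threshold at every cache node.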
In step F, the optimization problem is expressed as maximizing the total profit of the alliance system,
where the decision variables are the content placement x and the scheduling ratios α; the constraints are that the contents cached at a node cannot exceed its capacity, that requests arriving at a base station are all dispatched to the local base station, other base stations, caching vehicles within coverage, or the macrocell, that the average transmission delay of requests at each cache node does not exceed the delay threshold, that a request for content j cannot be dispatched to a cache node that does not cache content j, and that a base station can redirect requests only to caching vehicles within its coverage area.
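A minimal placement routine respecting the capacity constraint can be sketched as follows; this greedy baseline is illustrative only and is not the optimal solution of the joint placement-and-scheduling problem described above:

```python
def greedy_placement(popularity, sizes, capacity):
    """Fill one cache node with the most popular contents first, without
    exceeding its capacity (a simple heuristic baseline, not the optimal
    solution of the joint caching-and-scheduling optimization)."""
    order = sorted(range(len(popularity)), key=lambda j: popularity[j],
                   reverse=True)
    placed, used = [], 0.0
    for j in order:
        if used + sizes[j] <= capacity:
            placed.append(j)
            used += sizes[j]
    return placed

# Assumed example: three unit-size contents, capacity for two.
assert greedy_placement([0.5, 0.3, 0.2], [1, 1, 1], 2) == [0, 1]
```

The full optimization additionally couples placement with the scheduling ratios and the delay constraint across all nodes.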
In step G,
let the marginal contribution of the i-th cache infrastructure provider to alliance S be u(S ∪ {i}) − u(S), where u is the utility (characteristic) function; the revenue allocated to the i-th cache infrastructure provider from alliance S is its Shapley value,
φ_i = (1/|Π|) Σ_{π∈Π} [ u(P_i^π ∪ {i}) − u(P_i^π) ],
where Π is the set of all orderings of S, and P_i^π is the set of players that precede i in the ordering π;
let the cost per cache unit of cache node k be c_k; the caching cost of the i-th caching infrastructure provider is then the sum, over its cache nodes, of c_k multiplied by the size of the contents cached there.
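The ordering-based allocation above can be sketched by direct enumeration (illustrative; the toy worth function is an assumption, and exhaustive enumeration is only practical for small provider sets):

```python
from itertools import permutations

def shapley_allocation(players, value):
    """Shapley revenue shares: average each player's marginal contribution
    over every ordering of the alliance. `value` maps a frozenset of
    players to the alliance's worth."""
    shares = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            shares[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: s / len(orderings) for p, s in shares.items()}

# Assumed toy worth function: an alliance of n providers earns n^2.
shares = shapley_allocation(["cip1", "cip2"], lambda c: len(c) ** 2)
assert shares == {"cip1": 2.0, "cip2": 2.0}   # efficient and symmetric
```

The shares always sum to the worth of the grand alliance, which is the efficiency property the profit-sharing scheme relies on.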
In step H,
let the preference function of player i be ≻_i; player i prefers alliance S_1 to alliance S_2, written S_1 ≻_i S_2, if and only if the utility allocated to player i in S_1 exceeds that in S_2; the preference function thus equals the utility each player is allocated in the alliance;
a partition is Nash-stable when no participant has an incentive to unilaterally change its alliance into another alliance in the partition; by combining the utility values with the alliance partition, the alliance formation game is defined on the set N; the utility value of an alliance is independent of the other alliances and equals the optimal value of the optimization problem for that alliance;
the profit sharing method satisfies the following condition,
In step H,
the decision process at a particular time t consists of several rounds, each comprising N steps in which each participant can make a decision; in round t, a random sequence is generated whose i-th element indicates the participant selected to make a decision at step i; at each step, the selected player chooses either to leave its current alliance and join a new one or to stay in the current alliance, as follows:
player i iteratively searches the current partition; a history set records the alliances that participant i joined before, and the search ignores the alliances in this history set; for each candidate alliance, participant i computes the profit of joining it; if the candidate alliance's profit exceeds the current one, the candidate is recorded as the best alliance; the process continues until all possible alliances, except those in the history set, have been examined; after the iteration, if the best alliance has been updated, the current alliance is appended to the history set and a new current partition is obtained;
at the end of round t, after all participants have made their decisions, a partition is obtained and recorded; in round t + 1, if the partition is unchanged, no player can deviate from its current alliance for better profit, and Nash stability is reached.
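One decision step of the round described above can be sketched as follows (illustrative; the function name, partition representation, and profit interface are assumptions, not the claimed algorithm itself):

```python
def decision_step(i, partition, profit, history):
    """One step of a coalition-formation round: player i evaluates leaving
    its current alliance for every other alliance in the partition (or a
    singleton), skipping alliances in its history set, and moves only if
    its allocated profit strictly improves."""
    current = next(c for c in partition if i in c)
    best, best_profit = current, profit(i, current)
    for target in [c for c in partition if c != current] + [frozenset()]:
        candidate = target | {i}
        if candidate in history:          # ignore alliances visited before
            continue
        if profit(i, candidate) > best_profit:
            best, best_profit = candidate, profit(i, candidate)
    if best == current:
        return partition, False           # stay: no profitable deviation
    history.add(current)                  # record the alliance being left
    new_partition = [c - {i} for c in partition]
    new_partition = [c for c in new_partition if c and c != best - {i}]
    new_partition.append(best)
    return new_partition, True

# Assumed toy profit: each player's share grows with alliance size, so
# singletons merge. Player 1 joins player 2's alliance.
part = [frozenset({1}), frozenset({2}), frozenset({3})]
new_part, moved = decision_step(1, part, lambda i, c: len(c), set())
assert moved and frozenset({1, 2}) in new_part
```

Running such steps for all players round after round until the partition stops changing yields the Nash-stable partition described above.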
Experiment of
(one) Experimental parameters
Within a given area, we consider 6 Cache Infrastructure Providers (CIPs): 3 fixed cache infrastructure providers and 3 mobile cache infrastructure providers, with each cache infrastructure provider offering different types of content. We focus only on business areas, which are more crowded and carry more traffic during the day than at night. The 3 fixed cache infrastructure providers own 3, 2, and 3 base stations in the area, respectively. The 3 mobile caching infrastructure providers own 150, 200, and 150 caching vehicles, respectively. We assume that all cached contents have the same size, since different contents can be divided into blocks of the same size. For simplicity, we assume that all base stations and caching vehicles have the same configuration, and we set the capacities so that a base station and a caching vehicle can store 70 and 10 contents simultaneously within one time period, respectively. The cost per cache unit of a base station is set to $0.01. For a caching vehicle, due to its low capacity and mobility, its caching cost is higher than that of a base station, and we set it to $0.02.
The profits obtained when a request for content j is served by a base station and by a caching vehicle are identical across all base stations and vehicles; we set them to $0.03 and $0.04 per request, respectively. When a request cannot be processed at a cache node close to the user, we set the penalty to $0.08, which is higher than any profit or caching cost. The penalty indicates that, in order to reduce caching costs, a caching infrastructure provider prefers to place content on cache nodes and serve as many requests as possible rather than deny service, which is consistent with the reality that caching infrastructure providers should guarantee cache hit rates to increase the profit of Content Providers (CPs). For each base station, the processing time of each request is set to 0.005 seconds. Compared with a base station, the processing capacity of a caching vehicle is weaker, so its processing time is set to 0.015 seconds. To ensure satisfactory quality of service, the delay threshold is set to 0.02 seconds.
To reflect reality, we used the Uber pickups dataset and the T-drive dataset from ModelWhale to simulate user requests and traffic trajectories. The Uber dataset contains ride-hailing records, including the time and location of requests from users in New York, USA. The T-drive dataset records the one-week driving trajectories of 10,000 taxis in Beijing. These data show the relationship between user requests and traffic during the day. After extracting the data, we plotted fig. 3, which shows the fluctuation of the number of user requests and the number of caching vehicles within a base station's coverage in the area. Fig. 3 illustrates that traffic varies with user requests in a business area over the course of a day. This is a key prerequisite for ensuring that caching with vehicles is a promising approach to satisfy dynamic user requests. Based on the data from that day, we performed a series of experiments to test the performance of the alliance.
To evaluate the federation intuitively, we set two different policies as baselines in addition to the federation policy:
No-cooperation policy. No-cooperation means there is no cooperation among the caching infrastructure providers in the area, i.e. every base station handles all of its requests locally. Redundant requests that a base station cannot handle can only be offloaded to the macrocell. When requests significantly exceed a base station's processing capability, the large number of requests offloaded to the macrocell incurs a large penalty. Mobile caching infrastructure providers make no profit in this case, since a mobile buffer vehicle cannot provide service without cooperating with a base station.
BS-only policy. BS-only represents a strategy in which all fixed caching infrastructure providers form a federation and cooperate with each other; this is a commonly used cooperation scheme at present. The federation consists only of base stations and does not include buffer vehicles. Although cooperating base stations can share different contents, due to the limited transmission capability of base stations, this scheme still cannot meet demand when a large number of requests arrive.
(II) Performance comparison
To run our experiments, we developed a simulation program in Python to solve the optimization problem in section four. Fig. 4 shows the performance of the coalition formation algorithm over different time periods of a day. We compare the total profit of all caching infrastructure providers in the three cases. Specifically, we divide a day into 24 time periods, corresponding to 24 hours. In each time slot, the caching infrastructure providers form a federation according to the request arrival rate of each base station and the number of buffer vehicles in the region, and decide how to cache content and schedule requests. As can be seen from fig. 4, the no-cooperation policy always stays at a low profit level. Due to the limited buffer capacity of a single base station, even in a time slot with a low request rate, a base station working alone tends to miss most of the requested content. The BS-only policy performs better when requests are few because the base stations can share content, but during business hours, when demand increases dramatically, its overall profit tends to decrease. In this case, the main bottleneck is the processing capability of the base stations: when only cooperation between base stations is considered, the only way to improve the cache hit rate is to add base stations or other caching infrastructure to the area. The federation policy performs much better than the other two policies during most of the day. Its advantage over the BS-only policy is small in the first few hours, because the request arrival rate is low and can be handled by the base stations alone. When the request rate reaches a certain level after 7 o'clock, the buffer vehicles play an important role in sharing the base station load and improving the overall profit of the federation.
(III) Feasibility of the federation
Although we have shown that forming a federation works well to increase the overall profit of the caching infrastructure providers, we must make sure that each caching infrastructure provider benefits from it in order to guarantee the feasibility of the federation. Fig. 5(a)(b)(c) shows the profit of the fixed caching infrastructure providers in the three cases, as well as the profit of the mobile caching infrastructure providers under the dynamic federation policy. It illustrates that the increment of the federation profit is not exactly equal to the profit obtained by the mobile caching infrastructure providers. In other words, the participation of the mobile caching infrastructure providers not only brings them income but also increases the profit of the fixed caching infrastructure providers. Mobile caching infrastructure providers are of great help to fixed ones when the request volume reaches a high level and the base stations cannot handle the requests well by themselves. For a mobile caching infrastructure provider, buffer vehicles need to cooperate with base stations to provide service. As can be seen from fig. 6, mobile caching infrastructure providers are willing to join the federation when they can profit from it. They may remain independent when the request rate is low and the base stations do not need additional caching nodes to share the load. When the number of requests exceeds a certain level, the mobile caching infrastructure providers join the federation, increase its profit, and thereby obtain a share of that profit.
Based on the above results, we find it feasible to increase the profit of the caching infrastructure providers by means of federation. At the same time, a dynamic federation based on the current situation has a crucial advantage over a permanent grand federation of all caching infrastructure providers. Some buffer vehicles owned by mobile caching infrastructure providers are idle in certain time periods, which means caching resources would be wasted if a grand federation were formed. In our experiments, some mobile caching infrastructure providers do not join the federation due to the limited simulation scale; if the scenario were larger, with more participants, an idle caching infrastructure provider could join another federation and contribute to its total profit. A useless caching infrastructure provider staying in the grand federation for a long time not only wastes resources but also harms the other members of the federation.
(IV) Influence of related parameters
The impact of user request heterogeneity. An imbalance in user requests aggravates the situation where most base stations are either overloaded or idle, which compromises the overall profit. We therefore study the impact of the heterogeneity of user requests on the overall profit by changing the standard deviation σ of the user requests across different base stations. As can be seen from fig. 7, the overall profit under the no-cooperation and BS-only policies is strongly affected by an increased standard deviation: when σ increases from 12 to 15, the profit drops by about 37% and 31% under no cooperation and BS-only, respectively. The dynamic federation adapts much better to the imbalance of user requests.
The impact of the mobile profit ratio. We define the profit ratio as the ratio of the profit gained by serving a request at a buffer vehicle to that gained at a base station. As shown in fig. 8, as the profit ratio increases, the overall profit of the federation increases. This means that if the revenue obtained from buffer vehicles serving requests reaches a certain level, the federation can obtain considerable profit. In other words, once the quality of service of the buffer vehicles improves, even at a higher cost, the buffer vehicles can make a significant contribution to the caching federation.
(V) Efficiency of the algorithm
In this section, we demonstrate the efficiency of our algorithm through simulation experiments. We vary the size of the network (the number of cache nodes) and record the running time of the algorithm in fig. 9. As shown in fig. 9, the running time increases linearly with the number of cache nodes, which shows that our algorithm is efficient. During the experiment, we keep the number of caching infrastructure providers unchanged (3 fixed and 3 mobile caching infrastructure providers) and change only the number of cache nodes each provider owns. When adjusting the number of buffer vehicles, the number of base stations is also adjusted proportionally (for example, 3 base stations correspond to 97 vehicles, and 8 base stations correspond to 292 vehicles). Adding vehicles without considering the base stations is meaningless: for example, 100 and 200 buffer vehicles make no difference to a base station if 100 already satisfy its users' demands. Due to the nature of the Shapley value, computing the profit allocation requires enumerating orderings of the members, whose number grows factorially, so the number of members has a significant impact on the running time. Nevertheless, an increase in the number of cache nodes does not cause a sharp rise in running time as long as the number of caching infrastructure providers stays at a low level. In fact, in practical settings the number of caching infrastructure providers is not large; for example, the fixed caching infrastructure providers in China are mainly China Telecom, China Mobile, and China Unicom. Therefore, our algorithm adapts well to real scenarios.
Claims (10)
1. An edge alliance game method for content-cooperative caching, characterized by comprising the following steps:
A. establishing a cache infrastructure provider alliance system model;
B. setting a service position of the alliance system model;
C. setting a user request scheduling strategy;
D. establishing a service request model;
E. calculating the income of the alliance system;
F. optimizing a content caching strategy and a workload scheduling strategy to maximize the profit of the alliance system;
G. setting a profit allocation scheme of the alliance system;
H. setting an alliance game scheme.
2. The edge alliance game method for content-cooperative caching of claim 1, wherein in step A:
the cache infrastructure providers comprise fixed base stations and mobile vehicles with storage units; different cache infrastructure providers form an alliance, provide caching services for content providers, and reasonably distribute the revenue within the alliance; in each specific area, which cache infrastructure providers cooperate and how many storage resources are leased determine the profit obtained; the whole process of alliance formation is dynamic, and in each time slot each cache infrastructure provider decides whether to cooperate with other cache infrastructure providers depending on the state of the system;
the set of cache infrastructure providers is denoted N, where F represents the set of fixed cache infrastructure providers owning base stations and M represents the set of mobile cache infrastructure providers owning mobile buffer vehicles; the alliance system model includes the set of base stations and the set of buffer vehicles, which are collectively referred to as edge nodes; the set of all cache nodes is denoted E; for a fixed provider f in F and a mobile provider m in M, B_f and V_m are respectively defined as the sets of base stations and buffer vehicles belonging to them; E_i represents the set of edge nodes belonging to cache infrastructure provider i, and V_k represents the set of buffer vehicles within the coverage of base station k;
when a request arrives at a base station, if the base station caches the content required by the request, the base station directly sends the content to the user; otherwise, a buffer vehicle cooperating with the base station within its coverage can act as an assisting node to process the request; meanwhile, base stations can cooperate with each other through high-speed links;
the base stations are static edge nodes whose number and positions are fixed; the number of available buffer vehicles within the coverage of each base station is determined in each time slot: according to the trajectory data provided by a mobile cache infrastructure provider, the base station takes the minimum number of that provider's buffer vehicles present in its area during the time period as the number of available vehicles;
all base stations in the alliance continuously collect information about arriving requests in each time period, including the arrival rate and the proportion of each content type; the status of the buffer vehicles, including the number of available vehicles and which cache infrastructure provider they belong to, is collected by the mobile cache infrastructure providers in the alliance and sent to the base stations.
3. The edge alliance game method for content-cooperative caching of claim 2, wherein in step B:
the fixed content library comprises the set C of contents; the request probability and size of content j are p_j and s_j respectively, and the probability that content j is requested follows a Zipf-like distribution,

p_j = j^(−γ) / Σ_{i∈C} i^(−γ),

where the skewness parameter γ takes a value between 0.5 and 1, a larger γ making the distribution more peaked on popular contents; for base station k, the user request arrival rate is λ_k, and the arrival rate of requests for content j at base station k is λ_{kj} = p_j λ_k; c_i denotes the cache capacity of cache node i; since a base station or a buffer vehicle cannot cache all contents at the same time, each cache node selectively caches contents according to user requests; the binary variable x_{ij} indicates the placement state of content, with x_{ij} = 1 indicating that content j is placed at cache node i; no cache node may cache contents beyond its storage capacity, i.e.

Σ_{j∈C} s_j x_{ij} ≤ c_i.
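The Zipf-style popularity model and the storage-capacity constraint above can be sketched in Python as follows. This is a minimal illustration only; the function names and sample values are ours, not from the patent:

```python
def zipf_popularity(num_contents, gamma):
    """Zipf-like request probabilities: p_j proportional to j ** (-gamma).

    gamma (assumed in [0.5, 1] as in the claim) controls the skewness:
    a larger gamma concentrates more requests on the most popular contents.
    """
    weights = [j ** (-gamma) for j in range(1, num_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def placement_is_feasible(x, sizes, capacity):
    """True if the contents cached at one node (x[j] == 1) fit its storage capacity."""
    return sum(xi * s for xi, s in zip(x, sizes)) <= capacity
```

For example, with five contents and γ = 0.8, `zipf_popularity(5, 0.8)` returns a probability vector that sums to one and decreases with rank.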
4. The edge alliance game method for content-cooperative caching of claim 3, wherein in step C:
the request scheduling policy is denoted as follows: for base station i and content j, α_{ij} is the ratio of requests for content j processed at the local base station i; for an assisting base station k, β_{ikj} is the proportion of requests for content j offloaded from base station i to base station k; η_{ij} is the proportion of the load allocated to the macrocell; for a buffer vehicle v, δ_{ivj} is the proportion of requests for content j distributed from base station i to buffer vehicle v; a base station can distribute its requests only to cooperating buffer vehicles within its coverage, so δ_{ivj} = 0 when v is outside the coverage of base station i; for each base station i, a request for content j can be processed locally, at an assisting base station, at a buffer vehicle in range, or at the macrocell, so the load ratios satisfy:

α_{ij} + Σ_k β_{ikj} + Σ_v δ_{ivj} + η_{ij} = 1;
r_{ikj} denotes the profit gained by providing the content j requested at base station i from the cache of base station k, and r_{ivj} denotes the profit gained by providing it from the buffer of buffer vehicle v; serving a request through the macrocell typically introduces an intolerable transmission delay for delay-critical services, so a penalty q_j is charged for serving content j through the macrocell.
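The scheduling constraints of this claim can be sanity-checked with a small Python sketch. The function name and parameter layout are our own illustration; the real strategy also couples these ratios with the optimization in step F:

```python
def schedule_is_valid(local, to_bs, to_vehicle, to_macro, cached, in_range, tol=1e-9):
    """Check one request-scheduling decision for (base station i, content j).

    local      : fraction served at the local base station
    to_bs      : {assisting base station: fraction offloaded to it}
    to_vehicle : {buffer vehicle: fraction offloaded to it}
    to_macro   : fraction offloaded to the macrocell
    cached     : set of nodes that cache content j
    in_range   : set of buffer vehicles within the base station's coverage
    """
    total = local + sum(to_bs.values()) + sum(to_vehicle.values()) + to_macro
    if abs(total - 1.0) > tol:
        return False                       # load ratios must sum to one
    for k, frac in to_bs.items():
        if frac > 0 and k not in cached:
            return False                   # only nodes caching j may serve it
    for v, frac in to_vehicle.items():
        if frac > 0 and (v not in cached or v not in in_range):
            return False                   # vehicles must also be in coverage
    return True
```

A decision that splits a request 50/20/20/10 across the local base station, an assisting base station, an in-range vehicle, and the macrocell passes the check only if the assisting nodes actually cache the content.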
5. The edge alliance game method for content-cooperative caching of claim 4, wherein in step D:
requests are regarded as customers, and each cache node serves as a service desk providing service to the customers; requests are assigned to base stations, buffer vehicles, or the macrocell according to the content placement of all cache nodes; when a request is scheduled to another base station, the content obtained from that base station is first transmitted to the requesting base station through a high-capacity link and then sent to the user by the requesting base station, which can be regarded as a two-stage process; because the content is transmitted through the transmission link of the local base station in the second stage, the communication resources of the local base station are occupied, so serving a request at the local base station or at another base station is both regarded as an execution process of the local base station; λ_i and λ_v denote the request arrival rates at base station i and buffer vehicle v; the content size is set to follow an exponential distribution with a given mean, and the corresponding service times at base station i and buffer vehicle v follow exponential distributions with service rates μ_i and μ_v respectively;
each base station and each buffer vehicle is set as an M/M/1 queue without request priorities or waiting-queue limits; the average latency of each request at base station i is

T_i = 1 / (μ_i − λ_i),

and the average latency of each request at a buffer vehicle v within the range of base station i is

T_v = 1 / (μ_v − λ_v),

where the arrival and service rates satisfy λ_i < μ_i and λ_v < μ_v; to ensure quality of service, a delay threshold T_max is defined, and the transmission delay constraint is defined as T_i ≤ T_max and T_v ≤ T_max.
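The M/M/1 latency formula and the delay-threshold check can be illustrated directly (function names are ours; the 0.02 s threshold matches the experiments in the description):

```python
def mm1_delay(arrival_rate, service_rate):
    """Average response time of an M/M/1 queue: T = 1 / (mu - lambda).

    The queue is stable only when the arrival rate is below the service rate.
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

def meets_delay_threshold(arrival_rate, service_rate, threshold):
    """Check the QoS constraint T <= delay threshold."""
    return mm1_delay(arrival_rate, service_rate) <= threshold
```

For instance, a base station with a 0.005 s processing time corresponds to a service rate of 200 requests/s; at 100 requests/s it stays within a 0.02 s threshold, while at 160 requests/s it does not.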
7. The edge alliance game method for content-cooperative caching of claim 6, wherein in step F the optimization problem is expressed as follows:
the decision variables are the content placement variables and the request scheduling ratios; the constraints are that the content cached at a node cannot exceed its capacity, that the requests arriving at a base station are fully dispatched to the local base station, other base stations, buffer vehicles within coverage, or the macrocell, that the average transmission delay of requests at each cache node does not exceed the delay threshold, that a request for content j cannot be dispatched to a cache node that does not cache content j, and that a base station can only redirect requests to buffer vehicles within its coverage.
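As a toy illustration of the structure of this optimization problem, the sketch below brute-forces the content placement for a single cache node, where cached requests earn a per-request profit and uncached requests pay the macrocell penalty. It deliberately ignores scheduling and multi-node coupling, which the actual problem includes; all names and sample values are our own:

```python
from itertools import product

def best_placement(pop, profit_hit, penalty, capacity, sizes):
    """Brute-force the placement for one cache node (toy sketch).

    pop        : request probability of each content
    profit_hit : profit per served request for cached content
    penalty    : penalty per request offloaded to the macrocell
    capacity   : node storage capacity, sizes: content sizes
    """
    best, best_profit = None, float("-inf")
    for x in product([0, 1], repeat=len(pop)):
        if sum(xi * s for xi, s in zip(x, sizes)) > capacity:
            continue                        # violates the storage constraint
        profit = sum(p * (profit_hit if xi else -penalty)
                     for p, xi in zip(pop, x))
        if profit > best_profit:
            best, best_profit = x, profit
    return best, best_profit
```

With the experiment's $0.03 profit and $0.08 penalty, a node with room for two of three contents caches the two most popular ones.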
8. The edge alliance game method for content-cooperative caching of claim 7, wherein in step G:
let the marginal contribution of the i-th cache infrastructure provider to a coalition S be u(S ∪ {i}) − u(S), where u is the coalition value function; the revenue φ_i allocated to the i-th cache infrastructure provider from the alliance N is

φ_i = (1 / |N|!) Σ_{π∈Π} [ u(S_π(i) ∪ {i}) − u(S_π(i)) ],

where Π is the set of all permutations of N, and S_π(i) is the set of players arranged before player i in permutation π;
setting definition cache nodeThe cost per unit of cache isCaching cost of the ith caching infrastructure providerIs composed of;
9. The edge alliance game method for content-cooperative caching of claim 8, wherein in step H:
let ≻_i denote the preference function of player i: player i prefers alliance S_1 to alliance S_2, written S_1 ≻_i S_2, if and only if the utility allocated to player i in S_1 exceeds that allocated in S_2; the preference function is thus equal to the utility each player is allocated in its alliance;
a partition is Nash-stable when no participant has an incentive to unilaterally change its alliance to another alliance in the partition; by combining the utility values with the alliance partition, the alliance formation game is defined on the set N; the utility value of an alliance is independent of the other alliances and satisfies u(S) = V*(S), where V*(S) is the optimal value of the optimization problem solved within alliance S;
the profit sharing method satisfies the following condition,
10. The edge alliance game method for content-cooperative caching of claim 8, wherein in step H:
the decision process at a particular time t consists of multiple rounds, each comprising N steps, so that every participant can make a decision; in each round, a random sequence is generated whose i-th element indicates the participant selected to make the i-th decision; at each step, the selected player i chooses either to leave its current alliance and join a new one, or to stay in the current alliance, as follows:
player i iterates over the current partition, using a history set to record the alliances that player i joined before; the search ignores alliances in the history set; for each candidate alliance, player i computes the profit it would obtain by joining; if the new alliance's revenue exceeds that of the current one, the new alliance is recorded as the best alliance; this process continues until all possible alliances, except those in the history set, have been examined; after the iteration, if the best alliance has been updated, the current alliance is appended to the history set and a new current partition is obtained.
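One decision step of this coalition-formation round can be sketched as a hedonic-game style search. The helper name and the payoff-function interface are assumptions for illustration:

```python
def best_switch(player, partition, history, payoff):
    """One decision step of the coalition-formation round (sketch).

    partition: list of sets (the current coalition structure)
    history  : set of frozensets the player already joined (skipped in the search)
    payoff   : payoff(player, coalition_as_frozenset) -> player's allocated profit
    Returns the coalition (including the player) the player prefers to switch to,
    or None to stay in the current coalition.
    """
    current = next(c for c in partition if player in c)
    best, best_pay = None, payoff(player, frozenset(current))
    for coalition in partition:
        if player in coalition:
            continue
        candidate = frozenset(coalition | {player})
        if candidate in history:
            continue                       # ignore coalitions visited before
        pay = payoff(player, candidate)
        if pay > best_pay:
            best, best_pay = candidate, pay
    return best
```

A player alone in a singleton coalition switches into a coalition that pays it more, but stays put once that coalition is already in its history set — which is what drives the round toward a Nash-stable partition.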
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110349244.0A CN112804361B (en) | 2021-03-31 | 2021-03-31 | Edge alliance game method for content cooperation cache |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112804361A true CN112804361A (en) | 2021-05-14 |
CN112804361B CN112804361B (en) | 2021-07-02 |
Family
ID=75816112
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110349244.0A Active CN112804361B (en) | 2021-03-31 | 2021-03-31 | Edge alliance game method for content cooperation cache |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112804361B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113037876A (en) * | 2021-05-25 | 2021-06-25 | 中国人民解放军国防科技大学 | Cooperative game-based cloud downlink task edge node resource allocation method |
CN113784320A (en) * | 2021-08-23 | 2021-12-10 | 华中科技大学 | Alliance dividing and adjusting method based on multiple relays and multiple relay transmission system |
CN114980212A (en) * | 2022-04-29 | 2022-08-30 | 中移互联网有限公司 | Edge caching method and device, electronic equipment and readable storage medium |
CN116208669A (en) * | 2023-04-28 | 2023-06-02 | 湖南大学 | Intelligent lamp pole-based vehicle-mounted heterogeneous network collaborative task unloading method and system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105049326A (en) * | 2015-06-19 | 2015-11-11 | 清华大学深圳研究生院 | Social content caching method in edge network area |
US20170149922A1 (en) * | 2014-07-01 | 2017-05-25 | Cisco Technology Inc. | Cdn scale down
CN107483630A (en) * | 2017-09-19 | 2017-12-15 | 北京工业大学 | A kind of construction method for combining content distribution mechanism with CP based on the ISP of edge cache |
CN110062037A (en) * | 2019-04-08 | 2019-07-26 | 北京工业大学 | Content distribution method and device |
US20190356498A1 (en) * | 2018-05-17 | 2019-11-21 | At&T Intellectual Property I, L.P. | System and method for optimizing revenue through bandwidth utilization management |
CN111815367A (en) * | 2020-07-22 | 2020-10-23 | 北京工业大学 | Network profit optimization allocation mechanism construction method based on edge cache |
Non-Patent Citations (2)
Title |
---|
XIAOFENG CAO et al.: "Edge Federation: Towards an Integrated Service Provisioning Model", IEEE/ACM Transactions on Networking, Volume 28, Issue 3, June 2020 *
GUO Jianyu et al.: "Non-cooperative game-based optimized caching strategy for ICN", Telecommunication Engineering *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113037876A (en) * | 2021-05-25 | 2021-06-25 | 中国人民解放军国防科技大学 | Cooperative game-based cloud downlink task edge node resource allocation method |
CN113784320A (en) * | 2021-08-23 | 2021-12-10 | 华中科技大学 | Alliance dividing and adjusting method based on multiple relays and multiple relay transmission system |
CN113784320B (en) * | 2021-08-23 | 2023-07-25 | 华中科技大学 | Multi-relay-based alliance dividing and adjusting method and multi-relay transmission system |
CN114980212A (en) * | 2022-04-29 | 2022-08-30 | 中移互联网有限公司 | Edge caching method and device, electronic equipment and readable storage medium |
CN114980212B (en) * | 2022-04-29 | 2023-11-21 | 中移互联网有限公司 | Edge caching method and device, electronic equipment and readable storage medium |
CN116208669A (en) * | 2023-04-28 | 2023-06-02 | 湖南大学 | Intelligent lamp pole-based vehicle-mounted heterogeneous network collaborative task unloading method and system |
CN116208669B (en) * | 2023-04-28 | 2023-06-30 | 湖南大学 | Intelligent lamp pole-based vehicle-mounted heterogeneous network collaborative task unloading method and system |
Also Published As
Publication number | Publication date |
---|---|
CN112804361B (en) | 2021-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112804361B (en) | Edge alliance game method for content cooperation cache | |
CN109379727B (en) | MEC-based task distributed unloading and cooperative execution scheme in Internet of vehicles | |
Xu et al. | Collaborate or separate? Distributed service caching in mobile edge clouds | |
CN111405527B (en) | Vehicle-mounted edge computing method, device and system based on volunteer cooperative processing | |
Yu et al. | Cooperative resource management in cloud-enabled vehicular networks | |
Samanta et al. | Latency-oblivious distributed task scheduling for mobile edge computing | |
CN111866601B (en) | Cooperative game-based video code rate decision method in mobile marginal scene | |
Jie et al. | Online task scheduling for edge computing based on repeated Stackelberg game | |
Wu et al. | A profit-aware coalition game for cooperative content caching at the network edge | |
Krolikowski et al. | A decomposition framework for optimal edge-cache leasing | |
Zamzam et al. | Game theory for computation offloading and resource allocation in edge computing: A survey | |
Lungaro et al. | Predictive and context-aware multimedia content delivery for future cellular networks | |
Ma et al. | Reinforcement learning based task offloading and take-back in vehicle platoon networks | |
Mishra et al. | A collaborative computation and offloading for compute-intensive and latency-sensitive dependency-aware tasks in dew-enabled vehicular fog computing: A federated deep Q-learning approach | |
Amer et al. | An optimized collaborative scheduling algorithm for prioritized tasks with shared resources in mobile-edge and cloud computing systems | |
Dong et al. | Quantum particle swarm optimization for task offloading in mobile edge computing | |
Wang et al. | An adaptive QoS management framework for VoD cloud service centers | |
Song et al. | Joint bandwidth allocation and task offloading in multi-access edge computing | |
Zhang et al. | Distributed pricing and bandwidth allocation in crowdsourced wireless community networks | |
Nguyen et al. | EdgePV: collaborative edge computing framework for task offloading | |
CN114466023B (en) | Computing service dynamic pricing method and system for large-scale edge computing system | |
Peng et al. | A task assignment scheme for parked-vehicle assisted edge computing in iov | |
Fang et al. | Edge cache-based isp-cp collaboration scheme for content delivery services | |
Cui et al. | GreenLoading: Using the citizens band radio for energy-efficient offloading of shared interests | |
Sterz et al. | Multi-stakeholder service placement via iterative bargaining with incomplete information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||