CN112804361A - Edge alliance game method for content cooperation cache - Google Patents


Info

Publication number
CN112804361A
Authority
CN
China
Prior art keywords
base station
cache
content
request
buffer
Prior art date
Legal status
Granted
Application number
CN202110349244.0A
Other languages
Chinese (zh)
Other versions
CN112804361B (en)
Inventor
郭得科
武睿
廖汉龙
唐国明
罗来龙
康文杰
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN202110349244.0A
Publication of CN112804361A
Application granted
Publication of CN112804361B
Active legal status (current)
Anticipated expiration legal status



Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching

Abstract

The invention discloses an edge coalition game method for cooperative content caching, which comprises the following steps: A. establishing a cache infrastructure provider coalition system model; B. setting the service positions of the coalition system model; C. setting a user request scheduling strategy; D. establishing a service request model; E. calculating the revenue of the coalition system; F. optimizing the content caching strategy and the workload scheduling strategy to maximize the profit of the coalition system; G. setting a profit allocation scheme for the coalition system; H. setting a coalition game scheme. The invention overcomes the defects of the prior art and improves the profit of individual participants in the cache service under a stable condition.

Description

Edge coalition game method for cooperative content caching
Technical Field
The invention relates to a network edge cache arrangement method, and in particular to an edge coalition game method for cooperative content caching.
Background
With the ever-increasing number of mobile devices, the internet is undergoing explosive growth in mobile data traffic. A report by Cisco predicts that global mobile data will increase 7-fold from 2017 to 2022. Beyond the sheer traffic volume, transmission delay also matters. Especially in emerging delay-sensitive applications and services, such as augmented reality, autonomous driving, and interactive online maps, the magnitude of traffic delay directly determines the quality of service (QoS). Efforts have therefore been made to reduce the communication burden on the backbone network and to shorten the response time of delay-sensitive traffic. Deploying caches at the edge of the network (i.e., near the end users) has proven to be an effective and intelligent strategy.
The current mobile edge caching scheme mainly comprises four categories of macro base station caching, small base station caching, device-to-device communication network caching and mobile device local caching. The content cached in these infrastructures or devices can be obtained by all users directly from the network edge, thus avoiding the communication burden of the backbone network. In addition to the above four types, a new scheme for edge caching on vehicles with abundant IT resources has been recently proposed, which is a promising approach. By reusing vehicles as cache nodes, service providers may save the tremendous expense of building cache infrastructure (e.g., base stations) while ensuring desirable cache hit rates.
While caching content at the network edge may reduce communication latency, balancing cache usage presents challenges in a hybrid edge caching scheme. On the one hand, a fixed cache node at a base station may have a large storage capacity (in the form of an edge cloud), but its deployment cost is high and its service range (the capability of serving dynamic demands) is limited. On the other hand, a mobile cache node on a vehicle or mobile device has lower cost and can flexibly adapt to changes in user demand, but it cannot cache much data because of its limited storage capacity. To resolve this tension, collaborative content caching using a hybrid caching scheme has been proposed. Different cache nodes at the network edge cooperate with one another, which can reduce network cost and improve the overall cache utilization rate. However, several key challenges that limit collaborative content caching at the network edge remain to be solved.
Cache resources are spatially unbalanced. Previous collaboration usually adopts a rental mode: Content Providers (CPs) and Caching Infrastructure Providers (CIPs) sign bilateral contracts, and edge resources are rented for content caching. This mode ignores interactions between CIPs (such as resource sharing and content loading), resulting in spatial imbalance and resource waste during the caching process.
User requests are unbalanced in time. Over time, fluctuations in end-user requests lead to mismatches between caching resources and service requirements. Wireless traffic demand typically has a large peak-to-valley ratio throughout the day. Traditional solutions such as building more caching infrastructure to meet peak demand therefore cause significant waste of resources during valley periods.
There is a lack of attractive incentive measures. It is impractical to expect a cache provider to cooperate with others unconditionally, regardless of the profit it receives. In some cases, a "good-sounding" coalition may not benefit all members; for example, some members' profit may be impaired by "idle members" gaining more revenue than they deserve. Thus, cooperation will form only if each member receives sufficient benefit in view of the potential risk of monetary loss (e.g., violating the quality of service negotiated with customers).
Disclosure of Invention
The invention aims to provide an edge coalition game method for cooperative content caching, which overcomes the defects of the prior art and improves the profit of individual participants in the caching service under a stable condition.
The invention comprises the following steps:
A. establishing a cache infrastructure provider coalition system model;
B. setting the service positions of the coalition system model;
C. setting a user request scheduling strategy;
D. establishing a service request model;
E. calculating the revenue of the coalition system;
F. optimizing the content caching strategy and the workload scheduling strategy to maximize the profit of the coalition system;
G. setting a profit allocation scheme for the coalition system;
H. setting a coalition game scheme.
Preferably, in step A,
the cache infrastructure providers (CIPs) comprise fixed base stations and mobile vehicles equipped with storage units. Different CIPs form a coalition, provide cache services to content providers, and distribute the resulting revenue reasonably within the coalition. Which CIPs a provider cooperates with, and how many storage resources are leased in each specific area, determine the profit obtained. The whole coalition-formation process is dynamic: at each time slot, each CIP decides whether to cooperate with other CIPs depending on the state of the system;
denote the set of cache infrastructure providers by N = N_f ∪ N_m, where N_f is the set of fixed CIPs that own base stations and N_m is the set of mobile CIPs that own mobile cache vehicles. The coalition system model includes the set of base stations K and the set of cache vehicles V; base stations and cache vehicles are both referred to as edge nodes, and the set of all cache nodes is M = K ∪ V. For a fixed CIP n in N_f and a mobile CIP n in N_m, define K_n and V_n as the sets of base stations and cache vehicles belonging to them, respectively; M_n denotes the set of edge nodes belonging to CIP n, and V^k denotes the set of cache vehicles within the coverage of base station k;
when a request arrives at a base station, if the base station has cached the requested content, it sends the content directly to the user; otherwise, a cache vehicle cooperating with the base station within its coverage area can act as an assisting node and process the request. Meanwhile, base stations can cooperate with one another over high-speed links;
base stations serve as static edge nodes, with fixed numbers and positions. The number of available cache vehicles within each base station's coverage is determined at each time slot: from the trajectory data provided by each mobile CIP, the base station takes the minimum number of that CIP's cache vehicles present in its area during a given time period as the number of available vehicles;
all base stations in the coalition continuously collect information about arriving requests in each time period, including the arrival rate and the proportion of each request type. The status of the cache vehicles, including the number of available vehicles and which CIP they belong to, is collected by the mobile CIPs in the coalition and sent to the base stations.
Preferably, in step B,
the fixed content library comprises the set of contents F; the probability that content j is requested and its size are p_j and s_j, respectively. Content popularity follows a Zipf-like distribution, so the probability that content j is requested is

p_j = j^(-γ) / Σ_{i=1..|F|} i^(-γ),

where the skew parameter γ takes a value between 0.5 and 1; γ determines the peak of the distribution and reflects its skewness. For base station k, the user arrival rate is λ_k, and the arrival rate of requests for content j at base station k is λ_k p_j. C_i denotes the cache capacity of cache node i; since a base station or cache vehicle cannot cache all contents at the same time, each cache node caches contents selectively according to the users' requests. The binary variable x_{ij} indicates the placement state of the contents: x_{ij} = 1 indicates that content j is placed in cache node i. No cache node may cache more content than its storage capacity:

Σ_j s_j x_{ij} ≤ C_i for every cache node i.
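The Zipf popularity model and the per-node capacity constraint of step B can be sketched as follows; the function names, the library size, and the numeric values are illustrative assumptions, not taken from the patent.

```python
# Sketch of the content-popularity and cache-capacity model of step B.
# zipf_popularity and respects_capacity are illustrative names; the skew
# parameter gamma is the value the patent restricts to the range [0.5, 1].

def zipf_popularity(num_contents, gamma):
    """Request probability p_j proportional to j**(-gamma), normalized."""
    weights = [j ** -gamma for j in range(1, num_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def respects_capacity(placement, sizes, capacity):
    """Binary placement for one cache node: cached contents must fit."""
    return sum(s for x, s in zip(placement, sizes) if x == 1) <= capacity

# A small library of 5 contents with skew 0.8: probabilities sum to one
# and decrease with popularity rank.
p = zipf_popularity(5, 0.8)
```

With a larger gamma the distribution concentrates on the most popular contents, which is what makes selective caching at capacity-limited nodes effective.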
preferably, in step C, the step of,
let y denote the request-scheduling policy. y_{ii}^j is the ratio of requests for content j processed at the local base station i, with 0 ≤ y_{ii}^j ≤ 1 for all i and j. y_{ik}^j is the proportion of requests for content j offloaded from base station i to base station k, and y_{i0}^j is the proportion of that load assigned to the macrocell. y_{iv}^j is the proportion of requests for content j dispatched from base station i to cache vehicle v; a base station can dispatch its requests only to cooperating cache vehicles within its coverage area, so y_{iv}^j = 0 whenever vehicle v lies outside the coverage of base station i. Since every request for content j at base station i is processed locally, at an assisting base station, at a cache vehicle in range, or at the macrocell, the load ratios satisfy:

y_{ii}^j + Σ_{k≠i} y_{ik}^j + Σ_v y_{iv}^j + y_{i0}^j = 1.

r_{ik}^j denotes the profit gained by serving content j requested at base station i from the cache of base station k, and r_{iv}^j the profit gained by serving it from the cache of cache vehicle v. Serving a request through the macrocell typically introduces transmission delays that are intolerable for delay-critical services, so serving content j through the macrocell incurs a fine q_j.
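The feasibility conditions on the scheduling ratios of step C (the shares must sum to one, and no traffic may go to a vehicle outside coverage) can be checked as in this sketch; the argument layout and names are illustrative assumptions, not the patent's notation.

```python
# Feasibility check for the request-scheduling ratios of step C: for a given
# base station and content, the shares served locally, at assisting base
# stations, at in-range cache vehicles, and at the macrocell must sum to 1,
# and no positive share may target a vehicle outside the station's coverage.

def is_feasible(local, to_stations, to_vehicles, to_macro, covered_vehicles):
    if any(share > 0 and v not in covered_vehicles
           for v, share in to_vehicles.items()):
        return False                    # vehicle must lie inside coverage
    total = (local + sum(to_stations.values())
             + sum(to_vehicles.values()) + to_macro)
    return abs(total - 1.0) < 1e-9      # load ratios must sum to one

ok = is_feasible(0.5, {"bs2": 0.2}, {"veh1": 0.2}, 0.1, {"veh1"})
```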
Preferably, in step D,
each request is regarded as a customer, and each cache node serves as a service desk that serves the customers. Requests are assigned to a base station, a cache vehicle, or the macrocell according to the content placement of all cache nodes. When a request is scheduled to another base station, the content obtained from that base station is first transmitted to the requesting base station over a high-capacity link and then sent to the user by the requesting base station, which can be seen as a two-stage process. Because the content is transmitted through the transmission link of the local base station in the second stage, the communication resources of the local base station are occupied; serving a request from the local base station or from another base station is therefore both regarded as an execution process of the local base station;
λ_i and λ_v denote the request arrival rates at base station i and at cache vehicle v, respectively. The content size s_j is exponentially distributed with mean s̄, and the service rates of base station i and cache vehicle v follow exponential distributions with means μ_i and μ_v, respectively. Each base station and cache vehicle is modeled as an M/M/1 queue without request priorities or waiting-queue limits, so the average latency per request at base station i is

T_i = 1 / (μ_i − λ_i),

and likewise the average delay per request at cache vehicle v within the range of base station i is

T_v = 1 / (μ_v − λ_v),

where λ_v aggregates the request streams dispatched to vehicle v. To guarantee quality of service, a delay threshold T_max is defined, and the constraint on transmission delay requires that the average delay of each cache node not exceed T_max.
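Under the stated M/M/1 assumptions, the mean per-request delay and the QoS threshold check of step D can be sketched as follows; the function names and numbers are illustrative, and the closed form 1/(mu − lam) is the textbook M/M/1 sojourn time rather than a formula quoted from the patent's (unrendered) equations.

```python
# Mean per-request delay of a cache node modeled as an M/M/1 queue (step D),
# plus the delay-threshold check used as a QoS constraint.

def mm1_delay(lam, mu):
    """Mean sojourn time of an M/M/1 queue with arrival rate lam < mu."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

def meets_qos(lam, mu, t_max):
    """True if the node's average delay stays within the threshold."""
    return mm1_delay(lam, mu) <= t_max
```

For example, with arrivals at 5 requests/s and service at 10 requests/s the mean delay is 0.2 s, so a 0.25 s threshold is met; at 9 requests/s the delay rises to 1 s and the constraint fails.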
Preferably, in step E, the gain R(S) of the coalition system S is the total profit earned by serving the requests that arrive at the base stations of the coalition members: for each base station and each content, the profits from requests served at the local base station, at assisting base stations, and at in-range cache vehicles, weighted by the corresponding arrival rates and scheduling ratios, are summed. The penalty P(S) paid by the coalition system S for overload conditions is the total fine incurred for the requests redirected to the macrocell, weighted by the corresponding arrival rates and macrocell load ratios.
preferably, in step F, the optimization problem is expressed as,
Figure 602662DEST_PATH_IMAGE062
the decision variable is
Figure 552163DEST_PATH_IMAGE063
And
Figure 969238DEST_PATH_IMAGE064
the limitation condition is that the content of the cache node can not exceed the capacity of the cache node, the requests from the base station are all dispatched to a local base station, other base stations, cache vehicles in the coverage area or a macro unit, the average transmission delay of the request of each cache node does not exceed a delay threshold value, the request applying for the content of j can not be dispatched to a cache node without caching the content of j, and the base station can only redirect the request to the cache vehicles in the coverage area.
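The patent jointly optimizes content placement and request scheduling. As a toy stand-in for the placement half only, the sketch below greedily fills one cache node with the contents of highest popularity per unit size until capacity runs out; it is an illustrative heuristic, not the optimization method of the patent.

```python
# Greedy single-node cache placement: rank contents by popularity density
# (popularity per unit size) and cache them until capacity is exhausted.
# A simplifying assumption standing in for the patent's joint optimization.

def greedy_placement(popularity, sizes, capacity):
    order = sorted(range(len(sizes)),
                   key=lambda j: popularity[j] / sizes[j], reverse=True)
    cached, used = set(), 0.0
    for j in order:
        if used + sizes[j] <= capacity:
            cached.add(j)
            used += sizes[j]
    return cached
```

With popularities [0.5, 0.3, 0.2], sizes [1, 1, 3], and capacity 2, the two small popular contents are cached and requests for the large cold one fall through to other nodes or the macrocell.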
Preferably, in step G,
let the marginal contribution of the i-th cache infrastructure provider to a coalition S be U(S ∪ {i}) − U(S), where U is the utility function of a coalition. The revenue allocated to the i-th cache infrastructure provider from coalition S is its Shapley value: the average, over all orderings π of the members of S, of the marginal contribution U(P_i^π ∪ {i}) − U(P_i^π), where P_i^π is the set of players that precede i in the ordering π.
Let the cost of each unit of cache at cache node k be c_k. The caching cost of the i-th cache infrastructure provider is the sum, over its own cache nodes, of the unit cache cost multiplied by the cache capacity of the node. The net profit of the i-th cache infrastructure provider is then its allocated revenue minus its caching cost.
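The Shapley allocation of step G can be computed exactly by enumerating orderings, as in this sketch; the two-player characteristic function u below is a toy example, not the patent's coalition utility.

```python
# Exact Shapley allocation matching the rule of step G: a player's payoff
# is its marginal contribution to the set of players preceding it,
# averaged over all orderings of the coalition.
from itertools import permutations

def shapley(players, utility):
    payoff = {p: 0.0 for p in players}
    count = 0
    for order in permutations(players):
        count += 1
        before = frozenset()
        for p in order:
            payoff[p] += utility(before | {p}) - utility(before)
            before = before | {p}
    return {p: v / count for p, v in payoff.items()}

def u(s):
    # toy characteristic function: a fixed and a mobile CIP earn 5 together
    # but less on their own (values are illustrative, not from the patent)
    return {frozenset(): 0.0, frozenset({"f"}): 2.0,
            frozenset({"m"}): 1.0, frozenset({"f", "m"}): 5.0}[frozenset(s)]

phi = shapley(["f", "m"], u)   # {'f': 3.0, 'm': 2.0}
```

The payoffs sum to the grand-coalition value (the efficiency property listed in step H), and the player with the larger marginal contribution receives more.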
Preferably, in step H,
the preference of player i is defined by the utility allocated to it: player i prefers coalition S to coalition S′ if and only if the utility allocated to i in S exceeds the utility allocated to i in S′; that is, the preference function equals the utility each player is allocated in its coalition.
A partition is Nash-stable when no participant has an incentive to unilaterally move from its coalition to another coalition in the partition.
By combining the utility values with the coalition partition, the coalition-formation game is defined on the player set N; the utility value of a coalition is independent of the other coalitions and equals the optimal value of the optimization problem of step F.
The profit-sharing method satisfies the following conditions:
Effectiveness (efficiency): the payoffs allocated to the members of a coalition sum to the coalition's utility.
Symmetry: if adding player i and adding player j to any coalition containing neither yields the same utility, then i and j are allocated the same payoff.
Fairness: for any pair of players i and j, the contribution of j to i equals the contribution of i to j.
Virtual (dummy) player: if i is a virtual player, i.e., adding i to any coalition leaves its utility unchanged, then i is allocated nothing.
Preferably, in step H,
the decision process at a particular time t consists of a number of rounds, each comprising N steps in which every participant can make a decision. In round t, a random sequence of the participants is generated, which determines the order in which the participants are selected to decide. At each step, the selected player i chooses either to leave its current coalition and join a new one, or to stay in its current coalition, as follows.
Player i iteratively traverses the coalitions of the current partition. A history set records the coalitions that participant i joined before, and the traversal skips the coalitions in this history set. For each new candidate coalition, the profit that participant i would obtain by joining it is computed. If the candidate coalition's payoff exceeds that of the current one, the candidate is recorded as the best coalition; the process continues until all possible coalitions, except those in the history set, have been examined. After the iteration, if the best coalition has been updated, the current coalition is appended to the history set and a new current partition is obtained.
At the end of round t, after all participants have made their decisions, the resulting partition is recorded. In round t + 1, if the partition is unchanged, no player can deviate from its current coalition for a better profit, and Nash stability is reached.
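The round-based decision loop of step H can be sketched as follows. Two simplifying assumptions are labeled in the code: payoffs are equal splits of a toy coalition value rather than the patent's Shapley allocation, and no history set is kept.

```python
# Sketch of the coalition-formation loop of step H: in every round, each
# player in random order moves to whichever coalition (or a fresh
# singleton) gives it the highest payoff; a round with no moves means no
# player can profitably deviate, i.e. the partition is Nash-stable.
import random

def form_coalitions(players, value, rounds=50, seed=0):
    rng = random.Random(seed)
    partition = [{p} for p in players]          # start from singletons
    for _ in range(rounds):
        moved = False
        for p in rng.sample(players, len(players)):
            current = next(c for c in partition if p in c)
            pay = lambda c: value(c | {p}) / len(c | {p})  # equal split (toy)
            options = [c for c in partition if c is not current] + [set()]
            best = max(options, key=pay)
            if pay(best) > value(current) / len(current):
                current.remove(p)
                best.add(p)
                partition = [c for c in partition if c]
                if best not in partition:
                    partition.append(best)
                moved = True
        if not moved:                           # Nash-stable partition
            break
    return partition

# Superadditive toy value: merging is always worthwhile, so the grand
# coalition forms and is stable.
v = lambda s: {1: 1.0, 2: 4.0, 3: 9.0}[len(s)]
```

Run on three players with this value function, the loop converges to the single grand coalition, from which no player can improve by leaving.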
The invention has the following beneficial effects: it proposes a framework that combines a wide variety of caching infrastructure providers to increase overall profit. The caching process is described by a model consisting of base stations and cache vehicles owned by caching infrastructure providers, including content placement and request scheduling close to the users. On the basis of this model, a utility function is defined for a coalition consisting of different cache infrastructure providers, mainly considering the benefits obtained from the different request-service modes and the constraints on average delay and cache capacity. By solving the optimization problem, the optimal profit under a specific coalition state can be obtained. Each caching infrastructure provider decides which coalition to join at each time slot based on its own profit. Compared with a collaboration scenario with only fixed cache nodes, the total profit of the coalition and the average personal profit of the participants increase by 53% and 42%, respectively.
Drawings
FIG. 1 is a diagram of an example scenario of an edge computing environment.
Fig. 2 is a schematic diagram of a cache federation architecture consisting of a macrocell, a base station, and a cache vehicle.
FIG. 3 is a graphical illustration of dynamic user request arrival rates and fluctuations in the number of cache vehicles.
FIG. 4 is a graph of total profit for all caching infrastructure providers under three different strategies.
Fig. 5(a) - (c) are plots of profit comparison for a fixed cache infrastructure provider for three different situations.
FIG. 6 is a graph comparing profits of mobile caching infrastructure providers.
FIG. 7 is a diagram illustrating the impact of user request heterogeneity on total revenue.
FIG. 8 is a graphical illustration of the impact of mobile revenue on total revenue.
FIG. 9 is a graph of the number of cache nodes versus algorithm runtime.
Detailed Description
Referring to fig. 1-2, an edge coalition game method for cooperative content caching comprises the following steps:
A. establishing a cache infrastructure provider coalition system model;
B. setting the service positions of the coalition system model;
C. setting a user request scheduling strategy;
D. establishing a service request model;
E. calculating the revenue of the coalition system;
F. optimizing the content caching strategy and the workload scheduling strategy to maximize the profit of the coalition system;
G. setting a profit allocation scheme for the coalition system;
H. setting a coalition game scheme.
In the step A, the step B is carried out,
the cache infrastructure providers comprise fixed base stations and mobile vehicles with storage units, different cache infrastructure providers form a federation, provide cache services for content providers and reasonably distribute revenue among the federations, select which cache infrastructure provider cooperates and how much storage resources are leased in each specific area will determine the profit obtained, the whole process of federation formation is dynamic, and each cache infrastructure provider decides whether to cooperate with other cache infrastructure providers at each time slot depending on the state of the system;
set of cache infrastructure providers as
Figure 620448DEST_PATH_IMAGE001
Wherein
Figure 656537DEST_PATH_IMAGE002
Representing the set of fixed caching infrastructure providers that own the base station,
Figure 751532DEST_PATH_IMAGE003
representing a collection of mobile caching infrastructure providers having mobile caching vehicles, the federation system model including the collection of
Figure 482858DEST_PATH_IMAGE004
And a base station and a cell
Figure 587081DEST_PATH_IMAGE005
The base station and the buffer vehicle are respectively referred to by edge nodes, and the set of all buffer nodes is represented as
Figure 477676DEST_PATH_IMAGE006
For fixation of
Figure 868206DEST_PATH_IMAGE007
And move
Figure 211463DEST_PATH_IMAGE008
Are defined separately
Figure 853797DEST_PATH_IMAGE009
And
Figure 202827DEST_PATH_IMAGE010
to belong to their base stations and to the set of buffer vehicles,
Figure 905203DEST_PATH_IMAGE011
representing a collection of edge nodes belonging to a caching infrastructure provider,
Figure 735756DEST_PATH_IMAGE012
representing a set of buffer vehicles within the coverage of base station k;
when a request arrives at the base station, if the base station caches the content required by the request, the request directly sends the content to the user; otherwise, the cache vehicle cooperating with the base station in the coverage area of the base station can be used as an assisting node to process the request; meanwhile, the base stations can cooperate with each other through a high-speed link;
the base station is used as a static edge node, and the number and the position of the base station are fixed; determining the number of available buffer vehicles in the coverage area of each base station in each time slot, and calculating the minimum buffer vehicle number of each mobile buffer infrastructure provider in a certain time period in the area of the base station as the number of available vehicles by the base station according to the track data provided by the mobile buffer infrastructure provider;
all base stations in the alliance continuously collect information of arrival requests in each time period, wherein the information comprises the arrival rate and the type proportion; the status of the buffer vehicles, including the number of available vehicles and which buffer infrastructure provider they belong to, is obtained by the mobile buffer infrastructure providers in the consortium and sent to the base station.
In the step (B), the step (A),
the fixed content library comprises collections
Figure 40836DEST_PATH_IMAGE013
The probability and magnitude of its request are respectively
Figure 640444DEST_PATH_IMAGE014
And
Figure 513722DEST_PATH_IMAGE101
Figure 831571DEST_PATH_IMAGE016
the probability that content i is requested is,
Figure 18970DEST_PATH_IMAGE017
wherein the content of the first and second substances,
Figure 207506DEST_PATH_IMAGE018
a value of between 0.5 and 1, representing the peak of the distribution,
Figure 251685DEST_PATH_IMAGE018
reflecting different skewness of distribution, for the base station k, the arrival rate of the user is
Figure 932196DEST_PATH_IMAGE019
The arrival rate of the request for the content j is
Figure 454445DEST_PATH_IMAGE020
Figure 28645DEST_PATH_IMAGE021
The cache capacity of the cache node i is shown, a base station or a cache vehicle cannot cache all contents at the same time, the cache node selectively caches the contents according to the request of a user, and binary variables are used
Figure 368360DEST_PATH_IMAGE022
To indicate the state of the placement of the content,
Figure 395222DEST_PATH_IMAGE023
indicating that the content j is placed in the cache nodes i, each cache node cannot cache the content beyond its storage capacity,
Figure 721161DEST_PATH_IMAGE102
in the step C, the step C is carried out,
Figure 759655DEST_PATH_IMAGE103
indicating a request for a scheduling policy that is,
Figure 411216DEST_PATH_IMAGE104
the ratio of the requests representing the content j of the request to be processed at the local base station i, for
Figure 925374DEST_PATH_IMAGE027
And is
Figure 914059DEST_PATH_IMAGE028
Figure 462852DEST_PATH_IMAGE029
The proportion of requests representing the request content j from base station i load to base station k,
Figure 19735DEST_PATH_IMAGE030
indicating the proportion of the load allocated to the macrocell, for
Figure 630976DEST_PATH_IMAGE031
Figure 564297DEST_PATH_IMAGE032
The proportion of requests representing the content of the request j distributed from the base station i to the buffer vehicles v, the base station being able to distribute its request only to cooperating buffer vehicles within its coverage area, for
Figure 826651DEST_PATH_IMAGE031
When is coming into contact with
Figure 554436DEST_PATH_IMAGE033
When satisfied with
Figure 43186DEST_PATH_IMAGE034
It holds that for each base station i, the request for content j can be processed locally, in the assisting base station, in the buffer vehicle in range or in the macro-unit, the load ratio
Figure 124406DEST_PATH_IMAGE035
Satisfies the following conditions:
Figure 382212DEST_PATH_IMAGE036
Figure 546477DEST_PATH_IMAGE037
expressed as the profit gained by providing the content j requested at base station i from the cache of base station k,
Figure 381578DEST_PATH_IMAGE038
in order to provide the profit gained by the content j requested at the base station i from the buffer of the buffer vehicle v, servicing the request with the macro-cell will typically introduce intolerable transmission delays to the delay-critical service,
Figure 391122DEST_PATH_IMAGE039
represents a fine through the macrocell service content j
Figure 769014DEST_PATH_IMAGE040
In the step D, the step of the method is carried out,
regarding the request as a client, and taking each cache node as a service desk to provide service for the client; requests are assigned to base stations, buffer vehicles or macro cells according to the content locations of all buffer nodes; when a request is scheduled to another base station, the content acquired from another base station is first transmitted to the requested base station via a high capacity link and then sent to the user by the requested base station, which can be seen as a two-stage process; because in the second phase, the content is transmitted through the transmission link of the local base station, the communication resource of the local base station is occupied; the local base station or another base station is regarded as the local base stationThe execution process of (1);
Figure 838601DEST_PATH_IMAGE041
and
Figure 177309DEST_PATH_IMAGE042
denote the request arrival rates at base station i and cache vehicle v, respectively. Suppose the content size
Figure 724965DEST_PATH_IMAGE043
has mean
Figure 957364DEST_PATH_IMAGE044
and that the service rates of base station i and cache vehicle v obey mean values
Figure 588065DEST_PATH_IMAGE045
And
Figure 7545DEST_PATH_IMAGE046
Each base station and cache vehicle is modeled as an M/M/1 queue with no request priority and no waiting-queue limit; for base station
Figure 624471DEST_PATH_IMAGE105
the average latency per request is
Figure 586742DEST_PATH_IMAGE048
Let
Figure 263711DEST_PATH_IMAGE049
be the average delay per request of cache vehicle v within range of base station i:
Figure 170487DEST_PATH_IMAGE050
where
Figure 450159DEST_PATH_IMAGE051
To ensure quality of service, define a delay threshold
Figure 657149DEST_PATH_IMAGE052
satisfying
Figure 505020DEST_PATH_IMAGE053
the transmission-delay constraint is then defined as
Figure 164671DEST_PATH_IMAGE054
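Since each cache node is modeled as an M/M/1 queue, its mean sojourn time is the standard 1/(μ − λ), and the delay constraint caps it at the threshold. A minimal sketch (not from the patent), using the experiment section's values as assumed inputs — a 0.005 s service time, i.e., μ = 200 requests/s, and a 0.02 s threshold:

```python
def mm1_delay(arrival_rate, service_rate):
    """Mean sojourn time (waiting + service) of an M/M/1 queue: 1/(mu - lambda).
    Returns None for an unstable queue (lambda >= mu)."""
    if arrival_rate >= service_rate:
        return None
    return 1.0 / (service_rate - arrival_rate)

def within_threshold(arrival_rate, service_rate, t_max):
    """True iff the queue is stable and its mean delay meets the threshold."""
    d = mm1_delay(arrival_rate, service_rate)
    return d is not None and d <= t_max

# base station: 0.005 s per request => mu = 200 req/s; threshold 0.02 s
print(within_threshold(140.0, 200.0, 0.02))  # True: delay = 1/60 s ~ 0.0167 s
print(within_threshold(160.0, 200.0, 0.02))  # False: delay = 1/40 s = 0.025 s
```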
In step E, the alliance system
Figure 727327DEST_PATH_IMAGE055
has revenue
Figure 257666DEST_PATH_IMAGE056
given by
Figure 401071DEST_PATH_IMAGE106
alliance system
Figure 282440DEST_PATH_IMAGE055
pays a penalty for overload
Figure 654646DEST_PATH_IMAGE060
given by
Figure 570650DEST_PATH_IMAGE061
In step F, the optimization problem is formulated as
Figure 229164DEST_PATH_IMAGE107
with decision variables
Figure 722462DEST_PATH_IMAGE063
And
Figure 22994DEST_PATH_IMAGE064
the constraints are that: the content at a cache node cannot exceed its capacity; requests arriving at a base station are all dispatched to the local base station, other base stations, cache vehicles within coverage, or the macro cell; the average transmission delay of requests at each cache node does not exceed the delay threshold; a request for content j cannot be dispatched to a cache node that has not cached content j; and a base station can redirect requests only to cache vehicles within its coverage area.
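For intuition only, the placement side of this optimization can be illustrated on a toy instance (a single base station with assumed request rates, the experiment section's $0.03 hit profit and $0.08 macro-cell penalty) solved by exhaustive search; the real problem also jointly optimizes the scheduling ratios:

```python
from itertools import combinations

# Toy sketch (assumed parameters): one base station with capacity C; a
# request for a cached content earns r_hit, a miss goes to the macro
# cell at penalty p. Pick the placement maximizing profit.
rates = {"a": 50.0, "b": 30.0, "c": 20.0}   # requests/s per content (assumed)
r_hit, penalty, capacity = 0.03, 0.08, 2

def profit(placement):
    """Profit per second of a given set of cached contents."""
    return sum(rates[j] * (r_hit if j in placement else -penalty)
               for j in rates)

# enumerate every feasible placement (capacity constraint) and keep the best
best = max((set(c) for k in range(capacity + 1)
            for c in combinations(rates, k)), key=profit)
print(sorted(best))  # caches the two most-requested contents: ['a', 'b']
```

Unsurprisingly the toy optimum caches the most popular contents; the coalition setting differs because vehicles and neighboring base stations add capacity and the delay constraint limits how load can be shifted.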
In step G,
let the i-th cache infrastructure provider's marginal contribution to coalition
Figure 59083DEST_PATH_IMAGE065
be
Figure 763865DEST_PATH_IMAGE066
where U is the utility function; the revenue allocated to the i-th cache infrastructure provider from coalition
Figure 885404DEST_PATH_IMAGE055
, denoted
Figure 989627DEST_PATH_IMAGE067
, is
Figure 473698DEST_PATH_IMAGE068
where
Figure 5173DEST_PATH_IMAGE069
is the set of all permutations of
Figure 958217DEST_PATH_IMAGE055
with a permutation denoted
Figure 866130DEST_PATH_IMAGE070
; and
Figure 611232DEST_PATH_IMAGE071
is the set of players preceding i in permutation π;
let the cost per cache unit of cache node k be
Figure 172663DEST_PATH_IMAGE072
; the caching cost of the i-th caching infrastructure provider
Figure 3216DEST_PATH_IMAGE073
is
Figure 183662DEST_PATH_IMAGE074
and the net profit of the i-th cache infrastructure provider
Figure 658637DEST_PATH_IMAGE075
is
Figure 266335DEST_PATH_IMAGE076
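The allocation in step G is the Shapley value: each provider receives its marginal contribution averaged over all join orders. A minimal sketch, with an assumed two-player characteristic function `v` standing in for the coalition utility:

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Shapley allocation: average each player's marginal contribution
    v(S | {i}) - v(S) over all join orders (the permutation form of step G)."""
    phi = {i: 0.0 for i in players}
    for order in permutations(players):
        coalition = set()
        for i in order:
            phi[i] += v(coalition | {i}) - v(coalition)
            coalition.add(i)
    n_fact = factorial(len(players))
    return {i: total / n_fact for i, total in phi.items()}

# toy characteristic function (assumed values): cooperation is superadditive
def v(S):
    table = {frozenset(): 0.0, frozenset({1}): 1.0,
             frozenset({2}): 2.0, frozenset({1, 2}): 5.0}
    return table[frozenset(S)]

print(shapley([1, 2], v))  # {1: 2.0, 2: 3.0}
```

Note the factorial growth in the number of players, which is exactly the runtime behavior discussed in the efficiency experiments below.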
In step H,
let the preference function of player i be denoted
Figure 584184DEST_PATH_IMAGE077
. Player i, compared with
Figure 427375DEST_PATH_IMAGE078
, prefers
Figure 147070DEST_PATH_IMAGE055
, if and only if
Figure 191249DEST_PATH_IMAGE079
the preference function equals the utility allocated to each player within the coalition;
a partition is Nash-stable when no participant has an incentive to unilaterally switch from its coalition to another coalition in the partition:
Figure 606181DEST_PATH_IMAGE080
By combining the utility value
Figure 394008DEST_PATH_IMAGE108
with the coalition partition, a coalition formation game is defined on the set N; the utility value of a coalition is independent of other coalitions and satisfies
Figure 968209DEST_PATH_IMAGE109
Figure 42344DEST_PATH_IMAGE083
Is the optimal solution of the optimization problem;
The profit-sharing method satisfies the following conditions.
Efficiency:
Figure 69206DEST_PATH_IMAGE084
Symmetry: if
Figure 270512DEST_PATH_IMAGE085
holds for all
Figure 699219DEST_PATH_IMAGE086
then
Figure 819622DEST_PATH_IMAGE087
Fairness: for any
Figure 723993DEST_PATH_IMAGE088
the contribution of j to i equals the contribution of i to j:
Figure 588043DEST_PATH_IMAGE089
Null player: if i is a null player, i.e., for all
Figure 871257DEST_PATH_IMAGE090
then
Figure 569086DEST_PATH_IMAGE091
In step H,
the decision process at a given time t consists of several rounds, each comprising N steps, so that every participant can make a decision; in round t, a random sequence
Figure 304961DEST_PATH_IMAGE092
is generated, in which
Figure 238282DEST_PATH_IMAGE093
denotes the i-th participant selected to make a decision; at each step, player i chooses either to leave its current coalition and join a new one or to stay in the current coalition, as follows:
player i iteratively searches the current partition
Figure 235057DEST_PATH_IMAGE094
and uses a set
Figure 228420DEST_PATH_IMAGE095
to record the coalitions participant i joined before; the search skips coalitions in the history set
Figure 717171DEST_PATH_IMAGE096
. Participant i then computes, for a candidate new coalition
Figure 792531DEST_PATH_IMAGE097
the profit
Figure 50337DEST_PATH_IMAGE098
of joining it. If the new coalition's profit exceeds the current one's, the new coalition is recorded as the best coalition; this continues until all possible coalitions, except those in the history set, have been examined. After the iteration, if the best coalition has been updated, the current coalition is appended to the history set and a new current partition is obtained
Figure 214602DEST_PATH_IMAGE094
At the end of the t-th round, after all participants have made their decisions, a partition is obtained and recorded
Figure 49703DEST_PATH_IMAGE099
In round t+1, if
Figure 59247DEST_PATH_IMAGE100
then no player can deviate from its current coalition for a better profit, and Nash stability is reached.
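The round structure above can be sketched as a hedonic switch loop: each player, in random order, moves to the coalition that most improves its utility; a history set prevents revisiting abandoned coalitions, and a round with no moves signals Nash stability. The utility function and data layout below are assumptions for illustration, not the patent's profit model:

```python
import random

def one_round(players, partition, utility, history):
    """One round of the coalition-formation game: every player, in random
    order, may leave its coalition for the one that maximizes its utility.
    partition: list of frozensets; history[i]: coalitions player i left.
    Returns True if any player moved (False => Nash-stable partition)."""
    changed = False
    order = list(players)
    random.shuffle(order)
    for i in order:
        current = next(S for S in partition if i in S)
        best, best_u = current, utility(i, current)
        for S in partition + [frozenset()]:  # join an existing coalition or go alone
            if S is current or S in history[i]:
                continue
            cand = frozenset(S | {i})
            if utility(i, cand) > best_u:
                best, best_u = cand, utility(i, cand)
        if best is not current:
            history[i].add(current)            # remember the abandoned coalition
            partition.remove(current)
            if current - {i}:
                partition.append(frozenset(current - {i}))
            joined = frozenset(best - {i})     # old version of the joined coalition
            if joined in partition:
                partition.remove(joined)
            partition.append(best)
            changed = True
    return changed

# toy run (assumed utility: players simply prefer larger coalitions)
players = [1, 2, 3]
partition = [frozenset({p}) for p in players]
history = {p: set() for p in players}
while one_round(players, partition, lambda i, S: len(S), history):
    pass
print(partition)  # converges to the grand coalition: [frozenset({1, 2, 3})]
```

With this toy utility every switch strictly improves the mover's payoff, so the loop terminates; in the patent's game the utility would instead be each provider's Shapley-allocated net profit.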
Experiments
(I) Experimental parameters
Within a given area, we consider
Figure 437139DEST_PATH_IMAGE110
cache infrastructure providers (CIPs), of which
Figure 382092DEST_PATH_IMAGE111
are fixed cache infrastructure providers and
Figure 579855DEST_PATH_IMAGE112
are mobile cache infrastructure providers; each cache infrastructure provider provides
Figure 393090DEST_PATH_IMAGE113
different types of content. We focus only on business areas, which are more crowded and carry more traffic during the day than at night. The 3 fixed cache infrastructure providers own 3, 2, and 3 base stations in the area, respectively. The 3 mobile cache infrastructure providers own 150, 200, and 150 cache vehicles, respectively. We assume all cached contents have the same size
Figure 484543DEST_PATH_IMAGE114
since different contents can be divided into equal-sized chunks. For simplicity, we assume all base stations and cache vehicles have the same configuration. We then set
Figure 725032DEST_PATH_IMAGE115
indicating that a base station and a cache vehicle can store 70 and 10 contents, respectively, in one time period. The cost per cache unit of a base station
Figure 675670DEST_PATH_IMAGE116
is set to $0.01. For a cache vehicle, owing to its low capacity and mobility, its caching cost
Figure 902383DEST_PATH_IMAGE117
is higher than a base station's; we set it to $0.02.
The profits obtained when a request for content j is served locally,
Figure 989288DEST_PATH_IMAGE118
And
Figure 666257DEST_PATH_IMAGE119
are identical for all base stations and vehicles; we set them to $0.03 and $0.04 per request, respectively. When a request cannot be processed at a cache node close to the user, we set the penalty to $0.08, higher than any profit or caching cost. The penalty reflects that, to reduce caching cost, a caching infrastructure provider prefers to place content on cache nodes and serve as many requests as possible rather than deny service, which matches the reality that caching infrastructure providers should guarantee cache hit rates to increase the profit of content providers (CPs). For each base station, we set the processing time of each request
Figure 963246DEST_PATH_IMAGE120
to 0.005 seconds. A cache vehicle's processing capacity is weaker than a base station's, so its processing time
Figure 852705DEST_PATH_IMAGE121
is set to 0.015 seconds. To ensure satisfactory quality of service, we set the delay threshold
Figure 59695DEST_PATH_IMAGE122
to 0.02 seconds.
To reflect reality, we use the Uber Pickups dataset and the T-Drive dataset from ModelWhale to simulate user requests and traffic trajectories. The Uber Pickups dataset contains Uber ride records, including the times and locations of user requests in New York, USA. The T-Drive dataset records one week of driving trajectories of 10,000 taxis in Beijing. These data show the relationship between user requests and traffic during the day. After extracting data from the datasets, we plot Fig. 3, which shows the fluctuation of the number of user requests and the number of cache vehicles within base-station coverage in the area. Fig. 3 shows that, on a given day, traffic in a business area varies with user requests; this is a key prerequisite ensuring that caching with vehicles is a promising way to satisfy dynamic user requests. Based on the data of that day, we run a series of experiments to test the coalition's performance.
To present these properties intuitively, we set two strategies as baselines in addition to the coalition strategy:
not applicable to the strategy. Uncooperative means that there is no cooperation between all caching infrastructure providers in the area, i.e. all base stations handle all requests locally. Redundant requests that the base station cannot handle can only be transmitted to the macrocell. When the requests significantly exceed the processing capability of the base station, a large number of requests offloaded to the macrocell may result in a large penalty. Mobile CIPs do not make a profit in this case, since the mobile buffer vehicle cannot provide services without cooperating with the base station.
BS-only. BS-only denotes a strategy in which all fixed CIPs form a coalition and cooperate with one another; this is currently a common cooperation scheme. The coalition contains only base stations and no cache vehicles. Although cooperating base stations can share different contents, the scheme still cannot meet demand when a large number of requests arrive, owing to the limited transmission capability of the base stations.
(II) Performance comparison
To run our experiments, we developed a simulation program in Python to solve the optimization problem in Section 4. Fig. 4 shows the performance of the coalition formation algorithm at different periods of the day. We compare the total profit of all caching infrastructure providers in three different cases. Specifically, we divide a day into 24 periods, corresponding to 24 hours. In each period, the cache infrastructure providers form coalitions according to the request arrival rate at each base station and the number of cache vehicles in the region, and decide how to cache content and schedule requests. As Fig. 4 shows, the no-cooperation strategy always remains at a low profit level: due to the limited cache capacity of a single base station, even in periods with low request rates a base station working alone tends to miss most of the requested content. The BS-only strategy performs better when requests are few, because the base stations can share content; but during business hours, when demand surges, its overall profit tends to drop, the main bottleneck being the processing capacity of the base stations. With cooperation only among base stations, the key to improving the cache hit rate is adding base stations or other cache infrastructure in the area. The coalition strategy performs much better than the other two strategies during most of the day. Its advantage over BS-only is small in the first few hours, because the request arrival rate is low and can be handled by the base stations alone; once the request rate reaches a certain level after 7 o'clock, the cache vehicles play an important role in sharing the base stations' load and raising the coalition's overall profit.
(III) Feasibility of the coalition
Although we have shown that forming coalitions works well for increasing the overall profit of the caching infrastructure providers, we must make sure that each caching infrastructure provider benefits, so as to guarantee the feasibility of the coalition. Fig. 5(a)(b)(c) shows the profit of the fixed caching infrastructure providers in the three cases, together with the profit of the mobile caching infrastructure providers under the dynamic coalition strategy. Fig. 5(a)(b)(c) shows that the increment in coalition profit is not exactly equal to the profit obtained by the mobile caching infrastructure providers; in other words, the participation of mobile caching infrastructure providers not only brings income to themselves but also increases the profit of the fixed caching infrastructure providers. Mobile caching infrastructure providers are most helpful to fixed ones when the request volume reaches a high level and the base stations cannot handle the requests well on their own. For mobile caching infrastructure providers, cache vehicles need to cooperate with base stations to provide service. As seen in Fig. 6, mobile caching infrastructure providers are willing to join the coalition when they can profit from it: when the request rate is low and the base stations do not need extra cache nodes to share the load, a mobile caching infrastructure provider may stay independent; when the number of requests exceeds a certain level, it joins the coalition, increases the coalition's profit, and thereby obtains a share of it.
Based on the above results, we find it feasible to increase the profit of the caching infrastructure providers through coalitions. At the same time, a dynamic coalition based on the current situation has a crucial advantage over a permanent grand coalition of all caching infrastructure providers: some cache vehicles owned by mobile caching infrastructure providers are idle during certain periods, which means cache resources would be wasted if a grand coalition were formed. In our experiments, some mobile caching infrastructure providers do not join the coalition because of the limited simulation size; if the scenario were larger, with more participants, an idle cache infrastructure provider could join other coalitions and contribute to the total profit. A useless cache infrastructure provider that stays in a grand coalition for a long time not only wastes resources but also seriously harms the other members of the coalition.
(IV) Impact of related parameters
Impact of user-request heterogeneity. An imbalance in user requests aggravates the situation in which most base stations are either overloaded or idle, which compromises the overall profit. We therefore study the impact of user-request heterogeneity on the overall profit by varying the standard deviation of user requests across base stations
Figure 517352DEST_PATH_IMAGE123
. As seen in Fig. 7, the overall profit under both no cooperation and BS-only is strongly affected by an increased standard deviation. When
Figure 911425DEST_PATH_IMAGE123
increases from 12 to 15, the profit drops by about 37% under no cooperation and about 31% under BS-only, while the dynamic coalition adapts better to the imbalance in user requests.
Impact of vehicle revenue. We define the profit ratio as the ratio of the profit earned by a mobile vehicle to that earned by a base station
Figure 870153DEST_PATH_IMAGE124
. As shown in Fig. 8, the coalition's overall profit rises as the profit ratio increases. This means that if the revenue earned by cache vehicles serving requests reaches a certain level, the coalition can earn considerable profit; in other words, once the service quality of cache vehicles improves and commands higher revenue, cache vehicles can contribute significantly to the caching coalition.
(V) Algorithm efficiency
In this section, we demonstrate the efficiency of our algorithm through simulation experiments. We vary the size of the network (the number of cache nodes) and record the corresponding running time in Fig. 9. As shown in Fig. 9, the running time grows linearly with the number of cache nodes, indicating that our algorithm is efficient. Throughout the experiment, we kept the number of cache infrastructure providers unchanged (3 fixed and 3 mobile) and varied only the number of cache nodes they own. For the cache nodes, the number of cache vehicles is adjusted, and the number of base stations is adjusted proportionally (for example, 3 base stations correspond to 97 vehicles, and 8 base stations to 292 vehicles). Adding vehicles without also considering base stations is meaningless; for example, 100 and 200 cache vehicles make no difference to a base station if 100 already satisfy its users' demand. Owing to the nature of the Shapley value, computing the allocated profit
Figure 56284DEST_PATH_IMAGE125
takes time factorial in the number of members, so the number of members has a significant impact on runtime; an increase in the number of cache nodes, by contrast, does not cause a sharp rise in runtime as long as the number of cache infrastructure providers stays small. In practice, the number of caching infrastructure providers is indeed small; for example, the fixed caching infrastructure providers in China are mainly China Telecom, China Mobile, and China Unicom. Our algorithm therefore adapts well to real scenarios.

Claims (10)

1. A fringe alliance game method facing content cooperative caching is characterized by comprising the following steps:
A. establishing a cache infrastructure provider alliance system model;
B. setting a service position of a alliance system model;
C. setting a user request scheduling strategy;
D. establishing a service request model;
E. calculating the income of the alliance system;
F. optimizing a content caching strategy and a workload scheduling strategy to maximize profit of the alliance system;
G. setting a profit allocation scheme of the alliance system;
H. and setting a alliance game scheme.
2. The content collaborative cache-oriented league gaming method of claim 1, wherein: in step A,
the cache infrastructure providers comprise fixed base stations and mobile vehicles with storage units; different cache infrastructure providers form a coalition, provide cache services for content providers, and reasonably distribute revenue within the coalition; which cache infrastructure providers cooperate and how many storage resources are leased in each specific area determine the profit obtained; the whole coalition-formation process is dynamic, and each cache infrastructure provider decides at each time slot whether to cooperate with other cache infrastructure providers, depending on the state of the system;
the set of cache infrastructure providers is
Figure DEST_PATH_IMAGE001
Wherein
Figure DEST_PATH_IMAGE002
Representing the set of fixed caching infrastructure providers that own the base station,
Figure DEST_PATH_IMAGE003
representing the set of mobile caching infrastructure providers having mobile cache vehicles; the federation system model includes the set
Figure DEST_PATH_IMAGE004
of base stations and the set
Figure DEST_PATH_IMAGE005
of cache vehicles; base stations and cache vehicles are both referred to as edge nodes, and the set of all cache nodes is denoted
Figure DEST_PATH_IMAGE006
For a fixed provider
Figure DEST_PATH_IMAGE007
and a mobile provider
Figure DEST_PATH_IMAGE008
we define, respectively,
Figure DEST_PATH_IMAGE009
And
Figure DEST_PATH_IMAGE010
as the sets of base stations and cache vehicles belonging to them,
Figure DEST_PATH_IMAGE011
representing a collection of edge nodes belonging to a caching infrastructure provider,
Figure DEST_PATH_IMAGE012
representing a set of buffer vehicles within the coverage of base station k;
when a request arrives at a base station, if the base station has cached the requested content, it sends the content directly to the user; otherwise, cache vehicles cooperating with the base station within its coverage area can act as assisting nodes to process the request; meanwhile, base stations can cooperate with one another over high-speed links;
base stations serve as static edge nodes, fixed in number and position; the number of available cache vehicles within each base station's coverage is determined per time slot: from the trajectory data provided by a mobile cache infrastructure provider, the base station takes the minimum number of that provider's cache vehicles present in its area during the time period as the number of available vehicles;
all base stations in the coalition continuously collect information on arriving requests in each time period, including the arrival rate and the type proportions; the status of the cache vehicles, including the number of available vehicles and which cache infrastructure provider they belong to, is collected by the mobile cache infrastructure providers in the coalition and sent to the base stations.
3. The content collaborative cache oriented league gaming method of claim 2, wherein: in step B,
the fixed content library comprises the set
Figure DEST_PATH_IMAGE013
whose request probability and size are, respectively,
Figure DEST_PATH_IMAGE014
And
Figure DEST_PATH_IMAGE015
Figure DEST_PATH_IMAGE016
the probability that content i is requested is,
Figure DEST_PATH_IMAGE017
where
Figure DEST_PATH_IMAGE018
a value of between 0.5 and 1, representing the peak of the distribution,
Figure 427929DEST_PATH_IMAGE018
reflects different skewness of the distribution; for base station k, the user arrival rate is
Figure DEST_PATH_IMAGE019
The arrival rate of the request for the content j is
Figure DEST_PATH_IMAGE020
Figure DEST_PATH_IMAGE021
denotes the cache capacity of cache node i; since a base station or cache vehicle cannot cache all contents simultaneously, each cache node selectively caches contents according to user requests, and a binary variable
Figure DEST_PATH_IMAGE022
indicates the content placement state;
Figure DEST_PATH_IMAGE023
indicates that content j is placed at cache node i; each cache node cannot cache contents beyond its storage capacity:
Figure DEST_PATH_IMAGE024
4. The content collaborative cache oriented league gaming method of claim 3, wherein: in step C,
Figure DEST_PATH_IMAGE025
denotes the request scheduling policy,
Figure DEST_PATH_IMAGE026
the ratio of the requests representing the content j of the request to be processed at the local base station i, for
Figure DEST_PATH_IMAGE027
And is
Figure DEST_PATH_IMAGE028
Figure DEST_PATH_IMAGE029
denotes the proportion of requests for content j offloaded from base station i to base station k,
Figure DEST_PATH_IMAGE030
denotes the proportion of the load allocated to the macro cell, for
Figure DEST_PATH_IMAGE031
Figure DEST_PATH_IMAGE032
denotes the proportion of requests for content j distributed from base station i to cache vehicle v; the base station can distribute its requests only to cooperating cache vehicles within its coverage area, for
Figure 21766DEST_PATH_IMAGE031
when
Figure DEST_PATH_IMAGE033
it is satisfied that
Figure DEST_PATH_IMAGE034
holds; for each base station i, a request for content j can be processed locally, at an assisting base station, at an in-range cache vehicle, or at the macro cell; the load ratios
Figure DEST_PATH_IMAGE035
satisfy the following condition:
Figure DEST_PATH_IMAGE036
Figure DEST_PATH_IMAGE037
expressed as the profit gained by providing the content j requested at base station i from the cache of base station k,
Figure DEST_PATH_IMAGE038
denotes the profit gained by serving content j requested at base station i from the cache of cache vehicle v; serving a request from the macro cell typically introduces intolerable transmission delay for delay-critical services, and
Figure DEST_PATH_IMAGE039
denotes the penalty for serving content j through the macro cell:
Figure DEST_PATH_IMAGE040
5. The content collaborative cache-oriented league gaming method of claim 4, wherein: in step D,
a request is regarded as a customer, and each cache node acts as a service desk serving customers; requests are assigned to base stations, cache vehicles, or the macro cell according to the content placement at all cache nodes; when a request is scheduled to another base station, the content fetched from that base station is first transmitted to the requesting base station over a high-capacity link and then delivered to the user by the requesting base station, which can be viewed as a two-stage process; because in the second stage the content is transmitted over the local base station's transmission link, the local base station's communication resources are occupied; a request served by either the local base station or another base station is therefore treated as executed by the local base station;
Figure DEST_PATH_IMAGE041
and
Figure DEST_PATH_IMAGE042
denote the request arrival rates at base station i and cache vehicle v, respectively; suppose the content size
Figure DEST_PATH_IMAGE043
has mean
Figure DEST_PATH_IMAGE044
and that the service rates of base station i and cache vehicle v obey mean values
Figure DEST_PATH_IMAGE045
And
Figure DEST_PATH_IMAGE046
each base station and cache vehicle is modeled as an M/M/1 queue with no request priority and no waiting-queue limit; for base station
Figure DEST_PATH_IMAGE047
the average latency per request is
Figure DEST_PATH_IMAGE048
let
Figure DEST_PATH_IMAGE049
be, for base station
Figure DEST_PATH_IMAGE050
the average delay per request of a cache vehicle v within its range:
Figure DEST_PATH_IMAGE051
where
Figure DEST_PATH_IMAGE052
to ensure quality of service, define a delay threshold
Figure DEST_PATH_IMAGE053
satisfying
Figure DEST_PATH_IMAGE054
the transmission-delay constraint is then defined as
Figure DEST_PATH_IMAGE055
6. The content collaborative cache oriented league gaming method of claim 5, wherein: in step E, the alliance system
Figure DEST_PATH_IMAGE056
has revenue
Figure DEST_PATH_IMAGE057
given by
Figure DEST_PATH_IMAGE058
Figure DEST_PATH_IMAGE059
Figure DEST_PATH_IMAGE060
alliance system
Figure 693181DEST_PATH_IMAGE056
pays a penalty for overload
Figure DEST_PATH_IMAGE061
given by
Figure DEST_PATH_IMAGE062
7. the content collaborative cache-oriented league gaming method of claim 6, wherein: in step F, the optimization problem is represented as,
Figure DEST_PATH_IMAGE063
with decision variables
Figure DEST_PATH_IMAGE064
And
Figure DEST_PATH_IMAGE065
the constraints are that: the content at a cache node cannot exceed its capacity; requests arriving at a base station are all dispatched to the local base station, other base stations, cache vehicles within coverage, or the macro cell; the average transmission delay of requests at each cache node does not exceed the delay threshold; a request for content j cannot be dispatched to a cache node that has not cached content j; and a base station can redirect requests only to cache vehicles within its coverage area.
8. The content collaborative cache-oriented league gaming method of claim 7, wherein: in step G,
let the i-th cache infrastructure provider's marginal contribution to coalition
Figure DEST_PATH_IMAGE066
be
Figure DEST_PATH_IMAGE067
where U is the utility function; the revenue allocated to the i-th cache infrastructure provider from coalition
Figure 370281DEST_PATH_IMAGE056
, denoted
Figure DEST_PATH_IMAGE068
, is
Figure DEST_PATH_IMAGE069
where
Figure DEST_PATH_IMAGE070
is the set of all permutations of
Figure 387916DEST_PATH_IMAGE056
with a permutation denoted
Figure DEST_PATH_IMAGE071
; and
Figure DEST_PATH_IMAGE072
is the set of players preceding
Figure 898794DEST_PATH_IMAGE050
in permutation π;
let the cost per cache unit of cache node
Figure DEST_PATH_IMAGE073
be
Figure DEST_PATH_IMAGE074
; the caching cost of the i-th caching infrastructure provider
Figure DEST_PATH_IMAGE075
is
Figure DEST_PATH_IMAGE076
and the net profit of the i-th cache infrastructure provider
Figure DEST_PATH_IMAGE077
is
Figure DEST_PATH_IMAGE078
9. The content collaborative cache-oriented league gaming method of claim 8, wherein: in step H,
let the preference function of player i be denoted
Figure DEST_PATH_IMAGE079
; player i, compared with
Figure DEST_PATH_IMAGE080
, prefers
Figure 228144DEST_PATH_IMAGE056
, if and only if
Figure DEST_PATH_IMAGE081
the preference function equals the utility allocated to each player within the coalition;
a partition is Nash-stable when no participant has an incentive to unilaterally switch from its coalition to another coalition in the partition:
Figure DEST_PATH_IMAGE082
by combining the utility value
Figure DEST_PATH_IMAGE083
with the coalition partition, a coalition formation game is defined on the set N; the utility value of a coalition is independent of other coalitions and satisfies
Figure DEST_PATH_IMAGE084
Figure DEST_PATH_IMAGE085
Is the optimal solution of the optimization problem;
the profit-sharing method satisfies the following conditions.
Efficiency:
Figure DEST_PATH_IMAGE086
Symmetry: if
Figure DEST_PATH_IMAGE087
holds for all
Figure DEST_PATH_IMAGE088
then
Figure DEST_PATH_IMAGE089
Fairness: for any
Figure DEST_PATH_IMAGE090
the contribution of j to i equals the contribution of i to j:
Figure DEST_PATH_IMAGE091
Figure DEST_PATH_IMAGE092
Null player: if i is a null player, i.e., for all
Figure DEST_PATH_IMAGE093
then
Figure DEST_PATH_IMAGE094
10. The content collaborative cache-oriented league gaming method of claim 8, wherein: in step H,
the decision process at a particular time t consists of a number of rounds, each comprising N steps, each participant being able to make a decision; in t rounds, random sequence
Figure DEST_PATH_IMAGE095
Is generated in which
Figure DEST_PATH_IMAGE096
Indicating the ith participant selected to make the decision, at each step, player i chooses to leave the current league and join the new league or stay in the current league, including,
player i iteratively retrieves the current partition
Figure DEST_PATH_IMAGE097
Use a group of
Figure DEST_PATH_IMAGE098
To record the historical alliances that participant i joined before, the retrieval process ignores the historical set
Figure DEST_PATH_IMAGE099
In the new alliance, the computing participant i joins the new alliance
Figure DEST_PATH_IMAGE100
Profit of
Figure 990826DEST_PATH_IMAGE101
(ii) a If the new league's revenue exceeds the current one, thenThe new federation is recorded as the best federation, and this process continues until all possible federations are complete, except for the federations in the history set, after iteration, if the best federation has been updated, then the current federation is appended to the history set and a new current partition is obtained
Figure 925284DEST_PATH_IMAGE097
At the end of the t-th round, after all participants made the decision, we obtained and recorded a partition
Figure DEST_PATH_IMAGE102
At a given round t +1, if
Figure 590752DEST_PATH_IMAGE103
Then that means that no players can deviate from their current league for better profit, then nash stability is reached.
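The round-based decision procedure of claim 10 can be sketched as follows (a simplified illustration under assumed names: `utility(i, S)` stands in for the patent's alliance profit, the per-player history set prevents revisiting abandoned alliances, and termination at an unchanged round corresponds to the Nash-stable partition):

```python
import random


def coalition_formation(players, utility, max_rounds=100, seed=0):
    """Round-based alliance formation with per-player history sets.

    utility(i, S) is player i's profit in alliance S (a frozenset containing i).
    Iterates until a whole round changes nothing, i.e. the partition repeats
    and no player can deviate for a better profit (Nash stability).
    """
    rng = random.Random(seed)
    partition = {frozenset({i}) for i in players}  # start from singletons
    history = {i: set() for i in players}          # alliances i already left
    for _ in range(max_rounds):
        changed = False
        order = list(players)
        rng.shuffle(order)                         # random decision sequence I_t
        for i in order:
            current = next(s for s in partition if i in s)
            best, best_profit = current, utility(i, current)
            # Examine joining every other alliance (or going alone),
            # skipping alliances recorded in the history set.
            for s in list(partition - {current}) + [frozenset()]:
                candidate = s | {i}
                if candidate in history[i]:
                    continue
                if utility(i, candidate) > best_profit:
                    best, best_profit = candidate, utility(i, candidate)
            if best != current:                    # deviate: move i to `best`
                history[i].add(current)
                partition.discard(current)
                partition.discard(best - {i})
                if current - {i}:
                    partition.add(current - {i})
                partition.add(best)
                changed = True
        if not changed:                            # Nash-stable partition reached
            break
    return partition
```

With, for instance, utility(i, S) = |S| for |S| ≤ 2 and 0 otherwise, four players settle into two stable pairs after one round.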
CN202110349244.0A 2021-03-31 2021-03-31 Edge alliance game method for content cooperation cache Active CN112804361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110349244.0A CN112804361B (en) 2021-03-31 2021-03-31 Edge alliance game method for content cooperation cache


Publications (2)

Publication Number Publication Date
CN112804361A true CN112804361A (en) 2021-05-14
CN112804361B CN112804361B (en) 2021-07-02

Family

ID=75816112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110349244.0A Active CN112804361B (en) 2021-03-31 2021-03-31 Edge alliance game method for content cooperation cache

Country Status (1)

Country Link
CN (1) CN112804361B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113037876A (en) * 2021-05-25 2021-06-25 中国人民解放军国防科技大学 Cooperative game-based cloud downlink task edge node resource allocation method
CN113784320A (en) * 2021-08-23 2021-12-10 华中科技大学 Alliance dividing and adjusting method based on multiple relays and multiple relay transmission system
CN114980212A (en) * 2022-04-29 2022-08-30 中移互联网有限公司 Edge caching method and device, electronic equipment and readable storage medium
CN116208669A (en) * 2023-04-28 2023-06-02 湖南大学 Intelligent lamp pole-based vehicle-mounted heterogeneous network collaborative task unloading method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105049326A (en) * 2015-06-19 2015-11-11 清华大学深圳研究生院 Social content caching method in edge network area
US20170149922A1 * 2014-07-01 2017-05-25 Cisco Technology Inc. CDN scale down
CN107483630A (en) * 2017-09-19 2017-12-15 北京工业大学 A kind of construction method for combining content distribution mechanism with CP based on the ISP of edge cache
CN110062037A (en) * 2019-04-08 2019-07-26 北京工业大学 Content distribution method and device
US20190356498A1 (en) * 2018-05-17 2019-11-21 At&T Intellectual Property I, L.P. System and method for optimizing revenue through bandwidth utilization management
CN111815367A (en) * 2020-07-22 2020-10-23 北京工业大学 Network profit optimization allocation mechanism construction method based on edge cache


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOFENG CAO et al.: "Edge Federation: Towards an Integrated Service Provisioning Model", IEEE/ACM Transactions on Networking, vol. 28, no. 3, June 2020 *
GUO Jianyu et al.: "Non-cooperative game-based optimized caching strategy for ICN", Telecommunication Engineering *


Also Published As

Publication number Publication date
CN112804361B (en) 2021-07-02

Similar Documents

Publication Publication Date Title
CN112804361B (en) Edge alliance game method for content cooperation cache
CN109379727B (en) MEC-based task distributed unloading and cooperative execution scheme in Internet of vehicles
Xu et al. Collaborate or separate? Distributed service caching in mobile edge clouds
CN111405527B (en) Vehicle-mounted edge computing method, device and system based on volunteer cooperative processing
Yu et al. Cooperative resource management in cloud-enabled vehicular networks
Samanta et al. Latency-oblivious distributed task scheduling for mobile edge computing
CN111866601B (en) Cooperative game-based video code rate decision method in mobile marginal scene
Jie et al. Online task scheduling for edge computing based on repeated Stackelberg game
Wu et al. A profit-aware coalition game for cooperative content caching at the network edge
Krolikowski et al. A decomposition framework for optimal edge-cache leasing
Zamzam et al. Game theory for computation offloading and resource allocation in edge computing: A survey
Lungaro et al. Predictive and context-aware multimedia content delivery for future cellular networks
Ma et al. Reinforcement learning based task offloading and take-back in vehicle platoon networks
Mishra et al. A collaborative computation and offloading for compute-intensive and latency-sensitive dependency-aware tasks in dew-enabled vehicular fog computing: A federated deep Q-learning approach
Amer et al. An optimized collaborative scheduling algorithm for prioritized tasks with shared resources in mobile-edge and cloud computing systems
Dong et al. Quantum particle swarm optimization for task offloading in mobile edge computing
Wang et al. An adaptive QoS management framework for VoD cloud service centers
Song et al. Joint bandwidth allocation and task offloading in multi-access edge computing
Zhang et al. Distributed pricing and bandwidth allocation in crowdsourced wireless community networks
Nguyen et al. EdgePV: collaborative edge computing framework for task offloading
CN114466023B (en) Computing service dynamic pricing method and system for large-scale edge computing system
Peng et al. A task assignment scheme for parked-vehicle assisted edge computing in iov
Fang et al. Edge cache-based isp-cp collaboration scheme for content delivery services
Cui et al. GreenLoading: Using the citizens band radio for energy-efficient offloading of shared interests
Sterz et al. Multi-stakeholder service placement via iterative bargaining with incomplete information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant