CN108833294A - Low-bandwidth-overhead traffic scheduling method for data center wide area networks - Google Patents
Low-bandwidth-overhead traffic scheduling method for data center wide area networks
- Publication number
- CN108833294A (application CN201810898884.5A)
- Authority
- CN
- China
- Prior art keywords
- bandwidth
- data
- data center
- link
- node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/52—Queue scheduling by attributing bandwidth to queues
- H04L47/522—Dynamic queue service slot or variable bandwidth allocation
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The low-bandwidth-overhead traffic scheduling method for data center wide area networks is a WAN traffic scheduling control scheme. Its function is to schedule large transfers reasonably, while accounting for small flows, link failures, and flow deadlines, so as to reduce the bandwidth rental cost paid by data center WAN users. The system executes as follows: 1) each data center proxy server periodically collects the traffic requests of its data center (including traffic demand, deadline, and source and destination nodes) and sends the request information to the central controller; 2) on the basis of accounting for small flows, link failures, and flow deadlines, large flows are scheduled reasonably to reduce the network bandwidth overhead. The invention effectively reduces the cost that data center WAN users pay to Internet service providers for renting bandwidth, cutting operating costs while guaranteeing quality of service.
Description
Technical field
The invention belongs to the field of Internet technology and relates to traffic scheduling technology, in particular to a low-bandwidth-overhead traffic scheduling method for data center wide area networks.
Background art
Many Internet service providers and cloud service providers, such as Microsoft and Google, maintain multiple data centers to support their business. These data centers run globally distributed applications and are spread across different geographic regions, which means they need to communicate with one another across regions; the wide area network plays a key role in keeping data centers at different geographic locations interconnected. The massive data transfers between data centers incur high bandwidth costs: data center owners rent WAN bandwidth from Internet service providers every year, at a cost reaching hundreds of millions. Worse, unreasonable traffic scheduling leads to low bandwidth utilization between data centers; the utilization of most links does not exceed 60%, which means a significant proportion of the high bandwidth cost is wasted. How to schedule traffic reasonably and effectively to reduce bandwidth cost while guaranteeing that data flows complete on time has therefore become a major issue in inter-data-center traffic scheduling.
A large flow is defined as the class of inter-data-center WAN flows that accounts for the bulk of the traffic, carries a large data volume, and lasts a long time. Typically, large flows make up 85% to 95% of inter-data-center traffic, their data volumes range from several TB to several PB, and their durations can reach several hours. Two typical examples are financial institutions remotely backing up transaction records on trading days, and search engines periodically synchronizing index entries between data centers. The other class of inter-data-center flows consists of interactive small flows, which are short-lived and highly delay-sensitive. Large flows, by contrast, are not delay-critical and can tolerate the latency introduced by scheduling through a centralized controller. In summary, reasonable scheduling of large flows is of great significance. In some scenarios the parameters of all large flows within a period are unpredictable: the arrival time, deadline, and data volume of a large flow become known only after it is generated. Such scenarios are called online scenarios. Reasonable scheduling of bulk data flows in the online scenario is not only an important guarantee of network service quality, but also an effective way to save a large amount of bandwidth rental cost.
In recent years, much research has focused on reasonable scheduling of large flows. One main idea is to add storage devices in the data centers and, when data arrives, choose whether to store or forward it, i.e., a store-and-forward strategy. Two lines of work follow this idea. The first proposes to buffer arriving data while links are busy and transmit it when links become idle, improving bandwidth utilization along the time dimension. The second uses the store-and-forward strategy to balance the bandwidth utilization of each link and thereby achieve load balancing. Because the traffic passing through every data center must be buffered, deployments under this idea require additional storage equipment in each data center, which not only adds storage overhead but also makes traffic scheduling more complex. It is therefore desirable to find a more reasonable scheduling scheme that optimizes bandwidth cost without adding extra storage overhead, while ensuring that every large flow completes on time.
Summary of the invention
In order to overcome the shortcomings of the prior art described above, the purpose of the present invention is to provide a low-bandwidth-overhead traffic scheduling method for the online scenario of data center wide area networks which, under the premise that all large flows complete on time, minimizes through reasonable scheduling the extra bandwidth rental cost incurred by each large flow, and thereby minimizes the total bandwidth cost. The present invention allocates bandwidth reasonably and sets a transmission path for each large flow in each transmission slot, minimizing the bandwidth rental cost while guaranteeing that all large flows complete on time.
To achieve the above goals, the technical solution adopted by the present invention is as follows:
The traffic scheduling scheme for the online, low-bandwidth-overhead scenario of data center wide area networks is mainly characterized in that it is implemented in the inter-data-center wide area network according to the following steps:
Step (1): a lease period is divided into several transmission slots, i.e., 1, ..., T. A directed graph G = (V, E) represents the data centers and the links between them, where V is the node set of the graph, representing the set of all data centers, and E is the edge set of the graph, representing the set of all links. A five-tuple r_i = (s_i, t_i, d_i, a_i, τ_i) represents one large flow, where s_i, t_i, d_i, a_i, τ_i respectively denote the source node, destination node, data volume, arrival time, and deadline of the i-th large flow. In every slot, each data center proxy server periodically collects the source node, destination node, data volume, arrival time, and deadline of the traffic requests;
Step (2): each data center proxy server sends the traffic request information to the central controller for scheduling;
Step (3): the central scheduling system runs the PDA algorithm, whose input is the flow request information sent by each data center proxy server. At initialization all bandwidth values are set to c_e = 0. The PDA algorithm computes the minimum additional bandwidth overhead while taking into account the influence of small flows on the bandwidth cost; its specific steps are as follows (a sketch of this iterative procedure is given after step (4) below):
Step (3a): relax the original integer program by turning the integer variables into continuous variables, then solve the linear program of the relaxed model; the continuous solution yields a charged bandwidth value b_e for each link. Since fees are actually charged per integer unit of bandwidth, the corresponding integer bandwidth value is c_e. Initialize c_e ← ⌈b_e⌉ according to the LP solution, and initialize the minimum cost M ← ∑_{e∈E} c_e u_e, where u_e denotes the unit bandwidth price of link e;
Step (3b): among the links whose c_e is not equal to b_e, choose the K links for which the gap between c_e and b_e is smallest, and fix their c_e;
Step (3c): if no link with c_e not equal to b_e can be found, jump to step (4);
Step (3d): solve the linear program with the K links fixed, recompute c_e ← ⌈b_e⌉ according to the LP solution, and compute the cost obj ← ∑_{e∈E} c_e u_e;
Step (3e): if the current iteration result obj is less than the known minimum cost M, update M ← obj and save the charged bandwidth c_e of every link;
Step (3f): if the number of iterations exceeds the threshold J, exit the iteration and execute step (4); otherwise increase the iteration count by one and jump back to step (3b);
Step (4): generate the scheduling scheme and send the scheduling result to each data center proxy server.
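The iterative relax-and-round procedure of steps (3a) to (3f) might be organized as in the following minimal sketch. It is only an illustration, not the patent's reference implementation: the helper solve_relaxed_lp (which solves the LP relaxation of the program P0 defined below with some links pinned to integer bandwidths), the ceiling rounding c_e = ⌈b_e⌉, and the smallest-gap choice of the K links to fix are assumptions of the sketch.

```python
import math

def pda_sketch(solve_relaxed_lp, links, unit_price, K=3, J=20):
    """Iterative relax-and-round loop of steps (3a)-(3f).

    solve_relaxed_lp(fixed) -> {link: b_e}: hypothetical helper that solves the
    LP relaxation of P0 with the bandwidth of the links in `fixed` pinned.
    links: list of link identifiers e.
    unit_price: {link: u_e}, unit bandwidth price of each link.
    K: number of links fixed per iteration; J: iteration threshold.
    """
    fixed = {}                                   # links whose c_e has been pinned
    b = solve_relaxed_lp(fixed)                  # step (3a): fractional bandwidths b_e
    c = {e: math.ceil(b[e]) for e in links}      # integer bandwidth charged per unit
    best_cost = sum(c[e] * unit_price[e] for e in links)   # M <- sum_e c_e * u_e
    best_c = dict(c)

    for _ in range(J):                           # step (3f): at most J iterations
        # step (3b): among links whose c_e differs from b_e, fix the K links
        # with the smallest rounding gap c_e - b_e to their integer value.
        candidates = [e for e in links if e not in fixed and c[e] != b[e]]
        if not candidates:                       # step (3c): nothing left to round
            break
        for e in sorted(candidates, key=lambda e: c[e] - b[e])[:K]:
            fixed[e] = c[e]

        b = solve_relaxed_lp(fixed)              # step (3d): re-solve with fixed links
        c = {e: fixed.get(e, math.ceil(b[e])) for e in links}
        obj = sum(c[e] * unit_price[e] for e in links)

        if obj < best_cost:                      # step (3e): keep the best solution
            best_cost, best_c = obj, dict(c)

    return best_c, best_cost                     # step (4): best c_e per link and its cost
```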
In the present invention, P0 denotes the optimization problem of minimizing the objective function

min ∑_{e∈E} c_e u_e

subject to three constraints in total (the traffic constraints, the capacity constraint, and the integer constraint), i.e., minimizing the network bandwidth cost incurred in transmitting the traffic;
There are two traffic constraints. The first traffic constraint is:

∑_{e∈δ+(v)} x_{i,e}(t) = ∑_{e∈δ−(v)} x_{i,e}(t),  for every request i, every node v with v ≠ s_i and v ≠ t_i, and every t ∈ [a_i, τ_i] with t ∈ N+

where δ+(v) denotes the set of all directed edges with node v as their starting point, δ−(v) denotes the set of all directed edges with node v as their end point, x_{i,e}(t) denotes the amount of data of the i-th request transmitted on link e in slot t, and N+ denotes the positive integers.
The other traffic constraint is:

∑_{t∈[a_i,τ_i]} ( ∑_{e∈δ+(s_i)} x_{i,e}(t) − ∑_{e∈δ−(s_i)} x_{i,e}(t) ) = d_i,  for every request i

where δ+(s_i) denotes the set of all directed edges with node s_i as their starting point and δ−(s_i) denotes the set of all directed edges with node s_i as their end point.
The capacity constraint is:

∑_i x_{i,e}(t) ≤ c_e · δ_c · δ_t,  for every link e ∈ E and every slot t

where c_e is the bandwidth value rented on edge e, i.e., the number of bandwidth units the data center owner rents on link e, δ_c denotes the size of one bandwidth unit, and δ_t denotes the length of each time slice;
The integer constraint is:

c_e ∈ N,  for every link e ∈ E

where N denotes the integers.
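Putting the objective and the three groups of constraints together, P0 can be written compactly as below; the non-negativity of x_{i,e}(t) is implicit in the description above, and the compact presentation is only a restatement of the constraints already given.

```latex
\begin{aligned}
\text{(P0)}\quad \min_{c,\,x}\ & \sum_{e\in E} u_e\, c_e \\
\text{s.t.}\quad
& \sum_{e\in\delta^+(v)} x_{i,e}(t) = \sum_{e\in\delta^-(v)} x_{i,e}(t)
  && \forall i,\ \forall v\neq s_i,\ v\neq t_i,\ \forall t\in[a_i,\tau_i],\ t\in\mathbb{N}^+ \\
& \sum_{t\in[a_i,\tau_i]}\Big(\sum_{e\in\delta^+(s_i)} x_{i,e}(t)-\sum_{e\in\delta^-(s_i)} x_{i,e}(t)\Big) = d_i
  && \forall i \\
& \sum_{i} x_{i,e}(t) \le c_e\,\delta_c\,\delta_t
  && \forall e\in E,\ \forall t \\
& x_{i,e}(t)\ge 0,\quad c_e\in\mathbb{N}
  && \forall i,\ \forall e\in E,\ \forall t
\end{aligned}
```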
Compared with the prior art, the beneficial effects of the invention are:
1) Under the premise that all large flows can be transmitted within the specified time, the bandwidth rental overhead is minimized.
2) The proposed scheme takes into account that the ISP charges at a certain granularity, which makes it more practical.
3) The proposed scheme does not need to introduce additional storage equipment, saving total scheduling overhead.
Detailed description of the invention
Fig. 1 is a schematic diagram of the online data center scenario.
Fig. 2 is the detailed flow chart of the low-bandwidth-overhead traffic scheduling scheme for the online data center scenario, where c_e is the bandwidth value rented on edge e.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, the present invention considers the problem of scheduling large flows within one lease period. The period is divided into several transmission slots, i.e., 1, ..., T. A directed graph G = (V, E) represents the data centers and the links between them, where V is the node set of the graph, representing the set of all data centers, and E is the edge set of the graph, representing the set of all links. A five-tuple r_i = (s_i, t_i, d_i, a_i, τ_i) represents one large flow, where s_i, t_i, d_i, a_i, τ_i respectively denote the source node, destination node, data volume, arrival time, and deadline of the i-th large flow.
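For illustration only, the request five-tuple and the directed graph G = (V, E) might be represented as in the sketch below; the class name FlowRequest, its field names, and the example values are hypothetical and are not part of the patent.

```python
from typing import NamedTuple

class FlowRequest(NamedTuple):
    """Five-tuple r_i = (s_i, t_i, d_i, a_i, tau_i) describing one large flow."""
    src: str        # s_i: source data center
    dst: str        # t_i: destination data center
    volume: float   # d_i: data volume to transfer (e.g., in GB)
    arrival: int    # a_i: arrival slot
    deadline: int   # tau_i: deadline slot

# Directed graph G = (V, E): V is the set of data centers, E the set of directed links.
V = {"DC1", "DC2", "DC3"}
E = [("DC1", "DC2"), ("DC2", "DC3"), ("DC1", "DC3")]

# Example of a request collected by a data center proxy server in some slot.
r1 = FlowRequest(src="DC1", dst="DC3", volume=2048.0, arrival=1, deadline=10)
```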
For a large flow r_i, the time during which it may transmit data is limited to the time slices within the interval [a_i, τ_i]. In addition, there may be multiple feasible paths between the source and destination nodes s_i and t_i of a request, each path consisting of one or more links e in E connected in series. As described above, x_{i,e}(t) denotes the amount of data of the i-th request transmitted on link e in slot t, which yields the traffic constraint:

∑_{e∈δ+(v)} x_{i,e}(t) = ∑_{e∈δ−(v)} x_{i,e}(t),  for every request i, every node v with v ≠ s_i and v ≠ t_i, and every t ∈ [a_i, τ_i] with t ∈ N+
This constraint means that, at every node other than its source and destination nodes and at every moment, any large flow must satisfy flow conservation: the sum of the flow belonging to that large flow leaving the node must equal the sum of the flow belonging to that large flow entering the node. Here δ+(v) denotes the set of all directed edges with node v as their starting point, and δ−(v) denotes the set of all directed edges with node v as their end point.
The other traffic constraint is:

∑_{t∈[a_i,τ_i]} ( ∑_{e∈δ+(s_i)} x_{i,e}(t) − ∑_{e∈δ−(s_i)} x_{i,e}(t) ) = d_i,  for every request i
This constraint guarantees that, summed over all moments, the flow belonging to a large flow that leaves its source node minus the flow belonging to that large flow that enters the source node equals the total data transmission amount of the flow; its effect is to guarantee that all large flows can be completed within the specified time.
To guarantee that the total rate at which any link transmits traffic in any transmission slot does not exceed the amount of bandwidth rented for that link, x_{i,e}(t) must satisfy the capacity constraint:

∑_i x_{i,e}(t) ≤ c_e · δ_c · δ_t,  for every link e ∈ E and every slot t

where c_e denotes the number of bandwidth units the data center owner rents on link e, δ_c denotes the size of one bandwidth unit, and δ_t denotes the length of each time slice.
Since the data center owner must rent bandwidth in whole units, c_e is an integer variable and therefore needs to satisfy the integer constraint:

c_e ∈ N,  for every link e ∈ E
To realize the invention's goal of minimizing the bandwidth rental cost, this scheme uses P0 to denote the optimization problem of minimizing the objective function

min ∑_{e∈E} c_e u_e

subject to the traffic constraints, the capacity constraint, and the integer constraint, where u_e denotes the unit bandwidth price of link e.
The central scheduler uses this model to plan the traffic requests. Referring to Fig. 2, the specific procedure is:
Step (1): a lease period is divided into several transmission slots, i.e., 1, ..., T. A directed graph G = (V, E) represents the data centers and the links between them, where V is the node set of the graph, representing the set of all data centers, and E is the edge set of the graph, representing the set of all links. A five-tuple r_i = (s_i, t_i, d_i, a_i, τ_i) represents one large flow, where s_i, t_i, d_i, a_i, τ_i respectively denote the source node, destination node, data volume, arrival time, and deadline of the i-th large flow. In every slot, each data center proxy server periodically collects the source node, destination node, data volume, arrival time, and deadline of the traffic requests;
Step (2): each data center proxy server sends the traffic request information to the central controller for scheduling;
Step (3): the central scheduling system runs the PDA algorithm, whose input is the flow request information sent by each data center proxy server. At initialization all bandwidth values are set to c_e = 0. The PDA algorithm computes the minimum additional bandwidth overhead while taking into account the influence of small flows on the bandwidth cost; its specific steps are as follows:
Step (3a): relax the original integer program by turning the integer variables into continuous variables, then solve the linear program of the relaxed model; the continuous solution yields a charged bandwidth value b_e for each link. Since fees are actually charged per integer unit of bandwidth, the corresponding integer bandwidth value is c_e. Initialize c_e ← ⌈b_e⌉ according to the LP solution, and initialize the minimum cost M ← ∑_{e∈E} c_e u_e, where u_e denotes the unit bandwidth price of link e;
Step (3b): among the links whose c_e is not equal to b_e, choose the K links for which the gap between c_e and b_e is smallest, and fix their c_e;
Step (3c): if no link with c_e not equal to b_e can be found, jump to step (4);
Step (3d): solve the linear program with the K links fixed, recompute c_e ← ⌈b_e⌉ according to the LP solution, and compute the cost obj ← ∑_{e∈E} c_e u_e;
Step (3e): if the current iteration result obj is less than the known minimum cost M, update M ← obj and save the charged bandwidth c_e of every link;
Step (3f): if the number of iterations exceeds the threshold J, exit the iteration and execute step (4); otherwise increase the iteration count by one and jump back to step (3b);
Step (4): generate the scheduling scheme and send the scheduling result to each data center proxy server; a sketch of how the resulting per-slot amounts x_{i,e}(t) can be turned into rate assignments for the proxy servers follows these steps.
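Purely as an illustration of step (4), the solved per-slot amounts x_{i,e}(t) could be converted into per-slot rate limits grouped by the originating data center before being sent to the proxy servers; the dictionary layout, the function name, and the division by δ_t are assumptions of this sketch.

```python
def schedule_to_rates(x_solution, requests, delta_t=1.0):
    """Convert solved x_{i,e}(t) values (data per slot) into per-slot rates.

    x_solution: {(i, e, t): amount of data of request i sent on link e in slot t}.
    Returns {source_data_center: {(t, e): rate}} so that each proxy server
    only receives the entries for the flows it originates.
    """
    plan = {}
    for (i, e, t), amount in x_solution.items():
        if amount is None or amount <= 0:
            continue
        src = requests[i].src              # proxy server responsible for this entry
        rate = amount / delta_t            # data volume per slot -> transmission rate
        plan.setdefault(src, {})[(t, e)] = rate
    return plan
```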
In conclusion the invention proposes a kind of flow scheduling sides of the low bandwidth overhead of data-oriented center wide area network
Case.The program can ensure that all big stream is timely completed, while not introduce additional storage overhead.Under the premise of herein, the party
Case greatly improves link utilization, minimizes every stream bring extra bandwidth and rents expense, to save in data
The operation cost of the heart.
Claims (3)
1. A low-bandwidth-overhead traffic scheduling method for a data center wide area network, characterized in that it is implemented in the data center wide area network according to the following steps:
Step (1): a lease period is divided into several transmission slots, i.e., 1, ..., T. A directed graph G = (V, E) represents the data centers and the links between them, where V is the node set of the graph, representing the set of all data centers, and E is the edge set of the graph, representing the set of all links. A five-tuple r_i = (s_i, t_i, d_i, a_i, τ_i) represents one large flow, where s_i, t_i, d_i, a_i, τ_i respectively denote the source node, destination node, data volume, arrival time, and deadline of the i-th large flow. In every slot, each data center proxy server periodically collects the source node, destination node, data volume, arrival time, and deadline of the traffic requests;
Step (2): each data center proxy server sends the traffic request information to the central controller for scheduling;
Step (3): the central scheduling system runs the PDA algorithm, whose input is the flow request information sent by each data center proxy server; at initialization all bandwidth values are set to c_e = 0; the PDA algorithm computes the minimum additional bandwidth overhead while taking into account the influence of small flows on the bandwidth cost;
Step (4): the scheduling scheme is generated and the scheduling result is sent to each data center proxy server.
2. The low-bandwidth-overhead traffic scheduling method for the online data center scenario according to claim 1, characterized in that P0 denotes the optimization problem of minimizing the objective function
min ∑_{e∈E} c_e u_e
subject to three constraints in total (the traffic constraints, the capacity constraint, and the integer constraint), i.e., minimizing the network bandwidth cost incurred in transmitting the traffic;
There are two traffic constraints. The first traffic constraint is:
∑_{e∈δ+(v)} x_{i,e}(t) = ∑_{e∈δ−(v)} x_{i,e}(t),  for every request i, every node v with v ≠ s_i and v ≠ t_i, and every t ∈ [a_i, τ_i] with t ∈ N+
where δ+(v) denotes the set of all directed edges with node v as their starting point, δ−(v) denotes the set of all directed edges with node v as their end point, x_{i,e}(t) denotes the amount of data of the i-th request transmitted on link e in slot t, and N+ denotes the positive integers;
The other traffic constraint is:
∑_{t∈[a_i,τ_i]} ( ∑_{e∈δ+(s_i)} x_{i,e}(t) − ∑_{e∈δ−(s_i)} x_{i,e}(t) ) = d_i,  for every request i
where δ+(s_i) denotes the set of all directed edges with node s_i as their starting point and δ−(s_i) denotes the set of all directed edges with node s_i as their end point;
The capacity constraint is:
∑_i x_{i,e}(t) ≤ c_e · δ_c · δ_t,  for every link e ∈ E and every slot t
where c_e is the bandwidth value rented on edge e, i.e., the number of bandwidth units the data center owner rents on link e, δ_c denotes the size of one bandwidth unit, and δ_t denotes the length of each time slice;
The integer constraint is:
c_e ∈ N,  for every link e ∈ E
where N denotes the integers.
3. The low-bandwidth-overhead traffic scheduling method for the online data center scenario according to claim 1, characterized in that the specific steps of computing the minimum additional bandwidth overhead with the PDA algorithm while taking into account the influence of small flows on the bandwidth cost are as follows:
Step (3a): relax the original integer program by turning the integer variables into continuous variables, then solve the linear program of the relaxed model; the continuous solution yields a charged bandwidth value b_e for each link; since fees are actually charged per integer unit of bandwidth, the corresponding integer bandwidth value is c_e; initialize c_e ← ⌈b_e⌉ according to the LP solution, and initialize the minimum cost M ← ∑_{e∈E} c_e u_e, where u_e denotes the unit bandwidth price of link e;
Step (3b): among the links whose c_e is not equal to b_e, choose the K links for which the gap between c_e and b_e is smallest, and fix their c_e;
Step (3c): if no link with c_e not equal to b_e can be found, jump to step (4);
Step (3d): solve the linear program with the K links fixed, recompute c_e ← ⌈b_e⌉ according to the LP solution, and compute the cost obj ← ∑_{e∈E} c_e u_e;
Step (3e): if the current iteration result obj is less than the known minimum cost M, update M ← obj and save the charged bandwidth c_e of every link;
Step (3f): if the number of iterations exceeds the threshold J, exit the iteration and execute step (4); otherwise increase the iteration count by one and jump back to step (3b).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810898884.5A CN108833294B (en) | 2018-08-08 | 2018-08-08 | Low-bandwidth-overhead flow scheduling method for data center wide area network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810898884.5A CN108833294B (en) | 2018-08-08 | 2018-08-08 | Low-bandwidth-overhead flow scheduling method for data center wide area network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108833294A true CN108833294A (en) | 2018-11-16 |
CN108833294B CN108833294B (en) | 2020-10-30 |
Family
ID=64153095
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810898884.5A Active CN108833294B (en) | 2018-08-08 | 2018-08-08 | Low-bandwidth-overhead flow scheduling method for data center wide area network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108833294B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103036792A (en) * | 2013-01-07 | 2013-04-10 | 北京邮电大学 | Transmitting and scheduling method for maximizing minimal equity multiple data streams |
CN104966156A (en) * | 2015-06-12 | 2015-10-07 | 中冶南方工程技术有限公司 | Double-layer optimizing method for integrated dispatching of energy of iron and steel enterprise |
CN107454009A (en) * | 2017-09-08 | 2017-12-08 | 清华大学 | The offline scenario low bandwidth overhead flow scheduling scheme at data-oriented center |
CN107483355A (en) * | 2017-09-08 | 2017-12-15 | 清华大学 | The online scene low bandwidth overhead flow scheduling scheme at data-oriented center |
CN107579922A (en) * | 2017-09-08 | 2018-01-12 | 北京信息科技大学 | Network Load Balance apparatus and method |
Non-Patent Citations (1)
Title |
---|
Wenxin Li, et al., "Cost-Minimizing Bandwidth Guarantee for Inter-Datacenter Traffic," IEEE Transactions on Cloud Computing. * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109981334A (en) * | 2019-01-24 | 2019-07-05 | 中山大学 | A kind of live streaming nerve of a covering Cost Optimization Approach with deferred constraint |
CN112202688A (en) * | 2020-09-22 | 2021-01-08 | 临沂大学 | Data evacuation method and system suitable for cloud data center network |
CN112243025A (en) * | 2020-09-22 | 2021-01-19 | 网宿科技股份有限公司 | Node cost scheduling method, electronic device and storage medium |
CN112243025B (en) * | 2020-09-22 | 2023-10-17 | 网宿科技股份有限公司 | Node cost scheduling method, electronic equipment and storage medium |
CN116032845A (en) * | 2023-02-13 | 2023-04-28 | 杭银消费金融股份有限公司 | Data center network overhead management method and system |
CN116032845B (en) * | 2023-02-13 | 2024-07-19 | 杭银消费金融股份有限公司 | Data center network overhead management method and system |
Also Published As
Publication number | Publication date |
---|---|
CN108833294B (en) | 2020-10-30 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||