CN108833294B - Low-bandwidth-overhead flow scheduling method for data center wide area network - Google Patents

Low-bandwidth-overhead flow scheduling method for data center wide area network

Info

Publication number
CN108833294B
CN108833294B (application CN201810898884.5A)
Authority
CN
China
Prior art keywords
bandwidth
data center
flow
link
integer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810898884.5A
Other languages
Chinese (zh)
Other versions
CN108833294A (en)
Inventor
崔勇
杨振杰
刘亚东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201810898884.5A priority Critical patent/CN108833294B/en
Publication of CN108833294A publication Critical patent/CN108833294A/en
Application granted granted Critical
Publication of CN108833294B publication Critical patent/CN108833294B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/52Queue scheduling by attributing bandwidth to queues
    • H04L47/522Dynamic queue service slot or variable bandwidth allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A low-bandwidth-overhead traffic scheduling method for a data center wide area network is a wide area network traffic scheduling control scheme. By reasonably scheduling the transmission of large flows while taking small flows, link failures, and flow-completion-time guarantees into account, the method reduces the bandwidth rental expense of data center wide area network users. The system executes the following steps: 1) each data center proxy server periodically receives the flow requests (including flow demand, deadline, and source and destination nodes) of its corresponding data center and sends the request information to a central controller; 2) the central controller reasonably schedules the large flows, considering small flows, link failures, and flow-completion-time guarantees, so as to reduce network bandwidth overhead. On the premise of guaranteeing service quality, the invention effectively reduces the expense of renting bandwidth from internet service providers and lowers operating costs.

Description

Low-bandwidth-overhead flow scheduling method for data center wide area network
Technical Field
The invention belongs to the technical field of internet, relates to a traffic scheduling technology, and particularly relates to a low-bandwidth-overhead traffic scheduling method for a data center wide area network.
Background
Many network and cloud service providers, such as Microsoft and Google, maintain multiple data centers to support their businesses. These data centers run various globally distributed applications and are located in different geographical regions, so they must communicate with one another across those regions, and the wide area network plays a key role in connecting data centers at different geographical locations. The large volume of data transferred between data centers leads to high bandwidth overhead: data center owners rent wide area network bandwidth from internet service providers every year at a cost of hundreds of millions of dollars. More critically, unreasonable traffic scheduling results in low bandwidth utilization between data centers, with most links utilized no more than 60% of the time, meaning a large fraction of that high bandwidth overhead is wasted. How to schedule traffic reasonably and effectively, reduce bandwidth overhead, and still ensure that data flows complete on time has therefore become an important problem in inter-data-center traffic scheduling.
A large flow is defined as a class of inter-data-center wide area network traffic that is significant, large in data volume, and long in duration. Typically, large flows account for 85% to 95% of inter-data-center traffic, with data volumes ranging from several TB to several PB and durations of up to several hours. Two typical examples: a financial institution remotely backs up transaction records at the end of each trading day, and a search engine periodically synchronizes index entries among data centers. The other class of inter-data-center traffic is interactive small flows, which are short in duration and highly sensitive to latency. Compared with small flows, large flows are not delay-sensitive and can tolerate the latency introduced by centralized-controller scheduling. In summary, reasonably scheduling the large flows is of great significance. In some scenarios, the parameters of all flows are unpredictable over a period of time; the arrival time, deadline, and data volume of a flow become known only after it is generated. Such scenarios are collectively referred to as online scenarios. Reasonable scheduling of large data flows in online scenarios is not only an important guarantee of network service quality but also an effective way to save large bandwidth rental expenses.
Much research work has emerged in recent years around the reasonable scheduling of large flows. One main idea is to add storage devices to data centers and, when data arrives, choose whether to store or forward it, i.e., a store-and-forward strategy. One work proposes temporarily storing arriving data when a link is busy and transmitting it when the link is idle, thereby improving bandwidth utilization in the time dimension. Another work balances bandwidth utilization across links through a store-and-forward strategy, thereby achieving load balancing. Because traffic passing through each data center must be temporarily stored, deployments based on this idea require additional storage devices at every data center, which not only adds storage overhead but also makes traffic scheduling more complex. It is therefore desirable to find a more reasonable scheduling scheme that optimizes bandwidth overhead and guarantees that each large flow completes on time, without increasing storage overhead.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a data-center-oriented low-bandwidth-overhead traffic scheduling method for online scenarios. On the premise of guaranteeing that all large flows complete on time, the method reasonably allocates bandwidth and sets the transmission path for each large flow in every transmission time slot, minimizing the extra bandwidth lease overhead introduced by each large flow and thereby minimizing the total bandwidth leasing cost.
In order to achieve the purpose, the invention adopts the technical scheme that:
the data center wide area network oriented traffic scheduling system with low bandwidth overhead in an online scene is mainly characterized by being implemented in a data center wide area network according to the following steps:
step (1), dividing a lease period into several transmission time slots, namely 1, …, T, and representing the data centers and the links between them by a directed graph G = (V, E), where V is the node set of the directed graph, representing the set of all data centers, and E is the link set of the directed graph, representing the set of all links; a five-tuple r_i = (s_i, t_i, d_i, a_i, τ_i) is used to represent a large flow, where s_i, t_i, d_i, a_i, τ_i respectively represent the source node, destination node, data volume, arrival time and deadline of the i-th large flow; operating a data center proxy server, which periodically, once per time slot, obtains the source node, destination node, data volume, arrival time and deadline of each flow request;
step (2), the data center proxy server sends the flow request information to the central controller for scheduling;
Step (3), the central controller runs PDA algorithm, the input of the PDA algorithm is that each data center proxy server sends flow request information, and when the algorithm is initialized, all bandwidth values c are madeeCalculate the minimum extra bandwidth by PDA algorithm ═ 0The method comprises the following steps of (1) overhead and considering the influence of small flows on bandwidth overhead:
step (3a), relaxing the original integer programming problem by changing the integer variables into continuous variables, then solving the linear program of the relaxed model; the continuous charging bandwidth value of each link in this solution is ĉ_e; since charging is actually based on integer bandwidth, the corresponding integer bandwidth value is c_e; according to the linear-programming solution, initialize c_e ← ⌈ĉ_e⌉, and initialize the minimum cost M ← Σ_{e∈E} c_e·u_e, where u_e represents the unit bandwidth price of link e;
step (3b), selecting the K links whose rounding gap ⌈ĉ_e⌉ − ĉ_e is smallest and whose c_e is not equal to ĉ_e, and fixing their c_e;
step (3c), if no link with c_e not equal to ĉ_e can be found, jumping to execute step (4);
step (3d), if links with c_e not equal to ĉ_e can be found, solving the linear program with the K links fixed, computing c_e ← ⌈ĉ_e⌉ according to the linear-programming solution, and computing the cost obj ← Σ_{e∈E} c_e·u_e;
step (3e), if the iteration result obj is smaller than the known minimum cost M, updating the minimum cost M ← obj and saving the charged bandwidth c_e of each link;
step (3f), judging whether the number of iterations exceeds the threshold J; if so, exiting the iteration and executing step (4); otherwise, increasing the iteration count by one and jumping to execute step (3b);
step (4), generating a scheduling scheme and sending a scheduling result to each data center proxy server;
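The iterative structure of steps (3a) to (3f) can be sketched as follows. This is an illustrative skeleton only, not the claimed implementation: the linear-programming solve is abstracted behind a caller-supplied placeholder function (solve_lp), and the ceiling-based rounding and the smallest-gap selection rule are assumptions inferred from the description:

```python
import math

def pda_skeleton(solve_lp, links, unit_price, K, J):
    """Skeleton of the PDA iteration in steps (3a)-(3f).

    solve_lp(fixed) stands in for solving the relaxed linear program with
    some links' bandwidths fixed; it must return a dict mapping each link
    to its continuous charging bandwidth c_hat_e.
    """
    fixed = {}                                   # links whose c_e is pinned
    c_hat = solve_lp(fixed)                      # step (3a): relaxed LP
    c = {e: math.ceil(c_hat[e]) for e in links}  # round up to integer units
    best_cost = sum(c[e] * unit_price[e] for e in links)
    best_c = dict(c)
    for _ in range(J):                           # step (3f): iteration bound
        # step (3b): fractional, not-yet-fixed links, smallest gap first
        frac = sorted((e for e in links
                       if c[e] != c_hat[e] and e not in fixed),
                      key=lambda e: c[e] - c_hat[e])
        if not frac:                             # step (3c): all integral
            break
        for e in frac[:K]:                       # fix K links
            fixed[e] = c[e]
        c_hat = solve_lp(fixed)                  # step (3d): re-solve LP
        c = {e: fixed.get(e, math.ceil(c_hat[e])) for e in links}
        obj = sum(c[e] * unit_price[e] for e in links)
        if obj < best_cost:                      # step (3e): keep the best
            best_cost, best_c = obj, dict(c)
    return best_cost, best_c
```

In use, solve_lp would wrap a real LP solver over the P0 model with the integer constraint relaxed; any solver exposing the continuous optimum per link fits this interface.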
The present invention uses P0 to express the objective function under three constraints (flow constraints, capacity constraint, and integer constraint):

    min Σ_{e∈E} c_e·u_e

a minimization problem, namely minimizing the network bandwidth cost of transmitting the flows;
wherein there are two flow constraints; the first flow constraint is:

    Σ_{e∈δ+(v)} x_{i,e}(t) = Σ_{e∈δ−(v)} x_{i,e}(t),  ∀v ≠ s_i, v ≠ t_i, ∀i, t ∈ [a_i, τ_i] and t ∈ N+

where δ+(v) represents the set of all directed edges starting at node v, δ−(v) represents the set of all directed edges ending at node v, x_{i,e}(t) represents the amount of data transmitted at time t on link e for the i-th request, and N+ represents the positive integers;
the other flow constraint is:

    Σ_{t=a_i}^{τ_i} ( Σ_{e∈δ+(s_i)} x_{i,e}(t) − Σ_{e∈δ−(s_i)} x_{i,e}(t) ) = d_i,  ∀i

where δ+(s_i) represents the set of all directed edges starting at node s_i, and δ−(s_i) represents the set of all directed edges ending at node s_i;
the capacity constraint is:

    Σ_i x_{i,e}(t) ≤ c_e·Δc·Δt,  ∀e ∈ E, ∀t

where c_e, the value of the bandwidth leased on link e, represents the number of units of bandwidth the data center owner leases on link e, Δc represents the size of one unit of bandwidth, and Δt represents the size of each time slice;
the integer constraint is:

    c_e ∈ N,  ∀e ∈ E

where N represents the integers.
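As an illustrative sketch only (not part of the claims), the three constraint families above can be checked for a concrete schedule x_{i,e}(t) as follows; the function name and data layout are hypothetical:

```python
import math
from collections import defaultdict

def check_schedule(x, requests, links, c, unit_bw, slot_len):
    """Check a schedule x[(i, e, t)] -> data amount against the P0 constraints.

    requests[i] = (s_i, t_i, d_i, a_i, tau_i); links is a list of directed
    edges (u, v); c[e] is the integer number of bandwidth units leased on e;
    unit_bw and slot_len are the unit bandwidth size and time-slice size.
    """
    out_edges = defaultdict(list)   # delta+(v): edges leaving v
    in_edges = defaultdict(list)    # delta-(v): edges entering v
    for (u, v) in links:
        out_edges[u].append((u, v))
        in_edges[v].append((u, v))

    for i, (s, dst, d, a, tau) in enumerate(requests):
        # First flow constraint: conservation at intermediate nodes, every slot.
        for t in range(a, tau + 1):
            for v in out_edges.keys() | in_edges.keys():
                if v in (s, dst):
                    continue
                outflow = sum(x.get((i, e, t), 0) for e in out_edges[v])
                inflow = sum(x.get((i, e, t), 0) for e in in_edges[v])
                if not math.isclose(outflow, inflow):
                    return False
        # Second flow constraint: net outflow from the source equals d_i.
        net = sum(x.get((i, e, t), 0) for t in range(a, tau + 1)
                  for e in out_edges[s])
        net -= sum(x.get((i, e, t), 0) for t in range(a, tau + 1)
                   for e in in_edges[s])
        if not math.isclose(net, d):
            return False
    # Capacity constraint on every link and every used slot.
    slots = {t for (_, _, t) in x}
    for e in links:
        for t in slots:
            total = sum(x.get((i, e, t), 0) for i in range(len(requests)))
            if total > c[e] * unit_bw * slot_len + 1e-9:
                return False
    return True
```

For example, one flow of 4 data units from A to B over slots 1–2 on a single link with one leased unit of bandwidth 2 per slot satisfies all constraints, while leasing zero units violates the capacity constraint.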
Compared with the prior art, the invention has the beneficial effects that:
1) the overhead of bandwidth leasing is minimized on the premise that it is guaranteed that all large flows can be transmitted within a specified time.
2) The scheme provided by the invention takes into account that ISPs charge at a certain bandwidth granularity, making it more practical.
3) The scheme provided by the invention does not need to introduce additional storage equipment, so that the total scheduling overhead is saved.
Drawings
Fig. 1 is a schematic diagram of an online scene oriented to a data center.
Fig. 2 is a specific flowchart of the data-center-oriented online-scenario low-bandwidth-overhead traffic scheduling scheme, where c_e is the value of the bandwidth leased on link e.
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the drawings and examples.
As shown in fig. 1, the present invention considers the scheduling problem of large flows within a lease period, dividing the period into several transmission slots, i.e., 1, …, T. The data centers and the links between them are represented by a directed graph G = (V, E), where V is the node set of the graph, representing the set of all data centers, and E is the edge set of the directed graph, representing the set of all links. A five-tuple r_i = (s_i, t_i, d_i, a_i, τ_i) represents a large flow, where s_i, t_i, d_i, a_i, τ_i respectively represent the source node, destination node, data volume, arrival time and deadline of the i-th large flow.
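For illustration only (not part of the claimed method), the five-tuple flow request r_i = (s_i, t_i, d_i, a_i, τ_i) and the directed graph G = (V, E) described above could be represented in code as follows; all names and values are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRequest:
    """Five-tuple r_i = (s_i, t_i, d_i, a_i, tau_i) for one large flow."""
    src: str        # s_i: source data center
    dst: str        # t_i: destination data center
    demand: float   # d_i: data volume to transfer
    arrival: int    # a_i: first usable time slot
    deadline: int   # tau_i: last usable time slot

# Directed graph G = (V, E): V is the set of data centers,
# E the set of inter-data-center links as (source, destination) pairs.
V = {"DC1", "DC2", "DC3"}
E = [("DC1", "DC2"), ("DC2", "DC3"), ("DC1", "DC3")]

r1 = FlowRequest(src="DC1", dst="DC3", demand=4.0, arrival=1, deadline=3)
print(r1.demand, r1.deadline - r1.arrival + 1)  # data volume, usable slots
```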
For a large flow r_i, the time for transmitting data is limited to the time slices within the interval [a_i, τ_i]. In addition, there may be multiple possible paths between the source and destination nodes s_i and t_i of a request, each path formed by one or more links e ∈ E in series. Using x_{i,e}(t), as above, to represent the amount of data transmitted at time t on link e for the i-th request, a flow constraint is obtained:

    Σ_{e∈δ+(v)} x_{i,e}(t) = Σ_{e∈δ−(v)} x_{i,e}(t),  ∀v ≠ s_i, v ≠ t_i, ∀i, t ∈ [a_i, τ_i] and t ∈ N+
The implication of this constraint is that, at every time instant, any big flow must satisfy traffic conservation at every node other than its source and destination, i.e., the sum of the flows belonging to the big flow that leave the node must equal the sum of the flows belonging to the big flow that enter the node. Here δ+(v) represents the set of all directed edges starting at node v, and δ−(v) represents the set of all directed edges ending at node v.
The other flow constraint is:

    Σ_{t=a_i}^{τ_i} ( Σ_{e∈δ+(s_i)} x_{i,e}(t) − Σ_{e∈δ−(s_i)} x_{i,e}(t) ) = d_i,  ∀i
This constraint ensures that, summed over all time slices, the flow belonging to any big flow that leaves its source node, minus the flow belonging to that big flow that enters its source node, equals the big flow's total data volume. Its function is to guarantee that every big flow completes within the specified time.
To ensure that the total traffic transmitted on any link in any transmission slot does not exceed the leased link bandwidth, x_{i,e}(t) must satisfy the capacity constraint:

    Σ_i x_{i,e}(t) ≤ c_e·Δc·Δt,  ∀e ∈ E, ∀t

where c_e represents the number of units of bandwidth leased by the data center owner on link e, Δc represents the size of one unit of bandwidth, and Δt represents the size of each time slice.
Since the data center owner must lease an integer number of bandwidth units, c_e is an integer variable and therefore must satisfy the integer constraint:

    c_e ∈ N,  ∀e ∈ E
In order to achieve the invention's goal of minimizing bandwidth lease overhead, the scheme uses P0 to express the objective function under the three constraints (flow constraints, capacity constraint, and integer constraint):

    min Σ_{e∈E} c_e·u_e

a minimization problem, where u_e denotes the unit bandwidth price of link e.
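A minimal numerical sketch of the objective Σ_{e∈E} c_e·u_e, assuming for illustration that c_e is obtained by rounding the per-slot peak traffic of each link up to whole bandwidth units (the invention actually computes c_e via the PDA algorithm; the numbers below are hypothetical):

```python
import math

def leasing_cost(peak_traffic, unit_price, unit_bw, slot_len):
    """Cost sum_e c_e * u_e with c_e = ceil(peak_e / (unit_bw * slot_len)).

    peak_traffic[e] is the largest amount of data any single slot carries
    on link e, so c_e leased units suffice in every slot.
    """
    cost = 0.0
    for e, peak in peak_traffic.items():
        c_e = math.ceil(peak / (unit_bw * slot_len))  # integer units leased
        cost += c_e * unit_price[e]
    return cost

# Hypothetical two-link example: peaks of 7 and 3 data units per slot,
# where one bandwidth unit moves 2 data units per slot.
print(leasing_cost({"e1": 7.0, "e2": 3.0},
                   {"e1": 10.0, "e2": 5.0}, unit_bw=2.0, slot_len=1.0))
```

Here link e1 needs ⌈7/2⌉ = 4 units at price 10 and e2 needs ⌈3/2⌉ = 2 units at price 5, for a total cost of 50.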
The central controller plans the flow request by using the model, and referring to fig. 2, the algorithm flow is as follows:
step (1), dividing a lease period into several transmission time slots, namely 1, …, T, and representing the data centers and the links between them by a directed graph G = (V, E), where V is the node set of the directed graph, representing the set of all data centers, and E is the link set of the directed graph, representing the set of all links; a five-tuple r_i = (s_i, t_i, d_i, a_i, τ_i) is used to represent a large flow, where s_i, t_i, d_i, a_i, τ_i respectively represent the source node, destination node, data volume, arrival time and deadline of the i-th large flow; operating a data center proxy server, which periodically, once per time slot, obtains the source node, destination node, data volume, arrival time and deadline of each flow request;
step (2), the data center proxy server sends the flow request information to the central controller for scheduling;
Step (3), the central controller runs PDA algorithm, the input of the PDA algorithm is that each data center proxy server sends flow request information, and when the algorithm is initialized, all bandwidth values c are madeeCalculating the minimum extra bandwidth overhead through the PDA algorithm and considering the influence of the small flow on the bandwidth overhead, wherein the specific steps are as follows:
step (3a), relaxing the original integer programming problem by changing the integer variables into continuous variables, then solving the linear program of the relaxed model; the continuous charging bandwidth value of each link in this solution is ĉ_e; since charging is actually based on integer bandwidth, the corresponding integer bandwidth value is c_e; according to the linear-programming solution, initialize c_e ← ⌈ĉ_e⌉, and initialize the minimum cost M ← Σ_{e∈E} c_e·u_e, where u_e represents the unit bandwidth price of link e;
step (3b), selecting the K links whose rounding gap ⌈ĉ_e⌉ − ĉ_e is smallest and whose c_e is not equal to ĉ_e, and fixing their c_e;
step (3c), if no link with c_e not equal to ĉ_e can be found, jumping to execute step (4);
step (3d), if links with c_e not equal to ĉ_e can be found, solving the linear program with the K links fixed, computing c_e ← ⌈ĉ_e⌉ according to the linear-programming solution, and computing the cost obj ← Σ_{e∈E} c_e·u_e;
step (3e), if the iteration result obj is smaller than the known minimum cost M, updating the minimum cost M ← obj and saving the charged bandwidth c_e of each link;
step (3f), judging whether the number of iterations exceeds the threshold J; if so, exiting the iteration and executing step (4); otherwise, increasing the iteration count by one and jumping to execute step (3b);
step (4), generating a scheduling scheme and sending a scheduling result to each data center proxy server;
in summary, the present invention provides a low-bandwidth overhead traffic scheduling scheme for a data center wide area network. The scheme can ensure that all the large flows are completed on time, and meanwhile, extra storage overhead is not introduced. Under the premise, the scheme greatly improves the link utilization rate, and minimizes the extra bandwidth leasing overhead brought by each flow, thereby saving the operation cost of the data center.

Claims (1)

1. The low-bandwidth-overhead traffic scheduling method for the data center wide area network is realized in the data center wide area network according to the following steps:
step (1), dividing a lease period into several transmission time slots, namely 1, …, T, and representing the data centers and the links between them by a directed graph G = (V, E), where V is the node set of the directed graph, representing the set of all data centers, and E is the link set of the directed graph, representing the set of all links; a five-tuple r_i = (s_i, t_i, d_i, a_i, τ_i) is used to represent a large flow, where s_i, t_i, d_i, a_i, τ_i respectively represent the source node, destination node, data volume, arrival time and deadline of the i-th large flow; operating a data center proxy server, which periodically, once per time slot, obtains the source node, destination node, data volume, arrival time and deadline of each flow request;
step (2), the data center proxy server sends the flow request information to the central controller for scheduling;
step (3), the central controller runs a PDA algorithm; the input of the PDA algorithm is the flow request information sent by each data center proxy server; when the algorithm is initialized, all bandwidth values c_e are set to 0; the minimum extra bandwidth overhead is calculated by the PDA algorithm while considering the influence of small flows on the bandwidth overhead;
step (4), generating a scheduling scheme and sending a scheduling result to each data center proxy server;
the method is characterized in that the specific steps of calculating the minimum extra bandwidth overhead through the PDA algorithm and considering the influence of the small flow on the bandwidth overhead are as follows:
step (3a), relaxing the original integer programming problem by changing the integer variables into continuous variables, then solving the linear program of the relaxed model; the continuous charging bandwidth value of each link in this solution is ĉ_e; since charging is actually based on integer bandwidth, the corresponding integer bandwidth value is c_e; according to the linear-programming solution, initialize c_e ← ⌈ĉ_e⌉, and initialize the minimum cost M ← Σ_{e∈E} c_e·u_e, where u_e represents the unit bandwidth price of link e;
step (3b), selecting the K links whose rounding gap ⌈ĉ_e⌉ − ĉ_e is smallest and whose c_e is not equal to ĉ_e, and fixing their c_e;
step (3c), if no link with c_e not equal to ĉ_e can be found, jumping to execute step (4);
step (3d), if links with c_e not equal to ĉ_e can be found, solving the linear program with the K links fixed, computing c_e ← ⌈ĉ_e⌉ according to the linear-programming solution, and computing the cost obj ← Σ_{e∈E} c_e·u_e;
step (3e), if the iteration result obj is smaller than the known minimum cost M, updating the minimum cost M ← obj and saving the charging bandwidth c_e of each link;
step (3f), judging whether the number of iterations exceeds the threshold J; if so, exiting the iteration and executing step (4); otherwise, increasing the iteration count by one and jumping to execute step (3b);
wherein P0 is used to express the objective function under three constraints (flow constraints, capacity constraint, and integer constraint):

    min Σ_{e∈E} c_e·u_e

a minimization problem, namely minimizing the network bandwidth cost of transmitting the flows;
wherein there are two flow constraints; the first flow constraint is:

    Σ_{e∈δ+(v)} x_{i,e}(t) = Σ_{e∈δ−(v)} x_{i,e}(t),  ∀v ≠ s_i, v ≠ t_i, ∀i, t ∈ [a_i, τ_i] and t ∈ N+

where δ+(v) represents the set of all directed edges starting at node v, δ−(v) represents the set of all directed edges ending at node v, x_{i,e}(t) represents the amount of data transmitted at time t on link e for the i-th request, and N+ represents the positive integers;
the other flow constraint is:

    Σ_{t=a_i}^{τ_i} ( Σ_{e∈δ+(s_i)} x_{i,e}(t) − Σ_{e∈δ−(s_i)} x_{i,e}(t) ) = d_i,  ∀i

where δ+(s_i) represents the set of all directed edges starting at node s_i, and δ−(s_i) represents the set of all directed edges ending at node s_i;
the capacity constraint is:

    Σ_i x_{i,e}(t) ≤ c_e·Δc·Δt,  ∀e ∈ E, ∀t

where c_e, the value of the bandwidth leased on link e, represents the number of units of bandwidth the data center owner leases on link e, Δc represents the size of one unit of bandwidth, and Δt represents the size of each time slice;
the integer constraint is:

    c_e ∈ N,  ∀e ∈ E

where N represents the integers.
CN201810898884.5A 2018-08-08 2018-08-08 Low-bandwidth-overhead flow scheduling method for data center wide area network Active CN108833294B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810898884.5A CN108833294B (en) 2018-08-08 2018-08-08 Low-bandwidth-overhead flow scheduling method for data center wide area network


Publications (2)

Publication Number Publication Date
CN108833294A CN108833294A (en) 2018-11-16
CN108833294B true CN108833294B (en) 2020-10-30

Family

ID=64153095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810898884.5A Active CN108833294B (en) 2018-08-08 2018-08-08 Low-bandwidth-overhead flow scheduling method for data center wide area network

Country Status (1)

Country Link
CN (1) CN108833294B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109981334A (en) * 2019-01-24 2019-07-05 中山大学 A kind of live streaming nerve of a covering Cost Optimization Approach with deferred constraint
CN112243025B (en) * 2020-09-22 2023-10-17 网宿科技股份有限公司 Node cost scheduling method, electronic equipment and storage medium
CN112202688A (en) * 2020-09-22 2021-01-08 临沂大学 Data evacuation method and system suitable for cloud data center network
CN116032845B (en) * 2023-02-13 2024-07-19 杭银消费金融股份有限公司 Data center network overhead management method and system

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103036792A (en) * 2013-01-07 2013-04-10 北京邮电大学 Transmitting and scheduling method for maximizing minimal equity multiple data streams
CN107454009A (en) * 2017-09-08 2017-12-08 清华大学 The offline scenario low bandwidth overhead flow scheduling scheme at data-oriented center
CN107483355A (en) * 2017-09-08 2017-12-15 清华大学 The online scene low bandwidth overhead flow scheduling scheme at data-oriented center

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN104966156B (en) * 2015-06-12 2019-06-07 中冶南方工程技术有限公司 A kind of double-deck optimization method of iron and steel enterprise's Integrated Energy scheduling problem
CN107579922B (en) * 2017-09-08 2020-03-24 北京信息科技大学 Network load balancing device and method

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN103036792A (en) * 2013-01-07 2013-04-10 北京邮电大学 Transmitting and scheduling method for maximizing minimal equity multiple data streams
CN107454009A (en) * 2017-09-08 2017-12-08 清华大学 The offline scenario low bandwidth overhead flow scheduling scheme at data-oriented center
CN107483355A (en) * 2017-09-08 2017-12-15 清华大学 The online scene low bandwidth overhead flow scheduling scheme at data-oriented center

Non-Patent Citations (1)

Title
Cost-minimizing Bandwidth Guarantee for Inter-datacenter Traffic; Wenxin Li et al.; IEEE Transactions on Cloud Computing; 2016-12-31; pp. 1-12 *

Also Published As

Publication number Publication date
CN108833294A (en) 2018-11-16

Similar Documents

Publication Publication Date Title
CN108833294B (en) Low-bandwidth-overhead flow scheduling method for data center wide area network
Liao et al. Dependency-aware application assigning and scheduling in edge computing
WO2019179250A1 (en) Scheduling method, scheduler, storage medium, and system
Liu et al. Resource preprocessing and optimal task scheduling in cloud computing environments
US8843929B1 (en) Scheduling in computer clusters
CN107483355B (en) Data center-oriented online scene low-bandwidth overhead traffic scheduling scheme
Che et al. A deep reinforcement learning approach to the optimization of data center task scheduling
CN102281290A (en) Emulation system and method for a PaaS (Platform-as-a-service) cloud platform
CN107454009B (en) Data center-oriented offline scene low-bandwidth overhead traffic scheduling scheme
Achary et al. Dynamic job scheduling using ant colony optimization for mobile cloud computing
CN104580447A (en) Spatio-temporal data service scheduling method based on access heat
Bozanta et al. Courier routing and assignment for food delivery service using reinforcement learning
CN114071582A (en) Service chain deployment method and device for cloud-edge collaborative Internet of things
Li et al. Scalable and dynamic replica consistency maintenance for edge-cloud system
Zheng et al. Learning based task offloading in digital twin empowered internet of vehicles
Zhang et al. Employ AI to improve AI services: Q-learning based holistic traffic control for distributed co-inference in deep learning
Luthra et al. TCEP: Transitions in operator placement to adapt to dynamic network environments
Ebrahim et al. Resilience and load balancing in fog networks: A multi-criteria decision analysis approach
Li et al. Efficient adaptive matching for real-time city express delivery
Tao et al. Congestion-aware traffic allocation for geo-distributed data centers
WO2024146193A1 (en) Sdn-based routing path selection method and apparatus, and storage medium
Shahzaad et al. Top-k dynamic service composition in skyway networks
CN113014649A (en) Cloud Internet of things load balancing method, device and equipment based on deep learning
Zhang et al. Dynamic decision-making for knowledge-enabled distributed resource configuration in cloud manufacturing considering stochastic order arrival
Feng et al. Load shedding and distributed resource control of stream processing networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant