CN110048966A - Coflow scheduling method for minimizing system overhead based on deadline - Google Patents

Coflow scheduling method for minimizing system overhead based on deadline Download PDF

Info

Publication number
CN110048966A
CN110048966A
Authority
CN
China
Prior art keywords
deadline
coflow
bandwidth
flow
scheduling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910177871.3A
Other languages
Chinese (zh)
Other versions
CN110048966B (en)
Inventor
李克秋
王春晓
周晓波
徐仁海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201910177871.3A priority Critical patent/CN110048966B/en
Publication of CN110048966A publication Critical patent/CN110048966A/en
Application granted granted Critical
Publication of CN110048966B publication Critical patent/CN110048966B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/52 Queue scheduling by attributing bandwidth to queues
    • H04L47/525 Queue scheduling by attributing bandwidth to queues by redistribution of residual bandwidth
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/56 Queue scheduling implementing delay-aware scheduling
    • H04L47/564 Attaching a deadline to packets, e.g. earliest due date first

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention relates to the technical field of computer networks and to data center networks, and proposes a deadline-based Coflow scheduling method for minimizing system overhead. The method completes more Coflows before their deadlines and designs a dedicated scheduling method for the Coflows that miss their deadlines, minimizing the system overhead brought about by Coflows missing their deadlines. To this end, the technical solution adopted by the present invention is a deadline-based Coflow scheduling method for minimizing system overhead, comprising a scheduling step for Coflows that meet the deadline (Deadline) and a scheduling step for Coflows that miss the deadline, where a Coflow is a set of correlated parallel flows in one communication stage of a parallel computation. The present invention is mainly applied to network communication scenarios.

Description

Coflow scheduling method for minimizing system overhead based on deadline
Technical field
The present invention relates to the technical field of computer networks and to data center networks, and specifically to the research and development of deadline-based Coflow traffic scheduling techniques in data center network environments.
Background technique
A data center is the infrastructure for computation, storage, and data transmission, concentrating a variety of software and hardware resources and critical business systems. With the advent of cloud computing, big data, the Internet of Things, and artificial intelligence, data traffic in data centers is growing explosively. Traditional data processing techniques cannot satisfy the processing requirements of massive data, and parallel computing has emerged in response; the mainstream distributed computing platforms currently include MapReduce, Spark, Dryad, and TensorFlow. In big data processing, the fraction of time that parallel computation spends on network transmission keeps increasing; for example, analysis jobs at Facebook spend 33% of their running time on network transmission. Network transmission is increasingly becoming the bottleneck of application performance, and how to optimize network transmission is therefore a key problem in data center networks.
Traditional traffic scheduling techniques in data center networks are based on individual flows. They try to break the demands of the applications hosted in the data center down to individual flows, schedule the flows, decide the order in which resources such as bandwidth are allocated to the flows, and carry out the resource allocation, in the hope of conveying the demands of the application layer to the transport layer. However, a single flow generally cannot carry the demand of an entire application. An application often has multiple parallel flows whose demands differ (for example, the deadline requirements of different flows may differ), so a single flow cannot represent the demand of the whole application. With flow-based scheduling, one flow of an application may be scheduled and completed very early while other flows of the same application finish late, so the application itself finishes late, fails to satisfy the user's requirement, and leaves a poor user experience; the user may even no longer be willing to host the application in the data center. To remedy this shortcoming of flow scheduling, namely that it cannot convey application demands from the application layer to the transport layer, researchers proposed Coflow scheduling techniques that take a set of flows as the scheduling granularity, bridging the gap between application-layer demand expression and the network transport layer. A Coflow is defined as a set of correlated parallel flows in one communication stage of a parallel computation. For example, the shuffle between the Map and Reduce stages of the MapReduce framework is a Coflow: the data flows in the shuffle are parallel, and only after the last flow finishes transmission, i.e. after the Map stage completes, can the Reduce computation start. The main characteristic of a Coflow is that, although its flows are parallel, the Coflow as a whole is complete only when its last flow finishes; that is, the Coflow completion time (CCT) is determined by the flow that finishes last. Reducing the completion time of a single flow may not reduce the completion time of the application, whereas reducing the completion time of a Coflow almost directly reduces the completion time of the application. How to schedule Coflows has therefore become a key problem in data center network traffic scheduling.
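As a restatement of the definition above (FCT denotes the completion time of an individual flow; the symbol is introduced here only for illustration), the Coflow completion time is
    CCT(C) = max over all flows f in C of FCT(f),
so shortening any flow other than the last-finishing one does not change CCT(C).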
Existing Coflow scheduling methods can be divided into two classes: information-aware scheduling methods and information-agnostic scheduling methods. There are two main optimization objectives for Coflow scheduling: one is to reduce the completion time of Coflows, the other is to let more Coflows complete before their deadlines. For meeting deadlines, existing scheduling mechanisms fall roughly into two kinds. The first judges before scheduling, using an admission-control principle: a Coflow that can meet its deadline is admitted and guaranteed not to have its resources preempted by other Coflows, while a Coflow that cannot meet its deadline is rejected or has all of its flows retransmitted. The second uses a multiplexing principle: the bandwidth is split into two parts, one part scheduling the Coflows that can meet their deadlines in priority order, and the other part scheduling the Coflows that cannot meet their deadlines with weighted fair bandwidth allocation. These scheduling schemes do not consider the different sensitivities of different Coflows to their deadlines and have no efficient scheduling for the Coflows that miss their deadlines. Existing studies show that every additional 100 ms of network latency reduces Amazon's sales revenue by 1%, and every additional 400 ms of latency reduces Google's search volume by 0.6%. It follows that, against this background, traffic that exceeds its deadline brings additional system overhead. How to schedule the Coflows that miss their deadlines is therefore a crucial challenge in the Coflow scheduling problem.
In view of the above background and motivation, a deadline-based Coflow traffic scheduling method is proposed herein: on the one hand, with limited data center network resources (bandwidth), it completes more Coflows before their deadlines; on the other hand, if some Coflows still cannot complete before their deadlines, it considers the system overhead that such Coflows incur after exceeding their deadlines and minimizes that overhead.
Summary of the invention
In order to overcome the deficiencies of the prior art and solve the traffic scheduling problem with deadline requirements, the present invention aims to propose a deadline-based Coflow scheduling method for minimizing system overhead. It not only completes more Coflows before their deadlines but also designs a dedicated scheduling method for the Coflows that miss their deadlines, minimizing the system overhead brought about by Coflows missing their deadlines. To this end, the technical solution adopted by the present invention is a deadline-based Coflow scheduling method for minimizing system overhead, comprising a scheduling step for Coflows that meet the deadline (Deadline) and a scheduling step for Coflows that miss the deadline, where a Coflow is a set of correlated parallel flows in one communication stage of a parallel computation;
(1) Scheduling step for Coflows that meet the deadline
Obtain the relevant information in the network, and first judge with a dynamic method whether a Coflow can meet its deadline. If it can, determine the scheduling order with the Earliest-Deadline-First (EDF) method, i.e. the Coflow whose deadline is closer is scheduled first. After the scheduling order is determined, allocate to each flow the smallest bandwidth that still guarantees it completes before the deadline;
(2) Scheduling step for Coflows that miss the deadline
The scheduling of Coflows that miss their deadlines uses the bandwidth that is still left unused in the network during the scheduling of the Coflows that meet their deadlines: this residual bandwidth is used to schedule the Coflows that miss their deadlines while minimizing the system overhead brought about by the Coflows that exceed their deadlines.
In step (1), all flows are made to complete exactly at the deadline. The specific steps are as follows (a restatement of the feasibility test and bandwidth assignment in formula form follows the list):
1) Obtain network bandwidth information, specifically the remaining bandwidth of each uplink port and downlink port; obtain the information of each arriving Coflow, specifically the identifier of the Coflow, its deadline, its source port numbers and their count, its destination port numbers and their count, and the data volume of each flow contained in the Coflow;
2) Sort all Coflows in EDF order, i.e. sort and schedule them by deadline from smallest to largest;
3) Judge whether the Coflow can complete before its deadline. The specific judgment is: divide the data volume of each flow in the Coflow by the deadline to obtain the corresponding desired bandwidth, then judge whether the remaining available bandwidth on the corresponding egress and ingress ports of each flow can satisfy the desired bandwidth;
4) If, for every flow of the Coflow, the remaining available bandwidth on the corresponding destination and source ports is greater than the desired bandwidth, the Coflow can complete before its deadline. Allocate to it the smallest possible bandwidth so that all of its flows complete at the same time as the flow that finishes last; the allocated bandwidth of a flow is its data volume divided by the deadline of the Coflow;
5) If there exists even one flow of the Coflow whose remaining available bandwidth on the corresponding egress or ingress port is smaller than the desired bandwidth, the Coflow cannot complete before its deadline; put it into the missed-deadline set and let it wait for the subsequent scheduling method for Coflows that miss the deadline;
6) Update the remaining bandwidth of the network links by subtracting the bandwidth allocated in step 4), and proceed to schedule the next Coflow.
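Steps 3) and 4) can be restated compactly as follows (here d_ij is the data volume of the flow from source port i to destination port j, T is the Coflow's deadline, and R_i, R_j are the remaining bandwidths of the two ports; the symbols are introduced only for illustration): the desired bandwidth of the flow is b_ij = d_ij / T, the Coflow is admitted only if b_ij <= R_i and b_ij <= R_j hold for every one of its flows, and an admitted flow is given exactly the rate b_ij, so every flow finishes exactly at the deadline.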
In step (2), a cost function is used to quantify the relationship between the system overhead and the amount by which the deadline is exceeded; the cost function is set as a monotonically increasing linear function of t = CCT - deadline, and the following specific steps are taken (a restatement in formula form follows the list):
1) Obtain the cost function coefficient, i.e. the slope, of every Coflow in the missed-deadline set;
2) Obtain the length of every Coflow in the missed-deadline set, where the length of a Coflow is the data volume of the largest flow it contains;
3) Compute the skew of every Coflow in the missed-deadline set, i.e. the ratio of the Coflow's length to its cost function coefficient, and sort the Coflows by skew from smallest to largest;
4) Allocate the remaining network bandwidth sequentially in the order of step 3).
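Under the linear-cost assumption above, the quantities in steps 1) to 4) can be written as follows (with k the slope and L the length of a Coflow; the notation is illustrative): cost(t) = k * t for t = max(0, CCT - deadline), and skew = L / k; the missed-deadline Coflows are then served in non-decreasing order of skew. Serving the smallest-skew Coflows first mirrors the weighted shortest-job-first rule: short Coflows with steep cost functions are cleared first.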
The features and beneficial effects of the present invention are:
The method achieves efficient scheduling of Coflows in the data center, enables more Coflows to complete before their deadlines, and designs a dedicated method to schedule the flows that miss their deadlines, minimizing the total system overhead brought about by Coflows missing their deadlines.
Description of the drawings:
Fig. 1 shows the abstract model of Coflow network scheduling and an input example.
Fig. 2 shows examples of cost functions.
Fig. 3 shows the scheduling results under all possible scheduling orders (including the total CCT and the system overhead).
Fig. 4 is the flowchart of the deadline-based Coflow scheduling method of the present invention for minimizing system overhead.
Specific embodiment
The present invention aims to solve the traffic scheduling problem with deadline requirements. There are two main goals: first, with limited data center network resources (bandwidth), to complete more Coflows before their deadlines; second, to design a dedicated scheduling scheme for the Coflows that exceed their deadlines and to minimize the system overhead that these Coflows bring by exceeding their deadlines.
To overcome the deficiencies of the prior art, the present invention provides a deadline-based Coflow scheduling method for minimizing system overhead, which not only completes more Coflows before their deadlines but also designs a dedicated scheduling method for the Coflows that miss their deadlines, minimizing the system overhead brought about by Coflows missing their deadlines. The scheduling method of the present invention consists of a scheduling method for Coflows that meet their deadlines and a scheduling method for Coflows that miss their deadlines.
1 Scheduling method for Coflows that meet their deadlines
Obtain the relevant information in the network, and first judge with a dynamic method whether a Coflow can meet its deadline. If it can, determine the scheduling order with the Earliest-Deadline-First (EDF) method, i.e. the Coflow whose deadline is closer is scheduled first. After the scheduling order is determined, we allocate to each flow the smallest bandwidth that still guarantees completion before the deadline. Transmitting a single flow faster does not shorten the completion time of the Coflow, because the completion time of the Coflow is determined by the flow that finishes last; it therefore suffices to let all flows of the Coflow finish at the same time as the flow with the latest completion. Since this method emphasizes completing as many Coflows as possible within their deadlines, we let all flows complete exactly at the deadline; the bandwidth allocated to the Coflow is then the smallest possible, the remaining available bandwidth in the network stays as large as possible, and more Coflows can be scheduled to complete before their respective deadlines. For simplicity of description, all Coflows in this method are assumed to arrive at time 0. The specific steps are as follows (a code sketch of this phase is given after the list):
1) Obtain network bandwidth information, specifically the remaining bandwidth of each uplink port and downlink port; obtain the information of each arriving Coflow, specifically the identifier of the Coflow, its deadline, its ingress port (source port) numbers and their count, its egress port (destination port) numbers and their count, and the data volume of each flow contained in the Coflow.
2) Sort all Coflows in EDF order, i.e. sort and schedule them by deadline from smallest to largest.
3) Judge whether the Coflow can complete before its deadline. The specific judgment is: divide the data volume of each flow in the Coflow by the deadline to obtain the corresponding desired bandwidth, then judge whether the remaining available bandwidth on the corresponding egress and ingress ports of each flow can satisfy the desired bandwidth.
4) If, for every flow of the Coflow, the remaining available bandwidth on the corresponding egress and ingress ports is greater than the desired bandwidth, the Coflow can complete before its deadline. Allocate to it the smallest possible bandwidth so that all of its flows complete at the same time as the flow that finishes last; the allocated bandwidth of a flow is its data volume divided by the deadline of the Coflow.
5) If there exists even one flow of the Coflow whose remaining available bandwidth on the corresponding egress or ingress port is smaller than the desired bandwidth, the Coflow cannot complete before its deadline; put it into the missed-deadline set and let it wait for the subsequent scheduling method for Coflows that miss the deadline.
6) Update the remaining bandwidth of the network links by subtracting the bandwidth allocated in step 4), and proceed to schedule the next Coflow.
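The following is a minimal Python sketch of this admission-and-allocation phase under the big-switch abstraction described in connection with Fig. 1 (all Coflows arrive at time 0, and each flow is identified by its ingress port, egress port, and data volume). The class and function names (Flow, Coflow, schedule_deadline_coflows) are illustrative and not part of the patent; the feasibility test aggregates the desired bandwidth per port, which coincides with the per-flow test of step 3) when a Coflow's flows do not share ports.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Flow:
        ingress: int        # ingress (source) port
        egress: int         # egress (destination) port
        size: float         # data volume of the flow
        rate: float = 0.0   # bandwidth assigned by the scheduler

    @dataclass
    class Coflow:
        cid: int
        deadline: float
        flows: List[Flow]
        cost_slope: float = 1.0   # slope k of the linear cost function (used in phase 2)

    def schedule_deadline_coflows(coflows: List[Coflow],
                                  in_bw: Dict[int, float],
                                  out_bw: Dict[int, float]):
        """Admit Coflows in EDF order and give each admitted flow the minimal
        rate size/deadline; in_bw and out_bw map port id -> remaining bandwidth.
        Returns (admitted, missed)."""
        admitted, missed = [], []
        for c in sorted(coflows, key=lambda c: c.deadline):          # step 2): EDF order
            # step 3): desired bandwidth of every flow, aggregated per port
            need_in: Dict[int, float] = {}
            need_out: Dict[int, float] = {}
            for f in c.flows:
                b = f.size / c.deadline
                need_in[f.ingress] = need_in.get(f.ingress, 0.0) + b
                need_out[f.egress] = need_out.get(f.egress, 0.0) + b
            feasible = (all(b <= in_bw.get(p, 0.0) for p, b in need_in.items()) and
                        all(b <= out_bw.get(p, 0.0) for p, b in need_out.items()))
            if feasible:
                for f in c.flows:                                    # step 4): minimal allocation
                    f.rate = f.size / c.deadline
                for p, b in need_in.items():                         # step 6): update residuals
                    in_bw[p] -= b
                for p, b in need_out.items():
                    out_bw[p] -= b
                admitted.append(c)
            else:                                                    # step 5): defer to phase 2
                missed.append(c)
        return admitted, missed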
2 Scheduling module for Coflows that miss their deadlines
The missed-deadline Coflow scheduling module is intended to schedule the Coflows that miss their deadlines. During the scheduling of the Coflows that meet their deadlines, some bandwidth in the network remains unused; this module uses that residual bandwidth to schedule the Coflows that miss their deadlines and minimizes the system overhead brought about by the Coflows that exceed their deadlines. As the time by which a Coflow exceeds its deadline grows, the overhead it brings also grows; in this method a cost function is used to quantify the relationship between the overhead and the amount by which the deadline is exceeded, and for simplicity the cost function is set as a monotonically increasing linear function of t = CCT - deadline. The specific steps are as follows (a code sketch of this phase is given after the list):
1) Obtain the cost function coefficient (i.e. the slope) of every Coflow in the missed-deadline set.
2) Obtain the length of every Coflow in the missed-deadline set; in this method the length of a Coflow is the data volume of the largest flow it contains.
3) Compute the skew of every Coflow in the missed-deadline set, i.e. the ratio of the Coflow's length to its cost function coefficient, and sort the Coflows by skew from smallest to largest.
4) Allocate the remaining network bandwidth sequentially in the order of step 3).
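A minimal sketch of this second phase, reusing the Python classes introduced above (the helper names coflow_length, overage_cost, and order_missed_coflows are again illustrative): the missed Coflows are ranked by skew = length / cost slope, and the residual bandwidth is then granted to them one Coflow at a time in this order.

    def coflow_length(c: Coflow) -> float:
        """Length of a Coflow: data volume of the largest flow it contains (step 2))."""
        return max(f.size for f in c.flows)

    def overage_cost(c: Coflow, cct: float) -> float:
        """Linear cost of finishing past the deadline: k * max(0, CCT - deadline)."""
        return c.cost_slope * max(0.0, cct - c.deadline)

    def order_missed_coflows(missed: List[Coflow]) -> List[Coflow]:
        """Steps 3) and 4): rank the missed Coflows by skew = length / cost slope,
        smallest first; the residual network bandwidth is then granted to them
        one Coflow at a time in this order."""
        return sorted(missed, key=lambda c: coflow_length(c) / c.cost_slope)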
Examples of the invention are described in further detail below in conjunction with the drawings.
Fig. 1 shows the data center network abstract model of the Coflow scheduling problem and an example of scheduling multiple Coflows. Without loss of generality, and similarly to other important research work at home and abroad, we abstract the data center network, as shown in Fig. 1, into one non-blocking big switch interconnecting all servers. Inside the abstract network the throughput is 100% and the bandwidth capacity is unlimited, so the arrival of multiple Coflows does not cause network congestion through competition for bandwidth resources. However, the bandwidth resources on each server port are limited, including the ingress ports bound to the uplinks and the egress ports bound to the downlinks; when multiple Coflows arrive at the same time they compete fiercely for bandwidth on these ports, so in the Coflow scheduling problem we focus only on the bandwidth allocation on the ingress ports and egress ports. In the multi-Coflow example under this model, each ingress port has flows from one or more Coflows destined for each egress port. For ease of description we place them into virtual queues at the ingress ports and, as is commonly assumed, take the bandwidth capacity of each port to be 1 unit. As shown in the figure, there are three Coflows in the network, with the following details: Coflow1 contains two flows with data volumes 6 and 2; Coflow2 contains three flows with data volumes 2, 3, and 3; Coflow3 contains three flows, each with data volume 2. The ingress and egress ports of the flow of Coflow1 with data volume 6, and the detailed ingress and egress port information of the remaining flows, are shown in Fig. 1. Assume the deadlines of the three Coflows are all 1 unit of time; calculation shows that none of the three Coflows can satisfy the deadline of 1, which brings a corresponding overhead (cost) for exceeding the deadline, and we schedule them rationally to minimize this overhead.
Fig. 2 is a schematic diagram of how the cost of each Coflow varies with the amount by which it exceeds its deadline. It is assumed that the cost function of every Coflow is a monotonically increasing linear function; as shown in Fig. 2, the overhead brought by a Coflow that exceeds its deadline grows as the excess over the deadline grows, and the cost function coefficients of different Coflows differ, so two Coflows that exceed their deadlines by the same amount do not bring the same overhead. Combining Fig. 1 and Fig. 2, the lengths and cost coefficients of the Coflows are as shown in the following table:
Coflow         Cost coefficient (slope)   Coflow length
Coflow1 (C1)   K1 = 1                     L1 = 6
Coflow2 (C2)   K2 = 2                     L2 = 3
Coflow3 (C3)   K3 = 0.5                   L3 = 2
Fig. 3 shows the scheduling results and system overheads for different scheduling orders; the figure lists all six scheduling orders of the three Coflows and the corresponding overheads. Fig. 3(a) illustrates how the cost is calculated: the scheduling order of Fig. 3(a) is C1, C2, C3, and a simple calculation gives their completion times as 6, 8, and 10 respectively. Therefore Coflow1 exceeds its deadline by 5, Coflow2 by 7, and Coflow3 by 9; substituting these excesses into the cost functions gives the overhead brought by each Coflow that exceeds its deadline, and summing them gives the total overhead. The total overhead of Fig. 3(a) is cost = 5*1 + 7*2 + 9*0.5 = 23.5. The overheads calculated for the remaining scheduling orders are shown in Fig. 3. Fig. 3(f) is the scheduling result of the heuristic scheme that minimizes the total Coflow completion time (CCT); comparison with the other sub-figures of Fig. 3 shows that although the total completion time of Fig. 3(f) is the smallest (CCT = 17), for Coflows that have already passed their deadlines, pursuing a small completion time too aggressively can instead increase the system overhead (cost = 17.5). Fig. 3(d) is the scheduling result obtained by the scheme proposed by the missed-deadline Coflow scheduling module herein, i.e. the Coflows are ordered C2, C3, C1 according to the ratio of length to cost function coefficient (skew2 < skew3 < skew1) from smallest to largest and are scheduled in this order; compared with the other scheduling methods of Fig. 3, its system overhead is the smallest.
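The arithmetic of Fig. 3(a) and the ordering of Fig. 3(d) can be checked with a few lines of Python (a worked check using only the numbers stated above; no new data):

    # Slopes and deadlines from the table above; completion times 6, 8 and 10
    # are those of the order C1, C2, C3 in Fig. 3(a).
    slopes    = {"C1": 1.0, "C2": 2.0, "C3": 0.5}
    deadlines = {"C1": 1.0, "C2": 1.0, "C3": 1.0}
    ccts      = {"C1": 6.0, "C2": 8.0, "C3": 10.0}

    total = sum(slopes[c] * max(0.0, ccts[c] - deadlines[c]) for c in slopes)
    print(total)                          # 5*1 + 7*2 + 9*0.5 = 23.5

    # Skews L/k for the ordering of Fig. 3(d): 6/1, 3/2, 2/0.5
    skews = {"C1": 6.0 / 1.0, "C2": 3.0 / 2.0, "C3": 2.0 / 0.5}
    print(sorted(skews, key=skews.get))   # ['C2', 'C3', 'C1']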
Fig. 4 is the flowchart of the deadline-based Coflow scheduling method of the present invention for minimizing system overhead, which specifically comprises the following steps (an end-to-end sketch combining the two phases is given after the list):
1) Obtain network bandwidth information, specifically the remaining bandwidth of each uplink port and downlink port; obtain the information of each arriving Coflow, specifically the identifier of the Coflow, its arrival time, deadline, ingress port numbers and their count, egress port numbers and their count, and the data volume of each flow contained in the Coflow.
2) Sort all Coflows in EDF order and store them in queue 1, i.e. sort and schedule them by deadline from smallest to largest.
3) Take the Coflow at the head of queue 1 and judge whether it can complete before its deadline. The specific judgment is: divide the data volume of each flow in the Coflow by the deadline to obtain the corresponding desired bandwidth, then judge whether the remaining available bandwidth on the corresponding egress and ingress ports of each flow is greater than the desired bandwidth.
4) If the remaining available bandwidth of the ingress port and egress port corresponding to every flow is greater than the desired bandwidth, i.e. Bandwidth_remain > size/deadline, the Coflow can complete before its deadline; allocate to each flow of the Coflow the smallest possible bandwidth, i.e. let all flows of the Coflow complete exactly at the deadline. In this method, taking a Coflow n with deadline T_n as an example, scheduling Coflow n with the least bandwidth keeps the remaining bandwidth as large as possible so that more Coflows can be scheduled; for a flow of Coflow n with ingress port i, egress port j, and data volume d_ij, the bandwidth allocated to it is d_ij/T_n, and after the allocation the remaining bandwidth information of ports i and j is updated.
5) If there exists even one flow in the Coflow whose corresponding ingress port or egress port has remaining available bandwidth smaller than the desired bandwidth, i.e. Bandwidth_remain < size/deadline, the Coflow cannot meet its deadline; put it into the missed-deadline set S and let it wait for the subsequent scheduling method for Coflows that miss the deadline.
6) Remove the scheduled Coflow from the head of queue 1.
7) Repeat steps 3) to 6) until queue 1 contains no more Coflows. At this point all Coflows that can meet their deadlines have been scheduled.
8) Obtain the cost function coefficients of all Coflows in set S; in this method the cost function of every Coflow is a monotonically increasing linear function, e.g. the cost coefficient of Coflow1 is K1.
9) Obtain the lengths of all Coflows in set S; in this method the length of a Coflow is the data volume of the largest flow it contains, e.g. the length of Coflow1 is L1.
10) For every Coflow in the missed-deadline set S compute skew = Length/K, sort the Coflows by skew from smallest to largest, and store them in queue 2; in the later steps bandwidth is allocated to the Coflows in this order. For Coflow1, skew = L1/K1.
11) Allocate bandwidth to the Coflow at the head of queue 2; when allocating, all of the current remaining bandwidth of the network is assigned to this Coflow.
12) Remove the Coflow whose scheduling was completed in the previous step.
13) Repeat steps 11) and 12) until all Coflows in the missed-deadline set have been scheduled.
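As a usage sketch, the two phases above can be chained as in the flowchart. The driver below reuses the illustrative helpers sketched earlier; the serial hand-over of the residual bandwidth in steps 11) to 13) is represented only by the order of the returned queue, and actual per-port rate enforcement is outside the sketch.

    def run_scheduler(coflows: List[Coflow],
                      in_bw: Dict[int, float],
                      out_bw: Dict[int, float]):
        # Phase 1 (steps 1) to 7)): EDF admission and minimal-rate allocation
        # for the Coflows that can still meet their deadlines.
        admitted, missed = schedule_deadline_coflows(coflows, in_bw, out_bw)
        # Phase 2 (steps 8) to 13)): queue 2 ordered by skew; the residual
        # bandwidth is handed to its head-of-queue Coflow until it is empty.
        queue2 = order_missed_coflows(missed)
        return admitted, queue2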

Claims (3)

1. A deadline-based Coflow scheduling method for minimizing system overhead, characterized by comprising a scheduling step for Coflows that meet the deadline (Deadline) and a scheduling step for Coflows that miss the deadline, where a Coflow is a set of correlated parallel flows in one communication stage of a parallel computation;
(1) Scheduling step for Coflows that meet the deadline
Obtain the relevant information in the network, and first judge with a dynamic method whether a Coflow can meet its deadline; if it can, determine the scheduling order with the Earliest-Deadline-First (EDF) method, i.e. the Coflow whose deadline is closer is scheduled first; after the scheduling order is determined, allocate to each flow the smallest bandwidth that still guarantees it completes before the deadline;
(2) Scheduling step for Coflows that miss the deadline
The scheduling of Coflows that miss their deadlines uses the bandwidth that is still left unused in the network during the scheduling of the Coflows that meet their deadlines: this residual bandwidth is used to schedule the Coflows that miss their deadlines while minimizing the system overhead brought about by the Coflows that exceed their deadlines.
2. The deadline-based Coflow scheduling method for minimizing system overhead according to claim 1, characterized in that in step (1) all flows are made to complete exactly at the deadline, with the following specific steps:
1) Obtain network bandwidth information, specifically the remaining bandwidth of each uplink port and downlink port; obtain the information of each arriving Coflow, specifically the identifier of the Coflow, its deadline, its source port numbers and their count, its destination port numbers and their count, and the data volume of each flow contained in the Coflow;
2) Sort all Coflows in EDF order, i.e. sort and schedule them by deadline from smallest to largest;
3) Judge whether the Coflow can complete before its deadline, the specific judgment being: divide the data volume of each flow in the Coflow by the deadline to obtain the corresponding desired bandwidth, then judge whether the remaining available bandwidth on the corresponding egress and ingress ports of each flow can satisfy the desired bandwidth;
4) If, for every flow of the Coflow, the remaining available bandwidth on the corresponding destination and source ports is greater than the desired bandwidth, the Coflow can complete before its deadline; allocate to it the smallest possible bandwidth so that all of its flows complete at the same time as the flow that finishes last, the allocated bandwidth of a flow being its data volume divided by the deadline of the Coflow;
5) If there exists even one flow of the Coflow whose remaining available bandwidth on the corresponding egress or ingress port is smaller than the desired bandwidth, the Coflow cannot complete before its deadline; put it into the missed-deadline set and let it wait for the subsequent scheduling method for Coflows that miss the deadline;
6) Update the remaining bandwidth of the network links by subtracting the bandwidth allocated in step 4), and proceed to schedule the next Coflow.
3. The deadline-based Coflow scheduling method for minimizing system overhead according to claim 1, characterized in that in step (2) a cost function is used to quantify the relationship between the system overhead and the amount by which the deadline is exceeded, the cost function being set as a monotonically increasing linear function of t = CCT - deadline, and the following specific steps being taken:
1) Obtain the cost function coefficient, i.e. the slope, of every Coflow in the missed-deadline set;
2) Obtain the length of every Coflow in the missed-deadline set, the length of a Coflow being the data volume of the largest flow it contains;
3) Compute the skew of every Coflow in the missed-deadline set, i.e. the ratio of the Coflow's length to its cost function coefficient, and sort the Coflows by skew from smallest to largest;
4) Allocate the remaining network bandwidth sequentially in the order of step 3).
CN201910177871.3A 2019-03-10 2019-03-10 Coflow scheduling method for minimizing system overhead based on deadline Expired - Fee Related CN110048966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910177871.3A CN110048966B (en) 2019-03-10 2019-03-10 Coflow scheduling method for minimizing system overhead based on deadline

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910177871.3A CN110048966B (en) 2019-03-10 2019-03-10 Coflow scheduling method for minimizing system overhead based on deadline

Publications (2)

Publication Number Publication Date
CN110048966A (en) 2019-07-23
CN110048966B (en) 2021-12-17

Family

ID=67274603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910177871.3A Expired - Fee Related CN110048966B (en) 2019-03-10 2019-03-10 Coflow scheduling method for minimizing system overhead based on deadline

Country Status (1)

Country Link
CN (1) CN110048966B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110365608A (en) * 2019-08-01 2019-10-22 西南交通大学 A kind of stream group transmission dispatching method of tolerance deficiency of data transmission
CN112468414A (en) * 2020-11-06 2021-03-09 国网电力科学研究院有限公司 Cloud computing multistage scheduling method, system and storage medium
CN114401234A (en) * 2021-12-29 2022-04-26 山东省计算中心(国家超级计算济南中心) Scheduling method and scheduler based on bottleneck flow sensing and without prior information

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105227488A (en) * 2015-08-25 2016-01-06 上海交通大学 A kind of network flow group scheduling method for distributed computer platforms

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105227488A (en) * 2015-08-25 2016-01-06 上海交通大学 A kind of network flow group scheduling method for distributed computer platforms

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI CHEN 等: "Scheduling Mix-flows in Commodity Datacenters with Karuna", 《DOI: HTTP://DX.DOI.ORG/10.1145/2934872.2934888》 *
RENHAI XU 等: "Shaping Deadline Coflows to Accelerate Non-Deadline Coflows", 《IEEE XPLORE》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110365608A (en) * 2019-08-01 2019-10-22 西南交通大学 A kind of stream group transmission dispatching method of tolerance deficiency of data transmission
CN110365608B (en) * 2019-08-01 2022-08-19 西南交通大学 Stream group transmission scheduling method capable of tolerating incomplete data transmission
CN112468414A (en) * 2020-11-06 2021-03-09 国网电力科学研究院有限公司 Cloud computing multistage scheduling method, system and storage medium
CN112468414B (en) * 2020-11-06 2023-10-24 国网电力科学研究院有限公司 Cloud computing multi-level scheduling method, system and storage medium
CN114401234A (en) * 2021-12-29 2022-04-26 山东省计算中心(国家超级计算济南中心) Scheduling method and scheduler based on bottleneck flow sensing and without prior information

Also Published As

Publication number Publication date
CN110048966B (en) 2021-12-17

Similar Documents

Publication Publication Date Title
CN108260169B (en) QoS guarantee-based dynamic service function chain deployment method
CN107992359B (en) Task scheduling method for cost perception in cloud environment
CN110297699B (en) Scheduling method, scheduler, storage medium and system
CN105373426B (en) A kind of car networking memory aware real time job dispatching method based on Hadoop
CN110048966A (en) The Coflow dispatching method of minimum overhead based on deadline
CN112364590B (en) Construction method of practical logic verification architecture-level FPGA (field programmable Gate array) wiring unit
CN109582448A (en) A kind of edge calculations method for scheduling task towards criticality and timeliness
CN113472597B (en) Distributed convolutional neural network fine-grained parameter transmission scheduling method and device
CN102855153B (en) Towards the stream compile optimization method of chip polycaryon processor
Zhang et al. The real-time scheduling strategy based on traffic and load balancing in storm
CN113535393B (en) Computing resource allocation method for unloading DAG task in heterogeneous edge computing
CN104881322A (en) Method and device for dispatching cluster resource based on packing model
CN114071582A (en) Service chain deployment method and device for cloud-edge collaborative Internet of things
US9794138B2 (en) Computer system, method, and program
CN111913800B (en) Resource allocation method for optimizing cost of micro-service in cloud based on L-ACO
Li et al. Efficient online scheduling for coflow-aware machine learning clusters
CN112506634A (en) Fairness operation scheduling method based on reservation mechanism
CN110191155A (en) Parallel job scheduling method, system and storage medium for fat tree interconnection network
CN109976873B (en) Scheduling scheme obtaining method and scheduling method of containerized distributed computing framework
CN113190342B (en) Method and system architecture for multi-application fine-grained offloading of cloud-edge collaborative networks
US11868808B2 (en) Automatic driving simulation task scheduling method and apparatus, device, and readable medium
CN118138923A (en) Unloading method of dependent perception task in multi-core fiber elastic optical network
CN105577834B (en) Two layers of bandwidth allocation methods of cloud data center with Predicable performance and system
Li et al. Co-Scheduler: A coflow-aware data-parallel job scheduler in hybrid electrical/optical datacenter networks
CN110084507A (en) The scientific workflow method for optimizing scheduling of perception is classified under cloud computing environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20211217