CN117615273A - Data scheduling method and device - Google Patents


Info

Publication number
CN117615273A
CN117615273A (application number CN202311457434.XA)
Authority
CN
China
Prior art keywords
scheduled
queue
scheduling
data
data stream
Prior art date
Legal status
Pending
Application number
CN202311457434.XA
Other languages
Chinese (zh)
Inventor
张佳玮 (Zhang Jiawei)
纪越峰 (Ji Yuefeng)
冯时 (Feng Shi)
谷志群 (Gu Zhiqun)
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202311457434.XA
Publication of CN117615273A
Legal status: Pending


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/0001Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0005Switch and router aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/62Queue scheduling characterised by scheduling criteria
    • H04L47/622Queue service order
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/0001Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0005Switch and router aspects
    • H04Q2011/0037Operation
    • H04Q2011/005Arbitration and scheduling

Abstract

The embodiment of the application provides a data scheduling method and device, comprising the following steps: receiving a scheduling request frame sent by a top-of-rack switch, the scheduling request frame comprising attribute information of at least one data stream and queuing information in a queue to be scheduled of the top-of-rack switch, where the queue to be scheduled is used for caching all data streams from the top-of-rack switch to a destination top-of-rack switch; determining the scheduling sequence of each queue to be scheduled according to the attribute information and queuing information of each data stream; and scheduling the queues to be scheduled according to the scheduling sequence. The data scheduling method avoids data collisions and, by scheduling delay-sensitive service data preferentially, improves the ability to complete data transmission within the deadline and optimizes resource utilization.

Description

Data scheduling method and device
Technical Field
The embodiment of the application relates to the technical field of data processing, in particular to a data scheduling method and device.
Background
With the rapid development of high-bandwidth, multi-type services, data center networks face significant challenges. The arrayed waveguide grating router (AWGR) has the advantages of low latency, fast optical switching, and energy efficiency, and has been widely applied in data center networks. Communication-intensive workloads such as data center web search, data mining, and Hadoop applications generate a large number of bursty small flows, each carrying its own deadline requirement; users need a response before the deadline, so the network must provide fast switching service. In a data center network based on arrayed waveguide grating router networking, how to complete data exchange within the deadline of each data flow through data scheduling is a problem to be solved.
Disclosure of Invention
In view of this, an objective of the embodiments of the present application is to provide a data scheduling method and apparatus to solve this data flow scheduling problem.
Based on the above objects, an embodiment of the present application provides a data scheduling method, including:
receiving a scheduling request frame sent by a top-of-rack switch; the scheduling request frame comprises attribute information of at least one data stream and queuing information of each data stream in a queue to be scheduled of the top-of-rack switch; the queue to be scheduled is used for caching all data streams from the top-of-rack switch to the destination top-of-rack switch;
determining the scheduling sequence of each queue to be scheduled according to the attribute information and queuing information of each data stream;
and scheduling the queues to be scheduled according to the scheduling sequence.
Optionally, the attribute information includes a size and a deadline of the data stream, and the queuing information includes a position and a waiting time of the data stream in a queue to be scheduled;
determining a scheduling sequence of a queue to be scheduled according to attribute information and queuing information of each data stream, including:
calculating the emergency scheduling degree of the queue to be scheduled according to the size and the deadline of the data stream;
calculating the scheduling priority of the queue to be scheduled according to the emergency scheduling degree, a preset first weight, the queuing position of the cell corresponding to the data stream with the shortest deadline in the queue to be scheduled, a preset second weight, the waiting time of the queue to be scheduled, and a preset third weight; a cell is obtained after the top-of-rack switch parses and repackages a received data stream.
Optionally, according to the size and the deadline of the data stream, calculating the emergency scheduling degree of the queue to be scheduled, where the method includes:
where Ls_{s,d} is the total size of all data streams from source top-of-rack switch s to destination top-of-rack switch d, and Ld_{s,d} is the shortest deadline among all data streams from s to d.
Optionally, calculating the scheduling priority of the data stream according to the emergency scheduling degree, a preset first weight, a queuing position of a cell corresponding to the data stream with the shortest deadline in the queue to be scheduled, a preset second weight, a waiting time of the queue to be scheduled and a preset third weight, wherein the method comprises the following steps:
where LCO_{s,d} is the queuing position of the cell corresponding to the data stream with the shortest deadline in the queue to be scheduled, IP_{s,d} is the waiting time of the queue to be scheduled, α is the first weight, β is the second weight, γ is the third weight, and W_{s,d} is the scheduling priority.
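As a concrete illustration, the priority computation described above can be sketched in Python. The exact form of the urgency formula is an assumption here, chosen so that urgency rises as the queue size and the shortest deadline fall, as the description later states; min-max normalization stands in for the normalization operator, and all names and the 1-based queuing position are illustrative.

```python
def urgency(total_size, shortest_deadline):
    # Assumed form: urgency grows as both the total queued size and the
    # shortest deadline shrink (the patent's formula (3) is described only
    # qualitatively in this text).
    return 1.0 / (total_size * shortest_deadline)

def normalize(values):
    # Min-max normalization standing in for the N[...] operator.
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def priorities(queues, alpha=10.0, beta=10.0, gamma=1.0):
    """queues: one dict per queue to be scheduled, with keys
    size (total cells), deadline (shortest deadline, or None if unset),
    position (1-based queuing position of the most urgent cell),
    wait (time waited since the queue was last served)."""
    u = normalize([urgency(q["size"], q["deadline"]) if q["deadline"] is not None
                   else 0.0 for q in queues])
    # Earlier queuing positions are assumed more urgent, hence the inversion.
    p = normalize([1.0 / q["position"] if q["deadline"] is not None else 0.0
                   for q in queues])
    w = normalize([q["wait"] for q in queues])
    return [gamma * w[i] if q["deadline"] is None
            else alpha * u[i] + beta * p[i] + gamma * w[i]
            for i, q in enumerate(queues)]
```

With these assumptions, a queue holding a small flow with a tight deadline outranks a large, slack one, while deadline-free queues rise only as their waiting time grows.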
Optionally, the attribute information indicates that the data stream has no deadline set, and the queuing information includes the waiting time of the queue to be scheduled;
determining the scheduling sequence of each data stream according to the attribute information and queuing information of each data stream, including:
and calculating the scheduling priority of the queue to be scheduled according to the waiting time of the queue to be scheduled and a preset third weight.
Optionally, the scheduling request frame includes a source address of a source top-of-rack switch and a destination address of a destination top-of-rack switch corresponding to each data stream; scheduling the queue to be scheduled according to the scheduling sequence, including:
sorting the queues to be scheduled in descending order of scheduling priority;
based on the sorted queues to be scheduled, judging in turn whether the source top-of-rack switch and the destination top-of-rack switch corresponding to each queue to be scheduled are idle;
and if both the source top-of-rack switch and the destination top-of-rack switch corresponding to a queue to be scheduled are idle, generating a scheduling result frame comprising the source address and the destination address, and sending the scheduling result frame to the source top-of-rack switch so that the source top-of-rack switch sends the corresponding cells in the queue to be scheduled to the destination top-of-rack switch.
Optionally, judging in turn whether the source top-of-rack switch and the destination top-of-rack switch corresponding to a queue to be scheduled are idle includes:
judging in turn whether the source top-of-rack switch corresponding to the queue to be scheduled has an idle egress port and the destination top-of-rack switch has an idle ingress port.
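The sorting and idle-port checks above amount to a greedy matching over the priority-sorted queues. The sketch below is a simplified illustration under the assumption of one egress per source switch and one ingress per destination switch per slot; names are illustrative, not the patent's implementation.

```python
def arbitrate(requests):
    """requests: (priority, src, dst) triples, one per queue to be
    scheduled. Returns the (src, dst) pairs granted for this slot; each
    source egress and each destination ingress is granted at most once,
    so the granted transfers cannot collide at either end."""
    busy_src, busy_dst, grants = set(), set(), []
    for prio, src, dst in sorted(requests, key=lambda r: -r[0]):
        if src not in busy_src and dst not in busy_dst:
            busy_src.add(src)
            busy_dst.add(dst)
            grants.append((src, dst))  # would become a scheduling result frame
    return grants
```

Lower-priority requests whose ports are already claimed simply wait for a later round, which matches the "judge in turn whether the ports are idle" step.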
Optionally, before receiving the scheduling request frame sent by the top-of-rack switch, the method further includes:
constructing a first matrix, a second matrix and a third matrix; the first matrix is used for storing the total size of all data flows in the queue to be scheduled; the second matrix is used for storing the queuing position of the data stream with the shortest deadline in the queue to be scheduled; and the third matrix is used for storing the shortest deadlines in all data streams in the queue to be scheduled.
Optionally, after receiving the scheduling request frame sent by the top-of-rack switch, the method further includes:
parsing the scheduling request frame to obtain, for each data stream, the source top-of-rack switch, the destination top-of-rack switch, the size of the data stream, whether a deadline is set, and the set deadline;
updating the first matrix according to the size of the data stream;
and updating the second matrix and the third matrix according to whether a deadline is set and the set deadline.
The embodiment of the application provides a data scheduling device, which comprises:
the receiving module is used for receiving the scheduling request frame sent by the top-of-rack switch; the scheduling request frame comprises attribute information of at least one data stream and queuing information in a queue to be scheduled of the top-of-rack switch; the queue to be scheduled is used for caching all data streams from the top-of-rack switch to the destination top-of-rack switch;
the calculation module is used for determining the scheduling sequence of each queue to be scheduled according to the attribute information and queuing information of each data stream;
and the scheduling module is used for scheduling the queues to be scheduled according to the scheduling sequence.
As can be seen from the above, the data scheduling method and apparatus provided in the embodiments of the present application receive a scheduling request frame sent by each top-of-rack switch, determine the scheduling sequence of the queues to be scheduled of each top-of-rack switch according to the attribute information and queuing information of each data stream in the scheduling request frame, and schedule each queue to be scheduled according to the scheduling sequence. The method avoids data collisions and, by preferentially scheduling delay-sensitive service data, improves the ability to complete data transmission within the deadline and optimizes resource utilization.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a network architecture schematic of a leaf-spine structure of some embodiments;
FIG. 2 is a schematic diagram of the structure of a fast tunable laser module of some embodiments;
FIG. 3 is a schematic flow chart of a method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a system architecture according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of a method according to another embodiment of the present application;
FIG. 6 is a block diagram of a device according to an embodiment of the present application;
fig. 7 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of promoting an understanding of the principles and advantages of the disclosure, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same.
It should be noted that unless otherwise defined, technical or scientific terms used in the embodiments of the present application should be given the ordinary meaning as understood by one of ordinary skill in the art to which the present disclosure pertains. The terms "first," "second," and the like, as used in embodiments of the present application, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that elements or items preceding the word are included in the element or item listed after the word and equivalents thereof, but does not exclude other elements or items. The terms "connected" or "connected," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", etc. are used merely to indicate relative positional relationships, which may also be changed when the absolute position of the object to be described is changed.
As shown in fig. 1 and 2, in the related art, a data center network generally adopts a leaf-spine architecture. The leaf nodes are top-of-rack switches (ToRs) placed at the top of each cabinet; the end hosts (End-hosts) inside the cabinet act as user terminals or servers that process and store network data, and each top-of-rack switch connects to the end hosts in its cabinet for their data exchange and processing. The spine nodes are arrayed waveguide grating routers connected to the top-of-rack switches, enabling data exchange among the top-of-rack switches; the number of selectable communication paths between any pair of top-of-rack switches equals the number of arrayed waveguide grating routers, which improves the robustness of data transmission. Each sending port of a top-of-rack switch is equipped with a fast tunable laser module comprising a tunable laser and a semiconductor optical amplifier (SOA) array. By changing the wavelength of the data sent by the top-of-rack switch, the fast tunable laser module ensures correct routing through the arrayed waveguide grating router, enabling direct forwarding from any input port to any output port of the switch with nanosecond switching latency.
Facing service data of different types and demands, data center networks need to address the data scheduling problem. In a common data scheduling method, each top-of-rack switch determines the data frames to be transmitted on a designated wavelength according to a unified timetable and transmits them by polling the queues for different destination addresses, which reduces port conflicts and ensures fairness of data transmission across switches. However, polling cannot guarantee traffic balance among switches, so some switches may come under excessive traffic pressure, and it is not flexible enough for delay-sensitive service data, especially bursty small flows with deadlines (a small flow carries less data than some threshold, whose value depends on the service type), whose transmission requirements therefore cannot be met preferentially, degrading the user experience.
In view of this, the embodiment of the present application provides a data scheduling method in which an arbiter collects the scheduling request frames of all switches and uniformly schedules the transmission order of the queues to be scheduled according to the attribute information of the data stream in each scheduling request frame and its queuing information in the queue to be scheduled. By transmitting data of delay-sensitive services preferentially, the probability that a data stream misses its deadline is reduced and user experience is improved.
The technical scheme of the application is further described in detail through specific examples.
As shown in fig. 3, an embodiment of the present application provides a data scheduling method, including:
s301: receiving a scheduling request frame sent by a top-of-rack exchanger; the scheduling request frame comprises attribute information of at least one data stream and queuing information in a queue to be scheduled of the top-of-rack switch; the queue to be scheduled is used for caching all data streams from the top-of-rack switch to the target top-of-rack switch;
referring to fig. 4, in the data center network architecture of the present embodiment, an arbiter is configured to collect the scheduling request frames of each top-of-rack switch in a unified manner, and perform unified scheduling on the data streams of each top-of-rack switch, that is, the execution body of the data scheduling method of the present application is an arbiter, and the arbiter is used to perform centralized scheduling on the data of each top-of-rack switch. Each top-of-rack switch is connected with the arbiter through a respective control channel and is used for sending a scheduling request frame to the arbiter and receiving a scheduling result frame sent by the arbiter.
In some embodiments, the scheduling request frame includes attribute information of at least one data stream and its queuing information in a queue to be scheduled of the top-of-rack switch. One data stream comprises a plurality of data frames, and all data frames of the same data stream share the same five-tuple. Parsing a data stream yields its attribute information, including its source address and destination address, its size, and its deadline information. The scheduling request frame sent by one top-of-rack switch may include information on one or more data flows; flows with the same source and destination addresses may still be distinct flows (for example, flows with the same source and destination IP addresses but different protocol types are considered different data flows). The source address is the address of the top-of-rack switch sending the scheduling request frame, which can also be regarded as the source top-of-rack switch. The size of a data flow is the number of data frames it contains. The deadline information of a data flow records whether a deadline is set and, if so, the deadline within which data transmission and response should be completed. The large number of bursty small flows in a data center network are characterized by short deadlines and require priority scheduling service so that their data transmission and response can be completed within the deadline as far as possible.
On one hand, the top-of-rack switch uniformly segments the data frames of a data stream and encapsulates them into cells of fixed size according to a preset data format, and buffers the cells, according to the destination address of the data stream, in the queue to be scheduled corresponding to that destination address; that is, all data streams from the top-of-rack switch to the same destination top-of-rack switch are buffered in one queue to be scheduled. On the other hand, according to the attribute information of the data stream and its queuing information in the queue to be scheduled, the switch encapsulates a scheduling request frame in a preset control data format and transmits it to the arbiter over the control channel through a fixed-wavelength laser. The queuing information of a data stream in a queue to be scheduled includes the queuing position of the cells corresponding to the data stream and the time they have waited to be sent.
The arbiter receives the scheduling request frames of the top-of-rack switches, determines the scheduling sequence of the data streams according to a preset scheduling algorithm, generates a scheduling result frame, and sends it over the control channel to the source top-of-rack switch scheduled this time. The source top-of-rack switch determines from the scheduling result frame the cells in the queue to be scheduled for the destination address, and when the time slot for sending arrives, transmits the cells to the destination top-of-rack switch over the data channel through the tunable laser. The destination top-of-rack switch receives the cells through a broad-spectrum receiver, buffers them, according to their source address, in the queue to be restored corresponding to that source address, reads the cells from the queue to be restored in order, decapsulates them, restores them into Ethernet frames, and forwards the Ethernet frames to the end host.
In some implementations, the time slot is the smallest unit of scheduling time; the length of a time slot equals the transmission time of a cell plus a fixed guard band, which absorbs the tuning time of the tunable laser and errors in network time synchronization. Because the top-of-rack switch parses and repackages the data frames of a data stream into cells of identical size, scheduling complexity is reduced and scheduling efficiency improved compared with scheduling data frames of different sizes.
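The fixed-size-cell handling described above can be sketched as follows; the 64-byte payload size and the zero-padding convention are assumptions for illustration only.

```python
CELL_PAYLOAD = 64  # bytes per cell payload (illustrative value)

def segment(frame: bytes):
    """Source side: split one Ethernet frame into fixed-size cells,
    zero-padding the last one so every cell fills exactly one slot."""
    return [frame[off:off + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
            for off in range(0, len(frame), CELL_PAYLOAD)]

def reassemble(cells, original_len):
    """Destination side: concatenate the cells (already ordered by the
    queue to be restored) and strip the padding."""
    return b"".join(cells)[:original_len]
```

Because every cell occupies exactly one slot, the arbiter can schedule in whole slots without tracking variable frame lengths.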
S302: determining the scheduling sequence of a queue to be scheduled according to the attribute information and queuing information of each data stream;
In this embodiment, in a data center network based on arrayed waveguide grating router networking, the following conflict model is established for the situations in which data exchange may produce collisions:
where the variable in formulas (1) and (2) represents a cell sent from the t-th transmitter of the i-th source top-of-rack switch to the r-th receiver of the m-th destination top-of-rack switch.
As shown in equation (1), one possible collision scenario is that multiple top-of-rack switches attempt to forward their respective cells through the output ports of the same arrayed waveguide grating router, causing a data collision; another possible collision scenario, shown in equation (2), is that queues for different destination addresses at the same top-of-rack switch compete for the transmission opportunity of the same slot.
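The two collision conditions can be illustrated by a simple feasibility check over a hypothetical tuple encoding, since the original formulas are given only by reference: here (i, t, m, r) means transmitter t of source switch i sends to receiver r of destination switch m in the current slot.

```python
def conflict_free(assignments):
    """True iff, in one slot, no destination receiver takes two cells
    (a simplified reading of the first collision case) and no source
    transmitter sends two cells (the second case)."""
    receivers = [(m, r) for (_, _, m, r) in assignments]
    transmitters = [(i, t) for (i, t, _, _) in assignments]
    return (len(receivers) == len(set(receivers))
            and len(transmitters) == len(set(transmitters)))
```

An arbiter that grants at most one cell per transmitter and per receiver each slot keeps every slot's assignment set conflict-free by construction.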
In some embodiments, the total transmission time of data in the scheduling process includes the sending and propagation times of the scheduling request frame, the sending and propagation times of the scheduling result frame, and the scheduling time of the arbiter. Since the physical links are fixed, the sending and propagation times of the scheduling request frame and of the scheduling result frame are fixed, while the scheduling time of the arbiter is determined by the scheduling method adopted. Therefore, based on the conflict model, in order to avoid data collisions, reduce the total transmission time of data, and meet the transmission requirements of delay-sensitive services, in particular bursty small flows with deadlines, for which as many flow transmissions as possible should complete within their deadlines, the data scheduling method adopted by the arbiter should take maximizing the number of data flows completed within their deadlines as the primary objective and minimizing the overall average data flow completion time as the secondary objective.
In some embodiments, for a data stream with a deadline, that is, a data stream with a deadline exists in a queue to be scheduled, determining a scheduling order of the queue to be scheduled according to attribute information and queuing information of each data stream includes:
according to the size and the deadline of the data stream, calculating the emergency scheduling degree of the queue to be scheduled;
and calculating the scheduling priority of the queue to be scheduled according to the emergency scheduling degree, the preset first weight, the queuing position of the cell corresponding to the data stream with the shortest deadline in the queue to be scheduled, the preset second weight, the waiting time of the queue to be scheduled and the preset third weight.
In this embodiment, after receiving the scheduling request frame sent by each top-of-rack switch, the arbiter parses it to obtain the size of each data stream, whether a deadline is set, the set deadline, and the queuing position and waiting time in the queue to be scheduled. It calculates the emergency scheduling degree of each queue to be scheduled from the size and deadline of its data streams, determines the priority of the queue in this scheduling round from the emergency scheduling degree, the queuing position of the data stream with the shortest deadline in the queue, and the time the queue has waited since it was last served, and determines the scheduling sequence by priority.
In some modes, according to the size and the deadline of the data stream, the emergency scheduling degree of the queue to be scheduled is calculated, and the method comprises the following steps:
where Ls_{s,d} is the total size of all data streams from the source top-of-rack switch s to the destination top-of-rack switch d, i.e. the total size of all data streams in the queue to be scheduled, and Ld_{s,d} is the shortest deadline among all data streams from s to d.
The method for calculating the scheduling priority of the queue to be scheduled comprises the following steps:
where LCO_{s,d} is the queuing position of the cell corresponding to the data stream with the shortest deadline in the queue to be scheduled, IP_{s,d} is the time the queue to be scheduled has waited since the previous round, α is the first weight, β is the second weight, γ is the third weight, W_{s,d} is the scheduling priority, and N[·] denotes normalization of the value in brackets.
According to formulas (3) and (4), the scheduling priority of a queue to be scheduled is determined by its emergency scheduling degree and the corresponding first weight, the queuing position of the data stream with the shortest deadline in the queue and the corresponding second weight, and the waiting time of the queue and the corresponding third weight. To ensure that small-flow transmissions complete within their deadlines as far as possible, the shorter a data stream's deadline and the smaller its number of data frames, the higher its emergency scheduling degree and its scheduling priority. Considering that a data center network has data exchange demands both with and without deadlines, the data scheduling requests received by the arbiter include delay-sensitive scheduling request frames with deadlines as well as other scheduling request frames without deadlines, and both are buffered in the waiting scheduling queue. A delay-sensitive scheduling request frame may be queued behind the other frames, so to schedule it preferentially, its queuing position in the waiting scheduling queue must be taken into account when computing its scheduling priority: the earlier the queuing position, the higher the priority.
In some embodiments, for a data stream for which an deadline is not set, that is, a data stream for which an deadline is not set in a queue to be scheduled, determining a scheduling order of the queue to be scheduled according to attribute information and queuing information of each data stream includes:
and calculating the scheduling priority of the queue to be scheduled according to the waiting time of the queue to be scheduled and the preset third weight.
In this embodiment, considering that a queue to be scheduled may hold data streams with no deadline set, and to prevent such streams from going unprocessed for a long time while reducing the average processing time of the overall data flow, the longer a data stream has waited in the queue to be scheduled, the higher its scheduling priority; its scheduling priority is therefore calculated only from the waiting time of the queue to be scheduled and the preset third weight.
In some modes, the scheduling priority of a data stream can be adjusted by tuning the first, second, and third weights according to the specific application scenario, service type, and data stream characteristics. Since the data scheduling method of this embodiment takes maximizing the number of data streams completing transmission within their deadlines as the primary objective and minimizing the overall average data stream completion time as the secondary objective, the first and second weights should be greater than the third; for example, the first and second weights may be set to 10 and the third to 1. These values are merely illustrative and do not limit the specific values.
In some embodiments, before the arbiter receives the scheduling request frame sent by the top-of-rack switch, the method further comprises:
constructing a first matrix, a second matrix and a third matrix; the first matrix is used for storing the total size of all data streams from the source top-of-rack switch to the destination top-of-rack switch, namely the total size of all data streams in a queue to be scheduled; the second matrix is used for storing the queuing position of the data stream with the shortest deadline in the queue to be scheduled; the third matrix is used for storing the shortest deadlines in all data streams in the queue to be scheduled.
In this embodiment, a buffer structure for storing each item of attribute information and queuing information in a scheduling request frame is pre-constructed; after a scheduling request frame is received, it is parsed, and each parsed item is stored in the corresponding buffer structure. For example, in a network architecture with n top-of-rack switches deployed, a first matrix, a second matrix, and a third matrix of n rows and n columns may be pre-constructed. Element A_ij in the first matrix stores the total size of all data streams in the queue to be scheduled from source top-of-rack switch i to destination top-of-rack switch j (in the embodiment where data frames are repackaged into cells, this size can be the number of cells, repackaged from all data frames, buffered in the queue corresponding to the destination address of destination top-of-rack switch j). Element B_ij in the second matrix stores the queuing position of the data stream with the shortest deadline in the queue to be scheduled of source top-of-rack switch i corresponding to the destination address of destination top-of-rack switch j. Element C_ij in the third matrix stores the shortest deadline among all data streams in the queue to be scheduled from source top-of-rack switch i to destination top-of-rack switch j.
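As a concrete illustration of the three n x n buffer structures, a minimal sketch follows; the variable names and the use of infinity as a "no deadline-bearing stream yet" sentinel are assumptions for exposition, not details from the patent.

```python
import math

n = 4  # number of top-of-rack (ToR) switches; illustrative value

# First matrix A: A[i][j] holds the total size (e.g. cell count) of all
# data streams queued from source ToR switch i to destination ToR switch j.
size_matrix = [[0] * n for _ in range(n)]

# Second matrix B: B[i][j] holds the queuing position of the stream with
# the shortest deadline in the (i, j) queue to be scheduled.
position_matrix = [[0] * n for _ in range(n)]

# Third matrix C: C[i][j] holds the shortest deadline among the streams in
# the (i, j) queue; infinity marks a queue with no deadline-bearing stream.
deadline_matrix = [[math.inf] * n for _ in range(n)]
```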
In some embodiments, after the arbiter receives the scheduling request frame sent by the top-of-rack switch, the method further comprises:
parsing the scheduling request frame to obtain, for each data stream, the corresponding source top-of-rack switch and destination top-of-rack switch, the size of the data stream, whether a deadline is set, and the deadline that is set;
updating the first matrix according to the size of the data stream;
and updating the second matrix and the third matrix according to whether the deadline is set and the deadline that is set.
In this embodiment, after the arbiter constructs the first, second, and third matrices, it parses each received scheduling request frame to obtain, for each data stream it contains, the corresponding source and destination top-of-rack switches, the size of the data stream, whether a deadline is set, and the deadline that is set. It then locates the elements to be updated in the three matrices from the source and destination top-of-rack switches, updates the corresponding element of the first matrix according to the size of the data stream, and updates the corresponding elements of the second and third matrices according to the deadline. In some approaches, the waiting time of the data streams may also be maintained and updated in a matrix structure. After the matrices are updated, the corresponding parameters can be read from them and the scheduling priority of each data stream computed according to formulas (3) and (4).
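The parse-and-update step can be sketched as below. The tuple layout of a parsed request entry, and the update rule for the second and third matrices (refresh the stored position only when a strictly shorter deadline arrives), are assumptions consistent with the description, not the literal implementation.

```python
def update_matrices(entries, size_m, pos_m, dl_m):
    """Apply parsed scheduling-request entries to the three matrices.

    Each entry is (src, dst, size, deadline, queue_pos), where deadline
    is None for a stream without a deadline. Field layout is illustrative.
    """
    for src, dst, size, deadline, queue_pos in entries:
        size_m[src][dst] += size                   # first matrix: total size
        if deadline is not None and deadline < dl_m[src][dst]:
            dl_m[src][dst] = deadline              # third matrix: shortest deadline
            pos_m[src][dst] = queue_pos            # second matrix: its queue position
```

A stream with no deadline still contributes to the first matrix (total queued size) but leaves the second and third matrices untouched, which is consistent with deadline-free streams being ranked by waiting time alone.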
S303: scheduling the queues to be scheduled according to the scheduling sequence.
In this embodiment, after determining the scheduling order of the queues to be scheduled, the arbiter schedules the queues in that order. The scheduling method includes:
sorting the queues to be scheduled in descending order of scheduling priority;
based on the sorted queues to be scheduled, determining in turn whether the source top-of-rack switch and the destination top-of-rack switch corresponding to each queue are idle;
if the source top-of-rack switch and the destination top-of-rack switch are idle, generating a scheduling result frame containing the source address and the destination address and sending it to the source top-of-rack switch, so that the source top-of-rack switch sends the corresponding cells in the queue to be scheduled to the destination top-of-rack switch.
In this embodiment, after computing the scheduling priority of every queue to be scheduled, the arbiter sorts the queues in descending order of priority. Starting from the first sorted queue, it checks whether the corresponding source top-of-rack switch has an idle egress port and the destination top-of-rack switch has an idle ingress port; if both conditions hold, that queue can be scheduled in the current round. It then checks the second queue in the same way, and so on through the remaining queues: a queue can be scheduled in this round if its source top-of-rack switch has an idle egress port and its destination top-of-rack switch has an idle ingress port, and cannot be scheduled in this round if all egress ports of its source switch or all ingress ports of its destination switch are occupied. After this resource check has been applied to every sorted queue, all the queues that can be scheduled in the current round are determined.
For each queue that can be scheduled in the current round, the arbiter generates a scheduling result frame containing the source address of the source top-of-rack switch and the destination address of the destination top-of-rack switch and sends it over the control channel to the source top-of-rack switch matching the source address. On receiving the scheduling result frame, the source top-of-rack switch parses the destination address, determines the cells to be sent from the queue to be scheduled corresponding to that destination address, and sends them to the destination top-of-rack switch over the data channel using the fast tunable laser module, completing the data exchange. Optionally, the queue to be scheduled is a FIFO queue, and when the time slot for transmitting data arrives, the cell at the head of the queue is transmitted.
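The round-by-round matching described above can be sketched as follows, under the simplifying assumption that each switch exposes a single egress port and a single ingress port per round (the embodiment allows multiple ports per switch):

```python
def schedule_round(requests):
    """One scheduling round: walk (priority, src, dst) requests in
    descending priority order and grant a (src, dst) pair only while the
    source egress and destination ingress are both still free this round.
    """
    busy_src, busy_dst = set(), set()
    grants = []
    for priority, src, dst in sorted(requests, reverse=True):
        if src not in busy_src and dst not in busy_dst:
            grants.append((src, dst))  # would emit a scheduling-result frame
            busy_src.add(src)
            busy_dst.add(dst)
    return grants
```

A lower-priority queue is skipped only for this round when its source or destination is already taken; it competes again in the next round, as in the polling process described above.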
As shown in Fig. 5, in some embodiments, the arbiter performs the data scheduling method as follows. The arbiter is initialized and time-synchronized with the data center network. Each top-of-rack switch in the network sends scheduling request frames to the arbiter at a preset period; the arbiter receives and parses each frame to obtain the attribute information and queuing information, and buffers each item of information in the first, second, and third matrices. It then reads the parameters from the three matrices and computes the scheduling priority of each queue to be scheduled according to formulas (3) and (4); it sorts the queues in descending order of priority and buffers the sorted queue information in a polling queue. Based on the sorted queues in the polling queue, it checks in turn whether the source and destination top-of-rack switches corresponding to each queue are idle: if the source and/or destination switch is not idle, the queue cannot be scheduled in this round; if both are idle, it can. For each queue that can be scheduled in this round, the arbiter generates a scheduling result frame containing the source address and the destination address and sends it to the source top-of-rack switch; on receiving the scheduling result frame, the source top-of-rack switch determines the cells in the queue to be scheduled corresponding to the destination address and sends them to the destination top-of-rack switch when the data transmission slot arrives.
It should be noted that the method of the embodiments of the present application may be performed by a single device, for example, a computer or a server. The method of these embodiments may also be applied in a distributed scenario and completed by multiple devices cooperating with one another. In such a distributed scenario, one of the devices may perform only one or more steps of the methods of the embodiments of the present application, and the devices may interact with each other to complete the method.
It should be noted that the foregoing describes specific embodiments of the present invention. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
As shown in fig. 6, an embodiment of the present application further provides a data scheduling apparatus, including:
the receiving module is used for receiving the scheduling request frame sent by the top-of-rack switch; the scheduling request frame comprises attribute information of at least one data stream and queuing information in a queue to be scheduled of the top-of-rack switch; the queue to be scheduled is used for caching all data streams from the top-of-rack switch to the target top-of-rack switch;
the calculation module is used for determining the scheduling sequence of the queue to be scheduled according to the attribute information and queuing information of each data stream;
the scheduling module is used for scheduling the queues to be scheduled according to the scheduling sequence.
For convenience of description, the above devices are described as being functionally divided into various modules, respectively. Of course, the functions of each module may be implemented in one or more pieces of software and/or hardware when implementing the embodiments of the present application.
The device of the foregoing embodiment is configured to implement the corresponding method in the foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device according to the embodiment, where the device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 implement communication connections therebetween within the device via a bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is configured to execute relevant programs to implement the technical solutions provided in the embodiments of the present disclosure.
The memory 1020 may be implemented in the form of ROM (Read-Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs; when the embodiments of the present specification are implemented in software or firmware, the associated program code is stored in the memory 1020 and executed by the processor 1010.
The input/output interface 1030 is used to connect with an input/output module for inputting and outputting information. The input/output module may be configured as a component in a device (not shown) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
Communication interface 1040 is used to connect communication modules (not shown) to enable communication interactions of the present device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 1050 includes a path for transferring information between components of the device (e.g., processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
It should be noted that although the above-described device only shows processor 1010, memory 1020, input/output interface 1030, communication interface 1040, and bus 1050, in an implementation, the device may include other components necessary to achieve proper operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The electronic device of the foregoing embodiment is configured to implement the corresponding method in the foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
The computer readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may be used to implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device.
Those of ordinary skill in the art will appreciate that: the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the disclosure, including the claims, is limited to these examples; the technical features of the above embodiments or in the different embodiments may also be combined under the idea of the present disclosure, the steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the present application as described above, which are not provided in details for the sake of brevity.
Additionally, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures, in order to simplify the illustration and discussion, and so as not to obscure the embodiments of the present application. Furthermore, the devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present application, and this also takes into account the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform on which the embodiments of the present application are to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that embodiments of the disclosure can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative in nature and not as restrictive.
While the present disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of those embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the embodiments discussed.
The present embodiments are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Accordingly, any omissions, modifications, equivalents, improvements, and the like, which are within the spirit and principles of the embodiments of the present application, are intended to be included within the scope of the present disclosure.

Claims (10)

1. A method for scheduling data, comprising:
receiving a scheduling request frame sent by a top-of-rack switch; the scheduling request frame comprises attribute information of at least one data stream and queuing information of each data stream in a queue to be scheduled of the top-of-rack switch; the queue to be scheduled is used for caching all data streams from the top-of-rack switch to the target top-of-rack switch;
determining the scheduling sequence of each queue to be scheduled according to the attribute information and queuing information of each data stream;
and scheduling the queues to be scheduled according to the scheduling sequence.
2. The method of claim 1, wherein the attribute information comprises a size and a deadline of the data stream, and the queuing information comprises a position and a waiting time of the data stream in a queue to be scheduled;
determining a scheduling sequence of a queue to be scheduled according to attribute information and queuing information of each data stream, including:
calculating the emergency scheduling degree of the queue to be scheduled according to the size and the deadline of the data stream;
calculating the scheduling priority of the queue to be scheduled according to the emergency scheduling degree, a preset first weight, the queuing position of the cell corresponding to the data stream with the shortest deadline in the queue to be scheduled, a preset second weight, the waiting time of the queue to be scheduled, and a preset third weight; the cell is obtained after the top-of-rack switch parses and repackages the received data stream.
3. The method according to claim 2, wherein the emergency scheduling degree of the queue to be scheduled is calculated according to the size and the deadline of the data stream, and the method comprises the steps of:
wherein Ls_{s,d} is the total size of all data streams from source top-of-rack switch s to destination top-of-rack switch d, and Ld_{s,d} is the shortest deadline among all data streams from the source top-of-rack switch to the destination top-of-rack switch.
4. The method of claim 3, wherein the scheduling priority of the data stream is calculated according to the emergency scheduling degree, a preset first weight, a queuing position of a cell corresponding to the data stream with the shortest deadline in the queue to be scheduled, a preset second weight, a waiting time of the queue to be scheduled, and a preset third weight, and the method comprises:
wherein LCO_{s,d} is the queuing position of the cell corresponding to the data stream with the shortest deadline in the queue to be scheduled, IP_{s,d} is the waiting time of the queue to be scheduled, alpha is the first weight, beta is the second weight, gamma is the third weight, and W_{s,d} is the scheduling priority.
5. The method of claim 1, wherein the attribute information comprises a deadline for the data flow, and wherein the queuing information comprises a waiting time for a queue to be scheduled;
determining the scheduling sequence of each data stream according to the attribute information and queuing information of each data stream, including:
and calculating the scheduling priority of the queue to be scheduled according to the waiting time of the queue to be scheduled and a preset third weight.
6. The method according to any one of claims 2-5, wherein the scheduling request frame includes the source address of the source top-of-rack switch and the destination address of the destination top-of-rack switch corresponding to each data stream; scheduling the queue to be scheduled according to the scheduling sequence comprises:
sorting the queues to be scheduled in descending order of scheduling priority;
based on the sorted queues to be scheduled, determining in turn whether the source top-of-rack switch and the destination top-of-rack switch corresponding to each queue are idle;
and if the source top-of-rack switch and the destination top-of-rack switch corresponding to the queue to be scheduled are idle, generating a scheduling result frame comprising the source address and the destination address, and sending the scheduling result frame to the source top-of-rack switch so that the source top-of-rack switch sends the corresponding cells in the queue to be scheduled to the destination top-of-rack switch.
7. The method of claim 1, wherein sequentially determining whether the source top-of-rack switch and the destination top-of-rack switch corresponding to the queue to be scheduled are idle comprises:
and sequentially determining whether the source top-of-rack switch corresponding to the queue to be scheduled has an idle egress port and the destination top-of-rack switch has an idle ingress port.
8. The method of claim 1, wherein, before receiving the scheduling request frame sent by the top-of-rack switch, the method further comprises:
constructing a first matrix, a second matrix and a third matrix; the first matrix is used for storing the total size of all data flows in the queue to be scheduled; the second matrix is used for storing the queuing position of the data stream with the shortest deadline in the queue to be scheduled; and the third matrix is used for storing the shortest deadlines in all data streams in the queue to be scheduled.
9. The method of claim 8, wherein, after receiving the scheduling request frame sent by the top-of-rack switch, the method further comprises:
parsing the scheduling request frame to obtain, for each data stream, the corresponding source top-of-rack switch and destination top-of-rack switch, the size of the data stream, whether a deadline is set, and the deadline that is set;
updating the first matrix according to the size of the data stream;
and updating the second matrix and the third matrix according to whether the deadline is set and the deadline that is set.
10. A data scheduling apparatus, comprising:
the receiving module is used for receiving the scheduling request frame sent by the top-of-rack switch; the scheduling request frame comprises attribute information of at least one data stream and queuing information in a queue to be scheduled of the top-of-rack switch; the queue to be scheduled is used for caching all data streams from the top-of-rack switch to the target top-of-rack switch;
the calculation module is used for determining the scheduling sequence of each queue to be scheduled according to the attribute information and queuing information of each data stream;
and the scheduling module is used for scheduling the queues to be scheduled according to the scheduling sequence.
CN202311457434.XA 2023-11-03 2023-11-03 Data scheduling method and device Pending CN117615273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311457434.XA CN117615273A (en) 2023-11-03 2023-11-03 Data scheduling method and device


Publications (1)

Publication Number Publication Date
CN117615273A true CN117615273A (en) 2024-02-27

Family

ID=89948716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311457434.XA Pending CN117615273A (en) 2023-11-03 2023-11-03 Data scheduling method and device

Country Status (1)

Country Link
CN (1) CN117615273A (en)

Similar Documents

Publication Publication Date Title
CN114286413B (en) TSN network joint routing and stream distribution method and related equipment
EP2613479A1 (en) Relay device
CN109104373B (en) Method, device and system for processing network congestion
US20190044879A1 (en) Technologies for reordering network packets on egress
US9042252B2 (en) Inter-packet interval prediction learning algorithm
Kiani et al. Hierarchical capacity provisioning for fog computing
CN114039918B (en) Information age optimization method and device, computer equipment and storage medium
US11411865B2 (en) Network resource scheduling method, apparatus, electronic device and storage medium
US11502967B2 (en) Methods and apparatuses for packet scheduling for software-defined networking in edge computing environment
CN113783793B (en) Traffic scheduling method for time-sensitive data frames and related equipment
CN113543210B (en) 5G-TSN cross-domain QoS and resource mapping method, equipment and computer readable storage medium
CN111181873A (en) Data transmission method, data transmission device, storage medium and electronic equipment
CN113328953B (en) Method, device and storage medium for network congestion adjustment
US8929216B2 (en) Packet scheduling method and apparatus based on fair bandwidth allocation
CN109819478B (en) Data exchange method and device
CN116233262A (en) Micro-service deployment and request routing method and system based on edge network architecture
US9344384B2 (en) Inter-packet interval prediction operating algorithm
CN107239407B (en) Wireless access method and device for memory
CN117615273A (en) Data scheduling method and device
KR20210061630A (en) Centralized scheduling apparatus and method considering non-uniform traffic
US11108697B2 (en) Technologies for controlling jitter at network packet egress
CN113472685B (en) Photoelectric hybrid switching method and device based on QoS (quality of service) flow classification in data center
CN115277504A (en) Network traffic monitoring method, device and system
CN110233803B (en) Scheduling device and method for transmission network node
Song et al. Priority-based grant-aware scheduling for low-latency switching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination