CN105847181B - Prediction method applied to distributed scheduling algorithms of input-queued switches - Google Patents
- Publication number
- CN105847181B CN105847181B CN201610135932.6A CN201610135932A CN105847181B CN 105847181 B CN105847181 B CN 105847181B CN 201610135932 A CN201610135932 A CN 201610135932A CN 105847181 B CN105847181 B CN 105847181B
- Authority
- CN
- China
- Prior art keywords
- output port
- queue
- input port
- request
- prediction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/60—Queue scheduling implementing hierarchical scheduling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/28—Flow control; Congestion control in relation to timing considerations
- H04L47/283—Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/6215—Individual queue per QOS, rate or priority
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/6245—Modifications to standard FIFO or LIFO
Abstract
The invention discloses a prediction method applied to distributed scheduling algorithms of input-queued switches. In the scheduling algorithm of an input-queued switch, each output port maintains an active queue A(j) of length N to track active input ports. When output port j receives a request or a data packet from input port i, i is inserted at the head of A(j); if the length of A(j) exceeds N, an element is removed from the tail. When an output port receives no request, or all of its packet counters are 0, it enters prediction mode and sends a prediction grant to the input port at the head of A(j), which is then moved to the tail of A(j); the complexity of maintaining A(j) is only O(1). When an output port has received a request, or its packet counters are not all 0, it sends grants according to the original scheduling algorithm. With the request prediction mechanism RP, traditional centralized scheduling algorithms can easily be extended to distributed systems, and their low-load delay is reduced below RTT.
Description
Technical field
The present invention relates to the field of packet scheduling algorithms for input-queued switches, and in particular to a prediction method applied to distributed scheduling algorithms of input-queued switches.
Background
The continued shift of the Internet toward cloud computing has further intensified the demand for higher network bandwidth. In a cloud computing architecture, applications and data reside in shared data centers and are accessed by users through the Internet. The large volume of traffic exchanged between users and data centers (north-south traffic) and between different servers of the same data center (east-west traffic) requires a faster Internet. A high-speed Internet is built from high-speed routers, which usually consist of line cards connected by a data switch.
There are two main ways to build a switch: input queueing and output queueing. Assuming fixed-length packets and at most one packet per line per time slot, each output port of an N×N output-queued switch may receive up to N packets in one time slot. Since packets are switched to the output ports immediately upon arrival, no input queues are needed. Output queueing undoubtedly provides the best delay-throughput performance, but its switching fabric and output buffers must run at N times the line rate. An input-queued switch, on the other hand, allows each input/output port to send/receive only one packet per time slot, so no speedup is needed. The input-queued architecture is therefore better suited to high-speed implementation.
Input-queued switches suffer from the well-known head-of-line (HoL) blocking problem. HoL blocking can be eliminated with virtual output queues (VOQs), i.e., each input port maintains a separate queue for each output port (see Fig. 1). This approach requires a centralized scheduler to maximize switch throughput, and the scheduling problem then becomes a bipartite graph matching problem. Maximum size matching and maximum weight matching algorithms have been proposed and can guarantee 100% throughput under any admissible traffic pattern, but they are too complex for high-speed implementation.
Suboptimal maximal size matching (MSM) algorithms were therefore proposed; a matching is maximal when no input or output is left unnecessarily idle. A maximal size matching can be found without backtracking, so it is far easier to implement than maximum size/weight matching. Among the many implementations of maximal size matching, iterative scheduling algorithms are widely used because they adopt massively parallel processing. In general, each iteration of an iterative scheduling algorithm consists of three phases: request, grant, and accept. In the request phase, input ports send matching requests to output ports. In the grant phase, each output port selects one request to grant. In the accept phase, each input port selects one grant to accept and notifies the corresponding output port to leave the iteration. For an N×N switch, an iterative scheduling algorithm reaches a maximal size matching after at most N iterations.
Iterative schedulers can be realized in two main forms: centralized and distributed. In a centralized scheduler (see Fig. 2(a)), each input port sends a request vector to the scheduler in each time slot, and the scheduler executes the grant phase, the accept phase, and subsequent iterations until a maximal size matching is found. All phases of a centralized scheduler complete within one time slot, so communication overhead is small and efficiency is high. However, limited by its I/O interfaces, its size is usually below 64 ports. When the port count of a switch grows beyond that, a distributed scheduler must be used (see Fig. 2(b)). Distributed schedulers come in many degrees of distribution; the most widely used is the fully distributed form of Fig. 2(b), in which each IS/OS (input/output selector) resides at its own port, and the round-trip propagation delay RTT between them (see Fig. 3) cannot be ignored and is usually multiple time slots. Distributed scheduling algorithms arose for this situation. Since each request and grant takes multiple time slots to reach the corresponding port, a distributed scheduling algorithm can only execute one iteration per time slot (otherwise the delay would be too large). Moreover, the request and grant phases must be executed in parallel: in every time slot each input port keeps sending requests without waiting RTT time slots for the responses, and likewise for the output ports.
Because distributed schedulers differ from centralized ones, distributed scheduling algorithms differ from the traditional algorithms used with centralized schedulers: a distributed algorithm can only use a single iteration, and its request and grant phases must run in parallel. Some scheduling algorithms have been designed specifically for distributed scheduling, and others adapt centralized scheduling algorithms to distributed systems. Although some achieve good performance, they all ignore the state-information error between input and output ports caused by the RTT delay, so that even at very low load the average queueing delay is at least RTT time slots. Only a few scheduling algorithms can break this lower bound, and even then the improvement is not significant.
SRR (Synchronized Round Robin) is a scheduling algorithm designed specifically for distributed scheduling. For an N×N switch, in each time slot each input port is assigned a distinct preferred output port (and each output port a distinct preferred input port). Without loss of generality, assume that in time slot t the preferred output port of input port i is j, where
j = (i + t) mod N (1)
In every time slot, SRR gives the highest scheduling priority to the preferred input-output pairs. In the request phase, if the preferred output port j of input port i has data, i.e., VOQ(i, j) > 0, input port i sends a 1-bit request to j; otherwise, input port i sends a request for its longest VOQ. In the grant phase, if output port j has received a request from its preferred input port i, it sends a grant to i; otherwise, output port j grants an arbitrary one of the received requests. The preferred input-output pairs of SRR guarantee the matching size under uniform heavy-load traffic, but SRR performs only moderately under non-uniform traffic, and only under hotspot traffic can it obtain a delay slightly below RTT.
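As a minimal illustration (ours, not from the patent), the preferred pairing of equation (1) can be checked to form a permutation in every time slot, so each input and each output has exactly one preferred partner:

```python
def preferred_output(i: int, t: int, n: int) -> int:
    """Preferred output port of input port i in time slot t, per equation (1)."""
    return (i + t) % n

def preferred_pairs(t: int, n: int):
    """All preferred (input, output) pairs of an N x N switch in time slot t."""
    return [(i, preferred_output(i, t, n)) for i in range(n)]

n = 4
for t in range(3):
    pairs = preferred_pairs(t, n)
    outputs = [j for _, j in pairs]
    # Each output appears exactly once: the pairing is a permutation, so
    # under uniform heavy load the preferred pairs alone form a full matching.
    assert sorted(outputs) == list(range(n))
```

The pairing also rotates over time, which is what spreads the highest priority evenly across all input-output pairs.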
D-LQF (Distributed Longest Queue First) is also designed for distributed scheduling. D-LQF combines exhaustive service with longest-queue-first scheduling and can achieve 100% throughput under non-uniform traffic. A D-LQF request consists of 3 bits: besides the 1 bit indicating whether the VOQ has data, there are 2 more bits, a new-packet-arrival flag and a grant-rejected flag. Using the new-packet-arrival flags, each output port can maintain N packet counters tracking the lengths of its N VOQs. In the request phase of time slot t, if a packet of VOQ(i, j) was scheduled in the previous time slot t-1 and VOQ(i, j) is non-empty in the current slot, input port i sends a request to output port j; otherwise, input port i sends requests for all its non-empty VOQs. In the grant phase, output port j grants, among all requesting input ports, the input port it served in the previous time slot; otherwise, output port j grants the longest VOQ. Finally, in the accept phase, input port i simply selects its longest VOQ. Thanks to exhaustive service, D-LQF achieves 100% throughput even under non-uniform traffic patterns, but under uniform traffic its performance does not match methods using preferred input-output pairs, and it still cannot break the RTT delay lower bound.
Distributed DRRM (Distributed Dual Round Robin Matching) improves upon the traditional centralized scheduling algorithm DRRM. In centralized DRRM, each input port sends a single-bit request for one non-empty VOQ. In the request and grant phases, input/output ports send requests/grants according to round-robin polling pointers, and a pointer is updated only when its input-output pair matches successfully. Distributed DRRM makes two improvements over the centralized version: a) within the same RTT window, each VOQ uses a different pointer in each time slot; b) a PRC (Pending Request Counter) is maintained for each VOQ. Specifically, after input port i sends a request, RTT time slots pass before it receives a grant or learns that the request was rejected, so its pointer does not change until a grant is received. An output port, by contrast, updates its pointer immediately after sending a grant, because a grant is always delivered. A request not yet granted is called a pending request: after an input port sends a request for a VOQ, the PRC of that VOQ is incremented, and it is decremented when the corresponding grant is received. An input port is allowed to send a request only while the PRC is smaller than the VOQ length. Although the multiple independent pointers of distributed DRRM reduce the synchronization of requests and grants between input and output ports, its performance is still inferior to preferred input-output pairs.
The centralized scheduling algorithm iSLIP has also been extended to distributed systems, giving the distributed algorithm i-ΔSLIP. Unlike DRRM, which selects a single non-empty VOQ to request, iSLIP sends requests for all non-empty VOQs. It is worth noting that the distributed system underlying i-ΔSLIP is not the fully distributed system of Fig. 2(b): it still assumes a centralized scheduler, only located farther from the line cards, so that RTT is the propagation delay between a line card and the scheduler. i-ΔSLIP no longer sends one request per non-empty VOQ; instead, like D-LQF, it reports each newly arrived packet. The scheduler can therefore track the length of each VOQ, and all non-empty VOQs can take part in the subsequent multiple iterations. The distributed system used by i-ΔSLIP does not scale well, so we usually consider only the fully distributed system of Fig. 2(b).
Summary of the invention
The purpose of the present invention is to overcome the deficiencies of the prior art and provide a prediction method applied to distributed scheduling algorithms of input-queued switches.
The technical solution of the present invention is as follows.
The invention discloses a prediction method applied to distributed scheduling algorithms of input-queued switches. In the scheduling algorithm of an input-queued switch, each output port maintains an active queue A(j) of length N to track active input ports. When output port j receives a request or a data packet from input port i, i is inserted at the head of A(j); if the length of A(j) exceeds N, an element is removed from the tail. When an output port receives no request, or all of its packet counters are 0, it enters prediction mode, sends a prediction grant to the input port at the head of A(j), and then moves that input port to the tail of A(j); the complexity of maintaining A(j) is only O(1). When an output port has received a request, or its packet counters are not all 0, it sends grants according to the original scheduling algorithm.
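One simple realization (our sketch, not the patent's specified implementation) of the active queue A(j) described above, using a bounded deque seeded with all input ports (an assumption, so prediction mode always has a target): the head is the most recently active input, and after a prediction grant the head is recycled to the tail:

```python
from collections import deque

class ActiveQueue:
    """Active queue A(j) of one output port: tracks recently active input ports."""
    def __init__(self, n: int):
        self.n = n
        self.q = deque(range(n), maxlen=n)  # maxlen drops the tail when full

    def on_activity(self, i: int):
        """Input i sent a request or a packet: move it to the head."""
        try:
            # deque.remove is O(N) worst case; a linked list plus an index
            # map would give the strict O(1) the patent claims.
            self.q.remove(i)
        except ValueError:
            pass
        self.q.appendleft(i)  # with maxlen=n, a tail element is evicted if full

    def predict(self) -> int:
        """Prediction mode: grant the head of A(j), then recycle it to the tail."""
        i = self.q.popleft()
        self.q.append(i)
        return i

a = ActiveQueue(4)
a.on_activity(2)
assert a.predict() == 2   # the most recently active input gets the prediction grant
assert a.q[-1] == 2       # ...and moves to the tail afterwards
```

Recycling a predicted input to the tail matches the rule stated later: a VOQ whose prediction grant went unanswered has seen no arrival for at least RTT slots and should get the lowest prediction priority.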
Preferably, the input-queued switch scheduling algorithm in the method of the present invention is a distributed scheduling algorithm, or a centralized scheduling algorithm extended to a distributed system.
For scheduling algorithms whose output ports cannot maintain an active queue from the requests, no output port maintains an active queue; when an output port enters prediction mode, it selects the input port with the highest scheduling priority and sends it a prediction grant.
When an output port grants using more than two different priority levels, it ignores the lowest priority and enters prediction mode early, so as to increase the probability of a successful match.
When the input-queued switch scheduling algorithm is the distributed scheduling algorithm SRR: if an output port receives requests, it sends grants according to the original algorithm; if it receives no request, it enters prediction mode and sends a prediction grant to its preferred input port of the current time slot.
When the input-queued switch scheduling algorithm is the centralized scheduling algorithm HRF/RC: an output port decodes the input ports into four different priority levels according to the requests; if input ports of the first two priority levels exist, it grants the input port with the highest priority; otherwise, the output port ignores the input ports of the third priority level, directly enters prediction mode, and sends a prediction grant according to the active queue.
In an input-queued switch, packet delay consists mainly of queueing delay, transmission delay (1 time slot), and propagation delay (half an RTT time slot). Only the queueing delay is determined by the scheduling algorithm, and it is the focus of our study; the delay mentioned below always refers to the queueing delay. The queueing delay of most existing distributed scheduling algorithms is at least RTT time slots, even at very low load, with only a handful of exceptions. Clearly, this RTT lower bound hinders the high-speed scaling of switches. The invention proposes a request prediction mechanism, Request Prediction (RP), so that after a centralized scheduling algorithm is extended to a distributed system, its low-load delay can be reduced below RTT. RP can also be applied directly to existing distributed scheduling algorithms to reduce delay and improve performance.
In nearly all distributed scheduling algorithms, output port j selects an input port to grant on one of only two grounds: it has received a request, or a packet counter is non-zero. First assume that each output port j maintains a packet counter C(i, j) for each of its VOQs VOQ(i, j) (i = 0, 1, ..., N-1). Because of the RTT delay between input and output ports, there are two sources of error between C(i, j) and the actual length of VOQ(i, j): when a packet arrives, and when a packet departs. Suppose a new packet arrives at VOQ(i, j) at time slot t; input port i updates VOQ(i, j) immediately and sends a packet-arrival report to output port j, which takes half an RTT to arrive. Thus during [t, t+RTT/2), C(i, j) < VOQ(i, j). Now suppose that at time slot t output port j sends a grant to input port i; the corresponding C(i, j) is decremented immediately, although the grant will not necessarily be accepted. The grant reaches the input port at time t+RTT/2; if it is accepted, the corresponding packet is scheduled and the length of VOQ(i, j) is decremented only then, so again during [t, t+RTT/2), C(i, j) < VOQ(i, j). If the grant fails, the output port learns this only at time t+RTT and then increments C(i, j) back, so during [t, t+RTT), C(i, j) < VOQ(i, j). Both sources of error therefore lead to C(i, j) < VOQ(i, j): even when C(i, j) at the output port is 0, the actual VOQ(i, j) at the input port may well be longer than 0.
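A toy timeline (our illustration; RTT = 4 is an assumed value) of the arrival-side error above: the output-side counter C(i, j) sees each arrival RTT/2 slots late, so C never overestimates the queue, and C = 0 does not imply an empty VOQ:

```python
RTT = 4  # time slots; assumed value for illustration

def counter_view(arrivals, horizon):
    """arrivals: time slots at which a packet arrives at VOQ(i, j).
    Returns (voq_len, C) per slot, where C registers each arrival
    RTT/2 slots after the input port does (departures ignored here)."""
    voq = [sum(1 for a in arrivals if a <= t) for t in range(horizon)]
    c = [sum(1 for a in arrivals if a + RTT // 2 <= t) for t in range(horizon)]
    return voq, c

voq, c = counter_view(arrivals=[1], horizon=5)
# During [1, 1 + RTT/2) the counter underestimates the queue: C < VOQ.
assert voq[1] == 1 and c[1] == 0
assert all(ci <= vi for ci, vi in zip(c, voq))
```

The departure-side error works the same way in the opposite direction of causality, and in both cases the inequality C(i, j) ≤ VOQ(i, j) holds, which is what makes a prediction grant safe to attempt.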
Next consider the second case, where an output port sends a grant only after receiving at least one request. Suppose input port i sends a request to output port j at time slot t, i.e., VOQ(i, j) > 0. Since RTT time slots pass between sending a request and receiving the grant, VOQ(i, j) may well be 0 by the time the grant reaches the input port. Conversely, if VOQ(i, j) = 0 at time slot t, the input port sends no request, yet new packets may arrive within the RTT and make VOQ(i, j) non-empty again. In short, the error again stems from the same two events, packet arrival and packet departure.
This error between input and output ports motivates us to let output port j select an input port and send it a grant even when it has received no request and all its packet counters are 0; such a grant is called a prediction grant. Clearly, if output port j sends no grant at all in this situation, it is certain to receive no packet RTT time slots later. If output port j has received a request, or has a non-zero packet counter, it is in normal mode and sends grants as usual. Only when output port j has received no request, or all its packet counters are 0, does it enter prediction mode and select an input port i to send a prediction grant.
When the prediction grant reaches input port i after RTT/2 time slots, three cases can occur:
a) VOQ(i, j) = 0: the prediction grant is ignored and has no effect on the matching size.
b) VOQ(i, j) > 0 and input port i receives no other grant: the prediction grant is accepted, input port i and output port j match successfully and a packet is sent, and the matching size increases.
c) VOQ(i, j) > 0 and input port i receives multiple grants: the input port accepts one of them according to the specific scheduling algorithm (e.g., the longest queue); the matching size does not change, but the matching weight may increase.
Summing up the three cases, the request prediction mechanism has a certain probability of increasing the matching size or matching weight of a scheduling algorithm, and even a failed prediction grant causes no harm. Suppose output port j sends a prediction grant to input port i at time slot t; the grant can be accepted only if VOQ(i, j) > 0 at time t+RTT/2. To increase the probability of a successful prediction, the correlation and continuity of network traffic can be exploited when selecting the input port to predict. Under real network conditions, the packet arrival rates of adjacent time slots are usually correlated: if a new packet arrives at VOQ(i, j) at time slot t, then the probability that another packet arrives at VOQ(i, j) at time slot t+1 is higher than for the other VOQs of input port i. The input port whose packet arrived most recently should therefore have the highest prediction priority. If a prediction grant sent by output port j is not accepted, the corresponding VOQ has had no packet arrival for at least RTT time slots and should therefore have the lowest prediction priority.
The beneficial effects of the technical solution of the present invention: with the request prediction mechanism RP, traditional centralized scheduling algorithms can easily be extended to distributed systems, and their low-load delay is reduced below RTT. RP can also be applied directly to certain existing distributed scheduling algorithms to further optimize their performance and obtain lower packet delay.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of an input-queued switch with a centralized scheduler;
Fig. 2 shows scheduler implementations;
Fig. 3 is a schematic diagram of the round-trip time (RTT) between input/output selectors.
Specific Embodiments
In a distributed system, because of the RTT, state information between input and output ports is out of sync. We exploit this asynchrony: when an output port has received no request or holds no packet, it sends a prediction grant to an input port. Furthermore, using the continuity and correlation of data traffic in real networks, an active queue is used to increase the probability that the prediction grant succeeds. This method effectively increases the matching size at low load and reduces packet delay, breaking the limitation that the minimum delay is RTT.
The invention is further described below with reference to the embodiments.
Embodiment one
RR/LQF (Round Robin with Longest Queue First [20]) is a centralized iterative scheduling algorithm in which each port needs only 1 bit for each request, grant, and accept message. Each 1-bit request indicates that a new packet has arrived at the corresponding VOQ, so from the requests an output port can track the lengths of its N VOQs. RR/LQF also uses the preferred input-output pairs of (1): in the grant and accept phases, the highest priority goes to the preferred input-output pair and the second priority to the longest queue. RR/LQF performs well in a centralized setting, but after extension to a distributed system its performance is mediocre and it is bound by the RTT delay lower bound. Applying the request prediction mechanism RP to RR/LQF reduces delay and improves performance, with low-load delay below RTT; the improved algorithm is called RP-RR/LQF. In RP-RR/LQF, each output port j maintains, besides the N packet counters C(i, j), an active queue A(j) that tracks active input ports. Its three phases are as follows:
Request phase: if a new packet arrives at VOQ(i, j) in time slot t, input port i sends a 1-bit request to output port j.
Grant phase: when output port j receives a request from input port i, C(i, j) is incremented and input port i is moved to the head of queue A(j). If all C(i, j) ≤ 0 (i = 0, 1, ..., N-1), output port j enters prediction mode, sends a prediction grant to the input port at the head of A(j), and moves that input port to the tail. If a non-empty VOQ exists, output port j sends a grant in normal mode: it determines its preferred input port i according to (1) and, if C(i, j) > 0, sends it a 1-bit grant; otherwise, output port j grants the input port with the largest C(i, j).
After sending a grant or a prediction grant, the output port decrements the corresponding packet counter C(i, j). If, after RTT time slots, output port j has not received the corresponding packet, the grant failed and C(i, j) is incremented back. Note that after output port j sends a prediction grant, the corresponding C(i, j) < 0, because C(i, j) ≤ 0 before the prediction grant was sent. Although an actual VOQ length cannot be negative, C(i, j) is never perfectly synchronized with VOQ(i, j) under RTT anyway; C(i, j) < 0 is only temporary and has no negative effect on the performance of the scheduling algorithm.
Accept phase: input port i determines its preferred output port j according to (1); if it received a grant from output port j and VOQ(i, j) > 0, input port i sends a packet to output port j; otherwise, input port i selects, among all grants, the longest VOQ and sends a packet. (Since distributed scheduling algorithms are all single-iteration, no accept message needs to be sent in actual scheduling; the packet is transmitted directly.)
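A condensed sketch (our illustration, with hypothetical helper names, one output port, and the RTT feedback pipeline omitted) of the grant phase above: counter-based normal mode with preferred-pair priority, falling back to prediction mode when all counters are ≤ 0:

```python
from collections import deque

def grant_phase(j, t, n, C, A):
    """Grant decision of output port j in RP-RR/LQF (simplified, no RTT pipeline).

    C: list of packet counters C[i], i = 0..n-1
    A: deque, active queue A(j), head = most recently active input port
    Returns (granted_input, is_prediction).
    """
    if all(c <= 0 for c in C):
        # Prediction mode: grant the head of A(j), recycle it to the tail.
        i = A.popleft()
        A.append(i)
        C[i] -= 1          # counter may go negative; restored later if the grant fails
        return i, True
    # Normal mode: preferred input first (inverse of equation (1)), else largest counter.
    pref = (j - t) % n
    i = pref if C[pref] > 0 else max(range(n), key=lambda k: C[k])
    C[i] -= 1
    return i, False

C = [0, 2, 0, 1]
A = deque([1, 3, 0, 2])
i, pred = grant_phase(j=0, t=3, n=4, C=C, A=A)
# Preferred input of output 0 at t = 3 is (0 - 3) mod 4 = 1, and C[1] > 0.
assert (i, pred) == (1, False) and C[1] == 1
```

In a full simulation this decision would be pipelined, with the decrement of C(i, j) undone RTT slots later when no packet arrives, as described in the paragraph before the accept phase.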
Simulation results show that RP-RR/LQF obtains a delay below RTT under both uniform and hotspot traffic patterns. Under hotspot traffic in particular, the low-load delay of RP-RR/LQF is much lower than that of RR/LQF.
Embodiment two
HRF/RC (Highest Rank First with Request Compression [21]) is another effective centralized single-iteration scheduling algorithm. In HRF/RC, each input port ranks its N VOQs by length; a VOQ rank has only three states: empty, non-empty (non-longest), and longest. Each single-bit request indicates a change of VOQ state. Let dt denote the request sent at time slot t: input port i sends dt = 0 to output port j when the rank of VOQ(i, j) rises (e.g., from empty to non-empty), and dt = 1 otherwise. If VOQ(i, j) stays longest, dt = 0 always; if VOQ(i, j) stays empty, dt = 1 always; if VOQ(i, j) stays non-empty (non-longest), alternating 0s and 1s are sent. Combining dt with the request dt-1 received at time slot t-1, an output port can decode the state of the corresponding VOQ: if dtdt-1 = 00, r = 0 (VOQ longest); if dtdt-1 = 01, r = 1 (VOQ longest or non-empty); if dtdt-1 = 10, r = 2 (VOQ non-empty or empty); if dtdt-1 = 11, r = 3 (VOQ empty). In the grant and accept phases, HRF/RC preferentially selects the VOQ with the smallest r. In addition, HRF/RC also uses the preferred input-output pairs of (1). We apply RP to HRF/RC and obtain the distributed algorithm RP-HRF/RC. At time slot t:
Request phase: if the preferred VOQ(i, j) of input port i is non-empty, input port i sends dt = 0 to output port j and dt = 1 to all other output ports; otherwise, input port i sends dt according to the rank state of each VOQ.
Grant phase: if output port j receives the request dt = 0 from its preferred input port i, output port j sends a grant to input port i. Otherwise, the output port grants the input port with the smallest r. If the smallest r equals 2 or 3, i.e., all dt = 1, output port j enters prediction mode, sends a prediction grant to the input port at the head of A(j), and moves that input port to the tail of A(j).
Accept phase: input port i preferentially accepts the grant sent by its preferred output port; otherwise it selects, among all grants, the output port with the smallest r.
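A small sketch (ours, with hypothetical names) of the two-bit decoding above: the output port combines the current and previous request bits into a rank r, and all-ones requests (smallest r ≥ 2) trigger prediction mode:

```python
def decode_rank(d_t: int, d_prev: int) -> int:
    """Decode the HRF/RC rank r from the current and previous request bits:
    00 -> r=0 (longest), 01 -> r=1 (longest or non-empty),
    10 -> r=2 (non-empty or empty), 11 -> r=3 (empty)."""
    return 2 * d_t + d_prev

def grant_target(requests):
    """requests: {input_port: (d_t, d_prev)}. Returns (port, r) of the best
    input to grant, or None to signal prediction mode (smallest r >= 2,
    i.e. all d_t = 1)."""
    ranked = {i: decode_rank(dt, dp) for i, (dt, dp) in requests.items()}
    i, r = min(ranked.items(), key=lambda kv: kv[1])
    return None if r >= 2 else (i, r)

assert decode_rank(0, 0) == 0 and decode_rank(1, 1) == 3
assert grant_target({0: (1, 1), 1: (0, 1)}) == (1, 1)
assert grant_target({0: (1, 1), 1: (1, 0)}) is None  # all d_t = 1: predict
```

This sketch leaves out the preferred-pair shortcut (granting the preferred input directly when its dt = 0), which takes precedence over the rank comparison.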
It is worth noting that in RP-HRF/RC, output j enters prediction mode only when all d_t = 1. Although the corresponding VOQ may be non-empty when d_t d_{t-1} = 10, a predictive grant is sent instead of granting the d_t d_{t-1} = 10 VOQ. The reason is that d_t = 1, besides indicating a change in the VOQ ranking state, may also indicate that the input's preferred VOQ is non-empty.
a) In the first case, d_t d_{t-1} = 10 indicates that the VOQ may be empty. Moreover, given the continuity and correlation of traffic in real networks, an active VOQ is likely to still be non-empty after RTT slots. Therefore, the success probability of a predictive grant is not necessarily lower than that of granting the d_t d_{t-1} = 10 VOQ.
b) In the latter case, a grant sent by output j to that input is bound to be rejected, because an input always preferentially selects its preferred output.
Simulation results show that RP-HRF/RC achieves delay below RTT under uniform, bursty, and hotspot traffic patterns.
The request-prediction mechanism RP proposed here is more an idea than a specific strategy. That is, when applying RP to a different scheduling algorithm, we can analyze that algorithm's characteristics and make corresponding adjustments; the key point is always to exploit the asynchrony between inputs and outputs to increase the matching size. As for how to select the input to predict, the active queue need not be used, as long as the prediction most likely to succeed can be made. Therefore, changes made on the basis of the technical solution of the present invention, combined with the common knowledge of this field, should all be considered to fall within the protection scope of the present application.
Claims (5)
1. A prediction method applied to distributed scheduling algorithms of input-queued switches, characterized in that: in an input-queued switch scheduling algorithm, for scheduling algorithms in which an output can maintain an active queue from requests, each output maintains an active queue A(j) to track active inputs, with its length bounded by N; whenever output j receives a request or packet from input i, i is added to the head of A(j), and if the length of A(j) exceeds N, an element is removed from the tail; when an output receives no request and its packet counters are all 0, it enters prediction mode, sends a predictive grant to the input at the head of A(j), and after sending moves that input to the tail of A(j), so the complexity of maintaining A(j) is only O(1); when the output has requests or its packet counters are not all 0, it sends grants according to the original scheduling algorithm;
for scheduling algorithms in which an output cannot maintain an active queue from requests, no output maintains an active queue; when an output receives no request and its packet counters are all 0, it enters prediction mode, and on entering prediction mode it selects the input with the highest scheduling priority and sends it a predictive grant.
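The active-queue bookkeeping of claim 1 can be sketched as follows; this is a minimal illustration assuming a deque-backed structure (the class and method names `ActiveQueue`, `on_request`, `predictive_grant` are our own, and the packet-counter checks are left to the caller):

```python
from collections import deque

class ActiveQueue:
    """Sketch of the active queue A(j) of claim 1: tracks recently
    active inputs for one output j. All operations are O(1)."""

    def __init__(self, max_len):
        self.q = deque()
        self.max_len = max_len          # length bound N

    def on_request(self, i):
        """Output j received a request or packet from input i."""
        self.q.appendleft(i)            # add i to the head of A(j)
        if len(self.q) > self.max_len:
            self.q.pop()                # drop an element from the tail

    def predictive_grant(self):
        """No requests received and all packet counters are 0:
        return the input to send a predictive grant to."""
        if not self.q:
            return None                 # no known active input to predict
        i = self.q.popleft()            # input at the head of A(j)
        self.q.append(i)                # move it to the tail of A(j)
        return i
```

Repeated calls to `predictive_grant` thus cycle round-robin over the N most recently active inputs, which is what keeps the per-slot cost constant.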
2. The prediction method according to claim 1, characterized in that the input-queued switch scheduling algorithm is a distributed scheduling algorithm, or a centralized scheduling algorithm extended to a distributed system.
3. The prediction method according to claim 1, characterized in that, when output grants use more than two different priorities, the lowest priority is ignored and prediction mode is entered early, to increase the probability of a successful match.
4. The prediction method according to claim 1, characterized in that, when the input-queued switch scheduling algorithm is the distributed scheduling algorithm SRR: if an output receives a request, it sends a grant according to the original algorithm; if an output receives no request, it enters prediction mode and sends a predictive grant to the preferred input of the current time slot.
5. The prediction method according to claim 1, characterized in that, when the input-queued switch scheduling algorithm is the centralized scheduling algorithm HRF/RC: an output decodes its inputs into four different priorities according to their requests; if inputs of the first two priorities exist, it selects the input with the highest priority and sends it a grant; otherwise, the output ignores the third-priority inputs, directly enters prediction mode, and sends a predictive grant according to the active queue.
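The grant rule of claim 5 can be sketched as a small decision function; this is our illustrative reading of the claim, not the patent's code: `ranks`, `active_head`, and the mapping of "first two priorities" to r <= 1 and "third priority" to r == 2 are assumptions consistent with the r = 0..3 decoding in the description.

```python
def grant_decision(ranks, active_head):
    """Claim 5 grant rule for HRF/RC with RP (illustrative sketch).
    ranks[i] is the decoded priority r in {0,1,2,3} of input i (0 best);
    active_head is the input at the head of the active queue A(j).
    Returns (granted input, is_prediction)."""
    top = [i for i, r in enumerate(ranks) if r <= 1]   # first two priorities
    if top:
        # grant the input with the highest priority (smallest r)
        return min(top, key=lambda i: ranks[i]), False
    # ignore third-priority (r == 2) inputs and predict from A(j) instead
    return active_head, True
```

Skipping the r = 2 inputs reflects point a) of the description: a predictive grant to a known-active input is not necessarily less likely to succeed than granting a VOQ that may have just emptied.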
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610135932.6A CN105847181B (en) | 2016-03-10 | 2016-03-10 | A kind of prediction technique applied to Input queue switch distributed scheduling algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105847181A CN105847181A (en) | 2016-08-10 |
CN105847181B true CN105847181B (en) | 2019-04-30 |
Family
ID=56588047
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610135932.6A Expired - Fee Related CN105847181B (en) | 2016-03-10 | 2016-03-10 | A kind of prediction technique applied to Input queue switch distributed scheduling algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105847181B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106603145B (en) * | 2016-12-30 | 2019-04-05 | 北京航空航天大学 | A kind of spaceborne CICQ fabric switch grouping scheduling method of GEO satellite considering channel status |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101099355A (en) * | 2005-01-06 | 2008-01-02 | Enigma Semiconductor, Inc. | Method and apparatus for scheduling packets and/or cells |
CN103856440A (en) * | 2012-11-29 | 2014-06-11 | Tencent Technology (Shenzhen) Co., Ltd. | Message processing method, server and message processing system based on distributed bus |
CN104854831A (en) * | 2012-12-07 | 2015-08-19 | 思科技术公司 | Output queue latency behavior for input queue based device |
Non-Patent Citations (1)
Title |
---|
Bing Hu et al., "On Iterative Scheduling for Input-queued Switches with a Speedup of 2-1/N," 2014 IEEE 15th International Conference on High Performance Switching and Routing (HPSR), IEEE, Dec. 2014, pp. 26-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Guo et al. | On-line multicast scheduling with bounded congestion in fat-tree data center networks | |
Bogatyrev et al. | Model and interaction efficiency of computer nodes based on transfer reservation at multipath routing | |
CN104780122B (en) | Control method based on the stratification network-on-chip router that caching is reallocated | |
CN105827545A (en) | Scheduling method and device of TCP co-flows in data center network | |
CN108847961A (en) | A kind of extensive, high concurrent certainty network system | |
CN106254254A (en) | A kind of network-on-chip communication means based on Mesh topological structure | |
CN105847181B (en) | A kind of prediction technique applied to Input queue switch distributed scheduling algorithm | |
Xie et al. | Data transfer scheduling for maximizing throughput of big-data computing in cloud systems | |
CN101964747B (en) | Two-stage exchanging structure working method based on preposed feedback | |
CN106911593A (en) | A kind of industrial control network array dispatching method based on SDN frameworks | |
CN103259723A (en) | Energy conservation method based on combination of data center network routing and flow preemptive scheduling | |
CN105072046A (en) | Delay tolerant network congestion prevention method based on data concurrence and forwarding by token control node | |
KR20040055312A (en) | Input Buffered Switches and Its Contention Method Using Pipelined Simple Matching | |
Deng et al. | Container and microservice-based resource management for distribution station area | |
Bogatyrev et al. | Inter-machine exchange of real time in distributed computer systems | |
Li et al. | Efficient communication scheduling for parameter synchronization of dml in data center networks | |
Konstantinidou | Segment router: a novel router design for parallel computers | |
Zhang et al. | SARSA-Based Computation Offloading between Cloudlets with EON | |
Razzaque et al. | Multi-token distributed mutual exclusion algorithm | |
CN110430146A (en) | Cell recombination method and switching fabric based on CrossBar exchange | |
Yang et al. | Research on performance of asymmetric polling system based on blockchain | |
Keykhosravi et al. | Multicast scheduling for optical data center switches with tunability constraints | |
CN117135107B (en) | Network communication topology system, routing method, device and medium | |
Lygdenov et al. | Data transmission with priority service and limited capacity storage in unstationary flow and high load | |
CN117176648B (en) | Method, system, equipment and medium for realizing distributed routing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20190430 Termination date: 20200310 |