CN101237386A - Method for flow dispatching in Ethernet network - Google Patents

Method for flow dispatching in Ethernet network

Info

Publication number
CN101237386A
Authority
CN
China
Prior art keywords
stream
delay
cycle
bridge
schedule information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2007100033132A
Other languages
Chinese (zh)
Inventor
吴起
黄周松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Original Assignee
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Samsung Telecommunications Technology Research Co Ltd, Samsung Electronics Co Ltd filed Critical Beijing Samsung Telecommunications Technology Research Co Ltd
Priority to CNA2007100033132A priority Critical patent/CN101237386A/en
Publication of CN101237386A publication Critical patent/CN101237386A/en
Pending legal-status Critical Current


Abstract

The invention provides a method for stream scheduling in a network, comprising the following steps: a sending end generates a new stream and sends a stream description/schedule message for it to a receiving end via a number of bridges; the stream description/schedule message carries the stream's bandwidth demand B, its end-to-end delay demand D, the current accumulated delay d, and the number S of the cycle in which the stream was generated or last reserved for forwarding. A bridge that receives the stream description/schedule message searches, starting after the generation/reserved-forwarding cycle number S received from its upstream node, for the first cycle that can satisfy the bandwidth demand B in the message and takes it as the reserved forwarding cycle. The bridge records the number T of the reserved forwarding cycle, updates the generation/reserved-forwarding cycle number S and the current accumulated delay d in the message, and, provided the current accumulated delay d is less than the end-to-end delay demand D, forwards the stream description/schedule message to the downstream bridge, until the message reaches the receiving end.

Description

Method for flow scheduling in an Ethernet network
Technical field
The present invention relates to the field of quality-of-service (QoS) guarantees and scheduling in computer and communication networks, and in particular to QoS guarantees and scheduling at the medium access sublayer, such as in Ethernet networks conforming to IEEE 802.3. More specifically, the present invention relates to a method of stream scheduling in an Ethernet network that can efficiently support low-bandwidth, low-delay services, for example VoIP (voice over IP), while minimizing the delay caused by the asynchrony between bridges.
Background technology
To guarantee quality of service over Ethernet, IEEE established the Residential Ethernet study group. The group's current research report introduces a scheduling cycle of 125 microseconds. Within each cycle, synchronous traffic, representing multimedia applications, is sent with higher priority than asynchronous traffic, representing traditional Ethernet applications. To prevent excessive synchronous traffic from starving the asynchronous traffic, the maximum utilization of synchronous traffic in each cycle is limited to 75%. The 125-microsecond cycle was chosen mainly by reference to the existing IEEE 1394 standard, which has been widely used to connect audio/video devices; another reason is that a short scheduling cycle makes it easier to provide low delay and low jitter.
On the basis of the 125-microsecond cycle, the research report proposes a mechanism, called pacing, for scheduling synchronous traffic so as to achieve low delay and low jitter:
By holding each synchronous frame until its proper transmission time, pacing maintains the traffic pattern along the entire path of a stream, thereby guaranteeing low jitter and bounding the cache space allocated in the network. Specifically, class-A frames are controlled so that they do not leave a switch earlier than expected: synchronous class-A frames arriving in cycle n are blocked until the beginning of cycle n+p. In cycle n+p, after the non-class-A frames of cycle n+p-1 have been handled, the transmitter begins to send the class-A frames of cycle n; for the next bridge these frames then belong to cycle n+p. The delay, jitter, and buffering demand of each bridge are thus determined by the pacing parameter p.
In essence, pacing is a non-work-conserving scheduling mechanism, and it trades delay against bandwidth-allocation granularity: a smaller delay forces a coarser allocation granularity, and vice versa. Pacing adjusts this trade-off through the length of the scheduling cycle. The shorter the cycle, the smaller the delay but the coarser the allocation granularity; the longer the cycle, the larger the delay but the finer the granularity. To obtain low delay, the Residential Ethernet study group chose a very short scheduling cycle (125 microseconds), which leads to very coarse bandwidth allocation. Even if an application uses small 128-byte frames and inter-frame gaps are ignored, the minimum allocatable bandwidth is 8.192 megabits per second (Mbps). For an important present-day application such as IP telephony, which typically occupies only 3 to 12 kilobits per second (Kbps), the bandwidth waste reaches 99.8%.
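The granularity arithmetic above can be checked with a short calculation (a sketch for illustration, not part of the patent; the 128-byte frame and the ~12 kbps VoIP figure are taken from the text):

```python
# Minimum bandwidth granularity of a per-cycle reservation scheme: one frame
# per cycle is the smallest reservable unit, so a short cycle forces a coarse
# granularity.

def min_reserved_bps(frame_bytes: int, cycle_seconds: float) -> float:
    """Bandwidth consumed by reserving one frame in every cycle."""
    return frame_bytes * 8 / cycle_seconds

granularity = min_reserved_bps(128, 125e-6)  # smallest frame, gaps ignored
print(granularity)                           # ~8.192e6 bps, i.e. 8.192 Mbps

waste = 1 - 12_000 / granularity             # a ~12 kbps VoIP flow
print(round(waste, 3))                       # 0.999: over 99.8% of the reservation is wasted
```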
To alleviate this problem, the Residential Ethernet study group adopted a rate-based priority scheduling mechanism in the report, as shown in Fig. 2. Specifically, class-A traffic is subdivided into four subclasses, CLASS_A0, CLASS_A1, CLASS_A2 and CLASS_A3, whose scheduling cycles are changed from the original 125 microseconds to 125 microseconds, 500 microseconds, 2 milliseconds and 8 milliseconds, respectively. The CLASS_A0 subclass represents the highest-rate traffic, sent at an 8 kHz cycle frequency, while CLASS_A1, CLASS_A2 and CLASS_A3 represent successively lower-rate traffic, such as IP telephony.
Introducing more traffic subclasses gives the user more choices of traffic priority, yet this scheme still does not resolve the tension between delay and bandwidth-allocation granularity. That is, delay must still be sacrificed to reach higher network utilization; for low-bandwidth, low-delay services of the IP-telephony kind, it still means a very large waste of bandwidth.
Present Ethernet methods thus have difficulty efficiently supporting applications with low bandwidth and low delay requirements, such as IP telephony or interactive games. Concretely, for a low-rate stream that sends data only once every 8 milliseconds, the asynchrony between the generation of the stream's data and the time slot reserved for the stream makes a waiting time of up to 8 milliseconds unavoidable. We would like this large asynchrony-induced delay to occur only once over the whole end-to-end transmission; that is, through coordinated scheduling among the bridges, the delay caused by the asynchrony between bridges should be reduced as far as possible.
Summary of the invention
The present invention has been proposed to overcome the above problems of the prior art. Its object is therefore to provide a method of stream scheduling in an Ethernet network that can efficiently support low-bandwidth, low-delay services, for example VoIP, while minimizing the delay caused by the asynchrony between bridges.
To achieve this object, according to the present invention, a method of stream scheduling in a network is proposed, comprising: a sending end generates a new stream and sends a stream description/schedule message for it to a receiving end via a number of bridges, the stream description/schedule message carrying the stream's bandwidth demand B, end-to-end delay demand D, current accumulated delay d, and generation/reserved-forwarding cycle number S; and a bridge that receives the stream description/schedule message finds, after the generation/reserved-forwarding cycle number S received from its upstream node, the first cycle that can satisfy the bandwidth demand B in the message and takes it as the reserved forwarding cycle; the bridge records the number T of this reserved forwarding cycle, updates the generation/reserved-forwarding cycle number S and the current accumulated delay d in the message, and, provided the current accumulated delay d is less than the end-to-end delay demand D, forwards the message to the downstream bridge, until the message reaches the receiving end.
Preferably, when the receiving end receives the stream description/schedule message, it checks whether the current accumulated delay d is less than the end-to-end delay demand D; if so, the stream is scheduled successfully, otherwise the new stream is rejected.
Preferably, the method further comprises: at a bridge, if the current accumulated delay d is greater than or equal to the end-to-end delay demand D, the new stream is rejected.
Preferably, at the sender, the current accumulated delay d in the stream description/schedule message equals 0.
Preferably, the current accumulated delay d at the current bridge equals the delay at the upstream node plus the offset of the reserved forwarding cycle found at the current bridge relative to the generation/reserved-forwarding cycle at the upstream node.
Preferably, the stream scheduling is performed on the basis of superframes, each superframe comprising 64 cycles of 125 microseconds each.
Preferably, the current accumulated delay d at the current bridge is the delay d at the upstream node plus 0.000125 × ((T − S + 64) MOD 64), where MOD denotes the modulo operation.
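The preferred delay update can be expressed directly as a small function (an illustrative sketch; the function name and default arguments are assumptions, only the formula itself comes from the text):

```python
def updated_delay(d: float, S: int, T: int, cycles_per_superframe: int = 64,
                  cycle_seconds: float = 125e-6) -> float:
    """Accumulated delay after a bridge reserves forwarding cycle T for a
    stream tagged with generation/reserved-forwarding cycle S; the modulo
    handles wrap-around at the superframe boundary."""
    return d + cycle_seconds * ((T - S + cycles_per_superframe) % cycles_per_superframe)

print(updated_delay(0.0, S=1, T=3))    # two cycles added: ~0.00025 s
print(updated_delay(0.0, S=63, T=2))   # wrap-around, three cycles: ~0.000375 s
```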
Preferably, the sending end, each bridge and the receiving end are superframe-synchronized.
Preferably, the method is applied to low-rate, low-delay services.
Preferably, the network is an Ethernet.
Description of drawings
The above object, advantages and features of the present invention will become apparent from the following detailed description of preferred embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a schematic diagram of an example of the stream scheduling method according to the present invention;
Fig. 2 is a schematic diagram of the rate-based priority scheduling method of the prior art;
Fig. 3 is a schematic diagram of the superframe structure adopted in the present invention;
Fig. 4 shows the delay distribution of the scheduling results of the coordinated scheduling scheme of the present invention (delay constraint of 8 cycles) and the prior-art pacing scheme;
Fig. 5 shows the delay distribution of the scheduling results of the coordinated scheduling scheme of the present invention (delay constraint of 32 cycles) and the prior-art rate-based priority scheduling scheme (priority CLASS_A1); and
Fig. 6 shows the delay distribution of the scheduling results of the coordinated scheduling scheme of the present invention (delay constraint of 128 cycles) and the prior-art rate-based priority scheduling scheme (priority CLASS_A2).
Embodiment
To enable better coordinated scheduling between bridges, the present invention defines a superframe-based scheduling framework and a cooperation algorithm between bridges. A superframe starts from a scheduling cycle whose number is a multiple of 64, and its length equals 64 scheduling cycles (each scheduling cycle being 125 microseconds), i.e. 8 milliseconds.
Fig. 3 shows the structure of a superframe. In the pacing scheme, if no stream joins or leaves, the schedule of every cycle stays constant; in the present invention, the schedules of the cycles within a superframe may all differ. Here a schedule contains the characteristic description of each stream and the amount of traffic each stream is allowed to pass in a given cycle. Based on the schedule of the current bridge and the schedules of the neighbor bridges, the superframe-based scheduling algorithm decides whether a new stream can be admitted; if so, the new stream is scheduled and resources are allocated for it.
The cooperation algorithm exchanges the current schedule information between neighbor bridges, enabling the scheduling algorithm to use this information to reduce the end-to-end delay as far as possible. After the scheduling algorithm has scheduled a new stream according to the schedules (including the stream schedule information) of the current bridge and its neighbors, the cooperation algorithm sends the updated schedule to the neighbor bridges.
The cooperation algorithm has two main functions: the first is synchronization of the superframe starting position, superframe synchronization for short; the second is the exchange of schedules (stream schedule information) between bridges.
For the first function, present residential Ethernet has already introduced the notion of network-wide synchronization, on which basis superframe synchronization is easy to achieve. In current residential Ethernet, devices exchange information to select the device with the highest clock accuracy as the grandmaster (GrandMaster), and all other devices synchronize their time with it, directly or indirectly. Once the grandmaster has been selected, it determines the superframe starting position and broadcasts it to the whole network; the remaining devices adopt the starting position announced by the grandmaster as their own, thereby achieving network-wide synchronization of the superframe starting position.
For the second function of the cooperation algorithm, in order to reduce unnecessary updates and exchanges, only stream schedule information that has changed is exchanged. Specifically, during the admission-control phase, the upstream bridge sends the schedule of the new stream, i.e. in which cycles the stream is allowed to forward and how much information may be transmitted in those cycles, to the downstream bridge. Here downstream and upstream take the sender of the stream as the reference point: of two bridges, the one nearer the sender is called upstream and the one farther from the sender is called downstream.
The stream scheduling process involves an admission-control algorithm. Before presenting it, the notion of spare capacity needs to be introduced. Spare capacity refers to the amount of additional class-A traffic that a given port of a bridge could still forward within a given time range. Since class-A traffic is subdivided into four subclasses, spare capacity must be accounted separately over the four subclass cycle lengths (i.e. 125 microseconds, 500 microseconds, 2 milliseconds and 8 milliseconds). By this definition, if the spare capacity over some subclass cycle length is less than zero, the current streams exceed the bridge's processing capacity, i.e. the bridge cannot satisfy the demands of all current streams; otherwise the bridge can satisfy them. The admission-control algorithm therefore needs to check whether, after the new stream is added, the spare capacity over the cycles of all four subclasses remains greater than or equal to zero. If so, the new stream can be admitted; otherwise, it must be rejected.
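The admission check described above can be sketched as follows (all names, units and numbers are illustrative assumptions; only the rule itself, non-negative spare capacity on all four subclass periods, comes from the text):

```python
# Admission-control sketch: a port tracks reserved class-A traffic per subclass
# period (125 us, 500 us, 2 ms, 8 ms). A new stream is admitted only if, after
# adding its demand, the spare capacity on every period length stays >= 0.

PERIOD_CYCLES = {"A0": 1, "A1": 4, "A2": 16, "A3": 64}  # cycles per reservation period

def admit(capacity_per_period, reserved, new_demand):
    """capacity_per_period / reserved / new_demand: bytes per subclass period,
    keyed like PERIOD_CYCLES. Returns True if spare capacity stays >= 0."""
    return all(capacity_per_period[k] - reserved[k] - new_demand.get(k, 0) >= 0
               for k in PERIOD_CYCLES)

cap = {"A0": 1171, "A1": 4687, "A2": 18750, "A3": 75000}   # e.g. 100 Mbps, 75% reservable
used = {"A0": 900, "A1": 4000, "A2": 15000, "A3": 60000}
print(admit(cap, used, {"A3": 272}))    # True: one more 272-byte frame per 8 ms fits
print(admit(cap, used, {"A0": 500}))    # False: would exceed the 125 us budget
```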
Under the cooperation algorithm defined above, a bridge knows only the end-to-end delay demand and how long the stream has needed so far to travel from the sender to the bridge itself, and not the situation between itself and the receiver. To keep a stream within its end-to-end delay demand, each bridge must therefore reduce the queuing delay of the stream as much as possible. In addition, for compatibility with the pacing mechanism, a stream received in cycle K can be forwarded no earlier than cycle K+1. The scheduling algorithm provided by the present invention is thus as follows:
The sender sends the relevant information of the new stream (its stream description/schedule message; at this point, stream description information) toward the receiver through a number of bridges; this stream description information carries the stream's bandwidth demand B, end-to-end delay demand D, current accumulated delay d, and the number S of the cycle in which the stream is generated.
When a bridge captures the information of the stream, it searches, starting from the cycle after the one in which it expects to receive the stream (i.e. after the generation/reserved-forwarding cycle number S received from the upstream node), for the first cycle that can satisfy the stream's bandwidth demand, as the reserved forwarding cycle for the stream.
The bridge updates the stream description/schedule message (the stream schedule information at the bridge), including the accumulated delay d and the cycle number S (the generation/reserved-forwarding cycle number, which at a bridge is the number of the forwarding cycle reserved for the stream). The current accumulated delay d at the current bridge is the delay d at the upstream node plus 0.000125 × ((T − S + 64) MOD 64), where MOD denotes the modulo operation.
If the updated accumulated delay is less than the end-to-end delay demand, the stream schedule information is forwarded on toward the receiver; otherwise, the stream is rejected.
When the receiver receives the stream description/schedule message of the new stream, it checks whether the accumulated delay is less than the end-to-end delay demand; if so, the scheduling has succeeded, otherwise the stream is rejected.
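The per-bridge step of the algorithm above can be sketched as follows (an illustrative sketch; the data structures and names are assumptions, and delay is counted in cycles rather than seconds for simplicity):

```python
# Per-bridge scheduling step: starting from the cycle after the one in which
# the stream is expected to arrive (S), find the first cycle whose remaining
# bandwidth covers demand B, reserve it, and update S and the accumulated
# delay d; reject if no cycle fits or the delay budget D would be exceeded.

CYCLES = 64  # cycles per superframe

def schedule_at_bridge(remaining, B, S, d, D):
    """remaining: list of per-cycle spare bandwidth at this bridge.
    Returns (T, new_S, new_d) on success, or None if the stream is rejected."""
    for offset in range(1, CYCLES + 1):          # earliest usable cycle is S+1
        T = (S + offset) % CYCLES
        if remaining[T] >= B:
            new_d = d + offset
            if new_d < D:                        # still within end-to-end budget
                remaining[T] -= B                # reserve resources in cycle T
                return T, T, new_d
            return None                          # delay budget exceeded
    return None                                  # no cycle has enough bandwidth

rem = [20] * CYCLES
rem[2], rem[3] = 5, 10                           # cycle 2 too small, cycle 3 fits
print(schedule_at_bridge(rem, B=7, S=1, d=0, D=80))  # (3, 3, 2), as at bridge 1 in Fig. 1
```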
Preferred embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 shows a schematic diagram of an example of the stream scheduling method according to the present invention.
As shown in Fig. 1, suppose there are N bridges on the path from the sender to the receiver, the new stream occupies a bandwidth of 7, its end-to-end delay demand is 80, and the sender generates the stream in cycle 1. On this example the algorithm operates as follows:
In the first step, the sender sends the descriptive information of the stream toward the receiver: the bandwidth B equals 7, the end-to-end delay demand D equals 80, the stream's cycle number S equals 1, and the accumulated delay d is 0, as shown in the sender's row of Fig. 1.
In the second step, after bridge 1 receives the sender's information, it checks, from the cycle after the stream's cycle (i.e. cycle 2) onward, for the first cycle whose remaining bandwidth meets the demand of 7. As the row of bridge 1 in Fig. 1 shows, the remaining bandwidth of cycle 2 equals 5, which does not meet the demand, while that of cycle 3 equals 10, making it the first cycle that does.
In the third step, bridge 1 reserves resources for the new stream in cycle 3. At the same time, the stream's cycle number is updated to 3 and the accumulated delay to 2, while the bandwidth and end-to-end delay demands remain unchanged.
In the fourth step, since the accumulated delay is less than the end-to-end delay demand, bridge 1 forwards the updated information on toward the receiver, as shown in the row of bridge 1 in Fig. 1.
In the fifth step, after bridge 2 receives the information forwarded by bridge 1, it checks, from the cycle after the stream's cycle (i.e. cycle 4) onward, for the first cycle whose remaining bandwidth meets the demand of 7. As the row of bridge 2 in Fig. 1 shows, the remaining bandwidth of cycle 4 equals 10, so cycle 4 is the first cycle that meets the demand.
In the sixth step, bridge 2 reserves resources for the new stream in cycle 4. At the same time, the stream's cycle number is updated to 4 and the accumulated delay to 3, while the bandwidth and end-to-end delay demands remain unchanged.
In the seventh step, bridge 2 forwards the updated information on toward the receiver, as shown in the row of bridge 2 in Fig. 1; the above operations are repeated at each bridge, up to bridge N.
In the eighth step, after bridge N receives the information forwarded by bridge N-1, it checks, from the cycle after the stream's cycle (i.e. cycle 1) onward, for the first cycle whose remaining bandwidth meets the demand of 7. As the row of bridge N in Fig. 1 shows, the remaining bandwidth of cycle 1 equals 5, which does not meet the demand, while that of cycle 2 equals 15, making it the first cycle that does.
In the ninth step, bridge N reserves resources for the new stream in cycle 2. At the same time, the stream's cycle number is updated to 2 and the accumulated delay to 65, while the bandwidth and end-to-end delay demands remain unchanged.
In the tenth step, bridge N forwards the updated information on toward the receiver, as shown in the row of bridge N in Fig. 1.
In the eleventh step, after the receiver receives the information forwarded by bridge N, it finds that the accumulated delay does not exceed the end-to-end delay demand, so the stream is scheduled successfully.
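The delay bookkeeping of this walkthrough can be replayed in a few lines (illustrative; delays are counted in cycles, and the final step shows one chain of values consistent with the figures given in the example):

```python
def step(S, d, T, cycles=64):
    """Cycle number and accumulated delay (in cycles) after a bridge
    reserves cycle T for a stream tagged with cycle S."""
    return T, d + (T - S + cycles) % cycles

S, d = 1, 0                  # sender: stream generated in cycle 1, delay 0
S, d = step(S, d, T=3)       # bridge 1: cycle 2 has only 5 units free, cycle 3 fits
print(S, d)                  # 3 2
S, d = step(S, d, T=4)       # bridge 2: cycle 4 is the first that fits
print(S, d)                  # 4 3
# Wrap-around: reserving cycle 2 of the next superframe for a stream tagged
# with cycle 4 adds (2 - 4 + 64) % 64 = 62 cycles, reaching the total of 65
# reported at bridge N:
print(step(4, 3, T=2))       # (2, 65)
```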
The following illustrates how the present invention can efficiently support low-bandwidth, low-delay applications, for example VoIP services.
As explained in the prior-art section, present synchronous Ethernet has difficulty efficiently supporting low-bandwidth, low-delay applications, such as interactive applications like IP telephony and online games. The present invention gives synchronous Ethernet the ability to support such interactive applications. We illustrate this with the following simple example:
Suppose a VoIP stream uses 256-byte frames with a minimum inter-frame spacing of 16 bytes, the bandwidth of the synchronous Ethernet is 100 Mbps or 1 Gbps, the cycle length is 125 microseconds, and the reservable bandwidth accounts for 75% of the total.
The number of bytes that each cycle of a 100 Mbps Fast Ethernet switch can reserve is:
100 × 10^6 × 125 × 10^-6 × 75% / 8 ≈ 1172 bytes
Similarly, the number of bytes that each cycle of a 1 Gbps Ethernet switch can reserve is:
1 × 10^9 × 125 × 10^-6 × 75% / 8 ≈ 11719 bytes
If the resource reservation policy reserves in every cycle, then
the number of VoIP streams the 100 Mbps switch can support is:
1172 / (256 + 16) ≈ 4 streams
and the number of VoIP streams the 1 Gbps switch can support is:
11719 / (256 + 16) ≈ 43 streams
In this state, the actual bandwidth occupied by each stream is:
1 × 10^9 × 75% / 43 ≈ 17,441,860 bps ≈ 17.4 Mbps
and the (statistically averaged) delay introduced by a switch is 125 microseconds.
In fact, the bandwidth required by VoIP is generally only a dozen to a few tens of Kbps, so the bandwidth wasted by a reservation policy that reserves in every cycle exceeds 99%.
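The every-cycle-reservation figures above can be reproduced as follows (a sketch with the parameters from the text: 256-byte frames, 16-byte minimum gap, 125-microsecond cycles, 75% reservable):

```python
FRAME = 256 + 16          # bytes per VoIP frame on the wire, incl. minimum gap
CYCLE = 125e-6            # scheduling cycle in seconds

def reservable_bytes(link_bps, fraction=0.75):
    """Bytes reservable per cycle at the given link speed."""
    return link_bps * CYCLE * fraction / 8

def streams_per_cycle(link_bps):
    """VoIP streams supportable when every cycle carries one frame per stream."""
    return int(reservable_bytes(link_bps) // FRAME)

print(reservable_bytes(100e6))            # ~1171.875 bytes per cycle
print(streams_per_cycle(100e6))           # 4 streams at 100 Mbps
print(streams_per_cycle(1e9))             # 43 streams at 1 Gbps
print(round(1e9 * 0.75 / 43 / 1e6, 1))    # ~17.4 Mbps actually occupied per stream
```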
If the resource reservation policy instead adopts the prior-art rate-based priority scheduling mechanism and selects the CLASS_A1 subclass, i.e. one reservation every 4 cycles, then
the number of VoIP streams the 100 Mbps switch can support is:
4 × 4 = 16 streams
and the number of VoIP streams the 1 Gbps switch can support is:
43 × 4 = 172 streams.
In this state, the actual bandwidth occupied by each stream is: 1 × 10^9 × 75% / 172 ≈ 4.4 Mbps,
and the (statistically averaged) delay introduced by a switch is
4 × 125 / 2 = 250 microseconds.
If the resource reservation policy adopts the rate-based priority scheduling mechanism and selects the CLASS_A2 subclass, i.e. one reservation every 16 cycles, then
the number of VoIP streams the 100 Mbps switch can support is:
4 × 16 = 64 streams
and the number of VoIP streams the 1 Gbps switch can support is:
43 × 16 = 688 streams.
In this state, the actual bandwidth occupied by each stream is: 1 × 10^9 × 75% / 688 ≈ 1.1 Mbps,
and the (statistically averaged) delay introduced by a switch is
16 × 125 / 2 = 1 millisecond.
If the resource reservation policy adopts the prior-art rate-based priority scheduling mechanism and selects the CLASS_A3 subclass, i.e. one reservation every 64 cycles, then
the number of VoIP streams the 100 Mbps switch can support is:
4 × 64 = 256 streams
and the number of VoIP streams the 1 Gbps switch can support is:
43 × 64 = 2,752 streams.
In this state, the actual bandwidth occupied by each stream is: 1 × 10^9 × 75% / 2752 ≈ 272 Kbps,
and the (statistically averaged) delay introduced by a switch is
64 × 125 / 2 = 4 milliseconds.
When the CLASS_A3 subclass is selected, the number of streams a switch can support grows greatly compared with CLASS_A0 (reserving in every cycle), and the bandwidth waste also drops significantly, but the statistically averaged delay of about 4 milliseconds per switch traversed under the CLASS_A3 subclass is hard to accept for VoIP services.
If instead the coordinated scheduling algorithm defined by the present invention is selected, the number of streams a switch can accept, and their delays, depend on a series of parameters such as the network topology, the distribution of streams, the end-to-end delay demands, the current resource reservations, and the cycles selected by upstream switches, so deterministic values are harder to give directly than for the scheduling methods above. The actual bandwidth occupied by each stream, however, is easy to calculate:
(256 + 16) × 8 / (125 × 10^-6 × 64) = 272 Kbps
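The per-stream bandwidth under the coordinated scheme follows directly (one 272-byte frame, including the minimum gap, per 8-millisecond superframe, as in the text):

```python
frame_bits = (256 + 16) * 8     # one VoIP frame incl. minimum gap, in bits
superframe = 125e-6 * 64        # 64 cycles of 125 us = 8 ms
print(round(frame_bits / superframe))   # 272000 bps = 272 Kbps per stream
```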
Below, we determine the two parameters, the number of streams a switch can accept and their delays, by simulation.
The backbone topology of the test is a complete 4-ary tree of height 4; every node of the tree is a switch, giving 1 + 4 + 16 + 64 = 85 switches in total. Then 115 hosts are assigned to these switches at random: for each host, one of the 85 switches is chosen at random and that end node is connected to it. All links between switches, and between switches and end nodes, are set to 1 Gbps full duplex. The network thus has 200 nodes and 4 + 16 + 64 + 115 = 199 full-duplex 1 Gbps links between them.
Next, we generated 20,000 VoIP streams at random, the source and destination nodes of each stream being picked at random among the 115 hosts. The VoIP data frame size is 256 bytes and the inter-frame spacing is 16 bytes. We add these VoIP streams to the network one by one for resource reservation and, with all other configurations identical, observe the output of the scheduling algorithm defined by the present invention, of the pacing mechanism proposed by the Residential Ethernet study group, and of the rate-based priority scheduling mechanism of its research report.
Fig. 4 compares the scheduling algorithm defined by the present invention with the prior-art pacing algorithm, showing the delay distribution density of the above 20,000 VoIP streams under the two algorithms. In this figure, the delay demand is set to the length of 8 cycles, i.e. 1 millisecond; the horizontal axis is the delay of a stream, with 0 representing streams rejected because of insufficient resources or because their delay exceeds the demand; the vertical axis is the number of streams at a given delay. As can be seen, under the pacing algorithm the overwhelming majority of the VoIP streams are rejected (91.3%), while under the algorithm defined by the present invention only a small fraction (23.2%) are rejected, and the number of streams the network can admit rises from 1,731 to 15,343, an increase of 786.4%. Note that this gain is obtained without giving up any delay, which shows that, compared with the pacing algorithm, the algorithm defined by the present invention can very effectively support streams with very strict delay demands.
We now consider the prior-art rate-based priority scheduling mechanism. The Residential Ethernet study group defined 4 priorities in all, namely CLASS_A0, CLASS_A1, CLASS_A2 and CLASS_A3, reserving once every 1, 4, 16 and 64 cycles respectively. CLASS_A0 coincides with the pacing scheme, and the delay of CLASS_A3 is excessive (up to 8 milliseconds per hop), so the following simulations target the CLASS_A1 and CLASS_A2 priorities.
Fig. 5 compares the scheduling algorithm defined by the present invention with CLASS_A1 of the rate-based priority scheduling mechanism, showing the delay distribution density of the above 20,000 VoIP streams under the two algorithms. In this figure, the delay demand is set to the length of 32 cycles, i.e. 4 milliseconds; the horizontal axis is the delay of a stream, with 0 representing streams rejected because of insufficient resources or because their delay exceeds the demand; the vertical axis is the number of streams at a given delay. As can be seen, the number of streams accepted with CLASS_A1 priority rises from the 1,731 of CLASS_A0 (i.e. pacing) to 3,844, an increase of 122.1%, but there remains a very large gap to the 15,370 streams that the algorithm defined by the present invention can accept.
Similarly, Fig. 6 compares the scheduling algorithm defined by the present invention with CLASS_A2 of the rate-based priority scheduling mechanism, with the delay demand set to the length of 128 cycles, i.e. 16 milliseconds. As can be seen, the number of streams accepted with CLASS_A2 priority rises from the 1,731 of CLASS_A0 (i.e. pacing) to 7,656, an increase of 342.3%, but is still below the 15,368 streams that the algorithm defined by the present invention can accept.
These tests show that, under identical configurations, even under a strict delay demand (1 millisecond) the number of streams the algorithm defined by the present invention can admit (15,343) is not only far greater than the number pacing can admit (1,731), but also clearly greater than the number admitted under the lower-priority CLASS_A2 of the rate-based priority scheduling mechanism (7,656). Relative to the algorithms currently given by the Residential Ethernet study group, the algorithm defined by the present invention can therefore very effectively support streams with very strict delay demands.
Although the present invention has been described above in conjunction with its preferred embodiments, those skilled in the art will appreciate that various modifications, substitutions and changes may be made to the present invention without departing from its spirit and scope. Therefore, the present invention should not be limited by the foregoing embodiments, but rather by the appended claims and their equivalents.

Claims (10)

1. A method for scheduling streams in a network, comprising:
a transmitting end generating a stream description/schedule message for a new stream and sending it toward a receiving end via a plurality of bridges, the stream description/schedule message comprising the bandwidth demand B of the stream, the end-to-end delay demand D, the current accumulated delay d, and a stream generation/reserved forwarding cycle number S; and
a bridge that receives the stream description/schedule message finding, after the stream generation/reserved forwarding cycle number S received from the upstream node, the first cycle that can satisfy the bandwidth demand B in the stream description/schedule message, to serve as a reserved forwarding cycle; the bridge recording the number T of the reserved forwarding cycle, updating the stream generation/reserved forwarding cycle number S and the current accumulated delay d in the stream description/schedule message, and, if the current accumulated delay d is less than the end-to-end delay demand D, continuing to forward the stream description/schedule message to the downstream bridge, until the stream description/schedule message reaches the receiving end.
2. The method according to claim 1, wherein, when the receiving end receives the stream description/schedule message, the receiving end checks whether the current accumulated delay d is less than the end-to-end delay demand D; if so, the stream is scheduled successfully; otherwise the new stream is rejected.
3. The method according to claim 1, further comprising: at a bridge, rejecting the new stream if the current accumulated delay d is greater than or equal to the end-to-end delay demand D.
4. The method according to claim 1, wherein, at the transmitting end, the current accumulated delay d in the stream description/schedule message equals 0.
5. The method according to claim 1, wherein the current accumulated delay d at the current bridge equals the delay at the upstream node plus the difference between the reserved forwarding cycle found at the current bridge and the stream generation/reserved forwarding cycle at the upstream node.
6. The method according to claim 1, wherein the scheduling of the stream is performed on the basis of a superframe, the superframe comprising 64 cycles, each cycle being 125 microseconds long.
7. The method according to claim 6, wherein the current accumulated delay d at the current bridge equals the delay d at the upstream node plus 0.000125 * ((T - S + 64) MOD 64), where MOD is the modulo operation.
8. The method according to claim 6, wherein the transmitting end, each bridge, and the receiving end are superframe-synchronized.
9. The method according to claim 1, wherein the method is applied to services with low rate and low delay requirements.
10. The method according to claim 1, wherein the network is an Ethernet network.
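The per-hop delay update of claim 7 and the admission check of claims 1–3 can be sketched as follows. This is a minimal illustration, not part of the patent text: delays are kept in integer microseconds (claim 7 expresses the same per-cycle increment as 0.000125 seconds), and all function and variable names are our own.

```python
CYCLES_PER_SUPERFRAME = 64  # claim 6: a superframe comprises 64 cycles
CYCLE_US = 125              # claim 6: each cycle is 125 microseconds

def updated_delay(d, S, T):
    """Claim 7: accumulated delay (in microseconds) after a bridge reserves
    cycle T, given the upstream cycle number S and upstream delay d."""
    return d + CYCLE_US * ((T - S + CYCLES_PER_SUPERFRAME) % CYCLES_PER_SUPERFRAME)

def admit_at_bridge(d, D, S, T):
    """Claims 1-3: update the accumulated delay; forward the message
    (returning the new d and the new cycle number S := T) only while
    d stays below the end-to-end delay demand D, otherwise reject."""
    d_new = updated_delay(d, S, T)
    if d_new >= D:
        return None          # claim 3: reject the new stream
    return d_new, T          # forward downstream with updated fields

# Upstream cycle S=10, bridge reserves cycle T=13 -> 3 cycles of delay
print(updated_delay(0, 10, 13))          # 375 (microseconds)
# Wrap-around across the superframe boundary: S=62, T=1 is also 3 cycles
print(updated_delay(0, 62, 1))           # 375
# With a 4 ms (4000 us) end-to-end demand, as in the Fig. 5 simulation
print(admit_at_bridge(0, 4000, 10, 13))  # (375, 13)
```

The `+ CYCLES_PER_SUPERFRAME` term before the modulo keeps the cycle difference non-negative when the reserved cycle wraps past the end of the superframe, which is exactly what the `(T - S + 64) MOD 64` expression in claim 7 accomplishes.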
CNA2007100033132A 2007-02-02 2007-02-02 Method for flow dispatching in Ethernet network Pending CN101237386A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2007100033132A CN101237386A (en) 2007-02-02 2007-02-02 Method for flow dispatching in Ethernet network


Publications (1)

Publication Number Publication Date
CN101237386A true CN101237386A (en) 2008-08-06

Family

ID=39920753

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2007100033132A Pending CN101237386A (en) 2007-02-02 2007-02-02 Method for flow dispatching in Ethernet network

Country Status (1)

Country Link
CN (1) CN101237386A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016066141A1 (en) * 2014-10-31 2016-05-06 Huawei Technologies Co., Ltd. Low jitter traffic scheduling on a packet network
CN106716938A (en) * 2014-10-31 2017-05-24 华为技术有限公司 Low jitter traffic scheduling on packet network
US10298506B2 (en) 2014-10-31 2019-05-21 Huawei Technologies Co., Ltd. Low jitter traffic scheduling on a packet network
CN106716938B (en) * 2014-10-31 2020-06-02 华为技术有限公司 Low-jitter service scheduling method and device on packet network
CN110545241A (en) * 2018-05-28 2019-12-06 华为技术有限公司 message processing method and device
CN110545241B (en) * 2018-05-28 2022-05-17 华为技术有限公司 Message processing method and device
US11722407B2 (en) 2018-05-28 2023-08-08 Huawei Technologies Co., Ltd. Packet processing method and apparatus
CN112242966A (en) * 2019-07-19 2021-01-19 华为技术有限公司 Data forwarding method and device
CN112242966B (en) * 2019-07-19 2023-11-28 华为技术有限公司 Data forwarding method and device


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20080806