EP1142229B1 - Distributed hierarchical scheduling and arbitration for bandwidth allocation - Google Patents

Distributed hierarchical scheduling and arbitration for bandwidth allocation Download PDF

Info

Publication number
EP1142229B1
EP1142229B1 (application EP99973516A)
Authority
EP
European Patent Office
Prior art keywords
ingress
bandwidth
data
virtual output
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP99973516A
Other languages
German (de)
French (fr)
Other versions
EP1142229A1 (en)
Inventor
Marek Stephen Piekarski
Ian David Johnson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seagate Systems UK Ltd
Original Assignee
Xyratex Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xyratex Technology Ltd filed Critical Xyratex Technology Ltd
Publication of EP1142229A1
Application granted
Publication of EP1142229B1
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/02: Details
    • H04L 12/04: Switchboards
    • H04L 12/54: Store-and-forward switching systems
    • H04L 12/56: Packet switching systems
    • H04L 12/5601: Transfer mode dependent, e.g. ATM
    • H04L 2012/5678: Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L 2012/5679: Arbitration or scheduling
    • H04L 49/00: Packet switching elements
    • H04L 49/15: Interconnection of switching modules
    • H04L 49/30: Peripheral units, e.g. input or output ports
    • H04L 49/3081: ATM peripheral units, e.g. policing, insertion or extraction
    • H04L 49/50: Overload detection or protection within a single switching element

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Train Traffic Observation, Control, And Security (AREA)

Abstract

According to the present invention there is provided a scheduling and arbitration process for use in a digital data switching arrangement of the type in which a central switch, under the direction of a master control, provides the cross-connections between a number of high-bandwidth ports, to which are connected on the ingress side of the central switch a number of ingress multiplexers, one for each high-bandwidth input port, and on the egress side a number of egress multiplexers, one for each high-bandwidth output port, each ingress multiplexer including a set of N input queues serving N low-bandwidth data sources and a set of M virtual output queues serving M low-bandwidth data destinations, characterized in that the scheduling and arbitration arrangement includes three bandwidth allocation tables: an ingress port table associated with the input queues and having NxM entries, each arranged to define the bandwidth for a particular virtual output queue; an egress port table associated with the virtual output queues and having M entries, each arranged to define the bandwidth allocation of a high-bandwidth port of the central switch to a virtual output queue; and a central allocation table located in the master control and having (M/N)² entries, each of which specifies the weights allocated to each possible connection through the central switch.

Description

  • The present invention relates to data switching systems and is more particularly concerned with the scheduling and arbitration arrangements for such systems.
  • The continual growth of demand for manageable bandwidth in networks requires the development of new techniques in switch design which decouple the complexity of control from the scale of the port count and aggregate bandwidth. This invention describes a switch architecture and a set of methods which provide the means by which switches of arbitrary size may be constructed whilst maintaining the ability to allocate guaranteed bandwidth to each possible connection through the switch. A digital switch is used to route data streams from a set of source components to a set of destination components. A cell-based switch operates on data which is packetised into streams of equal-size cells. In a large switch the routing functions may be implemented hierarchically, that is, sets of lower-bandwidth ports are aggregated into a smaller number of higher-bandwidth ports which are then interconnected in a central switch.
  • The article entitled "Traffic Control in ATM Switches with Large Buffers", published in ICT Specialists Seminar, NL, Leidschendam, KPN Research, Volume Seminar 9, 1995, pages 45 to 60, by Wallmeier et al., discloses a method and apparatus for supporting statistical multiplexing in an ATM network. The switch architecture disclosed relies on the use of statistical multiplexing units.
  • It is an object of the present invention to provide a bandwidth allocation arrangement which may be used in such a hierarchical switch.
  • According to the present invention there is provided a method of scheduling the passage of data cells from M low-bandwidth data sources to M low-bandwidth data destinations, said method being performed by a data switching apparatus including: M/N ingress multiplexers, each arranged to receive data cells from a respective set of N said low-bandwidth data sources, M/N egress multiplexers, each arranged to transmit data cells to a respective set of N said low-bandwidth data destinations, a master control unit, and a central switch having M/N high-bandwidth input ports arranged to receive data cells from respective said ingress multiplexers, and M/N high-bandwidth output ports arranged to transmit data cells to respective said egress multiplexers, the central switch selectively interconnecting the input ports and output ports, under the direction of the master control unit, the method including: each said ingress multiplexer maintaining N input queues for queuing data cells received from the N respective said data sources, and maintaining M virtual output queues for queuing data cells directed to respective said data destinations; and the method being characterised in that: each ingress multiplexer further maintains a respective ingress port table, each ingress port table having NxM entries, each entry corresponding to a respective combination of a said data source for that ingress port and a said data destination, each ingress multiplexer transfers data cells from said input queues to said virtual output queues with a relative frequency according to the value of the corresponding entry of the ingress port table; each ingress multiplexer further maintains a respective egress port table, the egress port table having M entries, each entry corresponding to a respective said data destination, each ingress multiplexer transfers data cells from said virtual output queues to said respective input ports of the central switch with a relative frequency according to the value of the corresponding entry of the egress port table; the master control unit maintains a central allocation table having (M/N)² entries, each corresponding to a respective combination of an input port and an output port, and the master control unit controls the central switch to interconnect pairs of said input ports and output ports with a relative frequency according to the value of the corresponding entry of the central allocation table; whereby said ingress port tables, egress port tables and central allocation table together determine the bandwidth through the digital data switching apparatus from each said data source to each said data destination.
  • According to a second aspect of the present invention there is provided a digital data switching apparatus for transmitting data from M low-bandwidth data sources to M low-bandwidth data destinations, the apparatus including: M/N ingress multiplexers for receiving data cells from respective sets of N said low-bandwidth data sources, M/N egress multiplexers for transmitting data cells to respective sets of N said low-bandwidth data destinations, a master control unit, and a central switch having M/N high-bandwidth input ports arranged to receive data cells from respective said ingress multiplexers, and M/N high-bandwidth output ports arranged to transmit data cells to respective said egress multiplexers, the central switch being arranged selectively to interconnect the input ports and output ports, under the direction of the master control unit, each said ingress multiplexer being arranged to maintain N input queues for queuing data cells received from respective said data sources, and to maintain M virtual output queues for queuing data cells directed to respective said data destinations; characterised in that: each ingress multiplexer is arranged to maintain a respective ingress port table, each ingress port table having NxM entries, each entry corresponding to a respective combination of a said data source and a said data destination, and each ingress multiplexer is arranged to transfer data cells from said input queues to said virtual output queues with a relative frequency according to the value of the corresponding entry of the ingress port table; each ingress multiplexer is arranged to maintain a respective egress port table, the egress port table having M entries, each corresponding to a respective said data destination, and each ingress multiplexer is arranged to transfer data cells from said virtual output queues to said respective input ports of the central switch with a relative frequency according to the value of the corresponding entry of the egress port table, and the master control unit is arranged to maintain a central allocation table having (M/N)² entries, each corresponding to a respective combination of an input port and an output port, and the master control unit controls the central switch to interconnect pairs of said input ports and output ports with a relative frequency according to the value of the corresponding entry of the central allocation table; whereby said ingress port tables, egress port tables and central allocation table together determine the bandwidth through the digital data switching apparatus from each said data source to each said data destination.
  • According to a feature of the invention there is provided a scheduling and arbitration process in which the scheduling of the input queues is performed in accordance with an N-way weighted round robin.
  • According to a further feature of the invention there is provided an implementation of the N-way weighted round robin by an N(2^w - 1)-way unweighted round robin, where w is the number of bits defining a weight, using a list constructed by interleaving N words of (2^w - 1) bits each, with w_n 1's in word n, where w_n is the weight of queue n.
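  • By way of illustration only, the following Python sketch builds such a list (the helper names are mine; the patent describes a hardware implementation). Each queue contributes a (2^w - 1)-bit word containing w_n evenly spaced 1's, and the N words are interleaved bit by bit; the patterns produced satisfy the even-spacing rule discussed later in connection with Table 1, though not necessarily with identical bit positions:

```python
def spaced_word(weight: int, length: int) -> list[int]:
    """A `length`-bit word with `weight` 1's spaced as evenly as possible
    (entries an equal number of steps apart, plus or minus one step)."""
    bits = [0] * length
    for k in range(weight):
        bits[(k * length) // weight] = 1   # Bresenham-style even spacing
    return bits

def expand_wrr(weights: list[int], w_bits: int) -> list:
    """Expand an N-way weighted round robin into an N*(2**w_bits - 1)-slot
    unweighted list; slot value q means 'serve queue q', None is empty."""
    length = 2 ** w_bits - 1
    words = [spaced_word(w, length) for w in weights]
    slots = []
    for j in range(length):                # interleave the N words
        for q, word in enumerate(words):
            slots.append(q if word[j] else None)
    return slots

# Four queues with 4-bit weights expand to a 60-slot unweighted list.
print(len(expand_wrr([15, 8, 4, 1], w_bits=4)))   # -> 60
```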
  • The invention, together with its various features, will be more readily understood from the following description of one embodiment, which should be read in conjunction with the accompanying drawings. In the drawings:-
    • Figure 1 shows a simplified form of a data switch,
    • Figure 2 shows an ingress multiplexer,
    • Figure 3 shows the weighted round robin arbiter for use in the ingress multiplexer,
    • Figure 4 shows the partitioning of the round robin arbiter,
    • Figure 5 shows the operation of the round robin arbiter,
    • Figure 6 shows the allocations for a 4-port interconnect with 3 bit weights, and
    • Figure 7 shows a block diagram of a small switch based on the principles of the invention.
  • Referring now to Figure 1, this shows a schematic diagram of a hierarchical switch. The central interconnect 1 provides the cross-connections between a number of high-bandwidth ports. A set of multiplexers 2 on the ingress side and demultiplexers 3 on the egress side provides the aggregation function between the low- and high-bandwidth ports. The low-bandwidth ports provide connections from the switch to the data sources 4 on the ingress side and the data destinations 5 on the egress side. In practice, a switch is required to support full-duplex ports, so that an ingress multiplexer and its corresponding demultiplexer may be considered a single full-duplex device, which will hereafter be termed a "router". Typically the data switch may be of the type disclosed in our patent application WO 038375, published 29/06/2000.
  • It should be noted that the central interconnect 1 may itself be a hierarchical switch, that is the methods described may be applied to switches with an arbitrary number of hierarchical levels.
  • The aim of these methods is to provide a mechanism whereby the data stream from the switch to a particular destination, which comprises a sequence of cells interleaved from various data sources, may be controlled such that predetermined proportions of its bandwidth are guaranteed to cells from each data source.
  • Figure 2 shows the architecture of an ingress multiplexer. An ingress multiplexer receives a set of data streams from the data sources via a set of low-bandwidth input ports. Each data stream is a sequence of equal-size cells (that is, each cell carries an equal number of bits of data). Each of the N low-bandwidth ports 21 fills one of the N input queues 22. An ingress control unit 23 extracts the destination address from each of the cells in the input queues and transfers them into a set of M virtual output queues 24; there is one virtual output queue for each low-bandwidth output port in the switch. The ingress multiplexer also contains an interconnect link control unit 25, which implements the scheduling function by transferring cells from the virtual output queues 24 across the high-bandwidth link 26 to the central interconnect 1 according to an M-entry egress port table 28.
  • In addition to the data flow indicated by the arrows in Figure 1, there is also a flow of backpressure or flow-control information associated with each of the data flows. This control flow is indicated in Figure 2 by dashed arrows. The ingress multiplexer contains an NxM-entry ingress port table 27, which defines how its bandwidth to a particular egress port (via a particular virtual output queue) is distributed across the input ports. This table is used by the ingress control unit 23 to determine when (and to what degree) to exert backpressure to the data source resolved down to an individual virtual output queue.
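  • As a concrete reading of this architecture, here is a minimal Python sketch of the ingress multiplexer state (the class and field names are mine, and the table-weighted pacing of transfers is elided; the reference numerals follow Figure 2):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Cell:
    source: int        # low-bandwidth input port, 0..N-1
    destination: int   # low-bandwidth output port, 0..M-1

@dataclass
class IngressMultiplexer:
    """Queues and tables of one ingress multiplexer (Figure 2)."""
    N: int  # low-bandwidth input ports on this multiplexer
    M: int  # low-bandwidth output ports in the whole switch

    def __post_init__(self):
        self.input_queues = [deque() for _ in range(self.N)]             # 22
        self.virtual_output_queues = [deque() for _ in range(self.M)]    # 24
        self.ingress_port_table = [[1] * self.M for _ in range(self.N)]  # 27: NxM weights
        self.egress_port_table = [1] * self.M                            # 28: M weights

    def accept(self, cell: Cell) -> None:
        self.input_queues[cell.source].append(cell)

    def sort_to_voqs(self) -> None:
        """Ingress control unit 23: route each queued cell to the virtual
        output queue named by its destination address. (In the patent this
        transfer is paced by the ingress port table weights.)"""
        for q in self.input_queues:
            while q:
                cell = q.popleft()
                self.virtual_output_queues[cell.destination].append(cell)
```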
  • The ingress multiplexer 2 of Figure 1 sends control information to the central interconnect 1 indicating the state of the virtual output queues in the form of "connection requests". The central interconnect responds with a sequence of connections which it will establish between the ingress and egress routers. These are "connection grants". The ingress multiplexer 2 must now allocate the bandwidth to each egress demultiplexer 3 provided by the central interconnect 1 across the virtual output queues associated with each egress demultiplexer.
  • The deterministic scheduling function of the interconnect link control unit 25 may be defined as a weighted round robin (WRR) arbiter. The interconnect link control unit 25 receives a connection grant to a particular egress demultiplexer 3 from the central interconnect 1 and must select one of the N virtual output queues associated with that egress demultiplexer. This may be implemented by expanding the N-way WRR shown in Figure 3a) into an N(2^W - 1)-way unweighted round robin as shown in Figure 3b), where W equals the number of bits necessary to define the weight, such that if a queue has a weight of w, then it is represented as w entries in the unweighted round robin list. For example, with 4-bit weights, a 4-way weighted round robin expands to a 60-way unweighted round robin.
  • In order to optimise the service intervals to the queues under all weighting conditions, the entries in the unweighted round robin list are distributed such that for each weight the entries are an equal number of steps apart, plus or minus one step. Table 1 below shows an example of such an arrangement for 3-bit weights:

    w_n    e_n
    1      1000000
    2      1000100
    3      1001010
    4      1010101
    5      1011011
    6      1110111
    7      1111111
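  • The spacing rule can be checked mechanically. A short sketch (the check itself is my formulation) confirming that, for each 3-bit weight, the 1's in the Table 1 words are an equal number of steps apart, plus or minus one, treating each 7-bit word cyclically:

```python
def cyclic_gaps(word: str) -> list[int]:
    """Distances between successive 1's in `word`, wrapping around."""
    pos = [i for i, b in enumerate(word) if b == "1"]
    return [(pos[(k + 1) % len(pos)] - p) % len(word) or len(word)
            for k, p in enumerate(pos)]

table1 = ["1000000", "1000100", "1001010", "1010101",
          "1011011", "1110111", "1111111"]
for w, word in enumerate(table1, start=1):
    g = cyclic_gaps(word)
    assert word.count("1") == w          # weight w_n = number of 1's
    assert max(g) - min(g) <= 1, (w, g)  # equal steps apart, +/- one
```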
  • In the system described, the arbiter must select one of the nine queues with 4-bit weights, that is 8 virtual output queues as described above and a multicast queue. This expands to a 135-entry unweighted round robin. The implementation of a large unweighted round robin arbiter may be achieved without resorting to a slow iterative shift-and-test method by the technique of "divide and conquer", that is the 135-entry round robin is segmented into 9 sections of 16-entry round robins, each of which may be implemented efficiently with combinational logic (9 x 16 provides up to 144 entries, so that the multicast queue of up to 24 entries may actually be allocated more bandwidth than an individual unicast queue of up to 15 entries).
  • Figure 4 illustrates the partitioning of the round robin arbiter. The sorter 41 separates the request vector V (144 bits) into 9 sections of 16-bit vectors, v0 to v8. It also creates nine pointers, p0 to p8, one for each of the 16-bit round robin blocks 42. The block which corresponds to the existing pointer (which has been saved in register 44) is given a "1" at the corresponding bit location, whilst the other blocks are given dummy pointers initialised to location zero. Each 16-bit round robin block now finds the next "1" in its input vector and outputs its location (g), whether it had to wrap round (w) and whether it found a "1" in its vector (f). A selector 43 is now able to identify the block which has found the "1" corresponding to the next "1" in the original 135-bit vector, given a signal (s) from the sorter 41 specifying which round robin block had the original pointer position. The selector 43 is itself a round robin function which may be implemented as the combinational logic function
    "find the next block starting at s which has w=false and f=true (if not found, select s)".
  • Figure 5 shows an example of the above process, but for a smaller configuration for clarity. In the example, V is 12 bits, p is 4 bits, and v0 to v2 and g0 to g2 are 2 bits each. Figure 5 depicts the process performed by Figure 4: the expanded current pointer (P) is shown at 51 and the request vector (V) at 52. The sorter 41 produces segmented vectors (v) and segmented pointers (p), where the blocks marked * are dummies. The segmented results (g) of the round robin are shown at 55, whereas the result of the selector process 43 is shown at 56, defining the expanded next pointer (P).
  • The central interconnect 1 provides the cross-connect function in the switch. The bandwidth allocation in the central interconnect is defined by a central allocation table of P² entries, where P = M/N is the number of high-bandwidth ports; each entry w_ie defines the weight allocated to the connection from high-bandwidth port i to high-bandwidth port e. However, not all combinations of entries constitute a self-consistent set, that is, the allocations as seen from the outputs could contradict the allocations as seen from the inputs. A set of allocations is only self-consistent if the sums of weights at each output and input are equal. Figure 6 shows a self-consistent set (a) and a non-self-consistent set (b) of allocations for a 4-port interconnect with 3-bit weights. Inputs are shown at IP and outputs at OP, with the sum designated as Σ. Assuming that the central allocation table has a self-consistent set of entries, it is possible to define the bandwidth allocation to a link between input port i and output port e with weight w_ie as

    $$p_{ie} = \frac{w_{ie}}{\sum_{n=0}^{P-1} w_{in}}$$
  • The egress port table defines how the bandwidth of a high-bandwidth port to the central interconnect 1 is allocated across the virtual output queues. There is no issue with self-consistency, as all possible sets of entries are self-consistent, so the bandwidth allocation for a virtual output queue v with weight w_v is given by

    $$p_v = \frac{w_v}{\sum_{n=0}^{N-1} w_n}$$

    Similarly, the ingress port table entries give the bandwidth allocation of a virtual output queue to the ingress ports, so the allocation to an ingress port f with weight w_f is given by

    $$p_f = \frac{w_f}{\sum_{n=0}^{N-1} w_n}$$

    Therefore the proportion of the bandwidth at an egress port v allocated to an ingress port f is given by

    $$p_{fv} = p_f \cdot p_v \cdot p_{ie}$$
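  • Read together, the three formulas compose multiplicatively. A hedged Python sketch (the names are mine; the example weights echo the Figure 7 tables described below) of the per-stage shares and the end-to-end product, with the self-consistency check of Figure 6:

```python
def proportions(weights: list) -> list:
    """p_k = w_k / sum of weights: each competitor's share of a link."""
    total = sum(weights)
    return [w / total for w in weights] if total else [0.0] * len(weights)

def is_self_consistent(table: list) -> bool:
    """Figure 6 check: weight sums at every input and output must be equal."""
    sums = [sum(row) for row in table] + [sum(col) for col in zip(*table)]
    return len(set(sums)) == 1

central = [[15, 10], [10, 15]]         # central allocation table (cf. Figure 7)
assert is_self_consistent(central)
p_ie = proportions(central[0])[0]      # ingress router AB -> egress router AB
p_v = proportions([14, 6, 6, 8])[0]    # egress port table share for port A's VOQ
p_f = proportions([15, 6])[0]          # ingress port table row A: sources A, B
print(p_f * p_v * p_ie)                # p_fv ~ 0.18 of the egress bandwidth
```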
  • In a switch which is required to maintain strict bandwidth allocation between ports (such as an ATM switch), the tables are set up via a switch management interface from a connection admission and control processor. Once the connection admission and control processor has checked that the switch has the resources available to satisfy a connection request, it can modify the ingress port table, the egress port table and the central allocation table to reflect the new distribution of traffic through the switch.
  • In contrast, a switch may be required to provide a "best effort" service. In this case the table entries are derived from a number of local parameters. Two such parameters are the length l_v of the virtual output queue v and the urgency u_v of the virtual output queue; urgency is a parameter which is derived from the headers of the cells entering the queue from the ingress ports.
  • A switch may be implemented which can satisfy a range of requirements (including the two above) by defining a weighting function which "mixes" a number of scheduling parameters to generate the table entries in real time according to a set of "sensitivities" to length, pseudo-static bandwidth allocation and urgency (s_l, s_s, s_u). The requirements on the function are that it should be fast and efficient, since multiple instances occur in the critical path of a switch. In the system described the weighting function has the form

    $$w_v = \left( l_v^2 \cdot 2^{-1/s_l} + p_v \cdot 2^{-1/s_s} + u_v \cdot 2^{-1/s_u} \right)(1 - b_v)$$

    where
    • b_v is the backpressure applied from the egress multiplexer,
    • w_v is the weight of the queue as applied to the scheduler, and
    • p_v is a pseudo-static bandwidth allocation, such as an egress port table entry.
  • Despite the apparent complexity of this function, it may be implemented exclusively with an adder, multiplexers and small lookup tables, thus meeting the requirement for speed and efficiency. Features of this weighting function are that, for s_l = 1.0, s_s = 0.0 and s_u = 0.0, bandwidth is allocated locally purely on the basis of queue length, with a non-linear function, so that the switch always attempts to avoid queues overflowing. When s_l = 0.0, s_s = 1.0 and s_u = 0.0, bandwidth is allocated purely on the basis of the pseudo-static allocations described above. Finally, when s_l = 0.0, s_s = 1.0 and s_u = 0.5, bandwidth is allocated on the basis of pseudo-static allocation but a data source is allowed to "push" some data harder, when the demand arises, by setting the urgency bit in the appropriate cell headers.
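  • A small Python sketch of this weighting function as reconstructed above (the 2^(-1/s) gain factors, and the convention that a zero sensitivity disables its term entirely, are my reading of the original; in hardware the gains would come from small lookup tables):

```python
def term_gain(sensitivity: float) -> float:
    """2**(-1/s); a sensitivity of 0 disables the term entirely."""
    return 0.0 if sensitivity == 0.0 else 2.0 ** (-1.0 / sensitivity)

def queue_weight(length: int, alloc: float, urgent: int,
                 s_l: float, s_s: float, s_u: float,
                 backpressure: bool) -> float:
    """w_v = (l_v^2 * 2^(-1/s_l) + p_v * 2^(-1/s_s) + u_v * 2^(-1/s_u)) * (1 - b_v)."""
    w = (length ** 2 * term_gain(s_l)
         + alloc * term_gain(s_s)
         + urgent * term_gain(s_u))
    return 0.0 if backpressure else w

# Pure pseudo-static allocation (the second regime described above):
print(queue_weight(length=40, alloc=14, urgent=0,
                   s_l=0.0, s_s=1.0, s_u=0.0, backpressure=False))  # -> 7.0
```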
  • Figure 7 is a block diagram of a small switch based on the above principles, showing the correct number of queues, tables and table entries. In Figure 7 there are two ingress routers 71 and 72, a central cross-bar switch 73 and two egress routers 74 and 75. Each ingress router has two low-bandwidth input ports, A and B for router 71 and ports C and D for router 72. As mentioned previously, each ingress router has an ingress port table, such as 77 for router 72, and an egress port table, such as 78, and the central switch 73 has a central allocation table 79. Assuming that each low-bandwidth port may transport 1 Gbps of traffic, that each high-bandwidth link may carry 2 Gbps, and that the switch is required to guarantee the following bandwidth allocations:
    Flow bandwidth (Gbps)
    Source    Destination Port
              A      B      C      D
    A         0.5    0.1    0.1    0.2
    B         0.2    0.2    0.2    0.2
    C         -      0.5    -      0.2
    D         0.1    0.1    0.6    0.2
    then the ingress port table such as 77, egress port table such as 78 and central allocation table 79 would be set up by the connection admission and control processor with the following 4-bit values (note that there will be rounding errors due to the limited resolution of the 4-bit weights):
    Ingress Port Table (in router 71)
    Destination    Source A    Source B
    A              15          6
    B              3           6
    C              3           6
    D              6           6

    Ingress Port Table (in router 72)
    Destination    Source C    Source D
    A              0           3
    B              15          3
    C              0           15
    D              6           5

    Egress Port Table (in router 71, sources A and B)
    Destination    Weight
    A              14
    B              6
    C              6
    D              8

    Egress Port Table (in router 72, sources C and D)
    Destination    Weight
    A              2
    B              12
    C              12
    D              8

    Central Allocation Table
    Source Router    Destination AB    Destination CD
    AB               15                10
    CD               10                15
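  • The table values above can be reproduced, up to the rounding the description warns about, by scaling each group of competing flows to 4-bit weights. The scale factors and the clip to 15 below are inferred from the published numbers, not stated in the patent; the CD-to-CD central entry is then raised to 15 so that row and column sums stay equal, as self-consistency requires:

```python
def to_weights(flows: list, scale: float, wmax: int = 15) -> list:
    """Scale flows in Gbps to integer weights, clipped to the 4-bit range."""
    return [min(wmax, round(f * scale)) for f in flows]

# Ingress port table, router 71: one row per destination, columns A and B.
for dest, row in {"A": [0.5, 0.2], "B": [0.1, 0.2],
                  "C": [0.1, 0.2], "D": [0.2, 0.2]}.items():
    print(dest, to_weights(row, scale=30))   # -> 15 6 / 3 6 / 3 6 / 6 6

# Egress port tables: total traffic from each ingress router per destination.
print(to_weights([0.7, 0.3, 0.3, 0.4], scale=20))  # router 71: 14 6 6 8
print(to_weights([0.1, 0.6, 0.6, 0.4], scale=20))  # router 72: 2 12 12 8

# Central allocation table: router-to-router totals (AB row shown).
print(to_weights([1.0, 0.7], scale=15))            # -> 15 10
```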

Claims (14)

  1. A method of scheduling the passage of data cells from M low-bandwidth data sources (4) to M low-bandwidth data destinations (5), said method being performed by a data switching apparatus including:
    M/N ingress multiplexers (2; 71, 72), each arranged to receive data cells from a respective set of N said low-bandwidth data sources (4),
    M/N egress multiplexers (3; 74, 75), each arranged to transmit data cells to a respective set of N said low-bandwidth data destinations (5),
    a master control unit, and
    a central switch having M/N high-bandwidth input ports arranged to receive data cells from respective said ingress multiplexers (2; 71, 72), and M/N high-bandwidth output ports arranged to transmit data cells to respective said egress multiplexers (3; 74, 75), the central switch selectively interconnecting the input ports and output ports, under the direction of the master control unit,
    the method including:
    each said ingress multiplexer (2; 71, 72) maintaining N input queues (22) for queuing data cells received from the N respective said data sources, and maintaining M virtual output queues (24) for queuing data cells directed to respective said data destinations;
    and the method being characterised in that:
    each ingress multiplexer further maintains a respective ingress port table (27; 77), each ingress port table having NxM entries, each entry corresponding to a respective combination of a said data source for that ingress port and a said data destination,
    each ingress multiplexer transfers data cells from said input queues to said virtual output queues with a relative frequency according to the value of the corresponding entry of the ingress port table (27; 77);
    each ingress multiplexer further maintains a respective egress port table (28; 78), the egress port table having M entries, each entry corresponding to a respective said data destination,
    each ingress multiplexer transfers data cells from said virtual output queues to said respective input ports of the central switch with a relative frequency according to the value of the corresponding entry of the egress port table (28; 78);
    the master control unit maintains a central allocation table (79) having (M/N)² entries, each corresponding to a respective combination of an input port and an output port, and
    the master control unit controls the central switch to interconnect pairs of said input ports and output ports with a relative frequency according to the value of the corresponding entry of the central allocation table (79);
    whereby said ingress port tables (27; 77), egress port tables (28; 78) and central allocation table (79) together determine the bandwidth through the digital data switching apparatus from each said data source (4) to each said data destination (5).
  2. A method according to claim 1 in which each said ingress multiplexer (2; 71, 72), for each virtual output queue (24), transfers data cells to that virtual output queue from said input queues in accordance with an N-way weighted round robin, using N weights determined respectively by the N entries of the ingress port table for that virtual output queue.
  3. A method according to claim 2 in which each weight is defined by a number of bits w, and the N-way weighted round robin for each virtual output queue is implemented by an N(2^w - 1)-way unweighted round robin using a request vector list constructed by interleaving N words of (2^w - 1) bits each, each word corresponding to a respective input queue and having a number of "1"s determined by the entry of the ingress port table for that input queue and that virtual output queue.
  4. A method according to claim 3 in which the request vector list is separated into a plurality of round robin blocks, each corresponding to a respective input queue, a first round robin process being performed independently within each block, and a second round robin process being performed to make a selection among the blocks.
  5. A method according to any preceding claim in which the ingress port table (27; 77), the egress port table (28; 78) and the central allocation table (79) are all programmed from an external source.
  6. A method according to claim 5 in which the external source uses parameters characterizing the length of each virtual output queue and the urgency of each virtual output queue.
  7. A method according to claim 6 in which the external source uses a set of sensitivities relating to length, urgency and pseudo-static bandwidth allocation.
  8. A digital data switching apparatus for transmitting data from M low-bandwidth data sources (4) to M low-bandwidth data destinations (5), the apparatus including:
    M/N ingress multiplexers (2; 71, 72) for receiving data cells from respective sets of N said low-bandwidth data sources (4),
    M/N egress multiplexers (3; 74, 75) for transmitting data cells to respective sets of N said low-bandwidth data destinations (5), a master control unit, and
    a central switch having M/N high-bandwidth input ports arranged to receive data cells from respective said ingress multiplexers (2; 71, 72), and M/N high-bandwidth output ports arranged to transmit data cells to respective said egress multiplexers (3; 74, 75), the central switch being arranged selectively to interconnect the input ports and output ports, under the direction of the master control unit,
    each said ingress multiplexer (2; 71, 72) being arranged to maintain N input queues (22) for queuing data cells received from respective said data sources, and to maintain M virtual output queues (24) for queuing data cells directed to respective said data destinations;
    characterised in that:
    each ingress multiplexer (2; 71, 72) is arranged to maintain a respective ingress port table (27; 77), each ingress port table having NxM entries, each entry corresponding to a respective combination of a said data source and a said data destination, and each ingress multiplexer is arranged to transfer data cells from said input queues to said virtual output queues with a relative frequency according to the value of the corresponding entry of the ingress port table;
    each ingress multiplexer is arranged to maintain a respective egress port table (28; 78), the egress port table having M entries, each corresponding to a respective said data destination (5), and each ingress multiplexer is arranged to transfer data cells from said virtual output queues to said respective input ports of the central switch with a relative frequency according to the value of the corresponding entry of the egress port table (28; 78),
    and the master control unit is arranged to maintain a central allocation table (79) having (M/N)² entries, each corresponding to a respective combination of an input port and an output port, and the master control unit controls the central switch to interconnect pairs of said input ports and output ports with a relative frequency according to the value of the corresponding entry of the central allocation table (79);
    whereby said ingress port tables (27; 77), egress port tables (28; 78) and central allocation table (79) together determine the bandwidth through the digital data switching apparatus from each said data source (4) to each said data destination (5).
  9. An apparatus according to claim 8 in which each said ingress multiplexer is arranged, for each virtual output queue (24), to transfer data cells to that virtual output queue from said input queues (22) in accordance with an N-way weighted round robin, using N weights determined respectively by the N entries of the ingress port table (27; 77) for that virtual output queue.
  10. An apparatus according to claim 9 in which each weight has a number of bits w, and the N-way weighted round robin for each virtual output queue is implemented by an N(2^w - 1)-way unweighted round robin using a request vector list constructed by interleaving N words of (2^w - 1) bits each, each word corresponding to a respective input queue and having a number of "1"s determined by the entry of the ingress port table for that input queue and that virtual output queue.
  11. An apparatus according to claim 10 in which the request vector list is separated into a plurality of round robin blocks, each corresponding to a respective input queue, each ingress multiplexer being arranged to perform a first round robin process independently within each block, and a second round robin process to make a selection among the blocks.
  12. An apparatus according to any of claims 8 to 11 further comprising an external source unit arranged to program the ingress port table (27; 77), the egress port table (28; 78) and the central allocation table (79).
  13. An apparatus according to claim 12 in which the external source unit is arranged to operate using parameters characterizing the length of the virtual output queue and the urgency of the virtual output queue.
  14. An apparatus according to claim 13 in which the external source unit is arranged to operate using a set of sensitivities relating to the length, urgency and pseudo-static bandwidth allocation.
EP99973516A 1998-12-22 1999-12-01 Distributed hierarchical scheduling and arbitration for bandwidth allocation Expired - Lifetime EP1142229B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GBGB9828143.9A GB9828143D0 (en) 1998-12-22 1998-12-22 Distributed hierarchical scheduling and arbitration for bandwidth allocation
GB9828143 1998-12-22
PCT/GB1999/004007 WO2000038376A1 (en) 1998-12-22 1999-12-01 Distributed hierarchical scheduling and arbitration for bandwidth allocation

Publications (2)

Publication Number Publication Date
EP1142229A1 (en) 2001-10-10
EP1142229B1 (en) 2006-07-26

Family

ID=10844657

Family Applications (1)

Application Number Title Priority Date Filing Date
EP99973516A Expired - Lifetime EP1142229B1 (en) 1998-12-22 1999-12-01 Distributed hierarchical scheduling and arbitration for bandwidth allocation

Country Status (11)

Country Link
US (1) US7099355B1 (en)
EP (1) EP1142229B1 (en)
JP (1) JP2002533995A (en)
KR (1) KR20010086086A (en)
CN (1) CN1338168A (en)
AT (1) ATE334534T1 (en)
AU (1) AU1398900A (en)
CA (1) CA2353622A1 (en)
DE (1) DE69932533D1 (en)
GB (1) GB9828143D0 (en)
WO (1) WO2000038376A1 (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6532509B1 (en) 1999-12-22 2003-03-11 Intel Corporation Arbitrating command requests in a parallel multi-threaded processing system
US6694380B1 (en) 1999-12-27 2004-02-17 Intel Corporation Mapping requests from a processing unit that uses memory-mapped input-output space
AU5497401A (en) 2000-05-18 2001-11-26 Power X Limited Apparatus and method for resource arbitration
JP2002111716A (en) 2000-10-04 2002-04-12 Nec Corp Packet switch and multicast control system used therefor
DE10130749A1 (en) * 2001-06-26 2003-01-02 Philips Corp Intellectual Pty Packet switching device with feedback coupling for allocation unit has port controller for storing, arranging packets in queues, generating queue state feedback information units for allocation unit
US6757246B2 (en) 2001-08-14 2004-06-29 Pts Corporation Method and apparatus for weighted arbitration scheduling separately at the input ports and the output ports of a switch fabric
US6990072B2 (en) 2001-08-14 2006-01-24 Pts Corporation Method and apparatus for arbitration scheduling with a penalty for a switch fabric
US20030048792A1 (en) * 2001-09-04 2003-03-13 Qq Technology, Inc. Forwarding device for communication networks
US7206858B2 (en) * 2002-09-19 2007-04-17 Intel Corporation DSL transmit traffic shaper structure and procedure
FR2845224B1 (en) * 2002-09-26 2004-12-17 Cit Alcatel SCHEDULING DEVICE FOR AN ASYMMETRICALLY SHARED RESOURCE SYSTEM
CN1689281A (en) * 2002-10-02 2005-10-26 皇家飞利浦电子股份有限公司 Weight adaptation in packet switches
US7475159B2 (en) * 2003-09-25 2009-01-06 International Business Machines Corporation High-speed scheduler
US7876763B2 (en) * 2004-08-05 2011-01-25 Cisco Technology, Inc. Pipeline scheduler including a hierarchy of schedulers and multiple scheduling lanes
CN100382533C (en) * 2004-10-26 2008-04-16 中兴通讯股份有限公司 Grading and polling method for scheduling device
US7525978B1 (en) 2005-04-15 2009-04-28 Altera Corporation Method and apparatus for scheduling in a packet buffering network
US8144719B2 (en) * 2005-10-25 2012-03-27 Broadbus Technologies, Inc. Methods and system to manage data traffic
US20080112400A1 (en) * 2006-11-15 2008-05-15 Futurewei Technologies, Inc. System for Providing Both Traditional and Traffic Engineering Enabled Services
US20090055234A1 (en) * 2007-08-22 2009-02-26 International Business Machines Corporation System and methods for scheduling meetings by matching a meeting profile with virtual resources
EP2051456B1 (en) * 2007-10-18 2009-12-09 Alcatel Lucent Method for routing data from inlet stages to outlet stages of a node of a communications network
US8391302B1 (en) * 2009-12-03 2013-03-05 Integrated Device Technology, Inc. High-performance ingress buffer for a packet switch
CN102014074B (en) * 2010-12-28 2012-11-07 汉柏科技有限公司 Single queue bandwidth allocation method
US9559985B1 (en) * 2014-01-28 2017-01-31 Google Inc. Weighted cost multipath routing with intra-node port weights and inter-node port weights
US9716669B2 (en) 2014-12-04 2017-07-25 Juniper Networks, Inc. Multi-chassis switch having a modular center stage chassis
RU2649788C1 (en) 2016-06-16 2018-04-04 Общество С Ограниченной Ответственностью "Яндекс" Method and system for transaction request processing in distributed data processing systems
US10476810B1 (en) 2018-04-26 2019-11-12 Hewlett Packard Enterprise Development Lp Network source arbitration
US12072829B2 (en) * 2021-10-29 2024-08-27 Microchip Technology Incorporated System and method for flexibly crossing packets of different protocols

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0680173B1 (en) * 1994-04-28 2003-09-03 Hewlett-Packard Company, A Delaware Corporation Multicasting apparatus
US5561663A (en) 1994-12-30 1996-10-01 Stratacom, Inc. Method and apparatus for performing communication rate control using geometric weighted groups
GB2316572B (en) * 1996-08-14 2000-12-20 Fujitsu Ltd Multicasting in switching apparatus

Also Published As

Publication number Publication date
GB9828143D0 (en) 1999-02-17
DE69932533D1 (en) 2006-09-07
JP2002533995A (en) 2002-10-08
US7099355B1 (en) 2006-08-29
AU1398900A (en) 2000-07-12
CA2353622A1 (en) 2000-06-29
CN1338168A (en) 2002-02-27
EP1142229A1 (en) 2001-10-10
ATE334534T1 (en) 2006-08-15
WO2000038376A1 (en) 2000-06-29
KR20010086086A (en) 2001-09-07


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20010703

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: XYRATEX TECHNOLOGY LIMITED

17Q First examination report despatched

Effective date: 20050506

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060726

Ref country code: LI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060726

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; WARNING: LAPSES OF ITALIAN PATENTS WITH EFFECTIVE DATE BEFORE 2007 MAY HAVE OCCURRED AT ANY TIME BEFORE 2007. THE CORRECT EFFECTIVE DATE MAY BE DIFFERENT FROM THE ONE RECORDED.

Effective date: 20060726

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060726

Ref country code: CH

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060726

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060726

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060726

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 69932533

Country of ref document: DE

Date of ref document: 20060907

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061026

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061026

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061027

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061106

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20061201

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061226

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20061231

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act

EN Fr: translation not filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20070427

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061027

Ref country code: FR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20070511

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20061201

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060726

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060726

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20111215 AND 20111221

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20120503 AND 20120509

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20181128

Year of fee payment: 20

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20

Expiry date: 20191130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20191130