WO2003032554A2 - Near-non-blocking switch scheduler for three-stage banyan switches - Google Patents


Info

Publication number
WO2003032554A2
WO2003032554A2 PCT/US2002/031338
Authority
WO
WIPO (PCT)
Prior art keywords
source
switch
destination
stage
path
Prior art date
Application number
PCT/US2002/031338
Other languages
French (fr)
Other versions
WO2003032554A3 (en)
Inventor
Walter Clark Milliken
Tushar Saxena
Original Assignee
Bbnt Solutions Llc
Priority date
Filing date
Publication date
Application filed by Bbnt Solutions Llc filed Critical Bbnt Solutions Llc
Priority to AU2002327813A priority Critical patent/AU2002327813A1/en
Publication of WO2003032554A2 publication Critical patent/WO2003032554A2/en
Publication of WO2003032554A3 publication Critical patent/WO2003032554A3/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/25 Routing or path finding in a switch fabric
    • H04L 49/253 Routing or path finding in a switch fabric using establishment or release of connections between ports
    • H04L 49/254 Centralised controller, i.e. arbitration or scheduling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/15 Interconnection of switching modules
    • H04L 49/1507 Distribute and route fabrics, e.g. sorting-routing or Batcher-Banyan

Definitions

  • Such simple banyan networks may have interior blocking problems. That is, two messages addressed to different outputs may simultaneously require the same interior connection and hence be "blocked" from proceeding. For example, in Fig. 3 two messages addressed to outputs 0 and 2 are received on different inputs of the upper-left crossbar switching element 210. Both messages, however, require the single connection 310 between the upper-left and upper-right crossbar switching elements 210. At least one of the two messages must therefore be deferred, even though no (external) output port conflict exists. Such interior blocking may substantially reduce the potential throughput of such banyan switches.
  • Batcher networks are conventionally known solutions to interior blocking. Batcher networks perform an ordering of inputs before their introduction to a banyan network to prevent interior blocking. However, Batcher networks are complex networks that are impractical to implement in most situations.
  • Switch schedulers and methods of switch scheduling consistent with the present invention address this and other needs by utilizing a three-stage banyan switch and allocating internal paths of the switch to source and destination pairs prior to sending grant messages to a switch interface. Additional bits may be appended to the destination port addresses in accordance with the allocation of internal paths by the switch scheduler.
  • A method of allocating internal paths in a multi-stage switch having a plurality of source ports and a like plurality of destination ports for a switch cycle may include receiving information identifying one or more of said source ports with one or more of said destination ports respectively as a like plurality of respective pairs of source and destination ports.
  • Open paths may be identified between said one of said source ports and said one of said destination ports within one of said pairs of source and destination ports.
  • An open path may be associated with said one of said pairs if any path between said one of said source ports and said one of said destination ports is identified as being open, and said open path may be designated as closed.
  • A switching system may include a three-stage banyan switch having a number of inputs and outputs.
  • a switch scheduler may be configured to allocate internal paths in the switch among a plurality of source and destination port pairs to avoid path blocking within the switch, and may also be configured to append one or more address bits to addresses identifying destination ports within the plurality of source and destination port pairs in accordance with the allocation of the internal paths.
  • The switch scheduler also may be configured to remove a source and destination port pair from the plurality of source and destination port pairs if all paths between the pair are blocked.
  • A computer program product stored on a computer-readable medium and including instructions executable by at least one processor may include a first program segment to receive a plurality of source/destination pairs for a switch.
  • The computer program may also include a second program segment to identify all available paths between one source/destination pair of said plurality of source/destination pairs, and a third program segment to allocate an available path to the one source/destination pair and to mark the available path as unavailable to other ones of the plurality of source/destination pairs.
  • A fourth program segment may transmit information relating to the available path and the one source/destination pair.
  • FIG. 1 is a block diagram of a conventional switch subsystem in a network routing device
  • FIG. 2 is an explanatory diagram of a conventional two-stage banyan network
  • FIG. 3 is a diagram of a conventional two-stage banyan network that illustrates an interior blocking effect
  • Fig. 4 is a block diagram of a three-stage banyan network according to an implementation consistent with the present invention
  • Fig. 5 is a flow chart illustrating exemplary processing performed by the switch scheduler
  • Fig. 6 is an exemplary diagram illustrating several portions of the flow chart in Fig. 5 for the addresses and switch architecture in Fig. 4;
  • Fig. 7 is an exemplary processing element to implement path assignment processing in an implementation consistent with the present invention.
  • Switch schedulers and methods of switch scheduling consistent with the present invention utilize a three-stage banyan switch and allocate internal paths of the switch to source and destination pairs prior to sending grant messages from a switch scheduler to a switch interface. Additional bits are appended to the destination port addresses of the pairs in accordance with the allocation of internal paths by the switch scheduler.
  • Fig. 4 is a block diagram of an exemplary three-stage banyan switch 400 according to an implementation consistent with the present invention.
  • Switch 400 may be coupled between a switch scheduler 130 and a switch interface 120, as illustrated, for example, in Fig. 1. That is, switch 400 may be one implementation of switch 140 shown in Fig. 1.
  • Switch 400 may include a first stage 410, a second stage 420, and a third stage 430. Each of the first, second, and third stages may include a number of crossbar switching elements 210. In the exemplary configuration shown in Fig. 4, switch 400 has 16 inputs and outputs, and each crossbar switching element has four inputs and outputs.
  • Three stages 410-430 have a full-mesh interconnection pattern between each adjacent stage.
  • The additional stage (e.g., first stage 410) provides a degree of freedom relative to the two-stage switch in Figs. 2 and 3 to avoid internal blocking within switch 400.
  • Other switches may include more stages.
  • Switch 400 in Fig. 4, described in detail below, illustrates how additional stage 410 (and its associated two bits 440 appended to the front of the addresses) avoids the internal blocking seen in Fig. 3.
  • Two bits 440 associated with the first input identify output port 0, and two bits 441 associated with the other first input identify output port 1. The remaining bits identify the rest of the path, and output contention is avoided.
  • Avoidance of output blocking is ensured, because switch scheduler 130 allocates only one grant for a given output port during a particular switch cycle.
  • Two additional address bits 440 and 441 for new first stage 410 may be computed by switch scheduler 130 and returned as part of the grant message, or may be supplied statically by the message sender (not shown).
  • For example, switch interface 120 may statically supply address bits 440 and 441 to map one pipeline 110 to another, or the message sender (e.g., pipeline 110) may supply the static address bits. Details of generating the two additional address bits 440 and 441 follow.
  • Alternatively, switch scheduler 130 may resolve internal path contention dynamically. For the 3-column switch design of Fig. 4, switch scheduler 130 may compute additional address bits 440 and 441 for "de-blocking" and may include them in each grant message sent to switch interface 120. With such dynamic processing, pipeline configuration issues are minimized. In addition, dynamic changes to pipelines will not cause "cascade effects" that require other pipelines to be reconfigured at runtime.
  • Fig. 5 illustrates dynamic address assignment processing that is performed by switch scheduler 130 according to an implementation consistent with the present invention.
  • Scheduler 130 first determines a set of grants from the set of requests received from switch interface 120 [act 510]. In determining the set of grants (i.e., a set of source/destination pairs that have received grants and associated address bits for the destinations), scheduler 130 may use a round-robin scheduling algorithm, details of which appear in the above-referenced related application, incorporated by reference herein.
  • Switch scheduler 130 next marks all available paths within switch 400 as "open” (or “available”) [act 520].
  • The paths in switch 400 may be conceptualized as two state arrays (e.g., each 4x4 as shown in Fig. 6), with one array (denoted "SP") representing the connections between first (i.e., source) stage or layer 410 and second stage or layer 420, and the second array (denoted "DP") representing the connections between third (i.e., destination) stage or layer 430 and second stage or layer 420.
  • For each grant, switch scheduler 130 determines all open paths between the crossbar switching element 210 in first stage 410 that corresponds to the source and the crossbar switching element 210 in third stage 430 that corresponds to the destination [act 530].
  • A path is considered to be "open" if there are two connections from the source and destination switching elements 210 to the same switching element 210 in second layer 420, neither of which has been "taken" (or "closed") by another grant.
  • The paths between the source switching element and the destination switching element may be viewed as a logical AND between a row in the SP array corresponding to the source and a row in the DP array corresponding to the destination.
  • Switch scheduler 130 determines if there are any open, or unblocked, paths between the source and destination of the current grant [act 540]. If there is at least one unblocked path, scheduler 130 may choose one path, mark the path as taken, append two appropriate bits to the front of the address bits associated with the destination, and send the grant message to switch interface 120 [act 550]. If there is more than one available path, switch scheduler 130 may choose the first open path it encounters, or it may choose a random open path. The two bits appended to the destination address may be chosen to send the destination address information and data chunk along the chosen path.
  • If no open path exists, switch scheduler 130 may "retract" the grant [act 560]. Such retraction may be accomplished by deleting the grant, or otherwise not sending the grant to switch interface 120. Retracting a grant necessitates that the granted request be made again at a later time, thereby reducing throughput. However, in typical situations, this "withdrawal" usually constitutes only a few tenths of a percent of the total throughput of the switch. If there are any grants remaining that have not had paths assigned, switch scheduler 130 repeats acts 530-560 as necessary for the remaining grants [act 570].
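The acts above can be sketched in Python (an illustrative sketch, not the patented implementation: function and variable names are invented, the round-robin grant determination of act 510 is taken as given, and output port m of a first-stage element is assumed to connect to middle-stage element m):

```python
K = 4  # crossbar elements per stage in the Fig. 4 switch (4x4 elements)

def assign_paths(grants):
    """grants: list of (source_element, dest_element, dest_addr_bits).
    Returns (source_element, dest_element, full_addr_bits) tuples;
    grants with no open path are retracted (dropped), per act 560."""
    # Act 520: mark all connections open. SP[s][m] is the link from
    # first-stage element s to middle element m; DP[d][m] is the link
    # from third-stage element d to middle element m.
    SP = [[1] * K for _ in range(K)]
    DP = [[1] * K for _ in range(K)]
    granted = []
    for s, d, addr in grants:
        # Act 530: open paths are the AND of the source and dest rows.
        row = [SP[s][m] & DP[d][m] for m in range(K)]
        if 1 in row:                  # act 540: any unblocked path?
            m = row.index(1)          # act 550: take the first open path
            SP[s][m] = DP[d][m] = 0   # mark both segments taken
            prefix = format(m, '02b') # the two extra bits 440/441
            granted.append((s, d, prefix + addr))
        # else, act 560: retract the grant (re-requested later)
    return granted
```

For the two grants of Fig. 4 (destination addresses 0000 and 0010, both from source element 0 to destination element 0), `assign_paths([(0, 0, "0000"), (0, 0, "0010")])` assigns middle elements 0 and 1, prefixing the second address with bits 01 as in the text.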
  • The de-blocking processing as described herein could result in a grant retraction due to, for example, a "permanent" 1-to-1 pipeline connection through switch 400.
  • A "dynamic" path assigned to a grant may conflict with the permanent pipeline for a particular internal path segment. For example, some portion of the only possible path for assignment to a grant may conflict with a "permanent" or "static" connection through switch 400. In such an instance, switch scheduler 130 would retract the grant whose path is blocked (i.e., whichever of the dynamic or permanent paths is assigned later).
  • In an alternate implementation, a message sender (e.g., a source port) may explicitly request a permanent or static connection to a destination, so that there are no "implicit grants" (e.g., for permanent or static connections).
  • An algorithm similar to the flow chart in Fig. 5 may be used to allocate the "closed” paths among permanent source/destination pairs in this alternate implementation.
  • Some paths may be "reserved" (i.e., not available to be "closed") for later use by the dynamic scheduler.
  • Such a "closed path” algorithm may be run relatively infrequently (e.g., periodically over hours or days) as compared to the processing in Fig. 5 to dynamically allocate paths for grants (e.g., nanoseconds).
  • Fig. 6 is an exemplary diagram illustrating several portions of the flow chart in Fig. 5 for the addresses and switch architecture in Fig. 4.
  • SP array 610 and DP array 620 have already been initialized (e.g., with logical 1's) to indicate that all connections in switch 400 were initially available [act 520].
  • The path for the first source/destination pair (address 0000) in Fig. 4 has already been assigned at the stage of processing shown in Fig. 6.
  • The logical 0's in the upper left elements of SP array 610 and DP array 620 indicate that the two top-most paths in Fig. 4 are now closed to other grants.
  • The processing for the second source/destination pair (address 0010) will now be described.
  • Switch scheduler 130 performs a logical AND of the first row of SP array 610 (corresponding to the 0th element 210 in first stage 410 that contains the source) and the first row of DP array 620 (corresponding to the 0th element 210 in third stage 430 that contains the destination) [act 530].
  • This ANDing operation produces a composite row 630 whose elements indicate the number and location of any open paths.
  • The three 1's in row 630 indicate that three open paths remain between the 0th source crossbar switching element in stage 410 and the 0th destination crossbar switching element in stage 430 [act 540].
  • Switch scheduler 130 may choose the first logical "1" that it encounters (shown circled) along row 630 as the path for the second source/destination pair (address 0010) [act 550]. Other choices within row 630 are possible (e.g., the second or third logical "1"), but may not be as fast as choosing the first available logical "1" (i.e., path) encountered. After this open path is chosen for the second source/destination pair, a logical "0" is inserted in each of the SP array and the DP array to indicate that this path is no longer available for other grants in the current switch cycle. The resulting SP and DP arrays after assignment of the path for the second grant in Fig. 4 are labeled 610' and 620', respectively. The switch scheduler may append two bits 440 (e.g., 01 for the second input in Fig. 4) to the destination address based on which path is selected.
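This worked example can be reproduced numerically. The array contents below follow Fig. 6, with the first pair's path already assigned; the variable names are illustrative:

```python
# State after the first grant (address 0000): the two top-most links,
# from source element 0 and destination element 0 to middle element 0,
# are already taken (logical 0 in the upper-left entries).
SP = [[0, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]  # array 610
DP = [[0, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]  # array 620

# Act 530: AND the source row and destination row (both row 0 here).
row_630 = [sp & dp for sp, dp in zip(SP[0], DP[0])]
assert row_630 == [0, 1, 1, 1]   # three open paths remain [act 540]

# Act 550: choose the first open path and close it in both arrays,
# producing arrays 610' and 620'.
m = row_630.index(1)             # middle element 1 (the circled "1")
SP[0][m] = DP[0][m] = 0
prefix_440 = format(m, '02b')    # the appended bits: "01"
```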
  • The path assignment processing described above may be executed by, for example, a processor in the switch scheduler 130 in Order(N²) time, where N is the number of switch inputs and outputs.
  • Order(N²) time is an improvement over an algorithm that finds an optimum set of paths, which is NP-complete.
  • With additional hardware (e.g., Order(N) hardware elements), the path assignment processing may be executed in Order(N) time.
  • Fig. 7 illustrates one exemplary processing element 700 for implementing the above-described path assignment processing in an implementation consistent with the present invention.
  • Exemplary processing element 700 may include an SP register 710, a DP register 720, a P register 730, and three AND gates 740-760.
  • the total hardware may include N processing elements 700, where N is the number of switch inputs.
  • the hardware also may include a processor to control and facilitate data transfer among the processing elements 700.
  • SP register 710 may be configured to store a state (i.e., open or taken) of a connection between a source crossbar switching element 210 in first stage 410 and a crossbar switching element 210 in second stage 420.
  • DP register 720 may be configured to store a state (i.e., open or taken) of a connection between a destination crossbar switching element 210 in third stage 430 and a crossbar switching element 210 in second stage 420.
  • N pairs of SP register 710 and DP register 720 may constitute SP array 610 and DP array 620 in Fig. 6.
  • P register 730 may be configured to receive and store a request for the path represented by the contents of SP register 710 and DP register 720.
  • AND gates 740-760 are configured to perform a logical AND operation on their respective two inputs.
  • AND gate 740 is configured to AND the contents of SP register 710 and DP register 720 to determine if a complete path is open. If AND gate 740 produces a logical "1," the path is open, while if it produces a "0," at least a portion of the path is already taken.
  • AND gate 750 is configured to AND the contents of P register 730 and the output of AND gate 740 to determine if a complete path is open and has been requested. If AND gate 750 produces a logical "1," the path is open and has been requested by P register 730. If AND gate 750 produces a "0," either a path was not open, or it was not requested, or both. If AND gate 750 produces a "1," a source/destination pair is assigned to that path, and a signal is sent to an AX register (not shown) containing the additional two address bits for the middle switch column. The "1" output by AND gate 750 also resets SP register 710 and DP register 720 to "0" for the remainder of the switch cycle, indicating that the path represented by these registers is taken.
  • AND gate 760 is configured to either pass on or prevent the contents of P register 730 from passing to an adjacent stage. If AND gate 740 produces a "1," the inverting input on AND gate 760 will prevent the contents of P register 730 from being passed on to the next stage. Such a state signifies that the requested path has already been assigned, and the request in P register 730 need not be passed on. Conversely, if AND gate 740 produces a "0," the inverting input on AND gate 760 will pass any logic "1" in P register 730 to the next stage.
  • Although AND gates 740-760 are shown, those skilled in the art will recognize that other gates and/or logic conventions may be utilized to perform the above-described functions.
  • Although Fig. 7 does not show elements for initializing DP and SP registers 710 and 720, such initialization may be performed at the start of the switch cycle by loading each under the control of a processor (not shown).
  • Normally, DP and SP registers 710 and 720 are set to '1', but support of static paths requires that the processor be able to set the registers to '0', thus reserving at least some of the interior connections for static paths.
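The combinational behavior of one processing element 700 may be modeled as follows. This is an illustrative sketch: the function name and return convention are invented, and the signaling to the AX register is omitted:

```python
def processing_element(sp, dp, p):
    """Model one Fig. 7 element: sp and dp are the SP and DP register
    states (1 = link open), p is the request bit in the P register.
    Returns (assigned, sp, dp, p_out): whether this element's path was
    assigned to the request, the updated register states, and the
    request bit passed on to the adjacent element."""
    path_open = sp & dp          # AND gate 740: both segments open?
    assigned = p & path_open     # AND gate 750: open AND requested
    if assigned:                 # gate 750's "1" resets SP and DP to 0
        sp = dp = 0
    # AND gate 760: the inverting input absorbs the request when the
    # path was open (gate 740 = 1); otherwise the request passes on.
    p_out = p & (1 - path_open)
    return assigned, sp, dp, p_out
```

Chaining N such elements and feeding `p_out` of one into `p` of the next reproduces the Order(N) hardware assignment: the first element with an open path absorbs the request and closes its registers.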
  • The present invention may also be implemented by a computer program embodied in a computer-readable medium, such as magnetic or optical discs, random access memory, or any other type of electronic or optical storage.
  • Such a computer program, while not necessarily suitable for high-speed routers, may nonetheless assign internal paths within banyan switches, thereby preventing blocking internal to the switches.
  • It is intended that the present invention cover the modifications and variations of the invention provided that they come within the scope of the claims and their equivalents.

Abstract

A switching system may include a banyan switch (400) having three stages (410-430) to lessen internal path blocking within the switch. A switch scheduler (130) may be configured to allocate internal paths within the switch among a plurality of source and destination port pairs to avoid such path blocking. The scheduler may also be configured to append one or more address bits (440) to addresses identifying the destination ports so that data input to the first stage (410) travels along its allocated path. The switch scheduler may contain a number of hardware processing elements (700) to rapidly perform the allocation of the internal paths.

Description

NEAR-NON-BLOCKING SWITCH SCHEDULER FOR THREE-STAGE
BANYAN SWITCHES
RELATED APPLICATION
[0001] This application is related to the following commonly assigned, co-pending application entitled "Round Robin Switch Scheduler With Random Source Swapping" (attorney docket no. 00-4060), serial number 09/948,812, filed September 7, 2001, which is incorporated by reference in its entirety.
BACKGROUND OF THE INVENTION
Field of the Invention
[0002] The present invention relates generally to network switch subsystems and, more particularly, to switch schedulers and methods for switch scheduling.
Description of Related Art
[0003] Network routing devices generally receive data from one or more input lines through one or more corresponding input ports and transfer the data to one or more output lines via one or more output ports. The particular output port(s) that output the data may be determined from one or more addresses accompanying the data. Within a routing device, the data may be located in a pipeline before and after a switching or mapping operation is performed by the device. The pipeline may run from the input and output ports to a switch subsystem that switches or routes data based on an address contained in the data. The switch subsystem may include a switch interface coupled to the pipeline, a switch, and a switch scheduler to handle data requests for the output ports and to pass data to the switch.
[0004] Fig. 1 is a block diagram of a conventional switch subsystem 100 in a network routing device. Switch subsystem 100 may include a switch interface 120 that transmits data from a pipeline output 110 and receives data for a pipeline input 150. Switch subsystem 100 also may include a switch scheduler 130 and a switch 140 coupled to switch interface 120. Only one switch input and one switch output are shown for clarity of explanation. However, an N-port switch subsystem 100 may have N inputs to switch scheduler 130, N connections between switch scheduler 130 and switch 140, and N outputs from switch 140 to switch interface 120.
[0005] Data flow through switch subsystem 100 will now be described for one switch cycle (i.e., the time needed for switch subsystem 100 to receive N inputs of data, switch the data, and provide the data to pipeline input 150). Pipeline output 110 transmits a chunk of data and a header to switch interface 120. The header may include scheduling requests for future switch cycles and addressing information to control switch 140 during the current switch cycle. To minimize the number of connections between switch scheduler 130/switch 140 and switch interface 120, the different message elements are combined into a single message sent to scheduler 130. As shown in Fig. 1, the single message includes scheduling requests for future cycles (left portion of message), addressing information to control the switch (middle portion of message), and the data chunk (right portion of message).
[0006] Switch scheduler 130 strips off the request element and passes the remainder (i.e., address and data chunk) unchanged from each of its input ports to a corresponding output port. Switch 140 uses the remainder of the message passed on by scheduler 130 to determine an appropriate port for the data chunk on switch interface 120. Switch 140 then transmits the data chunk, via switch interface 120, to pipeline input 150. Meanwhile, during the same switch cycle, switch scheduler 130 determines whether to grant the scheduling request message for a future cycle. Switch scheduler 130 transmits a grant or non-grant message, via switch interface 120, to pipeline output 110.
[0007] Switch scheduling in this manner will necessarily entail at least a one cycle latency between the grant decision (made while another chunk is being switched in one cycle) and the switching of the data chunk associated with that grant decision (in a later cycle). Switch scheduler 130 may use a number of schemes to determine which request to grant when, for example, multiple requests are received for the same output port during the same switch cycle in an N-port switch subsystem. For switch subsystems 100 that do not favor certain input ports over other input ports, switch scheduler 130 typically uses a round-robin method of handling scheduling requests.
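A generic round-robin arbiter for a single output port might look like the following sketch. The function and parameter names are invented for illustration; the patent's actual algorithm (round robin with random source swapping) is detailed in the referenced related application and may differ:

```python
def round_robin_grant(requests, pointer):
    """Among input ports requesting this output port, grant the first
    requester at or after the rotating priority pointer, then advance
    the pointer past the winner so it becomes lowest priority.
    requests: list of 0/1 flags, one per input port.
    Returns (granted_port_or_None, new_pointer)."""
    n = len(requests)
    for i in range(n):
        port = (pointer + i) % n
        if requests[port]:
            return port, (port + 1) % n
    return None, pointer  # no requests for this output this cycle
```

Because the winner is demoted each cycle, no input port can starve the others even under persistent contention for the same output.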
[0008] One type of switch 140 that may be used is a self-routing banyan network. Banyan networks are characterized by each input port on the switch being locally controlled on each cycle by the addressing information (e.g., the address portion of the header) in the arriving message. Banyan networks typically have multiple stages composed of interconnected, locally switching elements. As each stage of the banyan network is traversed, the address bits for that stage are deleted from the address header. After the last stage, the address header is empty, leaving only the data chunk (with the chunk header) to arrive at switch interface 120.
[0009] Fig. 2 is an explanatory diagram of a conventional two-column banyan network 200 that constitutes a 16-port switch. Two-column banyan network 200 may include eight identical 4x4 crossbar switching elements 210 arrayed in two stages. As illustrated in Fig. 2, the connectivity between the first and second stages is a full mesh, meaning that each crossbar switching element 210 in the first stage is connected to all crossbar switching elements 210 in the second stage. Only output ports of the elements have labels (e.g., 0-3) in Fig. 2, and input ports are not labeled. Crossbar switching elements 210 cause a given message to always arrive at the same output port, regardless of which input receives the message. Hence, the crossbar input port connections can be arranged to suit implementation convenience.
[0010] Fig. 2 illustrates the operation of conventional two-column banyan network 200. For example, second element 210 in the first stage receives the address and data chunk at one of its inputs. Second crossbar switching element 210 routes the address and data chunk to output port 3, while stripping off the first two address bits (i.e., 11) that identify this output port. Fourth element 210 in the second stage then receives the remaining address and data chunk at one of its inputs. Fourth crossbar switching element 210 routes the address and data chunk to output port 2, while stripping off the first two address bits (i.e., 10) that identify this output port. In this manner, conventional two-column banyan network 200 routes the data chunk to the 14th output that corresponds to its switch address of 1110.
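As a hedged sketch (not the patent's implementation), the per-stage address consumption described above can be modeled in Python; the function name and bit-string encoding are illustrative assumptions:

```python
def route_banyan(address, bits_per_stage=2):
    """Simulate self-routing: each stage reads the leading address bits
    to select its local output port, then strips them off."""
    local_outputs = []
    while address:
        local_outputs.append(int(address[:bits_per_stage], 2))
        address = address[bits_per_stage:]
    return local_outputs

# Address 1110: first-stage local output 3 (bits 11), second-stage
# local output 2 (bits 10), i.e. global output 3 * 4 + 2 = 14.
```

After the last stage the address string is empty, mirroring the empty address header that leaves only the data chunk at switch interface 120.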
[0011] Conventional banyan network 200 has no mechanism to handle multiple messages destined for the same network output port. For example, if two messages are destined for the same output port on the same crossbar element on the same cycle, an error occurs. This potential problem is resolved by having switch scheduler 130 determine which input requests to grant so that no collisions occur on any given cycle at the output of network 200.
[0012] However, such simple banyan networks may have interior blocking problems. That is, two messages addressed to different outputs may simultaneously require the same interior connection, and hence be "blocked" from proceeding. For example, in Fig. 3 two messages addressed to outputs 0 and 2 are received on different inputs of the upper-left crossbar switching element 210. Both messages, however, require the single connection 310 between the upper-left and upper-right crossbar switching elements 210. This requires that at least one of the two messages be deferred, even though no (external) output port conflict exists. Such interior blocking may substantially reduce the potential throughput of such banyan switches.
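A minimal sketch of why this collision occurs, assuming the two-bit-per-stage addressing described above (illustrative Python, not part of the patent):

```python
def first_stage_collision(addresses, bits_per_stage=2):
    """Messages arriving at one first-stage crossbar collide when their
    leading address bits select the same crossbar output link."""
    outputs = [a[:bits_per_stage] for a in addresses]
    return len(set(outputs)) < len(outputs)

# Addresses 0000 and 0010 both begin with 00, so both need the single
# link between the upper-left and upper-right elements: interior blocking.
```

The leading bits of 0000 and 0010 are identical, so both messages contend for connection 310 even though their final destinations differ.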
[0013] So-called "Batcher networks" are conventionally known solutions to interior blocking. Batcher networks perform an ordering of inputs before their introduction to a banyan network to prevent interior blocking. However, Batcher networks are complex networks that are impractical to implement in most situations.
[0014] As a result, a need exists for a switch subsystem requiring little additional hardware that prevents interior blocking problems in a banyan network.
SUMMARY OF THE INVENTION
[0015] Switch schedulers and methods of switch scheduling consistent with the present invention address this and other needs by utilizing a three-stage banyan switch and allocating internal paths of the switch to source and destination pairs prior to sending grant messages to a switch interface. Additional bits may be appended to the destination port addresses in accordance with the allocation of internal paths by the switch scheduler.
[0016] In accordance with the purpose of the invention as embodied and broadly described herein, a method of allocating internal paths in a multi-stage switch having a plurality of source ports and a like plurality of destination ports for a switch cycle may include receiving information identifying one or more of said source ports with one or more of said destination ports respectively as a like plurality of respective pairs of source and destination ports. Open paths may be identified between said one of said source ports and said one of said destination ports within one of said pairs of source and destination ports. An open path may be associated with said one of said pairs if any path between said one of said source ports and said one of said destination ports is identified as being open, and said open path may be designated as closed. [0017] In another implementation consistent with the present invention, a switching system may include a three-stage banyan switch having a number of inputs and outputs. A switch scheduler may be configured to allocate internal paths in the switch among a plurality of source and destination port pairs to avoid path blocking within the switch, and may also be configured to append one or more address bits to addresses identifying destination ports within the plurality of source and destination port pairs in accordance with the allocation of the internal paths. The switch scheduler also may be configured to remove a source and destination port pair from the plurality of source and destination port pairs if all paths between the pair are blocked. [0018] In yet another implementation consistent with the present invention, a computer program product stored on a computer-readable medium and including instructions executable by at least one processor may include a first program segment to receive a plurality of source/destination pairs for a switch. 
The computer program may also include a second program segment to identify all available paths between one source/destination pair of said plurality of source/destination pairs, and a third program segment to allocate an available path to the one source/destination pair and to mark the available path as unavailable to other ones of the plurality of source/destination pairs. A fourth program segment may transmit information relating to the available path and the one source/destination pair. BRIEF DESCRIPTION OF THE DRAWINGS
[0019] The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, explain the invention. In the drawings,
[0020] Fig. 1 is a block diagram of a conventional switch subsystem in a network routing device;
[0021] Fig. 2 is an explanatory diagram of a conventional two-stage banyan network;
[0022] Fig. 3 is a diagram of a conventional two-stage banyan network that illustrates an interior blocking effect; [0023] Fig. 4 is a block diagram of a three-stage banyan network according to an implementation consistent with the present invention;
[0024] Fig. 5 is a flow chart illustrating exemplary processing performed by the switch scheduler;
[0025] Fig. 6 is an exemplary diagram illustrating several portions of the flow chart in Fig. 5 for the addresses and switch architecture in Fig. 4; and
[0026] Fig. 7 is an exemplary processing element to implement path assignment processing in an implementation consistent with the present invention.
DETAILED DESCRIPTION [0027] The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and equivalents. [0028] Switch schedulers and methods of switch scheduling consistent with the present invention utilize a three-stage banyan switch and allocate internal paths of the switch to source and destination pairs prior to sending grant messages from a switch scheduler to a switch interface. Additional bits are appended to the destination port addresses of the pairs in accordance with the allocation of internal paths by the switch scheduler.
EXEMPLARY SWITCH CONFIGURATION
[0029] Fig. 4 is a block diagram of an exemplary three-stage banyan switch 400 according to an implementation consistent with the present invention. Switch 400 may be coupled between a switch scheduler 130 and a switch interface 120, as illustrated, for example, in Fig. 1. That is, switch 400 may be one implementation of switch 140 shown in Fig. 1. Switch 400 may include a first stage 410, a second stage 420, and a third stage 430. Each of the first, second, and third stages may include a number of crossbar switching elements 210. In the exemplary configuration shown in Fig. 4, switch 400 has 16 inputs and outputs, and each crossbar switching element has four inputs and outputs. Three stages 410-430 have a full-mesh interconnection pattern between each adjacent stage. The additional stage (e.g., first stage 410) provides a degree of freedom relative to the two-stage switch in Figs. 2 and 3 to avoid internal blocking within switch 400. Other switches, consistent with implementations of the present invention, may include more stages.
[0030] Using the same two output addresses (i.e., 0000 and 0010) as in the example shown in Fig. 3, switch 400 in Fig. 4, described in detail below, illustrates that additional stage 410 (and its associated two bits 440 appended to the front of the addresses) avoids the internal blocking seen in Fig. 3. For example, two bits 440 associated with the first input identify output port 0, and two bits 441 associated with the other first input identify output port 1. The remaining bits identify the rest of the path, and output contention is avoided. In addition, as previously noted, avoidance of output blocking is ensured, because switch scheduler 130 allocates only one grant for a given output port during a particular switch cycle. In other words, the two additional address bits used by elements in stage 1 are controlled by switch scheduler 130 to be different from other pairs of additional address bits during the same switch cycle. Two additional address bits 440 and 441 for new first stage 410 may be computed by switch scheduler 130 and returned as part of the grant message, or may be supplied statically by the message sender (not shown). For example, switch interface 120 may statically supply address bits 440 and 441 to map one pipeline 110 to another. Alternately, pipeline 110 may supply static address bits 440 and 441. Details of generating two additional address bits 440 and 441 follow.
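The de-blocking effect of the prepended bits can be sketched as follows (illustrative Python; the helper and bit layout are assumptions, not the patent's code):

```python
def route_path(address, bits_per_stage=2):
    """Split a full three-stage address into per-stage local outputs."""
    return [int(address[i:i + bits_per_stage], 2)
            for i in range(0, len(address), bits_per_stage)]

# The scheduler prepends 00 and 01, so the two messages of Fig. 3 now
# leave their shared first-stage element on different outputs (0 and 1),
# reaching distinct second-stage elements before converging on their
# destination addresses 0000 and 0010.
paths = [route_path("00" + "0000"), route_path("01" + "0010")]
```

Because the prepended bits differ, the first-stage outputs differ, and the internal collision of Fig. 3 cannot arise.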
EXEMPLARY SCHEDULER PROCESSING [0031] As described above, the message sender may statically supply the two address bits 440 and 441. In an alternative implementation consistent with the present invention, the switch scheduler 130 may resolve internal path contention dynamically. For the 3-column switch design of Fig. 4, switch scheduler 130 may compute additional address bits 440 and 441 for "de-blocking" and may include them in each grant message sent to switch interface 120. With such dynamic processing, pipeline configuration issues are minimized. In addition, dynamic changes to pipelines will not cause "cascade effects" that require other pipelines to be reconfigured at runtime.
[0032] Fig. 5 illustrates dynamic address assignment processing that is performed by switch scheduler 130 according to an implementation consistent with the present invention. Scheduler 130 first determines a set of grants from the set of requests received from switch interface 120 [act 510]. In determining the set of grants (i.e., a set of source/destination pairs that have received grants and associated address bits for the destinations), scheduler 130 may use a round-robin scheduling algorithm, details of which appear in the above-referenced related application, incorporated by reference herein.
[0033] Switch scheduler 130 next marks all available paths within switch 400 as "open" (or "available") [act 520]. The paths in switch 400 may be conceptualized as two state arrays (e.g., each 4x4 as shown in Fig. 6), with one array (denoted "SP") representing the connections between first (i.e., source) stage or layer 410 and second stage or layer 420, and the second array (denoted "DP") representing the connections between third (i.e., destination) stage or layer 430 and second stage or layer 420.
[0034] For each grant (i.e., source and destination pair), switch scheduler 130 determines all open paths from crossbar switching element 210 in first stage 410 that corresponds to the source and crossbar switching element 210 in third stage 430 corresponding to the destination [act 530]. A path is considered to be "open" if there are two connections from source and destination switching elements 210 to the same switching element 210 in second layer 420, neither of which has been "taken" (or "closed") by another grant. The paths between the source switching element and the destination switching element may be viewed as a logical AND between a row in the SP array corresponding to the source and a row in the DP array corresponding to the destination. When there are two open connections to a switching element 210 in second stage 420, the logical AND produces a valid (open path) result. [0035] Switch scheduler 130 determines if there are any open, or unblocked, paths between the source and destination of the current grant [act 540]. If there is at least one unblocked path, scheduler 130 may choose one path, mark the path as taken, append two appropriate bits to the front of the address bits associated with the destination, and send the grant message to switch interface 120 [act 550]. If there is more than one available path, switch scheduler 130 may choose the first open path it encounters, or it may choose a random open path. The two bits appended to the destination address may be chosen to send the destination address information and data chunk along the chosen path.
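This row-wise logical AND can be sketched directly (hedged Python illustration; the row naming follows the SP/DP convention above):

```python
def open_paths(sp_row, dp_row):
    """AND a source-stage row against a destination-stage row; each 1
    marks a middle-stage element whose two links are both still open."""
    return [s & d for s, d in zip(sp_row, dp_row)]

# After one path through middle element 0 has been taken (as in Fig. 6),
# both rows read [0, 1, 1, 1]; their AND leaves three open paths.
composite = open_paths([0, 1, 1, 1], [0, 1, 1, 1])
```

Any 1 in the composite row is a valid (open path) result; an all-zero row means every path between that source/destination pair is blocked.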
[0036] If no unblocked path exists, switch scheduler 130 may "retract" a grant [act 560]. Such retraction may be accomplished by deleting the grant, or otherwise not sending the grant to switch interface 120. Retracting a grant necessitates that the granted request be made again at a later time, thereby reducing throughput. However, in typical situations, this "withdrawal" usually constitutes only a few tenths of a percent of the total throughput of the switch. [0037] If there are any grants remaining that have not had paths assigned, switch scheduler 130 repeats acts 530-560 as necessary for the remaining grants [act 570]. It should be noted that the de-blocking processing as described herein could result in a grant retraction due to, for example, a "permanent" 1-to-1 pipeline connection through switch 400. A "dynamic" path assigned to a grant may conflict with the permanent pipeline for a particular internal path segment. For example, some portion of the only possible path for assignment to a grant may conflict with a "permanent" or "static" connection through switch 400. In such an instance, switch scheduler 130 would retract the grant whose path is blocked (e.g., either the grant associated with the dynamic path or the one associated with the permanent path, whichever is assigned later). Hence, with the above-described processing (e.g., act 520), a message sender (e.g., source port) must request a destination, and also must obey the grants from scheduler 130 that it receives. In other words, no "implicit grants" (e.g., for permanent or static connections) are possible using the dynamic near-non-blocking scheduler 130 described above.
[0038] However, in an alternate implementation consistent with the present invention, certain internal paths may be initialized to "closed," or "taken," to allow for permanent or static paths. In this alternate implementation, act 520 would mark only certain connections in the banyan switch as "open." In such an implementation, implicit grants may be possible, because subsequent dynamically assigned paths will not conflict with the static internal path(s) initially marked as taken (i.e., implicitly granted).
[0039] An algorithm similar to the flow chart in Fig. 5 may be used to allocate the "closed" paths among permanent source/destination pairs in this alternate implementation. In this scheme, some paths may be "reserved" (i.e., not available to be "closed") for later use by the dynamic scheduler. Such a "closed path" algorithm may be run relatively infrequently (e.g., periodically over hours or days) as compared to the processing in Fig. 5 to dynamically allocate paths for grants (e.g., nanoseconds).
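Initializing some connections as pre-closed for static pipelines might look like this (a sketch under the assumed 4-element arrays; the triple format is an illustrative assumption, not the patent's representation):

```python
def init_state(static_paths=()):
    """Open every connection, then pre-close the (src_elem, dst_elem,
    mid_elem) triples reserved for permanent pipeline connections."""
    sp = [[1] * 4 for _ in range(4)]  # source-to-middle links
    dp = [[1] * 4 for _ in range(4)]  # middle-to-destination links
    for src, dst, mid in static_paths:
        sp[src][mid] = 0  # link implicitly granted to the static path
        dp[dst][mid] = 0
    return sp, dp
```

Dynamic grants allocated afterwards can never select a link already marked 0, so they cannot conflict with the implicitly granted static paths.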
[0040] Fig. 6 is an exemplary diagram illustrating several portions of the flow chart in Fig. 5 for the addresses and switch architecture in Fig. 4. At the stage of processing shown, SP array 610 and DP array 620 have already been initialized (e.g., with logical 1's) to indicate that all connections in switch 400 were initially available [act 520]. Further, the path for the first source/destination pair (address 0000) in Fig. 4 has already been assigned at the stage of processing shown in Fig. 6. The logical 0's in the upper-left elements of SP array 610 and DP array 620 indicate that the two top-most paths in Fig. 4 are now closed to other grants. The processing for the second source/destination pair (address 0010) will now be described.
[0041] Switch scheduler 130 performs a logical AND of the first row of SP array 610 (corresponding to the 0th element 210 in first stage 410 that contains the source) and the first row of DP array 620 (corresponding to the 0th element 210 in third stage 430 that contains the destination) [act 530]. This ANDing operation produces a composite row 630 whose elements indicate the number and location of any open paths. The three 1's in row 630 indicate that three open paths remain between the 0th source crossbar switching element in stage 410 and the 0th destination crossbar switching element in stage 430 [act 540]. [0042] Switch scheduler 130 may choose the first logical "1" that it encounters (shown circled) along row 630 as the path for the second source/destination pair (address 0010) [act 550]. Other choices within row 630 are possible (e.g., the second or third logical "1"), but may not be as fast as choosing the first available logical "1" (i.e., path) encountered. After this open path is chosen for the second source/destination pair, a logical "0" is inserted in each of the SP array and the DP array to indicate that this path is no longer available for other grants in the current switch cycle. The resulting SP and DP arrays after assignment of the path for the second grant in Fig. 4 are labeled 610' and 620', respectively. The switch scheduler may append two bits 440 (e.g., 01 for the second input in Fig. 4) to the destination address based on which path is selected.
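The full allocation loop of Fig. 5, including bit appending and retraction, can be sketched end-to-end (illustrative Python under the 16-port, 4x4-element assumptions; a None result stands for a retracted grant):

```python
def schedule(grants, n_elems=4):
    """Assign middle-stage elements to (src_elem, dst_elem, dst_addr)
    grants, prepending two path bits; retract a grant when every path
    between its pair of elements is already taken."""
    sp = [[1] * n_elems for _ in range(n_elems)]
    dp = [[1] * n_elems for _ in range(n_elems)]
    out = []
    for src, dst, addr in grants:
        for mid in range(n_elems):
            if sp[src][mid] and dp[dst][mid]:  # open path found
                sp[src][mid] = dp[dst][mid] = 0  # mark path as taken
                out.append(format(mid, "02b") + addr)  # prepend two bits
                break
        else:
            out.append(None)  # no unblocked path: grant retracted
    return out

# The two Fig. 4 grants (both via source element 0 and destination
# element 0) receive middle elements 0 and 1, yielding full addresses
# 000000 and 010010.
```

Only a fifth grant between the same pair of elements would be retracted, illustrating why retraction typically costs only a small fraction of throughput.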
EXEMPLARY SCHEDULER CONFIGURATION
[0043] The path assignment processing described above may be executed by, for example, a processor in the switch scheduler 130 in Order(N²) time, where N is the number of switch inputs and outputs. Order(N²) time is an improvement over an algorithm that finds an optimum set of paths, which is NP-complete. However, if additional hardware is used in switch scheduler 130 (e.g., Order(N) hardware elements), the path assignment processing may be executed in Order(N) time. Those skilled in the art will appreciate how to implement the path assignment processing in, for example, a processor and memory from Fig. 5 and its associated explanation. The following example illustrates one implementation consistent with the present invention regarding how to implement the path assignment processing using additional hardware.
[0044] Fig. 7 illustrates one exemplary processing element 700 for implementing the above-described path assignment processing in an implementation consistent with the present invention. Exemplary processing element 700 may include an SP register 710, a DP register 720, a P register 730, and three AND gates 740-760. The total hardware may include N processing elements 700, where N is the number of switch inputs. The hardware also may include a processor to control and facilitate data transfer among the processing elements 700. [0045] SP register 710 may be configured to store a state (i.e., open or taken) of a connection between a source crossbar switching element 210 in first stage 410 and a crossbar switching element 210 in second stage 420. Similarly, DP register 720 may be configured to store a state (i.e., open or taken) of a connection between a destination crossbar switching element 210 in third stage 430 and a crossbar switching element 210 in second stage 420. N pairs of SP register 710 and DP register 720 may constitute SP array 610 and DP array 620 in Fig. 6.
[0046] P register 730 may be configured to receive and store a request for the path represented by the contents of SP register 710 and DP register 720. AND gates 740-760 are configured to perform a logical AND operation on their respective two inputs. AND gate 740 is configured to AND the contents of SP register 710 and DP register 720 to determine if a complete path is open. If AND gate 740 produces a logical "1," the path is open, while if it produces a "0," at least a portion of the path is already taken.
[0047] AND gate 750 is configured to AND the contents of P register 730 and the output of AND gate 740 to determine if a complete path is open and has been requested. If AND gate 750 produces a logical "1," the path is open and has been requested by P register 730. If AND gate 750 produces a "0," either a path was not open, or it was not requested, or both. If AND gate 750 produces a "1," a source/destination pair is assigned to that path, and a signal is sent to an AX register (not shown) containing the additional two address bits for the middle switch column. The "1" output by AND gate 750 also resets SP register 710 and DP register 720 to "0" for the remainder of the switch cycle, indicating that the path represented by these registers is taken.
[0048] AND gate 760 is configured to either pass on or prevent the contents of P register 730 from passing to an adjacent stage. If AND gate 740 produces a "1," the inverting input on AND gate 760 will prevent the contents of P register 730 from being passed on to the next stage. Such a state signifies that the requested path has already been assigned, and the request in P register 730 need not be passed on. Conversely, if AND gate 740 produces a "0," the inverting input on AND gate 760 will pass any logic "1" in P register 730 to the next stage. Although AND gates 740-760 are shown, those skilled in the art will appreciate that other gates and/or logic conventions may be utilized to perform the above-described functions.
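The three-gate cell can be modeled bit-wise as follows (a hedged Python sketch of Fig. 7's combinational behavior; register updates are returned rather than stored):

```python
def processing_element(sp, dp, p):
    """One Fig. 7 cell: gate 740 tests whether the path is open, gate
    750 claims it if requested (clearing SP/DP for the cycle), and gate
    760 forwards an unserved request to the next element."""
    path_open = sp & dp            # AND gate 740
    assigned = p & path_open       # AND gate 750
    pass_on = p & (path_open ^ 1)  # AND gate 760 (inverting input)
    if assigned:
        sp = dp = 0                # path taken for rest of the cycle
    return assigned, pass_on, sp, dp
```

A requested open path is assigned and consumed; a requested but taken path forwards the request bit onward, matching the propagation described for the row of elements.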
[0049] Although Fig. 7 does not show elements for initializing DP and SP registers 710 and 720, such initialization may be performed at the start of the switch cycle by loading each under the control of a processor (not shown). Normally, DP and SP registers 710 and 720 are set to '1', but support of static paths requires that the processor be able to set the registers to '0', thus reserving at least some of the interior connections for static paths.
[0050] The operation of a group of N processing elements will now be briefly described. For each row or "stage" (e.g., square root of N) of elements 700, the bit in P register 730 is propagated down the row to determine which path(s) are taken between one pair of crossbar switching elements. After all the P register bits have propagated, the bits in SP registers 710 are row-wise shifted to the next group of elements 700. In this manner, each set of SP bits may be compared with each group of DP bits. Such shifting of the SP bits may take Order(N) time, as described above. [0051] The foregoing description of preferred embodiments of the present invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. As used herein, the article "a" is intended to include one or more items. Where only one item is intended, the term "one" or similar restrictive language is used.
[0052] It will be apparent to those skilled in the art that various modifications and variations can be made in the switch scheduler and scheduling method of the present invention without departing from the spirit or scope of the invention. For example, while a series of acts has been described with regard to Fig. 5, the order of the acts may be modified in other implementations consistent with the present invention. Also, although the scheduler and switch have been described with 16 inputs and outputs, they may have a smaller or larger number (e.g., 128, 256, 512, 1024, 2048, etc.) in other implementations consistent with the present invention. Further, the present invention, although described as being implemented via hardware, may also be implemented by a computer program embodied in a computer-readable medium, such as magnetic or optical discs, random access memory, or any other type of electronic or optical storage. Such a computer program, while not necessarily suitable for high-speed routers, may nonetheless assign internal paths within banyan switches, thereby preventing blocking internal to the switches. Thus, it is intended that the present invention cover the modifications and variations of the invention provided that they come within the scope of the claims and their equivalents.

Claims

WHAT IS CLAIMED IS: 1. A method of allocating internal paths in a multi-stage switch having a plurality of source ports and a like plurality of destination ports for a switch cycle, said method comprising: receiving information identifying one or more of said source ports with one or more of said destination ports respectively as a like plurality of respective pairs of source and destination ports; identifying open paths between said one of said source ports and said one of said destination ports within one of said pairs of source and destination ports; associating an open path with said one of said pairs if any path between said one of said source ports and said one of said destination ports is identified as being open; and designating said open path as closed.
2. The method of claim 1, further comprising: deleting the pair of source and destination ports from the information if no paths between the source port and the destination port are identified as being open.
3. The method of claim 1, further comprising: initializing all internal connections in the switch as being open.
4. The method of claim 1, wherein the identifying includes: performing a logical AND operation on a set of all connections between the source port and an intermediate layer and a set of all connections between the destination port and the intermediate layer to produce a result set.
5. The method of claim 4, wherein the identifying includes: designating a path as open if an element in the result set has an open state.
6. The method of claim 1, wherein the associating includes: choosing a first open path that is encountered to be associated with the source port and the destination port.
7. The method of claim 1, wherein the associating includes: appending information that designates the open path to an address identifying the destination port.
8. The method of claim 1, further comprising: repeating said identifying, associating, and designating if another pair of source and destination ports does not have a path associated therewith.
9. A switching system, comprising: a three-stage banyan switch having a number of inputs and outputs; and a switch scheduler configured to allocate internal paths in the switch among a plurality of source and destination port pairs to avoid path blocking within the switch and configured to append one or more address bits to addresses identifying destination ports within the plurality of source and destination port pairs in accordance with the allocation of the internal paths, the switch scheduler also being configured to remove a source and destination port pair from the plurality of source and destination port pairs if all paths between the pair are blocked.
10. The switching system of claim 9, wherein the one or more address bits correspond to an output of a first stage of the banyan switch.
11. The switching system of claim 9, wherein the scheduler includes: a plurality of logical elements, each of the logical elements including: two registers respectively corresponding to a source connection and a destination connection within the switch.
12. The switching system of claim 11, wherein each of the logical elements includes: a third register configured to indicate whether the source port and a destination port associated with the two registers are among the plurality of source and destination port pairs to be allocated an address.
13. A computer program product stored on a computer-readable medium and including instructions executable by at least one processor, comprising: a first program segment to receive a plurality of source/destination pairs for a switch; a second program segment to identify all available paths between one source/destination pair of said plurality of source/destination pairs; a third program segment to allocate an available path to the one source/destination pair and to mark the available path as unavailable to other ones of the plurality of source/destination pairs; and a fourth program segment to transmit information relating to the available path and the one source/destination pair.
14. The computer program product of claim 13, wherein the second program segment is configured to perform a logical AND on a first row in an array that represents internal connections between a source layer and an internal layer of a switch and a second row in another array that represents internal connections between a destination layer and the internal layer of a switch.
15. The computer program product of claim 14, wherein the third program segment is configured to append information relating to the available path to an address identifying a destination port and to mark elements in the first and second rows of the respective arrays as unavailable.
16. An apparatus for allocating internal paths in a multi-stage switch having a plurality of source ports and a like plurality of destination ports for a switch cycle, said apparatus comprising: means for receiving information identifying one or more of said source ports with one or more of said destination ports respectively as a like plurality of respective pairs of source and destination ports; means for identifying open paths between said one of said source ports and said one of said destination ports within one of said pairs of source and destination ports; means for associating an open path with said one of said pairs if any path between said one of said source ports and said one of said destination ports is identified as being open; and means for designating said open path as closed.
17. A switching system, comprising: a first stage of crossbar elements configured to receive address information and data; a second stage of crossbar elements, the second stage coupled to the first stage; a third stage of crossbar elements, the third stage coupled to the second stage, the third stage outputting the data; and a scheduler configured to: grant a plurality of requests for switching related service, identify an open path for a first granted request between the first stage and the third stage, and allocate the open path to the first granted request, wherein the data is routed from the first stage to the second stage to the third stage based on the allocated open path.
18. The switching system of claim 17, wherein the scheduler is further configured to: append bits to the address information, the appended bits identifying an output port of a crossbar element in the first stage.
19. A method of allocating internal paths in a multi-stage switch, comprising:
receiving information identifying one or more pairs of source and destination ports;
identifying open paths between a pair of the one or more pairs of source and destination ports;
associating an open path with the pair of source and destination ports if any path between a source port and a destination port of the pair is identified as being open; and
designating the open path associated with the pair of source and destination ports as closed.
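The method of claim 19, run once per switch cycle, can be sketched as a loop over the granted (source, destination) pairs. All names, and the greedy first-fit iteration order, are assumptions for illustration; the claim does not mandate a particular order.

```python
def schedule_cycle(pairs, num_middle):
    """Greedy first-fit sketch of the claim-19 method for one switch cycle.

    pairs      - list of (source_element, destination_element) tuples
    num_middle - number of middle-stage crossbar elements
    Returns {pair_index: middle_element} for every pair that received a path.
    """
    n_src = 1 + max(s for s, _ in pairs)
    n_dst = 1 + max(d for _, d in pairs)
    # All internal links start the cycle open
    src_avail = [[True] * num_middle for _ in range(n_src)]
    dst_avail = [[True] * num_middle for _ in range(n_dst)]
    granted = {}
    for i, (s, d) in enumerate(pairs):
        # Identify an open path: a middle element free on both hops
        for m in range(num_middle):
            if src_avail[s][m] and dst_avail[d][m]:
                granted[i] = m           # associate the open path with this pair
                src_avail[s][m] = False  # ...and designate it closed
                dst_avail[d][m] = False
                break
    return granted
```

A pair left out of the returned mapping found no open path this cycle and must wait for a later cycle, which is why the scheduler is "near-non-blocking" rather than strictly non-blocking.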
PCT/US2002/031338 2001-10-05 2002-10-01 Near-non-blocking switch scheduler for three-stage banyan switches WO2003032554A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2002327813A AU2002327813A1 (en) 2001-10-05 2002-10-01 Near-non-blocking switch scheduler for three-stage banyan switches

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US97150001A 2001-10-05 2001-10-05
US09/971,500 2001-10-05

Publications (2)

Publication Number Publication Date
WO2003032554A2 true WO2003032554A2 (en) 2003-04-17
WO2003032554A3 WO2003032554A3 (en) 2003-10-16

Family

ID=25518470

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/031338 WO2003032554A2 (en) 2001-10-05 2002-10-01 Near-non-blocking switch scheduler for three-stage banyan switches

Country Status (2)

Country Link
AU (1) AU2002327813A1 (en)
WO (1) WO2003032554A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1529458B (en) * 2003-09-26 2010-04-14 中兴通讯股份有限公司 High-capacity non-blocking switching method in a program-controlled switching network
CN101299685B (en) * 2008-03-18 2010-12-15 华为技术有限公司 Method and system for testing switching network as well as test initiation module
US9681951B2 (en) 2013-03-14 2017-06-20 Edwards Lifesciences Cardiaq Llc Prosthesis with outer skirt and anchors

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5949789A (en) * 1996-11-21 1999-09-07 Xerox Corporation Arbitration ring for accessing a limited bandwidth switching network
US5951649A (en) * 1994-03-22 1999-09-14 Cabletron Systems, Inc. Network interconnecting apparatus having a separate forwarding engine object at each interface


Also Published As

Publication number Publication date
AU2002327813A1 (en) 2003-04-22
WO2003032554A3 (en) 2003-10-16

Similar Documents

Publication Publication Date Title
US11003604B2 (en) Procedures for improving efficiency of an interconnect fabric on a system on chip
KR100356447B1 (en) Memory interface unit, shared memory switch system and associated method
EP0198010B1 (en) Packet switched multiport memory nxm switch node and processing method
US5426639A (en) Multiple virtual FIFO arrangement
US7450583B2 (en) Device to receive, buffer, and transmit packets of data in a packet switching network
US7324509B2 (en) Efficient optimization algorithm in memory utilization for network applications
EP0848891B1 (en) Switching device, method and apparatus
EP1759559B1 (en) Data processing system and method for time slot allocation
JPH08214000A (en) Method and equipment for multicasting in atm network
JP2002049582A (en) Reconstructible first-in/first-out mechanism
US5590123A (en) Device and method for use of a reservation ring to compute crossbar set-up parameters in an ATM switch
US6728256B1 (en) Shared buffer control device
KR100321784B1 (en) Distributed type input buffer switch system having arbitration latency tolerance and method for processing input data using the same
US10491543B1 (en) Shared memory switch fabric system and method
US6310875B1 (en) Method and apparatus for port memory multicast common memory switches
US6684317B2 (en) Method of addressing sequential data packets from a plurality of input data line cards for shared memory storage and the like, and novel address generator therefor
US5857111A (en) Return address adding mechanism for use in parallel processing system
WO2003032554A2 (en) Near-non-blocking switch scheduler for three-stage banyan switches
CA2152637A1 (en) Network for Transferring Consecutive Packets Between Processor and Memory with a Reduced Blocking Time
JP3133234B2 (en) ATM switch address generation circuit
CA2011399C (en) Routing apparatus and method for high-speed mesh connected local area network
US6731636B1 (en) Scheduler using small sized shuffle pattern in ATM network
US20030110305A1 (en) Systematic memory location selection in ethernet switches
US20240111704A1 (en) Noc buffer management for virtual channels
EP0755139A2 (en) ATM switch address generating circuit

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BY BZ CA CH CN CO CR CU CZ DE DM DZ EC EE ES FI GB GD GE GH HR HU ID IL IN IS JP KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NO NZ OM PH PL PT RU SD SE SG SI SK SL TJ TM TN TR TZ UA UG UZ VN YU ZA ZM

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ UG ZM ZW AM AZ BY KG KZ RU TJ TM AT BE BG CH CY CZ DK EE ES FI FR GB GR IE IT LU MC PT SE SK TR BF BJ CF CG CI GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP