CA2369178A1 - Distributed crossbar switching fabric architecture - Google Patents

Distributed crossbar switching fabric architecture

Info

Publication number
CA2369178A1
Authority
CA
Canada
Prior art keywords
crossbar
data
processing unit
sending
switching fabric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA 2369178
Other languages
French (fr)
Inventor
Mohamed Samy Hosny
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CA 2369178
Publication of CA2369178A1
Legal status: Abandoned

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/10: Packet switching elements characterised by the switching fabric construction
    • H04L 49/101: Packet switching elements characterised by the switching fabric construction using crossbar or matrix
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/30: Peripheral units, e.g. input or output ports
    • H04L 49/3027: Output queuing

Abstract

A switching fabric system for routing data is provided. The switching fabric system comprises an input port for receiving data, a crossbar ingress processing unit having a receiving end and a sending end, an interconnecting crossbar having a first end and a second end, a crossbar egress processing unit having a receiving end and a sending end, and an output port for sending the data out of the switching fabric system. The crossbar ingress processing unit receives data from the input port and sends data to the interconnecting crossbar. The crossbar egress processing unit receives data from the interconnecting crossbar, stores data, and sends data to the output port. The switching fabric system may be provided without either the ingress or the egress processing unit, in which case the interconnecting crossbar is connected directly to the corresponding input or output port.

Description

Distributed Crossbar Switching Fabric Architecture
FIELD OF THE INVENTION
The present invention relates generally to switching fabrics, and in particular, to an architecture for a crossbar switching fabric device. This invention applies to switching fabrics in routers, switches, SONET cross-connects, SONET add/drop multiplexers, or any other apparatus that uses crossbar switching fabrics.
BACKGROUND OF THE INVENTION
A router is used for switching data packets between a source and a destination in a network, and includes a plurality of ports and a switching fabric device. The switching fabric device receives data packets from the input of one port and routes them to the appropriate output port.
In packet switch communication systems, a router is a switching device which receives packets containing data or control information on one port and, based on the destination information contained within the packet, routes the packet out another port to the destination. This process is typically done at three levels. The first is the Physical layer device (PHY), which extracts the packet from the physical media.
The second is the Network Processor (NP), which extracts the address from the packet and translates this address to an actual port number. Finally, the Switching Fabric (SF) device routes the packet to its destination.
The data packet enters the router from the ingress side, first into the Physical layer device (PHY) and then into the Network Processor (NP). Then the data packet enters the switch fabric. The data packet exits the switch fabric to the egress side of the router and then exits the router. It is the responsibility of the switch fabric (SF) to route the data packet to its appropriate output port once the data packet departs the Network Processor (NP) at the input ports. A conflict might occur if two data packets from different input ports are destined to the same output port at the same time, which may lead to dropping or losing one of them. To overcome this problem, designers added memory buffering with special scheduling mechanisms to store data packets temporarily, then service these packets based on the Quality of Service (QoS) for each packet. Current generation switch fabrics are classified into two main architectures, input buffering and output buffering, depending on the location of these memory buffers.
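As an illustration only, the contention scenario above can be sketched in a few lines of Python (not part of the patent; the four-port fabric, per-output FIFO policy, and all names below are assumptions of this sketch):

from collections import deque

# One FIFO per output port: packets are buffered rather than dropped
# when two inputs target the same output in the same cycle.
output_buffers = {port: deque() for port in range(4)}

def switch_cycle(arrivals):
    """arrivals: list of (input_port, output_port, payload) for one cycle."""
    for _, out_port, payload in arrivals:
        output_buffers[out_port].append(payload)
    # Each output port can transmit at most one packet per cycle.
    for port, fifo in output_buffers.items():
        if fifo:
            print(f"output {port} sends {fifo.popleft()!r}")

# Two inputs target output 2 in the same cycle; the second packet waits.
switch_cycle([(0, 2, "pkt-A"), (1, 2, "pkt-B")])
switch_cycle([])  # pkt-B drains on the next cycle

A real fabric would service the buffered packets according to each packet's QoS rather than strict FIFO order, as noted above.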
Figure 1 shows an unfolded overview of a conventional router structure 100 with output buffering, i.e., an overview of an output buffered router. The conventional router structure 100 includes ingress components 101, a switch fabric 102, and egress components 103. The ingress components 101 include a physical device (PHY) 104 and a network processor 105. The switch fabric 102 includes ports 106, a shared output memory buffer 107, a scheduler 108, and a memory manager 109. The egress components include a network processor 105 and a physical device 104.
For better memory utilization, an output buffering architecture 100 typically employs a shared memory structure 107 where a global memory holds the packets moving into and out of switch fabric 102. The bandwidth required inside the fabric is proportional to both the number of ports 106 and the line rate. This internal speed-up factor is inherent to shared memory structures 107, and is the main reason output buffered switches are becoming increasingly difficult to implement. In addition to memory bandwidth limitations, scheduling the data packets becomes more complex as the number of ports grows, the memory buffer becomes larger and harder to manage, and the whole switch becomes more expensive.
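A back-of-the-envelope sketch of this internal speed-up factor (illustrative figures only; the patent does not give numbers):

def shared_memory_bandwidth_gbps(n_ports: int, line_rate_gbps: float) -> float:
    # Every packet is written into and read out of the single shared memory,
    # so the memory must sustain roughly 2 * N * line_rate.
    return 2 * n_ports * line_rate_gbps

for n in (16, 32, 64):
    print(f"{n} ports at 10 Gb/s -> shared memory needs "
          f"{shared_memory_bandwidth_gbps(n, 10.0):.0f} Gb/s")

Doubling the port count doubles the required memory bandwidth, which is the scaling wall described above.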
To avoid these limitations of output buffered switches, some system architects turn to the input buffering model. Figure 2 shows an unfolded overview of a conventional router structure with input buffering 150. As shown in Figure 2, the network processors 105 on the ingress components 101 contain memory buffers 151.
The switch fabric 152 provides a transport 159, typically in a crossbar structure, between the ingress 101 network processors and egress 103 network processors, hence eliminating the need for each egress Network Processor (NP) to gain access to any resources shared between outputs. The router scalability is therefore improved compared to output buffering. The crossbar fabric 152 includes a scheduler 158 that monitors the state of the input queues and ensures each packet is serviced appropriately.
Although input buffered switch fabrics 150 are more scalable than output buffered switch fabrics 100, they suffer from performance issues relating to head-of-line blocking. Head-of-line blocking is a phenomenon that causes one packet to block other packets from reaching their output destinations at the appropriate time, either because of the size of this packet or because of a malfunction in the network that causes one router to flood one address location in the network. The effect of this phenomenon can be reduced using techniques such as Virtual Output Queuing (VOQ), which uses N separate queues at each input port (N being an integer greater than 0). Each queue holds packets destined for one of the output ports.
However, VOQ suffers from a scalability problem, since the number of output ports is constrained by the number of VOQs on the ingress side. Also, the complexity of the switch fabric scheduler grows as the number of ports increases.
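A minimal sketch of the VOQ structure just described (illustrative only; the patent does not prescribe an implementation): each of the N input ports keeps N queues, one per output port, so a packet bound for a congested output no longer blocks packets bound elsewhere.

from collections import deque

N = 4  # number of ports

# voq[i][j] holds packets arriving on input i that are destined to output j
voq = [[deque() for _ in range(N)] for _ in range(N)]

def enqueue(input_port: int, output_port: int, packet) -> None:
    voq[input_port][output_port].append(packet)

enqueue(0, 3, "pkt-X")
enqueue(0, 1, "pkt-Y")  # not blocked behind pkt-X: it sits in its own queue
print(N * N)            # N*N queues in total, the scalability cost noted above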
SUMMARY OF THE INVENTION
This invention overcomes the scalability problem associated with switch fabrics in general, and simplifies the processing tasks associated with them. A key element in this invention is slicing an NxN crossbar switch in a way such that it is divided into N slices, preferably identical (N being an integer greater than 0), based around the individual input and output ports. This architecture creates independent processing engines for each port, hence making the tasks of processing ingress and egress data easier. In addition, the invention adds a capability of scaling the switch fabric as the number of ports increases.
In one embodiment of the present invention, a switching fabric system for routing data is provided. The switching fabric system comprises an input port for receiving data into the switching fabric system, a crossbar ingress processing unit having a receiving end and a sending end, an interconnecting crossbar having a first end and a second end, a crossbar egress processing unit comprising a receiving end and a sending end, and an output port connected to the sending end of the crossbar egress processing unit for sending the data out of the switching fabric system. The receiving end of the crossbar ingress processing unit is attached to the input port. The crossbar ingress processing unit receives data from the input port and sends data out of its sending end. The interconnecting crossbar is attached at its first end to the sending end of the crossbar ingress processing unit for receiving the data.
The interconnecting crossbar allows the data to travel from its first end to its second end.
The receiving end of the crossbar egress processing unit is attached to the second end of the interconnecting crossbar. The crossbar egress processing unit stores data and sends data out its sending end.
In another embodiment of the present invention, a switching fabric system for routing data is provided. The switching fabric system comprises an input port for receiving data into the switching fabric system, a crossbar ingress processing unit having a receiving end and a sending end, an interconnecting crossbar having a first end and a second end, and an output port connected to the second end of the interconnecting crossbar for sending the data out of the switching fabric system. The receiving end of the crossbar ingress processing unit is attached to the input port. The crossbar ingress processing unit receives data from the input port and sends data out of its sending end. The interconnecting crossbar is attached at its first end to the sending end of the crossbar ingress processing unit for receiving the data.
The interconnecting crossbar allows the data to travel from its first end to its second end.
In another embodiment of the present invention, a switching fabric system for routing data is provided. The switching fabric system comprises an input port for receiving data into the switching fabric system, an interconnecting crossbar having a first end and a second end, a crossbar egress processing unit having a receiving end and a sending end, and an output port connected to the sending end of the crossbar egress processing unit for sending the data out of the switching fabric system. The interconnecting crossbar is attached at its first end to the input port for receiving the data. The interconnecting crossbar allows the data to travel from its first end to its second end. The receiving end of the crossbar egress processing unit is attached to the second end of the interconnecting crossbar. The crossbar egress processing unit stores data and sends data out its sending end.
In another embodiment of the present invention, a crossbar ingress processing unit for receiving and sending data is provided. The crossbar ingress processing unit comprises a receiving end for receiving data into the crossbar ingress processing unit, a sending end for sending the data out from the crossbar ingress processing unit, and a crossbar path having a first end attached to the receiving end and a second end attached to the sending end. The crossbar path allows data to travel from the receiving end to the sending end. The crossbar ingress processing unit may further comprise a channel selector for selecting a path for the data to travel.
In another embodiment of the present invention, an egress processing unit for storing and sending data is provided. The crossbar egress processing unit comprises a receiving end for receiving data into the crossbar egress processing unit, a sending end for sending the data out from the crossbar egress processing unit, and a crossbar path having a first end attached to the receiving end and a second end attached to the sending end. The crossbar path allows the data to travel from the receiving end to the sending end. The egress processing unit may further comprise a memory buffer attached to the crossbar path between the receiving end and the sending end, the memory buffer for receiving the data from the receiving end and for storing the data.
The egress processing unit may further comprise a data packet scheduler for sending data packets from the memory buffer to the sending end of the output memory buffer unit.
In another embodiment of the present invention, a method for providing a switching fabric system with a mechanism for routing data is provided. The method comprises steps of providing an input port for receiving the data into the switching fabric system, providing a crossbar ingress processing unit attached at one end to the ingress input port, providing an interconnecting crossbar attached at a first end to the crossbar ingress processing unit, providing a crossbar egress processing unit attached at a receiving end to the second end of the interconnecting crossbar, and providing an output port for sending the data out of the switching fabric system. The interconnecting crossbar allows data to travel from the ingress input port to a second end of the interconnecting crossbar. The crossbar egress processing unit stores data and sends data out its sending end.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will be further understood from the following description with reference to the drawings in which:
Figure 1 shows an unfolded overview of a conventional router structure with output buffering;
Figure 2 shows an unfolded overview of a conventional router structure with input buffering;
Figure 3 shows a 1x1 output buffered switch fabric system according to an example of an embodiment of the present invention;
Figure 4 shows an NxN distributed output buffered switch fabric system according to an example of an embodiment of the present invention;
Figure 5 shows a crossbar structure superimposed on a shared memory switch;
Figure 6 shows N 1xK crossbar ingress processing units and N Kx1 crossbar egress processing units functionally equivalent to an NxN crossbar fabric;
Figure 7 shows an unfolded router architecture using 2-stage crossbar ingress processing units;
Figure 8 shows an unfolded router architecture using 2-stage crossbar egress processing units;
Figure 9 shows an unfolded router architecture using 2-stage crossbar ingress and 2-stage crossbar egress processing units;
Figure 10 shows an exploded view of 2-stage crossbar ingress processing units;
Figure 11 shows an exploded view of 2-stage crossbar egress processing units;
Figure 12 shows an unfolded router/switch architecture using only crossbar egress processing units; and
Figure 13 shows an overview of an unfolded router/switch architecture using both crossbar egress and ingress processing units.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Figure 3 shows an example of a one-dimensional switching fabric architecture 200 in accordance with an embodiment of the present invention. The switching fabric architecture 200 comprises an input port 201, a crossbar ingress processing unit 202, an interconnecting crossbar 203, a crossbar egress processing unit 204, and an output port 205. The ingress processing unit 202 may contain a crossbar path 207 and a channel selector 206 to select the path(s) toward the egress port to which the data is destined. The interconnecting crossbar 203 may contain a crossbar path 207. The egress processing unit 204 may contain a crossbar path 207 and, in the case of a router or switch application, a memory buffer 208 and a scheduling unit 209. In alternative embodiments, the switch fabric architecture 200 may be produced without either the crossbar ingress processing unit 202 or the crossbar egress processing unit 204.
In this one-dimensional example, data enters the input port 201 from a sending network processor (not shown) to the crossbar ingress processing unit 202, where the channel(s) that the data will travel through within the interconnecting crossbar are selected by the channel selector 206. The channel selector 206 is a switching system that maps the port address in the packet header into a physical path within the fabric's interconnecting crossbar. The data is then directed by the interconnecting crossbar 203 to the crossbar egress processing unit 204. The data packet is stored in the memory buffer 208, in the case of a router application, and sent to the output port 205 via the crossbar path 207 in a timely manner, controlled by a scheduler 209.
The data packet then enters a receiving network processor (not shown).
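The channel selector's mapping can be sketched as a simple lookup (an assumption of this illustration; the patent does not specify the table layout or header fields):

# Destination port address (from the packet header) -> physical crossbar path.
CROSSBAR_PATHS = {0: "path-0", 1: "path-1", 2: "path-2", 3: "path-3"}

def select_channel(packet_header: dict) -> str:
    # The selector does the mapping; the interconnecting crossbar itself
    # remains a passive path from its first end to its second end.
    return CROSSBAR_PATHS[packet_header["dest_port"]]

print(select_channel({"dest_port": 2}))  # -> "path-2"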
Figure 4 shows an example of a multiple dimensional switching fabric architecture 300 in accordance with an embodiment of the present invention.
The switching fabric architecture 300 comprises a series of ingress input ports 301, a series of distributed crossbar ingress processing units 302, an interconnecting crossbar 303, a series of distributed crossbar egress processing units 304, and a series of egress output ports 305. The ingress processing units 302 may contain crossbar paths and a channel selector 306. The interconnecting crossbar 303 may contain crossbar paths 307. The crossbar egress processing units 304 may contain crossbar paths 307 and, in the case of a router or switch application, a memory buffer 308 and a packet scheduling unit 309.
In this multi-dimensional example, data enters an ingress input port 301 from a sending physical layer device and network processor (not shown). There may be N
ingress input ports 301, where N is an integer greater than 0. The data is sent to an ingress processing unit 302, where the channel selector 306 selects the path for the data to proceed through the interconnecting crossbar 303 to the designated egress processing unit 304. The channel selector is a simple switching system that maps the port address in the packet header into a physical path within the fabric's interconnecting crossbar. Again, there may be N egress processing units 304, where N is an integer greater than 0. The data is stored in the memory buffer 308. There may be many data packets in the memory buffer 308. The data is sent to the egress output port 305 via the crossbar path 307 in a timely manner based on the QoS of the data packet.
The scheduling of the packets based on QoS is performed by the scheduler 309. The data packet then enters a receiving network processor and physical layer device (not shown). In alternative embodiments, the switch fabric architecture 300 may be produced without either the crossbar ingress processing unit 302 or the crossbar egress processing unit 304.
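One plausible policy for the per-port egress scheduler 309 is sketched below (the patent leaves the scheduling discipline open; the priority-queue structure and class numbering are assumptions of this sketch):

import heapq

class EgressScheduler:
    """Releases buffered packets strictly by QoS class, FIFO within a class."""
    def __init__(self):
        self._heap = []   # entries: (qos_class, arrival_seq, packet)
        self._seq = 0

    def store(self, packet, qos_class: int) -> None:
        heapq.heappush(self._heap, (qos_class, self._seq, packet))
        self._seq += 1    # preserves arrival order within a QoS class

    def send_next(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = EgressScheduler()
sched.store("best-effort packet", qos_class=3)
sched.store("voice packet", qos_class=0)
print(sched.send_next())  # the voice packet leaves first despite arriving later

Because each output port owns its scheduler and buffer, this logic never contends with other ports, in contrast to the global scheduler 108 of Figure 1.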
The switch fabric architecture 300 will be further described using Figures 5 to 10 for a router or switch application. The main function of a switch fabric is to transfer data packets from the ingress network processor (NP) to the egress NP with the smallest latency. By applying this concept to the shared memory fabric 500, i.e., input ports 501 fanning out to output ports 505, an imaginary crossbar structure 502 is superimposed on the shared output buffered switch fabric 500, which includes a shared memory buffer 107, a global scheduler 108, and a global channel selector, as shown in Figure 5.
In this example, the preferred situation of all input ports 501 fanning out into all output ports 505 is shown. Now consider slicing the NxN crossbar 502, with both the shared memory 107 and the global packet scheduler 108, into a preferably identical output set of cones 601, based around the output ports, and an input set of cones 602, based around the input ports, as shown in Figure 6. Each cone in the 601 set includes individual output memory buffers 308, output packet schedulers (or schedulers) 309, and K input ports (or crossbar paths) 307 that can be ported on separate devices that will be referred to as crossbar egress processing units (or output memory buffers and schedulers) 304. Similarly, each cone in the 602 set includes individual processing units 302 for ingress ports that may contain a channel selector 306 that selects which path(s) the data will travel through in the crossbar switch. The channel selector 306 is a switching system that maps the port address in the packet header into a physical path within the fabric's interconnecting crossbar 502.
A collective set of crossbar egress processing units 304 and ingress processing units 302 is functionally equivalent to an NxN Switch Fabric. This leads to a favorable situation, where each cone in the 601 set may be separated into a Kx1 individual crossbar egress processing device 304 that may be ported on each port 305. Similarly, each cone in the 602 set can be separated into a 1xK individual crossbar ingress processing unit 302 that may be ported on each port 301, hence achieving a highly scalable switching fabric architecture with distributed crossbar processing units. A Kx1 crossbar egress processing device 304 indicates that K ingress input ports 301 (or K crossbar ingress processing units 302), where K is an integer greater than 0, coming into the crossbar egress processing device 304 fan out to, preferably, one egress port 305 out of the same device. Similarly, a 1xK ingress port 301 (or crossbar ingress device 302) indicates that, preferably, one ingress port 301 coming into the same device fans out to K egress ports 304.
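The functional equivalence claimed here can be sketched by composing the per-port units (an illustrative decomposition, not the patent's implementation; all names are assumptions):

# N ports; here K >= N, so one device per port (situations 1 and 2 below).
N = 4

def ingress_unit(packet, dest_port: int):
    # A 1xK crossbar ingress device 302: one input port fanning out to K
    # crossbar paths; it simply tags the packet with the selected path.
    return (dest_port, packet)

# N Kx1 crossbar egress devices 304, one ported on each output port:
egress_units = [[] for _ in range(N)]

def crossbar_deliver(selected):
    # The interconnecting crossbar is passive: K paths fan in to one output.
    dest_port, packet = selected
    egress_units[dest_port].append(packet)

# Together, N ingress units and N egress units act as one NxN fabric.
crossbar_deliver(ingress_unit("pkt", dest_port=3))
print(egress_units[3])  # ['pkt']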
Three situations may arise for K. If K is greater than (situation 1) or equal to (situation 2) the number of ports N in the switch fabric 300, then there will be one device 302 and/or 304 required per port to build an NxN switch fabric 300. However, if K is less than (situation 3) the number of ports N in the switch fabric 300, a number of devices 302 and/or 304 may be connected together, as shown in Figures 7 and 8, in a 2-stage configuration on each port.
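The device count per port implied by these three situations can be estimated as follows (my arithmetic, assuming one first-stage unit with K paths feeds ceil(N/K) second-stage units, as in Figures 7 and 8; the patent states only the 2-stage case explicitly):

import math

def devices_per_port(n_ports: int, k_paths: int) -> int:
    if k_paths >= n_ports:
        return 1  # situations 1 and 2: one device per port suffices
    # Situation 3: one first-stage unit plus ceil(N/K) second-stage units.
    return 1 + math.ceil(n_ports / k_paths)

print(devices_per_port(16, 16))  # 1
print(devices_per_port(24, 16))  # 3: units 1, 1a and 1b, as in Figures 7 and 8
print(devices_per_port(40, 16))  # 4: beyond 2K ports, 3+ second-stage units are coupled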
Figure 7 shows another example of an embodiment of a multiple dimensional switching fabric architecture 300 with 2-stage crossbar ingress processing units 302. In this example, there are N output ports 305 and K crossbar paths 307 in each crossbar ingress unit 302, where K is an integer greater than 1 and N is an integer greater than K and less than 2K. The crossbar ingress processing units are organized in a manner to handle this situation, where there are more ports 305 than crossbar paths 307.
In this example, the first K output ports 305 use the "a" crossbar ingress processing units 302. The remaining output ports will use the "b" crossbar ingress processing units 302. Each corresponding "a" and "b" crossbar ingress processing unit 302 will receive its inputs from a third crossbar ingress processing unit 302.
Figure 8 shows another example of an embodiment of a multiple dimensional switching fabric architecture 300 with 2-stage crossbar egress processing units 304. In this example, there are N input ports 301 and K crossbar paths 307 in each crossbar egress unit 304, where K is an integer greater than 1 and N is an integer greater than K and less than 2K. The crossbar egress processing units 304 are organized in a manner to handle this situation, where there are more ports 301 than crossbar paths 307.
In this example, the first K input ports 301 use the "a" crossbar egress processing units 304. The remaining input ports will use the "b" crossbar egress processing units 304. Each corresponding "a" and "b" crossbar egress processing unit 304 will have its outputs merged using a third crossbar egress processing unit 304.
Figure 9 shows another example of an embodiment of a multiple dimensional switching fabric architecture 300 with both 2-stage crossbar ingress 302 and egress 304 processing units.
Figure 10 shows an exploded view of the switching fabric architecture 300 shown in Figure 7. Figure 10 outlines the first K output ports 305 connecting to crossbar ingress processing unit 1a and the remaining output ports 305 connecting to crossbar ingress processing unit 1b. Each corresponding "a" and "b" crossbar ingress processing unit 302 will receive its inputs from crossbar ingress processing unit 1. It does not matter which two paths are used. Finally, the data originally came from input port 1.
Similarly, 3 or more crossbar ingress processing units may be coupled and/or nested in a multistage fashion for situations where there are more than 2K ports. In addition to the scalability feature, data packet scheduling and memory management tasks are simpler, since each input port will have its own dedicated resources.
Figure 11 shows an exploded view of the switching fabric architecture 300 shown in Figure 8. Figure 11 outlines the first K input ports 301 connecting to crossbar egress processing unit 1a and the remaining input ports 301 connecting to crossbar egress processing unit 1b. The outputs of the merged crossbar paths are then sent to a crossbar path in crossbar egress processing unit 1. It does not matter which two paths are used. Finally, the data are sent to output port 1.
Similarly, 3 or more crossbar egress processing units may be coupled and/or nested in a multistage fashion for situations where there are more than 2K ports. In addition to the scalability feature, data packet scheduling and memory management tasks are simpler, since each output port will have its own dedicated resources.
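The 2-stage egress arrangement of Figures 8 and 11 can be sketched structurally (illustrative only; the values of K and N and the strict drain order below are assumptions of this sketch):

from collections import deque

K = 4            # crossbar paths per egress device
N = 6            # fabric input ports, with K < N < 2K

stage_a = deque()   # first-stage unit 1a: serves input ports 0 .. K-1
stage_b = deque()   # first-stage unit 1b: serves input ports K .. N-1
merge = deque()     # third egress unit (unit 1) feeding the single output port

def receive(input_port: int, packet) -> None:
    (stage_a if input_port < K else stage_b).append(packet)

def drain_one_cycle() -> None:
    # Either first-stage unit may forward on any free path into the merging
    # unit; as noted above, it does not matter which two paths are used.
    for stage in (stage_a, stage_b):
        if stage:
            merge.append(stage.popleft())
    if merge:
        print("output port 1 sends", merge.popleft())

receive(0, "pkt-from-port-0")
receive(5, "pkt-from-port-5")
drain_one_cycle()
drain_one_cycle()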

Figure 12 shows an unfolded router architecture, in another embodiment of the present invention, with only the crossbar egress processing units 304. In this case, a memory manager 109 is not needed, since there is dedicated memory per port; the data packet memories 107 and schedulers 108 are pushed into the crossbar egress processing units 304 located on the corresponding output ports 305, a key advantage of the architecture, while the actual crossbar is a passive interconnecting device.
Figure 13 shows an unfolded router architecture using the invention in its switching fabric with both the crossbar ingress processing units 302 and crossbar egress processing units 304. In this case, the egress processing units 304 are used the same way as in Figure 12, while the crossbar ingress processing units 302 are used as crossbar channel selectors to select the path(s) the data is going to travel through in the switch fabric to reach its destination(s).
An aspect of this embodiment is slicing the output buffered switch in such a manner that it is divided into, preferably, identical slices based around the output and input ports of a routing device. Slicing any ingress processing functions into identical slices around their input ports is also provided in this embodiment. This architecture (or system) creates independent ingress and egress processing engines. In the case of router applications, memory buffers 308 and schedulers 309 are included (versus shared memory buffers 107 and a single scheduler 108), hence making the tasks of managing the memories and scheduling the data packets much easier. In addition, it adds a capability of scaling the switch fabric as the number of ports increases, since the crossbar is sliced into identical slices that can be ported to the individual ports.
Embodiments of this invention may be applied to switching fabrics in routers, switches, SONET cross-connects, SONET add/drop multiplexers, or any other apparatus that uses crossbar switching fabrics. While specific embodiments of the present invention have been described, various modifications and substitutions may be made to such embodiments. Such modifications and substitutions are within the scope of the present invention, and are intended to be covered by the following claims.

Claims (24)

WHAT IS CLAIMED IS:
1. A switching fabric system for routing data, the switching fabric system comprising:
an input port for receiving data into the switching fabric system;
a crossbar ingress processing unit having a receiving end and a sending end, the receiving end of the crossbar ingress processing unit attached to the input port, the crossbar ingress processing unit for receiving the data from the input port and sending data out of the sending end;
an interconnecting crossbar having a first end and a second end, the interconnecting crossbar attached at the first end to the sending end of the crossbar ingress processing unit for receiving the data, the interconnecting crossbar for allowing the data to travel from the first end to the second end;
a crossbar egress processing unit having a receiving end and a sending end, the receiving end of the crossbar egress processing unit attached to the second end of the interconnecting crossbar, the crossbar egress processing unit for storing the data and for sending the data out the sending end of the crossbar egress processing unit; and an output port connected to the sending end of the crossbar egress processing unit for sending the data out of the switching fabric system.
2. The switching fabric system as claimed in claim 1, wherein the crossbar ingress processing unit further comprises a crossbar path on which the data travels.
3. The switching fabric system as claimed in claim 1, wherein the crossbar ingress processing unit further comprises a channel selector for selecting a path for the data to travel.
4. The switching fabric system as claimed in claim 1, wherein the interconnecting crossbar further comprises a crossbar path on which the data travels.
5. The switching fabric system as claimed in claim 1, wherein the crossbar egress processing unit further comprises a crossbar path on which the data travels.
6. The switching fabric system as claimed in claim 1, wherein the crossbar egress processing unit further comprises a memory buffer for storing the data.
7. The switching fabric system as claimed in claim 6, wherein the crossbar egress processing unit further comprises a scheduler for sending the data from the memory buffer to the output port.
8. The switching fabric system as claimed in claim 7, wherein the scheduler sends the data according to the quality of service (QoS) of data packets of the data.
9. The switching fabric system as claimed in claim 1, wherein multiple data packets are stored in an output memory buffer unit.
10. The switching fabric system as claimed in claim 9, wherein the multiple data packets stored in an output memory buffer unit are sent to the egress output port based on the quality of service of the data packet.
11. The switching fabric system as claimed in claim 1, comprising multiple crossbar ingress processing units.
12. The switching fabric system as claimed in claim 11, wherein the interconnecting crossbar comprises multiple crossbar paths connecting the multiple crossbar ingress processing units to the crossbar egress processing unit.
13. The switching fabric system as claimed in claim 1, further comprising multiple crossbar egress processing units.
14. The switching fabric system as claimed in claim 13, wherein the interconnecting crossbar further comprises multiple crossbar paths connecting the crossbar ingress processing units to the multiple crossbar egress processing units.
15. The switching fabric system as claimed in claim 1, further comprising multiple crossbar ingress processing units and multiple crossbar egress processing units.
16. The switching fabric system as claimed in claim 15, wherein the interconnecting crossbar further comprises multiple crossbar paths connecting the multiple crossbar ingress processing units to the multiple crossbar egress processing units.
17. A switching fabric system for routing data, the switching fabric system comprising:
an input port for receiving data into the switching fabric system;
a crossbar ingress processing unit having a receiving end and a sending end, the receiving end of the crossbar ingress processing unit attached to the input port, the crossbar ingress processing unit for receiving the data from the input port and sending data out of the sending end;
an interconnecting crossbar having a first end and a second end, the interconnecting crossbar attached at the first end to the sending end of the crossbar ingress processing unit for receiving the data, the interconnecting crossbar for allowing the data to travel from the first end to the second end; and
an output port connected to the second end for sending the data out of the switching fabric system.
18. A switching fabric system for routing data, the switching fabric system comprising:
an input port for receiving data into the switching fabric system;
an interconnecting crossbar having a first end and a second end, the interconnecting crossbar attached at the first end to the input port for receiving the data, the interconnecting crossbar for allowing the data to travel from the first end to the second end;

a crossbar egress processing unit having a receiving end and a sending end, the receiving end of the crossbar egress processing unit attached to the second end of the interconnecting crossbar, the crossbar egress processing unit for storing the data and for sending the data out the sending end of the crossbar egress processing unit; and an output port connected to the sending end of the crossbar egress processing unit for sending the data out of the switching fabric system.
19. A crossbar ingress processing unit for receiving and sending data, the crossbar ingress processing unit comprising:
a receiving end for receiving data into the crossbar ingress processing unit;
a sending end for sending the data out from the crossbar ingress processing unit; and a crossbar path having a first end attached to the receiving end and a second end attached to the sending end, the crossbar path for allowing the data to travel from the receiving end to the sending end.
20. The crossbar ingress processing unit as claimed in claim 19, further comprising a channel selector for selecting a path for the data to travel.
21. A crossbar egress processing unit for storing and sending data, the crossbar egress processing unit comprising:
a receiving end for receiving data into the crossbar egress processing unit;
a sending end for sending the data out from the crossbar egress processing unit; and a crossbar path having a first end attached to the receiving end and a second end attached to the sending end, the crossbar path for allowing the data to travel from the receiving end to the sending end.
22. The egress processing unit as claimed in claim 21, further comprising a memory buffer attached to the crossbar path between the receiving end and the sending end, the memory buffer for receiving the data from the receiving end and for storing the data.
23. The egress processing unit as claimed in claim 22, further comprising a data packet scheduler for sending data packets from the memory buffer to the sending end of the output memory buffer unit.
24. A method for providing a switching fabric system with a mechanism for routing data, the method comprising steps of:
providing an input port for receiving the data into the switching fabric system;
providing a crossbar ingress processing unit attached at one end to the ingress input port;
providing an interconnecting crossbar attached at a first end to the crossbar ingress processing unit, the interconnecting crossbar for allowing the data to travel from the ingress input port to a second end of the interconnecting crossbar;
providing a crossbar egress processing unit attached at a receiving end to the second end of the interconnecting crossbar, the crossbar egress processing unit for storing the data and for sending the data out a sending end of the crossbar egress processing unit; and providing an output port for sending the data out of the switching fabric system.
CA 2369178 2002-01-24 2002-01-24 Distributed crossbar switching fabric architecture Abandoned CA2369178A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA 2369178 CA2369178A1 (en) 2002-01-24 2002-01-24 Distributed crossbar switching fabric architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA 2369178 CA2369178A1 (en) 2002-01-24 2002-01-24 Distributed crossbar switching fabric architecture

Publications (1)

Publication Number Publication Date
CA2369178A1 true CA2369178A1 (en) 2003-07-24

Family

ID=27626493

Family Applications (1)

Application Number Title Priority Date Filing Date
CA 2369178 Abandoned CA2369178A1 (en) 2002-01-24 2002-01-24 Distributed crossbar switching fabric architecture

Country Status (1)

Country Link
CA (1) CA2369178A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112929283A (en) * 2019-12-06 2021-06-08 中兴通讯股份有限公司 Data processing method, device, equipment and storage medium
US11818057B2 (en) 2019-12-06 2023-11-14 Xi'an Zhongxing New Software Co. Ltd. Method and apparatus for processing data, device, and storage medium
CN112929283B (en) * 2019-12-06 2024-04-02 中兴通讯股份有限公司 Data processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
EEER Examination request
FZDE Dead