WO2012036670A1 - Computer system fabric switch - Google Patents

Computer system fabric switch Download PDF

Info

Publication number
WO2012036670A1
WO2012036670A1
Authority
WO
WIPO (PCT)
Prior art keywords
packet
ports
function
recited
location
Prior art date
Application number
PCT/US2010/048694
Other languages
French (fr)
Inventor
Gregg B. Lesartre
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P. filed Critical Hewlett-Packard Development Company, L.P.
Priority to US13/809,452 priority Critical patent/US20130142195A1/en
Priority to PCT/US2010/048694 priority patent/WO2012036670A1/en
Priority to CN201080069101.4A priority patent/CN103098431B/en
Priority to EP10857369.2A priority patent/EP2617167A1/en
Publication of WO2012036670A1 publication Critical patent/WO2012036670A1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 - Packet switching elements
    • H04L 49/25 - Routing or path finding in a switch fabric
    • H04L 49/253 - Routing or path finding in a switch fabric using establishment or release of connections between ports
    • H04L 49/254 - Centralised controller, i.e. arbitration or scheduling
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 - Packet switching elements
    • H04L 49/60 - Software-defined switches
    • H04L 49/602 - Multilayer or multiprotocol switching, e.g. IP switching
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/60 - Router architectures


Abstract

A fabric switch includes ports, a location function component, and a routing function component. Packets are received and forwarded via the ports. The location function component determines a location of routing information within a received packet based at least in part on the input port at which the packet was received. The routing function component determines an output port based at least in part on the contents of said location.

Description

COMPUTER SYSTEM FABRIC SWITCH [01] BACKGROUND
[02] Separate computer nodes can function together as a single computer system by communicating with each other over a fast computer system fabric. For example, a blade system can include a chassis and blades installed in the chassis. Each blade can include one or more processor nodes; each processor node can include one or more processors and associated memory. The chassis can include a fabric that connects the processor nodes so they can communicate with each other and access each other's memory so that the collective memory of the connected blades can operate coherently. Fabrics can be scaled up to include links that connect fabrics that connect blades. In such cases, there are often multiple routes between a communication's source and destination.
[03] To route communication packets properly, a fabric can include one or more switches with multiple ports. Typically, a switch examines a portion of each received packet for information pertinent to routing, e.g., the packet's destination. The location of the portion of the packet header examined can vary according to the communication protocol used by the blade system. The switch then selects an output port based on the routing information.
[04] BRIEF DESCRIPTION OF THE DRAWINGS
[05] FIGURE 1 is a schematic diagram of a fabric switch in accordance with an embodiment.
[06] FIGURE 2 is a flow chart of a fabric-switch process in accordance with an embodiment.
[07] FIGURE 3 is a schematic diagram of a computer system in accordance with an embodiment.
[08] FIGURE 4 is a flow chart of a process employed in the context of the computer system of FIG. 3.
[09] FIGURE 5 is a schematic diagram of another computer system employing fabric switches in accordance with an embodiment.
[10] DETAILED DESCRIPTION
[11] A fabric switch 100 includes ports 101, including ports 103 and 105, a location function component 107, and a routing function component 109, as shown in FIG. 1. Fabric switch 100 implements a process 200 flow charted in FIG. 2. At process segment 201, location function component 107 determines a location 120 of routing information 122 in a packet 124 as a location function of the port (e.g., port 103) at which packet 124 was received. At process segment 202, the packet is forwarded out a port (e.g., port 105) selected as a routing function (implemented by routing function component 109) of routing information 122. Thus, process 200 allows proper routing determinations to be made despite the use of different protocols at respective real or virtual ports of a switch.
[12] A blade computer system 300 includes a chassis 301, blades 303, including blades B1-B8, and a fabric module 305. Fabric module 305 includes at least portions of links 307, e.g., links L1-L8, and a fabric switch 310. Fabric switch 310 includes a processor 311, media 313 encoded with code 315, and ports 317, e.g., ports P1-P8. Code 315 is configured to, when executed by processor 311, define a database 317 and functionality for a link interface 320 of switch 310. Code 315 further serves to define link interface 320 with an initialization manager 321 and a packet manager 323. Packet manager 323 includes a location function component 325 and a routing function component 327.
Database 317 includes an input table 331, an output table 333, environmental data 335, allocation policies 337, and virtualization information 339. In an alternative embodiment, a processor external to a fabric switch executes software to configure the fabric switch to read the routing field of a packet, perform a conversion as appropriate, and look up the output port.
[13] Input table 331 uses input port identity as a key field.
Associated with each input port identity is an offset, a bit length, and a conversion function. The offset and length define a routing field location, typically in the packet header, which bears routing information used to determine the output port through which to forward a packet. This location is protocol dependent. [14] In some cases, the value at the indicated location can be used directly as an index to output table 333. In other cases, a conversion function, identified in the rightmost column of table 331, can be applied to obtain the index value to be input to output table 333. For example, for input link identity L3, the extracted value is to be decremented by unity to yield the input to output table 333. For link identity L4, the source link identity value (e.g., 4) is added modulo-8 to the extracted value to determine the value to be input to table 333. For input link L5, four bits are extracted, but the third is ignored. The conversions are tied to the protocols employed by the input links.
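The extraction and the example conversions above can be sketched in code. This is an illustrative reading only: the byte-oriented packet layout, the bit ordering, and which of the four bits is "the third" in the L5 case are assumptions, not details given by the patent.

```python
# Illustrative sketch of routing-field extraction and the example
# conversions associated with input table 331. Offsets, bit ordering,
# and the exact bit ignored for link L5 are assumptions.

def extract_bits(packet: bytes, bit_offset: int, bit_length: int) -> int:
    """Return bit_length bits starting bit_offset bits into the packet."""
    value = int.from_bytes(packet, "big")
    shift = len(packet) * 8 - bit_offset - bit_length
    return (value >> shift) & ((1 << bit_length) - 1)

def decrement_by_unity(value: int) -> int:
    """Conversion for link L3: decrement the extracted value by one."""
    return value - 1

def add_source_mod8(value: int, source_link_id: int) -> int:
    """Conversion for link L4: add the source link identity modulo-8."""
    return (value + source_link_id) % 8

def drop_third_bit(value: int) -> int:
    """Conversion for link L5: of four extracted bits, ignore the third
    (here read as the third-most-significant bit)."""
    return ((value >> 2) << 1) | (value & 1)

print(extract_bits(b"\xa5", 0, 3))  # first three bits of 0b10100101 -> 5
print(decrement_by_unity(5))        # -> 4
print(add_source_mod8(6, 4))        # (6 + 4) % 8 -> 2
print(drop_third_bit(0b1011))       # keeps bits 3, 2, 0 -> 0b101 -> 5
```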
[15] In practice, the conversions can be performed using table look-ups. As explained further below, in some cases, the conversions may take into account environmental data, allocation policies, and virtualization information. Once the packet value is extracted and converted, it can be input to output table 333, which associates the packet value with an output port.
[16] A process 400 implemented by blade system 300 and switch 310 includes a configuration phase 410 and a packet phase 420, as flow charted in FIG. 4. Configuration phase 410 includes a process segment 401 in which a link is activated. This activation may be initiated at a blade or other end node, either as the node is booted or when a link-specific interface of the end node is activated. The activation typically involves an exchange of protocol information. Accordingly, protocol-dependent (i.e., protocol-specific) information can be extracted during link initialization at process segment 402. This protocol-dependent information can include an explicit identification of the location at which routing information can be found. Alternatively, the protocol can be identified and the location for the protocol can be "looked up", e.g., in a table resident on switch 310. At process segment 403, the extracted information can be stored in input table 331 in terms of a header location offset and a bit-length following the offset. Likewise, conversion information for table 331 can be obtained in explicit form from the header location or inferred from the protocol identity using a table in database 317. This completes the setup phase for process 400.
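Configuration phase 410 (process segments 401-403) amounts to populating input table 331 as links come up. A minimal sketch, assuming an invented per-protocol location table and field names (the patent does not specify a concrete data layout):

```python
# Hypothetical sketch of configuration phase 410: when a link activates
# (segment 401), the protocol is identified (segment 402) and its
# routing-field location is looked up and stored in the input table
# (segment 403). Protocol names, offsets, and field names are invented.

PROTOCOL_LOCATIONS = {
    "protocol_a": {"offset": 16, "bit_length": 8, "conversion": "direct"},
    "protocol_b": {"offset": 32, "bit_length": 8, "conversion": "decrement"},
}

def activate_link(input_table: dict, port: str, protocol: str) -> None:
    """Record the routing-field location for a port, per its protocol."""
    input_table[port] = dict(PROTOCOL_LOCATIONS[protocol])

input_table: dict = {}
activate_link(input_table, "P1", "protocol_a")
activate_link(input_table, "P2", "protocol_b")
print(input_table["P2"]["offset"])  # -> 32
```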
[17] Packet phase 420 of process 400, as flow charted in FIG. 4, begins with receipt of a packet at a port at process segment 404. At process segment 405, location function component 325 (FIG. 3) uses input table 331 to determine the packet location of routing information by looking up the location as a function of the port at which the packet was received. At process segment 406, packet manager 323 extracts the routing information from the determined location of the packet. Depending on the information in the conversion column of table 331, this routing information can be used directly or converted by routing function component 327. In any case, the resulting value can be input to output table 333 at process segment 407 to select a port for outputting the packet. At process segment 408, the packet is forwarded out the selected port.
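Packet phase 420 (segments 404-408) can be sketched end to end as follows; the table contents, port names, and the small set of conversions are invented for illustration, not taken from the patent's figures:

```python
# Sketch of packet phase 420: segment 405 looks up the field location by
# input port, segment 406 extracts (and optionally converts) the routing
# value, segment 407 indexes the output table, segment 408 forwards.
# Table contents, port names, and conversions are invented.

CONVERSIONS = {
    "direct": lambda v: v,
    "decrement": lambda v: v - 1,  # the 'decrement by unity' example
}

def forward(packet: bytes, in_port: str, input_table: dict,
            output_table: dict) -> str:
    offset, length, conv = input_table[in_port]       # segment 405
    value = int.from_bytes(packet, "big")
    shift = len(packet) * 8 - offset - length
    raw = (value >> shift) & ((1 << length) - 1)      # segment 406: extract
    index = CONVERSIONS[conv](raw)                    # convert if needed
    return output_table[index]                        # segment 407: select

input_table = {"P1": (0, 3, "direct"), "P2": (0, 3, "decrement")}
output_table = {3: "P7", 4: "P5", 5: "P6"}
print(forward(b"\xa0", "P1", input_table, output_table))  # raw 5 -> P6
print(forward(b"\xa0", "P2", input_table, output_table))  # 5 - 1 = 4 -> P5
```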
[18] A computer system 500 includes end nodes 501 and fabric 502, as shown in FIG. 5. Fabric 502 includes fabric switches 503 and links 505. End nodes 501 include nodes N11-N44. Fabric switches 503 include fabric switches FS1-FS4. Links 505 include links L11-L43, as well as unlabeled links to end nodes 501. Nodes 501 can be of various types, including without limitation processor nodes, network (e.g., Ethernet) switch nodes, storage nodes, memory nodes, and storage network nodes that provide interfacing to mass storage devices. Each fabric switch 503 has eight ports, four of which are shown connected to respective nodes and four of which are shown connected to other fabric switches.
[19] Accordingly, there is a choice of fabric routes between each pair of nodes. In fact, in system 500, there are ten possible fabric routes between each pair of end nodes. For example, node N11 can communicate with node N21: 1) using link L12; 2) using link L21; 3) using the link combination L14, L34, and L23; 4) using the link combination L14, L34, and L32; 5) using the link combination L14, L43, and L23; 6) using the link combination L14, L43, and L32; 7) using the link combination L41, L34, and L23; 8) using the link combination L41, L34, and L32; 9) using the link combination L41, L43, and L23; and 10) using the link combination L41, L43, and L32.
[20] In most cases, one of the two more direct routes via links L12 and L21 would be used in communicating between nodes N11 and N21. Of these two, the least utilized could be selected. In some cases, links L12 and L21 might be so heavily utilized that communication through one of the other eight routes might be faster and more reliable. So that utilization can be taken into account when a switch makes routing decisions, each switch FS1-FS4 can monitor utilization at each of its ports and communicate summary information to the other fabric switches. Each fabric switch stores utilization data as environmental data 335 (FIG. 3). Environmental data 335 can also include non-utilization data, such as the average number of retries required to successfully transmit a packet over a link. Such other environmental data can also be used by a switch in making routing determinations. In other embodiments, e.g., in which a protocol is not compatible with dynamic routing, dynamic routing is not employed. [21] Switches FS1-FS4 can be configured to treat all packets equally. Alternatively, switches FS1-FS4 can be programmed with allocation policies 337 (FIG. 3) that cause packets to be treated with different priorities according to source, destination, protocol, content, or another parameter. For example, if there is not enough direct inter-switch bandwidth to handle both real-time and non-real-time packets, non-real-time packets can be redirected along an indirect route. Also, some nodes may be associated with more important users; in that case, traffic associated with other users can be sent along slower routes or even dropped to favor the more important users. In an alternative embodiment, traffic is not prioritized.
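A utilization-aware choice among alternative routes, of the kind environmental data 335 enables, might be sketched as follows. The routes and utilization figures are invented, and the selection criterion (minimize the busiest link on the route) is one plausible policy, not an algorithm the patent specifies:

```python
# Illustrative route selection using per-link utilization, in the spirit
# of environmental data 335: among candidate routes between two nodes,
# prefer the route whose most-utilized link is least utilized.
# Routes and utilization numbers are invented for illustration.

def pick_route(routes, utilization):
    """Choose the route minimizing its maximum link utilization."""
    return min(routes, key=lambda r: max(utilization[link] for link in r))

# Three of the ten N11-N21 routes from the example above.
routes_n11_n21 = [["L12"], ["L21"], ["L14", "L34", "L23"]]
utilization = {"L12": 0.95, "L21": 0.90, "L14": 0.2, "L34": 0.3, "L23": 0.1}
print(pick_route(routes_n11_n21, utilization))  # -> ['L14', 'L34', 'L23']
```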
[22] Other embodiments providing for inter-switch communications can include different numbers and types of end nodes, different numbers of links associated with nodes, different numbers of inter-switch links, and different numbers of ports per switch. Also, the algorithms applied to allocate traffic among alternative routes can vary from those described for system 500.
[23] Virtualization data 339 can include data regarding various virtualization schemes, including virtual links and virtual channels. An implemented virtualization scheme can then be reflected in the allocation policies 337 and environmental data 335. For example, a physical link, e.g., link L12, can be time-multiplexed to serve as several virtual links. Each port connected to the link can have a separate first-in-first-out (FIFO) buffer for each virtual link, thus defining virtual ports associated with each real fabric switch port. This permits packets sent along different virtual links to progress at different rates depending on virtual link usage.
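The per-virtual-link FIFO arrangement can be sketched as follows; the class shape and the two-virtual-link configuration are assumptions for illustration:

```python
# Sketch of virtual ports: a physical port keeps one FIFO per virtual
# link multiplexed onto its physical link, so packets on different
# virtual links can progress independently. The two-virtual-link
# configuration and names are invented.
from collections import deque

class PhysicalPort:
    def __init__(self, virtual_links):
        # One FIFO per virtual link defines the virtual ports.
        self.fifos = {vl: deque() for vl in virtual_links}

    def enqueue(self, virtual_link, packet):
        self.fifos[virtual_link].append(packet)

    def dequeue(self, virtual_link):
        return self.fifos[virtual_link].popleft()

port = PhysicalPort(["VL0", "VL1"])
port.enqueue("VL0", "pkt-a")
port.enqueue("VL1", "pkt-b")
port.enqueue("VL0", "pkt-c")
print(port.dequeue("VL0"))  # -> pkt-a; VL1 traffic is unaffected
```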
[24] Virtual channels can be used to handle sessions of packets. For example, it may be desirable to send an acknowledgement packet along the reverse of the route along which the original packet was sent. In other cases, it may be desirable to maintain the same forward and reverse routes for several packets of a "session". To this end, the packets can be assigned to a virtual channel, and the virtual channel can be assigned to a forward and reverse pair of routes. Thus, a series of packets between node N11 and node N31 could all be assigned (using header information) to a given virtual channel; virtualization data 339 can then specify a mapping of the virtual channel to forward and reverse fabric routes.
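A virtual-channel table of the kind virtualization data 339 might hold can be sketched as follows; the channel identifier and route are invented, and deriving the reverse route by simply reversing the forward route is one assumed convention:

```python
# Hypothetical virtual-channel mapping in the spirit of virtualization
# data 339: each channel is bound to a forward route and its reverse, so
# a session's packets and acknowledgements follow matching paths.
# Channel ids and link names are invented for illustration.

def assign_channel(table: dict, channel: str, forward_route: list) -> None:
    """Bind a virtual channel to a forward route and its reverse."""
    table[channel] = {
        "forward": list(forward_route),
        "reverse": list(reversed(forward_route)),
    }

channels: dict = {}
assign_channel(channels, "VC7", ["L14", "L34"])  # a hypothetical path
print(channels["VC7"]["reverse"])  # -> ['L34', 'L14']
```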
[25] Fabric switches 100 (FIG. 1), 310 (FIG. 3), and FS1-FS4 (FIG. 5) are, in effect, programmable to handle different fabric protocols on a per-port basis. In alternative embodiments, a switch can be programmed to handle different protocols on a per-virtual-link or per-virtual-channel basis. This gives the computer system owner great flexibility in terms of configuring and upgrading. For example, during the lifetime of an initial set of end nodes, improved end nodes may have been introduced providing for a new fabric protocol with improved performance. In system 500, each end node can be replaced at an optimal time (e.g., as it begins to be unreliable or as it becomes a bottleneck) with a new-generation end node. The illustrated fabric switches can handle a combination of old- and new-generation end nodes even though the protocols they support store routing information in different places in the transmitted packets. [26] Unless context indicates otherwise, "port" and "link" can refer to either a real or virtual entity. As used herein, "processor" refers to a hardware entity that can be part of an integrated circuit, a complete integrated circuit, or distributed among plural integrated circuits. Herein, "media" refers to non-transitory, tangible, computer-readable storage media. Unless context indicates that only a software aspect is under consideration, switch components labeled as "managers" or "components" are combinations of software and the hardware used to execute the software.
[27] Herein, a "system" is a set of interacting elements, wherein the elements can be, by way of example and not of limitation, mechanical components, electrical elements, atoms, instructions encoded in storage media, and process segments. In this specification, related art is discussed for expository purposes. Related art labeled "prior art", if any, is admitted prior art. Related art not labeled "prior art" is not admitted prior art. The illustrated and other described embodiments, as well as modifications thereto and variations thereupon, are within the scope of the following claims.
What is Claimed Is:

Claims

1. A fabric switch comprising:
ports through which packets are received and forwarded;
a location function component for determining a location of routing information within a received packet containing routing information based at least in part on the input port at which said packet was received; and
a routing function component for determining an output port as a routing function based at least in part on said routing information.
2. A fabric switch as recited in Claim 1 further comprising an initialization manager configured to:
activate a link connecting an end node to a port of said switch so as to establish a protocol to which communications over said link are to conform; and
in response to said activating, generate or adjust said location function to correspond to the use of said protocol at that port.
3. A fabric switch as recited in Claim 2 wherein said ports are real ports.
4. A fabric switch as recited in Claim 2 wherein said ports include both real and virtual ports, said virtual ports including said input port and said output port.
5. A fabric switch as recited in Claim 2 wherein said determining said output port is a routing function at least in part of a virtual channel to which said packet is assigned.
6. A fabric switch process comprising:
a switch determining a location of routing information within a packet as a location function of a first port at which said packet was received; and
said switch forwarding said packet out of a second port of said switch selected as a routing function of said routing information.
7. A process as recited in Claim 6 further comprising:
before said receiving, engaging in activating a link to said first input port so that communications over said link conform to a first fabric protocol; and
generating or adjusting said location function as a function of said first fabric protocol.
8. A process as recited in Claim 7 wherein said ports are real ports.
9. A process as recited in Claim 7 wherein said ports are virtual ports.
10. A process as recited in Claim 7 wherein said determining said output port is a function at least in part of a virtual channel to which said packet is assigned.
11. A computer product comprising media encoded with code configured to, when executed by a processor:
implement an input function including determining a packet location as a location function of an input port at which a packet was received, and determine a routing value as a routing function of a packet value extracted from said packet location; and
forward said packet via an output port determined at least in part as a port function of said routing value.
12. A computer product as recited in Claim 11 wherein said code is further configured to:
before said receiving, engaging in activating a link to said first input port so that communications over said link conform to a first fabric protocol; and
generating or adjusting said location function as a function of said first fabric protocol.
13. A computer product as recited in Claim 12 wherein said ports are real ports.
14. A computer product as recited in Claim 12 wherein said ports are virtual ports.
15. A computer product as recited in Claim 12 wherein said determining said output port is a function at least in part of a virtual channel to which said packet is assigned.
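The process recited in Claims 6–15 can be sketched in code. The following is a minimal, hypothetical illustration, not the patented implementation: the protocol names, field offsets, and routing tables are invented. A link is first activated against a negotiated fabric protocol, which generates the location function for that port; on receipt of a packet, the routing value is extracted from the protocol-dependent location, and the output port is selected as a function of that value and, optionally, the packet's virtual channel.

```python
# Hypothetical sketch of the claimed fabric-switch process (Claims 6-15).
# Protocol names, field offsets, and routing tables below are invented
# for illustration; the patent does not specify them.

# Where the routing field sits in a packet depends on the fabric
# protocol negotiated on the receiving port's link.
PROTOCOL_OFFSETS = {"protoA": 2, "protoB": 4}

# Routing function keyed on (routing value, virtual channel) -> output port.
ROUTE_TABLE = {(0x0A, 0): 1, (0x0A, 1): 5, (0x0B, 0): 3}

class FabricSwitch:
    def __init__(self):
        self.location_by_port = {}  # port number -> routing-field offset

    def activate_link(self, port: int, protocol: str) -> None:
        """Link activation: generate/adjust the location function for
        `port` from the fabric protocol negotiated on its link (Claim 7)."""
        self.location_by_port[port] = PROTOCOL_OFFSETS[protocol]

    def forward(self, packet: bytes, input_port: int, vc: int = 0) -> int:
        """Determine the routing-info location as a function of the input
        port, extract the routing value, and select the output port as a
        function of that value and the virtual channel (Claims 6, 10)."""
        offset = self.location_by_port[input_port]   # location function
        routing_value = packet[offset]               # extract routing info
        return ROUTE_TABLE[(routing_value, vc)]      # routing function

sw = FabricSwitch()
sw.activate_link(0, "protoA")
sw.activate_link(1, "protoB")

# The same routing value 0x0A is found at different offsets depending on
# which port (and hence which fabric protocol) received the packet:
assert sw.forward(bytes([0, 0, 0x0A, 0, 0]), input_port=0) == 1
assert sw.forward(bytes([0, 0, 0, 0, 0x0A]), input_port=1) == 1
# A different virtual channel can map the same packet to another port:
assert sw.forward(bytes([0, 0, 0x0A, 0, 0]), input_port=0, vc=1) == 5
```

The per-port location function is what lets a single switch interoperate with links running different fabric protocols, without re-parsing the whole packet header on every hop.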
PCT/US2010/048694 2010-09-14 2010-09-14 Computer system fabric switch WO2012036670A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US13/809,452 US20130142195A1 (en) 2010-09-14 2010-09-14 Computer system fabric switch
PCT/US2010/048694 WO2012036670A1 (en) 2010-09-14 2010-09-14 Computer system fabric switch
CN201080069101.4A CN103098431B (en) 2010-09-14 2010-09-14 Computer Systems Organization switch
EP10857369.2A EP2617167A1 (en) 2010-09-14 2010-09-14 Computer system fabric switch

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2010/048694 WO2012036670A1 (en) 2010-09-14 2010-09-14 Computer system fabric switch

Publications (1)

Publication Number Publication Date
WO2012036670A1 true WO2012036670A1 (en) 2012-03-22

Family

ID=45831870

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2010/048694 WO2012036670A1 (en) 2010-09-14 2010-09-14 Computer system fabric switch

Country Status (4)

Country Link
US (1) US20130142195A1 (en)
EP (1) EP2617167A1 (en)
CN (1) CN103098431B (en)
WO (1) WO2012036670A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6931002B1 (en) * 1998-12-08 2005-08-16 Daniel S. Simpkins Hybrid switching
US7616646B1 (en) * 2000-12-12 2009-11-10 Cisco Technology, Inc. Intraserver tag-switched distributed packet processing for network access servers
US7646760B2 (en) * 2001-10-17 2010-01-12 Brocco Lynne M Multi-port system and method for routing a data element within an interconnection fabric
US20100118703A1 (en) * 2004-06-04 2010-05-13 David Mayhew System and method to identify and communicate congested flows in a network fabric

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9401092D0 (en) * 1994-01-21 1994-03-16 Newbridge Networks Corp A network management system
US5892924A (en) * 1996-01-31 1999-04-06 Ipsilon Networks, Inc. Method and apparatus for dynamically shifting between routing and switching packets in a transmission network
FI103312B (en) * 1996-11-06 1999-05-31 Nokia Telecommunications Oy switching matrix
US7349416B2 (en) * 2002-11-26 2008-03-25 Cisco Technology, Inc. Apparatus and method for distributing buffer status information in a switching fabric
CN100555985C (en) * 2004-02-20 2009-10-28 富士通株式会社 A kind of switch and routing table method of operating
US7552242B2 (en) * 2004-12-03 2009-06-23 Intel Corporation Integrated circuit having processor and switch capabilities
ATE555575T1 (en) * 2006-03-06 2012-05-15 Nokia Corp AGGREGATION OF VCI ROUTING TABLES
US7623450B2 (en) * 2006-03-23 2009-11-24 International Business Machines Corporation Methods and apparatus for improving security while transmitting a data packet
US8867552B2 (en) * 2010-05-03 2014-10-21 Brocade Communications Systems, Inc. Virtual cluster switching

Also Published As

Publication number Publication date
EP2617167A1 (en) 2013-07-24
US20130142195A1 (en) 2013-06-06
CN103098431B (en) 2016-03-23
CN103098431A (en) 2013-05-08

Similar Documents

Publication Publication Date Title
US8750106B2 (en) Interface control system and interface control method
US9215175B2 (en) Computer system including controller and plurality of switches and communication method in computer system
CN107370642B (en) Multi-tenant network stability monitoring system and method based on cloud platform
US7173912B2 (en) Method and system for modeling and advertising asymmetric topology of a node in a transport network
Aweya IP router architectures: an overview
JP5991424B2 (en) Packet rewriting device, control device, communication system, packet transmission method and program
US20110320632A1 (en) Flow control for virtualization-based server
US7133403B1 (en) Transport network and method
US20120170477A1 (en) Computer, communication system, network connection switching method, and program
US7177310B2 (en) Network connection apparatus
TWI436626B (en) Communication control system, switching device, communication control method, and communication control program
KR20190112804A (en) Packet processing method and apparatus
EP2924925A1 (en) Communication system, virtual-network management device, communication node, and communication method and program
US20130188647A1 (en) Computer system fabric switch having a blind route
KR101788961B1 (en) Method and system of controlling performance acceleration data path for service function chaining
US20130142195A1 (en) Computer system fabric switch
Cisco Overview of Layer 3 Switching and Software Features
EP3621251B1 (en) Packet processing
WO2024093778A1 (en) Packet processing method and related apparatus
JP2000324138A (en) Method for supporting short-cut
KR100317990B1 (en) Apparatus and Method of Supporting Multiple Entities for LAN Emulation Client
KR100482689B1 (en) ATM Based MPLS-LER System and Method for Setting Connection thereof
US20140314092A1 (en) Communication system, communication method, edge device, edge device control method, edge device control program, non-edge device, non-edge device control method, and non-edge device control program
KR100563655B1 (en) Virtual private network service method in MPLS and a computer readable record medium on which a program therefor is
KR100624475B1 (en) Network Element and Packet Forwarding Method thereof

Legal Events

Code Title Description
WWE Wipo information: entry into national phase. Ref document number: 201080069101.4; Country of ref document: CN
121 Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 10857369; Country of ref document: EP; Kind code of ref document: A1
WWE Wipo information: entry into national phase. Ref document number: 13809452; Country of ref document: US
REEP Request for entry into the european phase. Ref document number: 2010857369; Country of ref document: EP
WWE Wipo information: entry into national phase. Ref document number: 2010857369; Country of ref document: EP
NENP Non-entry into the national phase. Ref country code: DE