EP2676406A2 - Next hop computation functions for equal cost multi-path packet switching networks - Google Patents

Next hop computation functions for equal cost multi-path packet switching networks

Info

Publication number
EP2676406A2
Authority
EP
European Patent Office
Prior art keywords
function
node
network
packet
mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP12746891.6A
Other languages
German (de)
English (en)
French (fr)
Inventor
Jerome Chiabaut
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rockstar Consortium US LP
Original Assignee
Rockstar Consortium US LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rockstar Consortium US LP filed Critical Rockstar Consortium US LP
Publication of EP2676406A2 (en)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/38 Flow based routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/02 Topology update or discovery
    • H04L 45/06 Deflection routing, e.g. hot-potato routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/24 Multipath
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering

Definitions

  • the present invention relates to packet switched networks, and in particular to next hop computation functions for equal cost paths in packet switched networks.
  • in Ethernet network architectures, devices connected to the network compete for the ability to use shared telecommunications paths at any given time. Where multiple bridges or nodes are used to interconnect network segments, multiple potential paths to the same destination often exist. The benefit of this architecture is that it provides path redundancy between bridges and permits capacity to be added to the network in the form of additional links.
  • a spanning tree was generally used to restrict the manner in which traffic was broadcast on the network. Since routes were learned by broadcasting a frame and waiting for a response, and since both the request and response would follow the spanning tree, most if not all of the traffic would follow the links that were part of the spanning tree. This often led to over-utilization of the links that were on the spanning tree and non-utilization of the links that were not part of the spanning tree. Spanning trees may be used in other forms of packet switched networks as well.
  • a link state protocol control plane can be used to control operation of the nodes in the packet network.
  • Using a link state protocol to control a packet network enables more efficient use of network capacity with loop-free shortest path forwarding.
  • STP Spanning Tree Protocol
  • the bridges forming the mesh network exchange link state advertisements to enable each node to have a synchronized view of the network topology.
  • link state routing protocols include Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS), although other link state routing protocols may be used as well.
  • OSPF Open Shortest Path First
  • IS-IS Intermediate System to Intermediate System
  • the bridges in the network have a synchronized view of the network topology, have knowledge of the requisite unicast and multicast connectivity, can compute a shortest path connectivity between any pair of bridges in the network, and individually can populate their forwarding information bases (FIBs) according to shortest paths computed based on a common view of the network.
  • FIBs forwarding information bases
  • Link state protocol controlled packet networks provide the equivalent of Ethernet bridged connectivity, but achieve this via configuration of the network element FIBs rather than by flooding and learning.
  • the network will have a loop-free unicast tree to any given bridge from the set of peer bridges, and a congruent, loop-free, point-to-multipoint (p2mp) multicast tree from any given bridge to the same set of peer bridges per service instance hosted at the bridge.
  • p2mp point-to-multipoint
  • the result is the path between a given bridge pair is not constrained to transiting the root bridge of a spanning tree and the overall result can better utilize the breadth of connectivity of a mesh.
  • IEEE Institute of Electrical and Electronics Engineers
  • Equal Cost Multi-Path (ECMP) routing is the process of forwarding packets through a packet switching network so as to distribute traffic among multiple available substantially equal cost paths.
  • ECMP routing may be implemented at a head-end node, as traffic enters the network, or may be implemented in a distributed fashion at each node in the network.
  • each node that has multiple equal cost paths to a destination will locally direct different flows of traffic over the multiple available paths to distribute traffic on the network.
  • optimal usage of the network capacity when distributed per-node ECMP is implemented is difficult to achieve.
  • Fig. 1 shows a typical traffic distribution pattern in a packet network, in which each node on the network uses the same ECMP computation function to select from available paths.
  • traffic intended for one of switches I-L may arrive at any of switches A-D.
  • the goal is to spread the traffic out on the network such that a large number of paths are used to forward traffic through the network.
  • the use of the same next hop computation function on every node can result in very poor traffic distribution in some areas of the network.
  • regularities or patterns in flow IDs may cause traffic to become concentrated and result in insufficient traffic spreading between available paths on the network.
  • Next hop computation functions for use in a per-node ECMP path determination algorithm are provided, which increase traffic spreading between network resources in an equal cost multi-path packet switch network.
  • packets are mapped to output ports by causing each ECMP node on the network to implement an entropy preserving mapping function keyed with unique key material.
  • the unique key material enables each node to instantiate a respective mapping function from a common function prototype such that a given input will map to a different output on different nodes.
  • a compression function is used to convert the keyed output of the mapping function to the candidate set of ECMP ports.
  • FIGs. 1 and 2 are functional block diagrams of example packet switching networks
  • FIGs. 3A-3B show example application of different mappings at nodes to implement ECMP routing
  • Fig. 4 is a functional block diagram of an example network element
  • Fig. 5 is a flow chart illustrating a process that may be used to implement ECMP routing.
  • Fig. 1 shows a sample network in which the same next hop computation function is used at every hop in the network.
  • the traffic flows from the bottom to the top of Fig. 1 and at each node the traffic is locally mapped by the node to one of 4 possible output ports. Between the bottom and the middle rows of switches the traffic is evenly distributed, with each link of the mesh connecting the two rows being utilized. The middle row of switches, however, is unable to make use of all the links that connect it to the top row.
  • the leftmost switch in the middle row only receives traffic that is mapped to a leftmost outgoing port.
  • more generally, the i-th switch in the row only receives traffic that is mapped to the i-th port in the preceding stage.
  • Fig. 2 shows the network of Fig. 1, in which an alternative ECMP path selection process has been implemented to distribute traffic more evenly on the network between the available links.
  • traffic may arrive at any of nodes A, B, C, and D.
  • Each of these nodes may select a link connecting to any of nodes E, F, G, and H. That node then forwards traffic on to node J.
  • traffic patterns such as the example shown in Fig. 2 make fuller use of the available links than the pattern shown in Fig. 1.
  • each node of the packet switching network must determine, for each packet it receives on an input port, an appropriate output port on which to output the packet for transmission to a next node on a path through the network toward the destination.
  • each node must make appropriate forwarding determinations for the received packets.
  • the algorithm used by the nodes must be somewhat randomized. However, to enable network traffic to be simulated and predicted, the algorithm should also be deterministic. Further, packets associated with an individual flow should consistently be allocated to the same output port, so that all packets of the flow are directed out the same port toward the intended destination.
  • permutations are used to distribute traffic at each node of the network.
  • the permutations are created using algorithms which are deterministic, so that the manner in which a particular flow of traffic will be routed through the network may be predicted in advance.
  • the permutations are designed in such a manner to allow good traffic spreading between available links with pseudo-random output behavior given small differences in input stimulus.
  • the selected algorithm is designed such that each node on the network will use the same algorithm in connection with a locally unique key value to enable an essentially locally unique function to be used to select ECMP next hops.
  • each packet received at an ingress port of an ingress switch is assigned a Flow Identifier (Flow ID).
  • Flow IDs may be based on information in a customer MAC header (C-MAC) or in an Internet Protocol (IP) header, such as IP Source Address (SA), IP Destination Address (DA), protocol identifier, source and destination ports and possibly other content of a header of the packet.
  • IP Internet Protocol
  • SA IP Source Address
  • DA IP Destination Address
  • a Flow ID could be assigned to a packet in another manner.
  • a management system may assign Flow IDs to management packets to monitor the health and performance of specific network paths.
  • the Flow ID is encapsulated in a packet header at the ingress switch and may be decapsulated from the packet header at the egress switch.
  • the Flow ID could be carried in the 12-bit VLAN identifier (B-VID) field, in the 24-bit Service identifier (I-SID) field, in a new (as yet unspecified) field or in any part or combination of these fields.
  • B-VID 12-bit VLAN identifier
  • I-SID Service identifier
  • packets having different Flow IDs that require forwarding from the same ingress switch to the same egress switch should be distributed, according to their Flow IDs, among different equal cost paths between the ingress switch and the egress switch where there are multiple substantially equal cost paths between the ingress switch and the egress switch to choose from.
  • each switch in the network determines the appropriate output port for a received packet carrying a particular DA and a particular Flow ID by identifying the set of candidate output ports corresponding to substantially equal cost paths toward that DA, and then mapping the Flow ID to one of those candidate output ports.
  • the mappings should be such that, at each switch, Flow IDs are roughly evenly distributed across the output ports. Note that the mapping step is not required for packets having DAs for which there is only one corresponding candidate output port - the packet may simply be routed to that output port.
  • the mapping of Flow IDs to candidate output ports at a node may be produced by combining an entropy-preserving pseudo-random mapping (i.e. the number of distinct outputs of the mapping should equal the number of distinct inputs) with a compression function that maps a large number of mapped Flow IDs to a small number of candidate output ports.
  • This entropy-preserving mapping may be a bijective function in which the set of distinct inputs is mapped to a set of distinct outputs having the same number of elements as the set of distinct inputs.
  • the entropy-preserving mapping may comprise an injective function in which the set of distinct inputs is mapped to a larger set of possible distinct outputs in which only a number of distinct outputs corresponding to the number of distinct inputs are used.
  • the mapping of distinct inputs to distinct outputs is one-to-one.
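  • purely by way of a non-authoritative illustration (the constants P and G, the 8-bit flow-ID space and the function names below are assumptions for this sketch, not values taken from the disclosure), such a keyed one-to-one mapping followed by a common compression function could be sketched in Python as follows:

      # minimal sketch: assumed 8-bit flow IDs, a prime modulus just above the
      # flow-ID space, and a primitive root shared by every node as the prototype
      P = 257   # prime; 3 is a primitive root modulo 257
      G = 3

      def keyed_mapping(flow_id, node_key):
          # entropy-preserving: distinct flow IDs map to distinct outputs, and a
          # different key yields a different mapping at each node
          assert node_key % P != 0          # a zero key would collapse the mapping
          return (node_key * pow(G, flow_id + 1, P)) % P

      def select_port(flow_id, node_key, candidate_ports):
          # common compression function folding the keyed output onto the candidates
          return candidate_ports[keyed_mapping(flow_id, node_key) % len(candidate_ports)]

  • in this sketch all per-node variation comes from node_key, mirroring the idea that the prototype mapping is common to every switch and only the key material differs.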
  • Figs. 3A and 3B show an example mapping of eight inputs (1-8) to eight outputs (A-H).
  • each node uses a mapping function to map flow identifiers to a candidate set of outputs. This, in effect, creates a shuffled sequence of flow IDs.
  • each node takes a prototype mapping and applies a key value to the mapping, to instantiate a unique mapping at the node. For example, each node may use the value of the key in a multiplication function to produce a cyclic shift of the shuffled sequence of flow IDs which is unique at the node.
  • Fig. 3A shows the mapping of inputs 1-8 to outputs A-H using a first mapping derived from a prototype mapping using a first key, and Fig. 3B shows the mapping of inputs 1-8 to outputs A-H using a second mapping derived from the same prototype using a second key. As shown in the comparison of Figs. 3A and 3B, the use of different keys causes different input values (flow IDs) to be mapped to different output values.
  • the mappings should be such that no two switches have the same entropy-preserving mapping.
  • This can be arranged by assigning a unique key to each switch and mapping the Flow IDs using a keyed mapping function which is unique to each switch. Since the key is different at each switch, and because the underlying algorithm used by the switch to perform the mapping is completely specified by the key and the prototype entropy-preserving mapping function, the keyed mapping function will be unique to the switch. This enables the flows of traffic on the network to be determined, while also allowing a mapping of a particular flow ID to a set of output ports to be different at each switch on the network due to the different key in use at each switch.
  • the number of possible flow IDs may greatly exceed the number of ECMP paths on the network. For example, if the flow ID is 12 bits long, it would be expected that there would be on the order of 4096 possible flow IDs which may be assigned to flows on the network and which will be mapped using the mapping function to a different set of 4096 values. However, it is unlikely that there will be 4096 equal cost paths to a given destination. Accordingly, since the entropy-preserving mapping function produces a larger number of outputs than the number of candidate output ports, the mapping further comprises a compression function to compress the number of distinct outputs to equal the number of candidate output ports. The compression function should be such as to preserve the pseudo-randomness of the prototype mapping.
  • because the entropy-preserving mapping function at each node is at least partially based on a value associated with that node, use of a standard compression function common to all the nodes will maintain the link distribution randomization associated with use of the entropy-preserving mapping function.
  • FIGs. 3A and 3B show an example in which the same compression function is used to map each of the outputs of the mapping A-H to a set of three output ports.
  • outputs A, E, and H of the mapping function are compressed to port 1
  • outputs B, and F of the mapping function are compressed to port 2
  • outputs C, D, and G of the mapping function are compressed to port 3.
  • the same compression has been used to reduce the set of output values to a candidate set of output ports.
  • the use of a common compression function allows a different set of inputs (1-8) to be mapped to each of the output ports.
  • the key (key 1) included by a first node in its execution of the mapping function causes input flows 3, 6, and 7 to be mapped to port 1, flows 1 and 4 to be mapped to port 2, and flows 2, 5, and 8 to be mapped to port 3.
  • a different key, key 2 is used and flows 2, 4, and 7 are mapped to port 1, flows 5 and 8 are mapped to port 2, and flows 1, 3, and 6 are mapped to port 3.
  • use of a common compression function allows entropy introduced in the mapping to be preserved in connection with output port selection, so that multiple nodes on the network may use a common compression function.
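  • as a purely hypothetical numerical illustration in the spirit of Figs. 3A and 3B (the prime 11, the root 2 and the keys 3 and 7 are arbitrary choices, not values from the disclosure), two nodes keyed differently spread the same eight flow IDs differently over three ports even though both apply the same compression:

      P, G = 11, 2                      # small prime and a primitive root modulo 11
      for node_key in (3, 7):           # illustrative per-node key material
          ports = {1: [], 2: [], 3: []}
          for flow_id in range(1, 9):
              mapped = (node_key * pow(G, flow_id, P)) % P   # keyed entropy-preserving map
              ports[mapped % 3 + 1].append(flow_id)          # common compression to 3 ports
          print("key", node_key, "->", ports)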
  • Example mappings are described in greater detail below.
  • x denotes the Flow ID
  • f denotes a prototype mapping
  • n denotes a switch key
  • f_n denotes a keyed mapping
  • f_n(x) denotes a mapped Flow ID prior to application of the compression function.
  • Application of the compression function to the mapped Flow ID determines which output port, among the output port candidates, is used for forwarding the packet.
  • a candidate prototype mapping should be constructed such that any pair of switches that use different keys will instantiate entropy-preserving mappings that will not map any flow ID to the same value.
  • exponential-based mappings are used to randomize flow IDs to disrupt patterns that may be present in the flow IDs. Although other mappings may also be used, the use of exponential-based mappings provides adequate results for many applications. Likewise, it may be possible to combine several mappings, each of which has a desired property, to obtain a combined function exhibiting a set of desired characteristics. Accordingly, different embodiments may be constructed using different mappings (i.e. by combining multiple entropy-preserving prototype mapping functions) to achieve deterministic traffic spreading on equal cost links in an ECMP network.
  • switches may use an IS-IS switch ID, a Shortest Path Bridging MAC (SPBM) Source ID (B-SA), or any other combination or transformation of these values that preserves uniqueness.
  • SPBM Shortest Path Bridging MAC
  • B-SA Shortest Path Bridging MAC
  • these values may be used as keys for the node mapping function, to enable each mapping function to be unique to the particular switch in the network.
  • the prototype mapping function i.e. the algorithm used, is the same at all nodes so that the actual mapping function used at a given node is completely specified by the key value in use at that node.
  • the ECMP key material may be a programmed value provided by a management system, randomly generated on the switches, or derived from unique identifiers (e.g. a hash of the system ID or the SPBM SPSourceID).
  • the nodes may advertise the key material using the link state routing system to enable each node to learn the keys in use at other nodes and to ensure that each other node within the routing area is using unique key material.
  • flow IDs will be small integers, probably no more than 24 bits and most likely 8 or 16 bits.
  • the permutation size will be a power of 2 (e.g. 2^8, 2^16) or close to it.
  • Each switch will be assigned a unique pseudo-random permutation or pseudo-permutation. In connection with this, it is important to avoid a pathological case of two switches in a path using the same mappings.
  • a switch mapping function is constructed by keying a generic function with a small (< 64 bits) unique (with high probability) integer.
  • a compression function which may be the same on all switches, may also be used.
  • switches should use different entropy-preserving mappings, such as a permutation or an injection, such that any two mappings in the family are sufficiently de-correlated.
  • the mapping function used to map an input set to an output set is injective: that is, any two different inputs are mapped to different outputs.
  • the entropy-preserving prototype mapping function should have the desirable property that two instantiations of the mapping function using different random key material will result in different mappings which are not directly correlated with each other in a meaningful/obvious manner.
  • two different instantiations of the entropy-preserving mapping function should not map the same flow ID to the same value in different switches.
  • a mapping should be identifiable / keyed by a small unique identifier that optionally may be advertised by the routing system, e.g. via IS-IS.
  • This could be an IS-IS system ID, a Shortest Path Bridging MAC Source ID (SPBM SPSourceID), a new unique identifier, or any combination of these that preserves uniqueness.
  • the key could be provisioned, randomly generated, or derived from unique identifiers such as the hash of the system ID.
  • the mapping should also appear pseudo-random when followed by a simple compression function. This is especially important for small numbers of candidate output ports.
  • More complex entropy-preserving mappings may also be constructed by combining elementary mappings with desirable properties. For instance, combinations of linear-congruential mappings and modular-exponential mappings were found to exhibit the good properties of both: the resulting mappings exhibit the good pseudo-randomness properties of the modular exponential mappings as well as the uniqueness property of the linear-congruential mappings.
  • Modular exponentiation can therefore be used to randomize the shuffled flow IDs.
  • modular exponentiation can be used to construct a more random shuffle in which the modular exponentiation itself is keyed at each node. For instance, a different primitive root could be used at each node. This is easily accomplished by generating a node-specific primitive root as a suitably chosen power of a common base root.
  • let f(x) be the function defined for 0 ≤ x ≤ 4095.
  • a modular exponentiation is applied first, followed by a keyed linear-congruential shuffle.
  • multiplying the sequence produced by the modular exponentiation by a non-zero multiplier produces a cyclic shift of the sequence.
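  • a minimal sketch of this construction for 12-bit flow IDs follows (the modulus 4099 and the primitive-root search are illustrative assumptions; the disclosure does not fix these constants):

      P = 4099                          # smallest prime above the 4096 possible flow IDs
      P_MINUS_1_FACTORS = (2, 3, 683)   # prime factors of P - 1 = 4098

      def find_primitive_root(p, factors):
          # smallest g whose multiplicative order modulo p is p - 1
          g = 2
          while any(pow(g, (p - 1) // q, p) == 1 for q in factors):
              g += 1
          return g

      G = find_primitive_root(P, P_MINUS_1_FACTORS)

      def f_keyed(x, key):
          # modular exponentiation first (randomizes patterns in the flow IDs),
          # then the keyed linear-congruential step: multiplying by a non-zero
          # multiplier modulo P cyclically shifts the shuffled sequence
          assert 0 <= x <= 4095 and key % P != 0
          return (key * pow(G, x, P)) % P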
  • any primitive root can be expressed as a power of any other primitive root with a suitably chosen exponent.
  • the permutation described above causes a shuffling of input numbers to output numbers which may be used to shuffle flow IDs at each switch on the network.
  • Each switch then takes the shuffled flow IDs and uses its key to further shuffle the flow IDs in a manner unique to the switch.
  • each switch can use its key such that multiplication of the shuffled sequence of flow IDs by a non-zero function of the key produces a cyclic shift of the sequence.
  • the keying material can be used to select a different primitive root that will be used as the basis for a modular exponentiation at each switch. Other ways of using the key material may be implemented as well.
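  • one hedged way to key the exponentiation itself uses the fact that a power g^k of a primitive root g modulo p is itself a primitive root exactly when k is coprime with p-1 (cf. the observation above that any primitive root can be expressed as a power of any other); the helper names below are illustrative only:

      from math import gcd

      def node_primitive_root(base_root, node_key, p):
          # derive from the key material an exponent k coprime with p - 1;
          # base_root ** k modulo p is then a node-specific primitive root
          k = node_key % (p - 1)
          while gcd(k, p - 1) != 1:
              k += 1
          return pow(base_root, k, p)

      def node_exponential_mapping(x, base_root, node_key, p):
          # each node exponentiates with its own primitive root
          return pow(node_primitive_root(base_root, node_key, p), x, p)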
  • cycle notation provides a convenient way to visualize what happens when a family of permutations is generated by the powers of a base permutation.
  • An arbitrary power of a permutation can be computed very efficiently by using a cycle notation of the permutation in which each element is followed by its image through the permutation (with the convention that the last element in the cycle-notation sequence maps back to the first element in the sequence).
  • in the cycle notation, taking a power, s, of a permutation amounts to skipping ahead s elements in the cycle notation. Conversion between one-line and cycle notations is easy. It is possible to start from the cycle notation of the permutation, as this guarantees that the order of the permutation will be equal to the number of elements in the permutation.
  • Cycle[n+1] = f(Cycle[n]), with Cycle[0] an arbitrarily chosen one of the elements being permuted
  • One-line[Cycle[n]] = Cycle[n+s], where care should be taken to implement the wrap-around at the end of the cycle (i.e. the indexing is taken modulo the length of the cycle).
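  • a short sketch of this bookkeeping (the function names are illustrative), assuming the base permutation is a single cycle over all of its elements so that its order equals the number of elements:

      def cycle_notation(one_line, start=0):
          # follow each element to its image until the cycle closes:
          # Cycle[n+1] = f(Cycle[n]), starting from an arbitrary element
          cycle = [start]
          nxt = one_line[start]
          while nxt != start:
              cycle.append(nxt)
              nxt = one_line[nxt]
          return cycle

      def permutation_power(one_line, s):
          # One-line[Cycle[n]] = Cycle[n + s], indexing taken modulo the cycle length
          cycle = cycle_notation(one_line)
          n = len(cycle)
          power = [0] * n
          for i, elem in enumerate(cycle):
              power[elem] = cycle[(i + s) % n]
          return power

      # example: the single 5-cycle written (0 2 4 1 3), raised to the 3rd power
      print(permutation_power([2, 3, 4, 0, 1], 3))   # -> [1, 2, 3, 4, 0]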
  • the top row (row 0) corresponds to the zeroth power of the permutation
  • row 1 shows the one-line notation of the permutation
  • row n shows the one-line notation of the nth power of the permutation.
  • the columns in the table (except the left-most one which shows the successive powers) correspond to the different cycle notations of the base permutation.
  • when combined with a simple compression function (e.g. mod), the mappings have good randomness properties.
  • a Latin square, in this context, is an n×n array filled with n symbols, each occurring exactly once in each row and in each column.
  • a particular switch's mapping is represented by a row or column of the multiplication table.
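  • as a small self-contained check (the prime 11 is an arbitrary illustrative choice), the multiplication table of the non-zero residues modulo a prime is such a Latin square, so each row or column can serve as one switch's mapping:

      def is_latin_square(table):
          n = len(table)
          symbols = set(table[0])
          if len(symbols) != n:
              return False
          rows_ok = all(set(row) == symbols for row in table)
          cols_ok = all({table[r][c] for r in range(n)} == symbols for c in range(n))
          return rows_ok and cols_ok

      p = 11
      table = [[(a * b) % p for b in range(1, p)] for a in range(1, p)]
      assert is_latin_square(table)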
  • Fig. 4 shows an example network element that may be configured to implement ECMP according to an embodiment.
  • the network element 10 includes a control plane 20 and a data plane 22.
  • Other architectures may be implemented as well and the invention is not limited to an embodiment architected as shown in Fig. 4.
  • the discussion of the specific structure and methods of operation of the embodiment illustrated in FIG. 4 is intended only to provide one example of how the invention may be used and implemented in a particular instance.
  • the invention more broadly may be used in connection with any network element configured to handle protocol data units on a communications network.
  • the network element of FIG. 4 may be used as an edge network element such as an edge router, a core network element such as a router/switch, or as another type of network element.
  • the network element of Fig. 4 may be implemented on a communication network such as the communication network described above in connection with FIG. 1 or in another type of wired/wireless communication network.
  • the network element includes a control plane 20 and a data plane 22.
  • Control plane 20 includes one or more CPUs 24.
  • Each CPU 24 is running control plane software 26, which may include, for example, one or more routing processes 28, network operation administration and management software 30, an interface creation/management process 32, and an ECMP process 34.
  • the ECMP process may be run independent of the routing process or may be implemented as part of the routing process.
  • the ECMP process applies the entropy-preserving mapping function to select ECMP ports for flows as described above. Alternatively, as described below, the ECMP process may be implemented in the data plane rather than the control plane.
  • the control plane also includes memory 36 containing data and instructions which, when loaded into the CPU, implement the control plane software 26.
  • the memory further includes link state database 38 containing information about the topology of the network as determined by the routing process 28.
  • the ECMP process 34 uses the information in the LSDB to determine if more than one substantially equal cost path to a destination exists, and then applies the mapping functions described above to assign flows to selected paths.
  • the data plane 22 includes line cards 42 containing ports 44 which connect with physical media 40 to receive and transmit data.
  • the physical media may include fiber optic cables or electrical wires.
  • the physical media may be implemented as a wireless communication channel, for example using one of the cellular, 802.11 or 802.16 wireless communication standards.
  • ports 44 are supported on line cards 42 to facilitate easy port replacement, although other ways of implementing the ports 44 may be used as well.
  • the data plane 22 further includes a Network Processing Unit (NPU) 46 and a switch fabric 48.
  • the NPU and switch fabric 48 enable data to be switched between ports to allow the network element to forward network traffic toward its destination on the network.
  • the NPU and switch fabric operate on data packets without significant intervention from the control plane to minimize latency associated with forwarding traffic by the network element.
  • the NPU also allows services such as prioritization and traffic shaping to be implemented on particular flows of traffic.
  • the line cards may include processing capabilities as well, to enable responsibility for processing packets to be shared between the line cards and NPU. Multiple processing steps may be implemented by the line cards and elsewhere in the data plane as is known in the art. Details associated with a particular implementation have not been included in Fig. 4 to avoid obfuscation of the salient features associated with an implementation of an embodiment of the invention.
  • the computations required to map flow IDs to next hop output ports may be implemented in the data plane.
  • the control plane in this embodiment, may set up the node-specific function but is not involved in the forwarding decisions. As packets are received and a determination is made that there are multiple equal cost paths to the destination, the packets will be mapped on a per-flow basis to the equal cost paths using the algorithms described above. The particular manner in which responsibility is allocated between the control plane and data plane for the calculations required to implement ECMP will depend on the particular implementation.
  • the routing software 28 will use Link State Database 38 to calculate shortest path trees through the network to each possible destination.
  • the forwarding information will be passed to the data plane and programmed into the forwarding information base.
  • the ECMP process 34 will apply the ECMP algorithm to allocate flows to each of the substantially equal cost paths to the destinations.
  • One method for allocating flows of this nature is set forth in Fig. 5.
  • the ECMP process applies the node-specific key material to the prototype mapping function to create a node specific mapping function (100).
  • the ECMP process then applies the node-specific mapping function to the set of possible input flow identifiers to obtain a shuffled sequence of flow identifiers (102). This shuffled sequence may be programmed in the ECMP process as a table or as an algorithm (104).
  • the ECMP process applies a compression function to allocate mapped flow IDs to a set of candidate output ports (106).
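  • a hedged sketch of these steps as an offline table build is given below; the key value, the constants p and g, and the port names are placeholders, and g should be a primitive root modulo p for the shuffle to be entropy-preserving (not verified here):

      def build_ecmp_table(node_key, candidate_ports, p=4099, g=3, num_flow_ids=4096):
          # 100: key the prototype mapping with node-specific material
          # 102/104: shuffle every possible flow ID and record the result
          # 106: compress each shuffled value onto the candidate output ports
          table = {}
          for flow_id in range(num_flow_ids):
              shuffled = (node_key * pow(g, flow_id, p)) % p
              table[flow_id] = candidate_ports[shuffled % len(candidate_ports)]
          return table

      # usage: four substantially equal cost ports toward a given destination
      next_hop_by_flow = build_ecmp_table(node_key=0x2B7, candidate_ports=["port1", "port2", "port3", "port4"])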
  • ASIC Application Specific Integrated Circuit
  • FPGA Field Programmable Gate Array
  • Programmable logic can be fixed temporarily or permanently in a tangible medium such as a read-only memory chip, a computer memory, a disk, or other storage medium. All such embodiments are intended to fall within the scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
EP12746891.6A 2011-02-17 2012-02-17 Next hop computation functions for equal cost multi-path packet switching networks Withdrawn EP2676406A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161443993P 2011-02-17 2011-02-17
PCT/US2012/025552 WO2012112834A2 (en) 2011-02-17 2012-02-17 Next hop computation functions for equal cost multi-path packet switching networks

Publications (1)

Publication Number Publication Date
EP2676406A2 (en) 2013-12-25

Family

ID=46673189

Family Applications (1)

Application Number Title Priority Date Filing Date
EP12746891.6A Withdrawn EP2676406A2 (en) 2011-02-17 2012-02-17 Next hop computation functions for equal cost multi-path packet switching networks

Country Status (7)

Country Link
EP (1) EP2676406A2 (en)
JP (1) JP2014509145A (ja)
KR (1) KR20140059160A (ko)
CN (1) CN103430494A (zh)
BR (1) BR112013020722A2 (pt)
CA (1) CA2820765A1 (en)
WO (1) WO2012112834A2 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015061470A1 (en) * 2013-10-23 2015-04-30 Harshavardha Paramasiviah Internet protocol routing method and associated architectures
GB201807835D0 (en) * 2018-05-15 2018-06-27 Nchain Holdings Ltd Computer-implemented system and method
GB201808493D0 (en) * 2018-05-23 2018-07-11 Nchain Holdings Ltd Computer-Implemented System and Method
US11025534B2 (en) * 2019-10-15 2021-06-01 Cisco Technology, Inc. Service-based node-centric ECMP health
CN110837650B (zh) * 2019-10-25 2021-08-31 华中科技大学 一种不可信网络环境下的云存储oram访问系统和方法
US11616726B2 (en) * 2020-11-24 2023-03-28 Juniper Networks, Inc. End-to-end flow monitoring in a computer network
CN113726660B (zh) * 2021-08-27 2022-11-15 西安微电子技术研究所 一种基于完美哈希算法的路由查找器和方法
CN114884868B (zh) * 2022-05-10 2024-04-12 云合智网(上海)技术有限公司 基于ecmp组的链路保护方法
CN117792992B (zh) * 2024-02-28 2024-05-07 鹏城实验室 数据传输路径控制方法、装置、介质及设备

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7636309B2 (en) * 2005-06-28 2009-12-22 Alcatel-Lucent Usa Inc. Multi-path routing using intra-flow splitting
CN100531134C (zh) * 2006-05-17 2009-08-19 华为技术有限公司 一种实现多路径传输的方法、装置和系统
US8718060B2 (en) * 2006-07-31 2014-05-06 Cisco Technology, Inc. Technique for multiple path forwarding of label-switched data traffic
US8565239B2 (en) * 2009-07-14 2013-10-22 Broadcom Corporation Node based path selection randomization

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2012112834A2 *

Also Published As

Publication number Publication date
CN103430494A (zh) 2013-12-04
JP2014509145A (ja) 2014-04-10
KR20140059160A (ko) 2014-05-15
WO2012112834A4 (en) 2013-04-11
CA2820765A1 (en) 2012-08-23
WO2012112834A3 (en) 2013-02-21
WO2012112834A2 (en) 2012-08-23
BR112013020722A2 (pt) 2016-10-18

Similar Documents

Publication Publication Date Title
US20130279503A1 (en) Next Hop Computation Functions for Equal Cost Multi-Path Packet Switching Networks
EP2676406A2 (en) Next hop computation functions for equal cost multi-path packet switching networks
US8750820B2 (en) Method and apparatus for selecting between multiple equal cost paths
US10033641B2 (en) Deterministic and optimized bit index explicit replication (BIER) forwarding
US9197558B2 (en) Load balancing in shortest-path-bridging networks
US10153967B2 (en) Deterministic and optimized bit index explicit replication (BIER) forwarding
US8503456B2 (en) Flow based path selection randomization
US8885643B2 (en) Method for multicast flow routing selection
US8565239B2 (en) Node based path selection randomization
CN106105130B (zh) 一种在源路由中提供熵源的方法和设备
US20120230225A1 (en) Hash-Based Load Balancing with Per-Hop Seeding
CN104246701A (zh) 用于基于源路由在不同无限带宽子网间路由流量的系统和方法
Detal et al. Revisiting flow-based load balancing: Stateless path selection in data center networks
Shimonishi et al. Building hierarchical switch network using openflow
Nakamura et al. Layer-3 multipathing in commodity-based data center networks

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20130917

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20160901