HK1136875B - Traffic routing - Google Patents

Traffic routing

Info

Publication number
HK1136875B
HK1136875B HK10102084.1A
Authority
HK
Hong Kong
Prior art keywords
path
label switched
network device
node
metric
Prior art date
Application number
HK10102084.1A
Other languages
Chinese (zh)
Other versions
HK1136875A1 (en)
Inventor
Christopher N. Del Regno
Matthew W. Turlington
Scott R. Kotrla
Michael U. Bencheck
Richard C. Schell
Original Assignee
Verizon Services Organization Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/677,699 external-priority patent/US20080205265A1/en
Application filed by Verizon Services Organization Inc.
Publication of HK1136875A1 publication Critical patent/HK1136875A1/en
Publication of HK1136875B publication Critical patent/HK1136875B/en

Description

Traffic routing
Background
Routing data in a network becomes increasingly complex due to increased customer bandwidth requirements, increased overall traffic, and so forth. As a result, network devices often suffer from congestion related problems and may also fail. Links connecting various network devices may also experience such problems and/or fail. When a failure occurs, traffic must be rerouted to avoid the failed device and/or the failed link.
Drawings
FIG. 1 illustrates an exemplary network implementing the systems and methods described herein;
FIG. 2 illustrates an exemplary configuration of a portion of the network of FIG. 1;
FIG. 3 illustrates an exemplary configuration of the network device of FIG. 1;
FIG. 4 is a flow chart illustrating exemplary processing for the various devices illustrated in FIG. 1; and
FIG. 5 illustrates the routing of data via label switched paths in the portion of the network illustrated in FIG. 2.
Detailed Description
The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Furthermore, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and equivalents thereof.
Embodiments described herein relate to network communications and configuring primary and alternate paths in a network. When the primary path is unavailable, data may be rerouted on an alternate path that satisfies a metric associated with a particular user demand.
FIG. 1 is a block diagram of an exemplary network 100 in which systems and methods described herein may be implemented. Network 100 may include network device 110, network device 120, network 130, user devices 140-1 through 140-3, collectively referred to as user devices 140, and user devices 150-1 and 150-2, collectively referred to as user devices 150.
Each of network devices 110 and 120 may include a network node (e.g., a switch, router, gateway, etc.) that receives data and routes the data to a destination device via network 130. In an exemplary embodiment, network devices 110 and 120 may be Provider Edge (PE) devices that use multiprotocol label switching (MPLS) to route data received from various devices, such as user devices 140 and 150. In this case, network devices 110 and 120 may establish Label Switched Paths (LSPs) via network 130, where data forwarding decisions are made using MPLS labels that are included with the data packets and that identify the next hop to which the data is to be forwarded. For example, a device in an LSP may receive a data packet that includes an MPLS label in the header of the data packet. Each hop in the LSP can then use the label to identify the outgoing interface on which to forward the data packet without analyzing other portions of the header, such as the destination address.
As described in detail below, network 130 may include multiple devices and links that may be used to connect network devices 110 and 120. In an exemplary embodiment, network 130 may include a plurality of devices for routing data using MPLS. In this embodiment, network devices 110 and 120 may represent the head-end and tail-end of an LSP, respectively.
Each of user devices 140-1 through 140-3 may represent a user device such as a Customer Premises Equipment (CPE), Customer Edge (CE) device, switch, router, computer, or other device coupled to network device 110. User devices 140 may be connected to network device 110 via wired, wireless, or optical communication mechanisms. For example, user device 140 may be connected to network device 110 via a layer 2 network (e.g., an Ethernet network), a point-to-point link, the Public Switched Telephone Network (PSTN), a wireless network, the Internet, or some other mechanism.
Each of user devices 150-1 and 150-2 may represent a user device similar to user devices 140. That is, user devices 150 may include routers, switches, CPE, CE devices, computers, and the like. User devices 150 may connect to network device 120 using wired, wireless, or optical communication mechanisms.
The exemplary configuration illustrated in FIG. 1 is provided for simplicity. It should be understood that a typical network may include more or fewer devices than illustrated in FIG. 1. In addition, network device 110 is shown as a separate unit from the various user devices 140. In other embodiments, the functions performed by network device 110 and one of user devices 140, described in more detail below, may be performed by a single device or node.
FIG. 2 illustrates an exemplary configuration of a portion of network 130. Referring to FIG. 2, network 130 may include a plurality of nodes 210-1 through 210-4, collectively referred to as nodes 210, a plurality of nodes 220-1 through 220-5, collectively referred to as nodes 220, and a plurality of nodes 230-1 through 230-3, collectively referred to as nodes 230.
Each of nodes 210, 220, and 230 may include a switch, a router, or another network device capable of routing data. In an exemplary embodiment, each of nodes 210, 220, and 230 may represent a network device, such as a router, capable of routing data using MPLS. For example, in one embodiment, network device 110 may represent a head-end of an LSP to network device 120, while network device 120 represents a tail-end. In this embodiment, an LSP from network device 110 to network device 120 may include nodes 210-1 through 210-4, as indicated by the line connecting network device 110 to network device 120 through nodes 210-1 through 210-4. Other LSPs (not shown in fig. 2) may also be established to connect network device 110 to network device 120, as described in detail below.
In an exemplary embodiment, the LSP connecting network device 110 to network device 120 may represent a circuit for a particular customer. In some embodiments, if the LSP illustrated in FIG. 2 experiences congestion or delay in forwarding data, network device 110 may stop routing data via the LSP rather than continue to use the LSP with the associated long latency or delay. In this case, the customer associated with the LSP may actually prefer that the LSP experience down time rather than have data routed with a latency that is longer than the expected latency. Network device 110 and/or network device 120 may also identify a new path in network 130 when the latency of an existing path exceeds a predetermined limit, as described in detail below.
FIG. 3 illustrates an exemplary configuration of network device 110. Network device 120 may be configured in a similar manner. Referring to FIG. 3, network device 110 may include routing logic 310, path metric logic 320, LSP routing table 330, and output device 340.
Routing logic 310 may include a processor, microprocessor, Application Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA), or another logic device or component for receiving a data packet and identifying forwarding information for the data packet. In one embodiment, routing logic 310 may identify an MPLS label associated with a data packet and use the MPLS label to identify a next hop for the data packet.
Path metric logic 320 may include a processor, microprocessor, ASIC, FPGA, or another logic device or component for identifying one or more alternative paths in network 130 that satisfy a particular metric. In an exemplary embodiment, the metric may be the sum of the physical distances between each node in the LSP. The time or latency associated with communicating data via the LSP depends on the physical distance and may be a function of the physical distance between the nodes in the LSP.
In an alternative embodiment, the metric may be an actual amount of time to transmit a packet from an LSP head-end, such as network device 110, to an LSP tail-end, such as network device 120. In yet other embodiments, the metric may be a cost associated with transmitting the data packet from network device 110 to network device 120 via multiple hops in the LSP. In each case, path metric logic 320 may select an appropriate LSP based on a particular metric, as described in detail below.
LSP routing table 330 may include routing information for the LSPs that network device 110 forms with other devices in network 130. For example, in one embodiment, LSP routing table 330 may include an ingress label field, an output interface field, and an egress label field associated with a plurality of LSPs that include network device 110. In this case, routing logic 310 may access LSP routing table 330 to search for information corresponding to the incoming label on a data packet, in order to identify the outgoing interface via which to forward the data packet and the outgoing label to attach to the data packet. Routing logic 310 may also communicate with path metric logic 320 to determine an appropriate LSP, if any, via which to forward the data packet.
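The label lookup just described can be sketched as follows. This is an illustrative model only; the table contents, field names, and interface names are invented, not taken from the patent.

```python
# Hypothetical sketch of an LSP routing table lookup: the incoming MPLS
# label selects an outgoing interface and the outgoing label to append.

def forward(lsp_table, packet):
    """Swap the packet's label per the table and return the egress interface."""
    entry = lsp_table[packet["label"]]       # table is keyed by ingress label
    packet["label"] = entry["egress_label"]  # append/swap the outgoing label
    return entry["egress_interface"]         # interface on which to forward

lsp_table = {
    100: {"egress_interface": "if-1", "egress_label": 200},
    101: {"egress_interface": "if-2", "egress_label": 201},
}

packet = {"label": 100, "payload": b"data"}
egress = forward(lsp_table, packet)  # "if-1"; packet now carries label 200
```

Note that, as in the description above, the forwarding decision never inspects the payload or a destination address; only the label is consulted.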
The output device 340 may include one or more queues via which data packets are output. In one embodiment, output device 340 may include a plurality of queues associated with a plurality of different interfaces via which network device 110 may forward data packets.
As briefly described above, network device 110 may use the label attached to the data packet to determine data forwarding information. Network device 110 may also identify potential alternative paths via which to route data packets. Components in network device 110 may include software instructions embodied in a computer-readable medium, such as a memory. A computer-readable medium may be defined as one or more memory devices and/or carrier waves. The software instructions may be read into memory from another computer-readable medium or another device via a communication interface. The software instructions contained in the memory may cause the various logic components to perform processes that are described subsequently. Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement processes consistent with the principles of the invention. Thus, the systems and methods described herein are not limited to any specific combination of hardware circuitry and software.
Fig. 4 is a flow diagram illustrating exemplary processing associated with routing data in network 100. In this example, processing may begin with establishing an LSP in network 130 (act 410). In the exemplary embodiment, assume that network device 110 wishes to establish an LSP with network device 120 via node 210 (FIG. 2). In this case, network device 110 may initiate establishing the LSP by sending a request to establish the LSP with network device 120 that includes label information identifying the label to be used in the LSP and also identifying the destination or tail end of the LSP (i.e., network device 120 in this example). This label information may then be forwarded hop-by-hop to other nodes in the LSP. That is, node 210-1 may receive the request to establish the LSP, store the label information in its memory (e.g., in an LSP routing table), and forward the request and label information to node 210-2, followed by node 210-2 forwarding the request to node 210-3, node 210-3 forwarding the request to node 210-4, and node 210-4 forwarding the request to network device 120.
Each of nodes 210 may store the label information in its respective memory, such as an LSP routing table similar to LSP routing table 330. As previously described, LSP routing table 330 may include information identifying an ingress label, an egress interface corresponding to the ingress label, and an egress label appended to a data packet forwarded to a next hop. After network device 120 receives the request and forwards an acknowledgement back to network device 110, an LSP (the first hop of which is labeled 500 in FIG. 5 and referred to herein as LSP 500 or path 500) may be established from network device 110 to network device 120. Thereafter, when network device 110 receives a packet with an MPLS label, routing logic 310 searches LSP routing table 330 for the label and identifies the outgoing interface on which to forward the packet. Routing logic 310 may also identify an outgoing label in LSP routing table 330 for the data packet and append the outgoing label to the packet. The next hop then uses the outgoing label to identify the data forwarding information.
Network device 110 may establish additional LSPs with nodes in network 130 (act 420). For example, network device 110 may establish another or alternative LSP from network device 110 to network device 120 via nodes 220 and 210 in an alternative manner as illustrated by the dashed path in FIG. 5 (the first hop of which is labeled 510 in FIG. 5 and is referred to herein as LSP 510 or path 510). In this case, network device 110 may forward the request to establish the LSP to node 220-1 along with label information associated with the LSP, node 220-1 forwards the label information to next hop 210-1, the next hop 210-1 forwards the label information to next hop 220-2, and so on to network device 120.
In an exemplary embodiment, network device 110 may establish another alternative LSP in network 130 to network device 120 via node 230 (the first hop of which is labeled 520 in FIG. 5 and is referred to herein as LSP 520 or path 520). In this case, network device 110 may forward the request to establish the LSP to node 230-1 along with label information associated with the LSP, which node 230-1 forwards to next hop 230-2, which next hop 230-2 forwards to next hop 230-3, which next hop 230-3 forwards to network device 120.
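The hop-by-hop establishment described above can be sketched as follows, a hedged model in which the head-end's request carries label information and each hop records label state before forwarding toward the tail-end. The node names mirror FIG. 2, but the label values and table layout are invented for illustration.

```python
# Sketch of hop-by-hop LSP establishment: install label state at every hop
# except the tail-end, producing the per-node tables that define the LSP.

def establish_lsp(hops, labels):
    """Return a dict mapping each non-tail node to its label-state entry."""
    tables = {}
    for i in range(len(hops) - 1):
        tables[hops[i]] = {
            "ingress_label": labels[i],
            "egress_label": labels[i + 1],   # label the next hop expects
            "next_hop": hops[i + 1],
        }
    return tables

# Path 500 from FIG. 2: head-end 110, nodes 210-1 through 210-4, tail-end 120.
hops = ["110", "210-1", "210-2", "210-3", "210-4", "120"]
labels = [100, 101, 102, 103, 104, 105]
tables = establish_lsp(hops, labels)
```

Once every hop holds its entry, a packet labeled 100 at the head-end is relabeled at each node until it reaches the tail-end, matching the forwarding behavior described earlier.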
After LSPs 500, 510, and 520 have been established, routing logic 310 may designate path 500 as the primary LSP for use in routing data to network device 120. Routing logic 310 may also designate LSPs 510 and 520 as alternative paths.
Assume that data is routed in network 100 using LSP 500. Assume further that data transmitted from network device 110 to network device 120 must be delivered within a predetermined time; that is, the path metric associated with communicating data via LSP 500 must satisfy a predetermined path metric. For example, assume that the path metric is the sum of the physical distances between each hop in LSP 500. As previously described, the total time for transmitting data from network device 110 to network device 120 may be a function of the distance between hops.
In this example, it is assumed that network device 110 (and possibly other nodes in network 130) may store distance information identifying physical distances (or values representing physical distances) between itself and various other nodes in network 130. For example, network device 110 may store information identifying the distance to node 210-1, the distance to node 220-1, and the distance to node 230-1. Network device 110 may also store information identifying the physical distance between other nodes, such as the distance between nodes 210-1 and 210-2, the distance between nodes 210-2 and 210-3, the distance between nodes 220-1 and 210-1, and so on. In this example, assume that the distance between each hop in LSP 500 corresponds to a value of 10. That is, the physical distance may be an assigned value corresponding to the physical distance. In this case, since there are five hops in LSP 500 each having a value of 10, the total value is 50. Assume further that the maximum path cumulative metric limit (PAML) (e.g., a value that LSP 500 must not exceed) for an LSP from network device 110 to network device 120 is 150. It should be appreciated that the particular PAML may be higher or lower based on particular requirements associated with, for example, customers associated with the LSP from network device 110 to network device 120. For example, a customer associated with the user device 140-1 may want to ensure that data transmitted via the network 130 is transmitted within a guaranteed time. In this case, the customer and the entities associated with network 130 may have negotiated a guaranteed Service Level Agreement (SLA) regarding the delivery time.
Assume that LSP 500 experiences a failure. For example, a link connecting two hops in LSP 500 may fail, one of nodes 210 may fail, and so on. Network device 110 may detect the failure based on, for example, the absence of an acknowledgement message to the signal transmitted to node 210-1, a timeout associated with a handshake signal, or some other failure indication associated with LSP 500.
Path metric logic 320 may then identify whether an alternate path having a path metric less than the PAML is available (act 430). For example, for path 510, path metric logic 320 may determine that each link between hops in path 510 has a path metric value of 50. In this case, path metric logic 320 determines that the total path metric associated with path 510 is 500 (i.e., 10 links, each having a value of 50), which in this example is greater than the PAML value of 150. Thus, path metric logic 320 does not signal routing logic 310 to use path 510.
Path metric logic 320 may then check the path metric associated with path 520. In this case, assume that path metric logic 320 determines that the metric associated with each link in path 520 is equal to a value of 25, resulting in a total path metric value of 100. In this case, the path metric value is less than the PAML of 150. Path metric logic 320 may then signal routing logic 310 to use path 520 (act 440). An LSP corresponding to path 520 may then be established, as described above. In other instances, LSP 520 may have been previously established.
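The alternate-path decision above can be sketched with the example's own numbers: path 510 has 10 links of metric 50 (total 500), path 520 has links of metric 25 totaling 100, and the PAML is 150. The helper names are illustrative.

```python
# Worked sketch of the PAML comparison performed by path metric logic 320.

PAML = 150

def total_metric(per_link_values):
    """An LSP's path metric is the sum of its per-link metric values."""
    return sum(per_link_values)

path_510 = total_metric([50] * 10)  # 500: exceeds the PAML, so rejected
path_520 = total_metric([25] * 4)   # 100: below the PAML, so usable

usable = [name for name, m in [("510", path_510), ("520", path_520)] if m < PAML]
```

The same comparison applies regardless of whether the per-link value represents distance, time, or cost, since only the accumulated total is tested against the limit.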
Network device 110 may then begin routing data to network device 120 via LSP 520. In this manner, path metric logic 320 may identify paths or LSPs that satisfy the PAML used by network device 110.
If path metric logic 320 is not able to identify a path that satisfies the PAML, network device 110 may allow the path from network device 110 to network device 120 to remain down (act 450). That is, rather than having data transmitted via another path (e.g., path 510) that has excessive delay or latency, particular customers associated with LSP 500 may prefer that their connections/services remain in a "hard failure" state.
Path metric logic 320 may also continue to search for another path using, for example, a constraint-based shortest path first (CSPF) algorithm (act 460). In this case, the CSPF algorithm attempts to identify paths that satisfy the PAML. If path metric logic 320 identifies such a path, path metric logic 320 may signal routing logic 310 to use the path (act 440).
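One way such a constrained search can work is ordinary Dijkstra with paths pruned once their accumulated metric reaches the limit. The sketch below is a minimal illustration of that idea, not the patent's algorithm; the topology and weights are hypothetical, not taken from FIG. 2.

```python
# Minimal CSPF sketch: shortest-path search that discards any partial path
# whose accumulated metric reaches the PAML.
import heapq

def cspf(graph, src, dst, paml):
    """Return (metric, path) for the cheapest src->dst path whose total
    metric stays below paml, or None if no such path exists."""
    heap = [(0, src, [src])]
    best = {}
    while heap:
        cost, node, path = heapq.heappop(heap)
        if cost >= paml:
            continue                 # constraint: prune over-limit paths
        if node == dst:
            return cost, path        # first pop of dst is the cheapest
        if best.get(node, float("inf")) <= cost:
            continue                 # already reached this node more cheaply
        best[node] = cost
        for nbr, w in graph.get(node, {}).items():
            heapq.heappush(heap, (cost + w, nbr, path + [nbr]))
    return None

graph = {
    "110": {"a": 10, "b": 50},
    "a": {"120": 10},
    "b": {"120": 10},
}
result = cspf(graph, "110", "120", paml=150)  # (20, ["110", "a", "120"])
```

With a tighter limit (e.g., `paml=15`) the same call returns `None`, mirroring the "remain down" behavior of act 450.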
However, in alternative embodiments, network device 110 and/or nodes in network 130 may be configured to perform a fast reroute function, where a node is configured to identify an alternate path for forwarding a particular data packet if a link or path is down. In this case, it may not be necessary to pre-provision any backup LSP (e.g., LSP 510 or LSP 520). For example, if the first link in LSP 500 is down, network device 110 may automatically signal node 220-1 and/or 230-1 that a fast reroute operation is to occur and establish an LSP to network device 120 based on link availability. Other nodes in network 130 may be similarly configured to perform fast reroute operations so that data may be forwarded from network device 110 to network device 120 hop-by-hop. In this manner, an LSP may be quickly formed (e.g., within 50 milliseconds or less) from network device 110 to network device 120.
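The per-node decision in a fast reroute can be sketched as choosing any alternate neighbor with a live link when the primary next hop's link is down. The neighbor names mirror FIG. 2, but the link-state map and function are invented for illustration.

```python
# Hypothetical per-node fast-reroute choice: no pre-provisioned backup LSP,
# just the first reachable alternate neighbor toward the tail-end.

def fast_reroute(primary, alternates, link_up):
    """Return the primary next hop if its link is up, otherwise the first
    live alternate, or None when every candidate link is down."""
    if link_up.get(primary):
        return primary
    for node in alternates:
        if link_up.get(node):
            return node
    return None

link_up = {"210-1": False, "220-1": False, "230-1": True}
next_hop = fast_reroute("210-1", ["220-1", "230-1"], link_up)  # "230-1"
```

Because each node repeats this purely local check, a working hop-by-hop path can form without head-end signaling, which is what makes the sub-50-millisecond recovery described above plausible.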
In each case (i.e., identifying an alternate path/LSP, performing fast reroute, or allowing the LSP to remain in a hard failure state), assume that the failure or problem associated with LSP 500 has been resolved (act 470). That is, primary LSP 500 becomes available to route data from network device 110 to network device 120 such that its path metric is less than the PAML. In this case, routing logic 310 may switch from the alternative LSP (i.e., LSP 520 in this example) back to LSP 500 (act 480). In addition, routing logic 310 may switch to LSP 500 in a "make before break" manner. That is, routing logic 310 may switch back to LSP 500 while ensuring that no data packets are dropped, by, for example, waiting for LSP 500 to be reinitialized and/or ready to receive/transmit data before switching.
In the above example, switching from a primary LSP to a backup LSP was described as being caused by a link failure and/or a device failure. In other instances, the switchover may occur due to congestion and/or latency issues associated with particular devices/portions of the LSP. That is, if a particular portion of the LSP is subject to latency issues that may, for example, render it unable to provide a desired level of service, such as a guaranteed level of service associated with a Service Level Agreement (SLA), network device 110 may switch, or another device in network 100 may signal network device 110 to switch, to a backup LSP. In each case, network device 110 may switch back to the primary LSP when the problem (e.g., latency, failure, etc.) is resolved. In this way, routing in network 100 can be optimized.
Embodiments described herein provide for routing data within a network via a primary path or a backup path. The path may be an LSP that meets certain requirements or metrics associated with routing data from one device to another.
The foregoing description of exemplary embodiments provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention.
For example, various features are described above for network device 110 identifying an LSP on which to route data. In other embodiments, a control node in network 130 may identify an LSP on which to route data.
In addition, features are described herein primarily with respect to identifying particular paths that satisfy a PAML associated with a physical distance between hops in an LSP. In other embodiments, the PAML may be the actual time associated with data communicated via the LSP. In such an embodiment, path metric logic 320 or another device in network 130 may determine the total time associated with data transmitted from network device 110 to network device 120 by, for example, periodically injecting test packets into LSP 500 and monitoring when network device 120 receives them, such as via a response message from network device 120. In other embodiments, one or more monitoring devices in network 130 may track the actual propagation time associated with transmitting actual customer traffic via LSP 500.
For example, a timestamp may be included within a data packet transmitted from network device 110. Each node along LSP 500 may determine a per-link propagation time based on when the data packet was received, and a total propagation time may be determined by summing the individual propagation times for each link in LSP 500. For example, if each of the five links in LSP 500 has a propagation time of 30 milliseconds, then path metric logic 320 may determine that the total propagation time via LSP 500 is 150 milliseconds. In this case, the PAML may be a value representing the actual time.
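The timestamp arithmetic above can be sketched as follows, using the example of five 30-millisecond links. The timestamp values and helper name are illustrative assumptions, not from the patent.

```python
# Sketch of timestamp-based latency measurement: per-link propagation times
# are the differences between timestamps recorded at consecutive hops, and
# the end-to-end total is compared against a time-valued PAML.

def link_times_ms(timestamps_ms):
    """Per-link propagation times from timestamps recorded at each hop."""
    return [t2 - t1 for t1, t2 in zip(timestamps_ms, timestamps_ms[1:])]

# Timestamps recorded at network device 110, nodes 210-1..210-4, device 120.
stamps = [0, 30, 60, 90, 120, 150]
links = link_times_ms(stamps)   # [30, 30, 30, 30, 30]
latency = sum(links)            # 150 ms end-to-end
within_limit = latency < 150    # False: the metric must stay below the PAML
```

In practice clock synchronization between hops would matter; this sketch assumes all timestamps share one clock.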
In yet another embodiment, the PAML may be associated with a cost for transmitting the data packet. In this case, each link in network 130 may have an associated cost for transmitting data via that link. Network device 110 may therefore identify an LSP for which the total cost is less than the PAML.
Additionally, while a series of acts has been described with regard to FIG. 4, the order of the acts may be varied in other implementations. In addition, independent acts may be implemented in parallel.
It will be apparent to one of ordinary skill in the art that various features described above may be implemented, in the implementations illustrated in the figures, in many different forms of software, firmware, and hardware. The actual software code or specialized control hardware used to implement various features is not limiting of the invention. Thus, the operation and behavior of the features of the invention were described without reference to the specific software code, it being understood that one of ordinary skill in the art would be able to design software and control hardware to implement the features based on the description herein.
Furthermore, certain portions of the invention may be implemented as "logic" that performs one or more functions. This logic may include hardware, such as a processor, microprocessor, application specific integrated circuit, or field programmable gate array, software, or a combination of hardware and software.
No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article "a" is intended to include one or more items. Where only one item is intended, the term "one" or similar language is used. Further, the phrase "based on" means "based, at least in part, on" unless explicitly stated otherwise.

Claims (16)

1. A method for routing data, comprising:
in response to a first request sent from a first node to a second node via a plurality of first hops, forming a first label switched path from the first node to the second node including the plurality of first hops, wherein a first path metric associated with the first label switched path is less than a maximum path cumulative metric limit;
in response to a second request sent from the first node to the second node via a plurality of second hops, forming a second label switched path from the first node to the second node including the plurality of second hops;
determining whether a second path metric associated with the second label switched path is less than the maximum path cumulative metric limit;
designating, by the first node, the first label switched path as a primary label switched path to the second node;
designating, by the first node, the second label switched path as an alternate label switched path to the second node;
transmitting customer traffic via the primary label switched path, the customer traffic comprising data packets having time stamps;
detecting that a first path metric associated with the primary label switched path is not less than the maximum path cumulative metric limit, wherein the detecting comprises:
calculating a total propagation time associated with the primary label switched path based on the time stamps included in the customer traffic;
routing data from the first node to the second node via the alternate label switched path in response to detecting that a first path metric associated with the primary label switched path is not less than the maximum path cumulative metric limit based on the calculated total propagation time and in response to determining that the second path metric is less than the maximum path cumulative metric limit;
detecting a restoration of the primary label switched path; and
automatically switching back to routing data on the primary label switched path in response to the restoration, wherein the automatic switching is performed in a make-before-break manner.
2. The method of claim 1, wherein the second path metric is determined by summing values associated with each of a plurality of links in the second label switched path.
3. The method of claim 2, wherein the value corresponds to a distance of each of the plurality of links.
4. The method of claim 2, wherein the value corresponds to a time or latency associated with routing data via each of the plurality of links.
5. The method of claim 2, wherein the value corresponds to a cost associated with routing data via each of the plurality of links.
6. The method of claim 1, further comprising:
when a path having a path metric less than the maximum path cumulative metric limit cannot be identified, allowing the link from the first node to the second node to remain in a down state.
7. A first network device comprising:
a routing table comprising:
identifying information of the incoming label;
a list of egress interfaces of the first network device corresponding to the ingress tag; and
an egress tag to be appended to a data packet received at the first network device; and
means for determining a first path metric associated with a first label switched path from the first network device to a second network device comprising a plurality of first hops in a network, wherein the first path metric is less than a maximum path cumulative metric limit;
means for searching the routing table for a particular incoming label associated with a data packet received at the first network device;
means for identifying a particular egress interface from the list of egress interfaces corresponding to the particular ingress label using the routing table;
means for appending a particular outgoing label from the routing table to the data packet;
means for routing the data packet from the particular egress interface to the second network device via the first label switched path based on the appended egress label;
means for transmitting customer traffic via the first label switched path, the customer traffic comprising data packets having time stamps;
means for identifying a problem in the first label switched path, wherein when a problem is identified:
calculating a total propagation time associated with the first label switched path based on the time stamps included in the customer traffic; and
determining, based on the calculated total propagation time, that a first path metric associated with the first label switched path is no longer less than the maximum path cumulative metric limit;
means for identifying a second label switched path from the first network device to the second network device, wherein the second label switched path comprises a plurality of second hops in the network from the first network device to the second network device;
means for determining whether a second path metric associated with the second label switched path is less than the maximum path cumulative metric limit;
means for routing data via the second label switched path using the routing table in response to the identified problem when the second path metric is less than the maximum path cumulative metric limit;
means for detecting a restoration of the first label switched path; and
means for automatically switching back to routing data on the first label switched path in response to the restoration.
8. The first network device of claim 7, further comprising: means for summing values associated with each of a plurality of links from the first network device to the second network device when determining whether the second path metric is less than the maximum path cumulative metric limit.
9. The first network device of claim 8, wherein the value corresponds to a time associated with routing data via each of the plurality of links.
10. The first network device of claim 8, wherein the value corresponds to a cost associated with routing data via each of the plurality of links.
11. The first network device of claim 8, further comprising:
means for, in response to the identified problem, refraining from routing data from the first network device to the second network device when the second path metric is not less than the maximum path cumulative metric limit.
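Claims 8 through 11 describe computing a path metric by summing per-link values (a time per claim 9, a cost per claim 10) and refraining from routing when the sum is not less than the limit. A minimal sketch of that determination (the function names are illustrative assumptions, not claim language):

```python
def path_metric(link_values):
    # Sum the value (e.g. a per-link time or cost) associated with
    # each of the plurality of links from the first network device
    # to the second network device.
    return sum(link_values)

def may_route(link_values, max_cumulative_metric):
    # Route only while the summed path metric is less than the maximum
    # path cumulative metric limit; otherwise refrain from routing over
    # this path (cf. claim 11).
    return path_metric(link_values) < max_cumulative_metric
```

With a limit of 100, a path whose links sum to 60 is usable, while one summing to 110 is not.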
12. A method for routing data, comprising:
sending a first request via a plurality of first hops in a network to establish a first label switched path from a first node to a second node;
determining that the first label switched path has a first path metric that is less than a maximum path cumulative metric limit, wherein the first request includes label information identifying a label to be used by each of the plurality of first hops;
transmitting customer traffic via the first label switched path, the customer traffic comprising data packets having time stamps;
detecting a failure or problem in the first label switched path, wherein detecting the failure or problem comprises:
calculating a total propagation time associated with the first label switched path based on the time stamps included in the customer traffic;
in response to detecting a failure or problem, sending a second request via a plurality of second hops in the network to establish a second label switched path from the first node to the second node;
determining whether the second label switched path has a second path metric that is less than the maximum path cumulative metric limit, wherein the maximum path cumulative metric limit corresponds to at least one of a distance, a time, or a cost associated with routing data; and
routing data received at the first node and responsive to the detected failure or problem via the second label switched path when the second path metric is less than the maximum path cumulative metric limit.
13. The method of claim 12, further comprising:
detecting a restoration of the first label switched path; and
automatically routing data on the first label switched path in response to the restoration.
14. The method of claim 12, wherein determining whether the second path metric is less than the maximum path cumulative metric limit comprises:
summing values corresponding to distances associated with links in the second label switched path, and
determining whether the sum is less than the maximum path cumulative metric limit.
15. The method of claim 12, wherein determining whether the second path metric is less than the maximum path cumulative metric limit comprises:
summing values corresponding to a time associated with transmitting data via a link in the second label switched path, and
determining whether the sum is less than the maximum path cumulative metric limit.
16. The method of claim 12, wherein determining whether the second path metric is less than the maximum path cumulative metric limit comprises:
summing values corresponding to a cost associated with transmitting data via a link in the second label switched path, and
determining whether the sum is less than the maximum path cumulative metric limit.
HK10102084.1A 2007-02-22 2008-02-15 Traffic routing HK1136875B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US11/677,699 2007-02-22
US11/677,699 US20080205265A1 (en) 2007-02-22 2007-02-22 Traffic routing
PCT/US2008/054068 WO2008103602A2 (en) 2007-02-22 2008-02-15 Traffic routing

Publications (2)

Publication Number Publication Date
HK1136875A1 (en) 2010-07-09
HK1136875B (en) 2013-04-19
