US20060039364A1 - Systems and methods for policy-enabled communications networks - Google Patents
- Publication number
- US20060039364A1 (application US 11/250,076)
- Authority
- US
- United States
- Prior art keywords
- policy
- network
- label switching
- switching network
- traffic
- Prior art date
- Legal status: Abandoned (the status is an assumption and is not a legal conclusion)
Classifications
- H04M3/2263—Network management (arrangements for supervision, monitoring or testing in telephone networks)
- H04L41/0893—Assignment of logical groups to network elements
- H04L41/0894—Policy-based network configuration management
- H04L45/50—Routing or path finding of packets using label swapping, e.g. multi-protocol label switch [MPLS]
- H04L47/10—Flow control; Congestion control
- H04L47/20—Traffic policing
- H04L2012/6443—Hybrid transport; Network Node Interface, e.g. Routing, Path finding
- H04L2012/6445—Hybrid transport; Admission control
- H04L41/0213—Standardised network management protocols, e.g. simple network management protocol [SNMP]
Definitions
- Embodiments of the present invention relate to communications networks. More particularly, embodiments of the present invention relate to systems and methods for policy-enabled communications networks.
- A policy control can specify that a data packet received by a network element of a communications network from a particular source is to be routed through the network in a specific way instead of the default way.
- A policy control can also be role-based and apply to certain network elements (e.g., edge routers) instead of other network elements (e.g., internal routers).
- Embodiments of the present invention relate to systems and methods for policy-based management of a multiprotocol label switching network.
- A system includes a policy-based network administration system, and the policy-based network administration system includes a plurality of policies.
- The system also includes an MPLS network, which is coupled to the policy-based network administration system.
- FIG. 1 is a schematic diagram of the general architecture of an embodiment of a policy-based network management system.
- FIG. 2 is a schematic diagram illustrating a policy architecture for admission control decisions.
- FIG. 3 shows another configuration of a policy control architecture.
- FIG. 4 shows an illustration of a multipoint-to-point Label Switched Path traversing an MPLS network.
- FIG. 5 shows an example of the use of MPLS in a hierarchy.
- FIG. 6 shows a schematic diagram of an intra-network architecture of a policy-based network management system.
- FIG. 7 illustrates a generic policy-based network architecture in the context of an MPLS network.
- FIG. 8 shows an illustration of policy-based management with scaling by automation.
- FIG. 9 shows an illustration of policy-based management with scaling by roles without closed loop policies triggered by network state.
- FIG. 10 shows an illustration of a large metro scale voice service architecture based in part on an MPLS network.
- FIG. 11 shows an illustration of a policy-based management architecture for voice services over an MPLS network.
- A system for policy management for an MPLS network includes life cycle management (e.g., creating, deleting, monitoring, and so forth) of Label Switched Paths (“LSPs”) through the MPLS network.
- The policy management includes controlling access (e.g., LSP Admission Control) to the life-cycle-managed resources for traffic on the MPLS network.
- MPLS can support explicit traffic engineering via a number of specifications that allow LSPs to be managed based on Quality-of-Service (“QoS”) and other constraints, such as, for example, Constraint-based Routing Label Distribution Protocol (“CR-LDP”), Resource Reservation Protocol (“RSVP”), and so on.
- MPLS can also be used with implicit traffic engineering of LSP Quality of Service via specific QoS mechanisms (e.g., DiffServ, Int-Serv, etc.).
- The policy management architecture used to control traffic engineering functionality can be independent of the MPLS mechanisms used and can provide consistent, predictable network services.
- MPLS policy control is intra-domain and can be based on using Common Open Policy Service (“COPS”) to implement policy management.
- FIG. 1 is a schematic diagram of the general architecture of an embodiment of a policy-based network management system.
- Policy-based networking provides an infrastructure for management of networks with a rich set of management capabilities.
- The basic components of a policy-based management system can include a Policy Decision Point (“PDP”) 120 and a Policy Enforcement Point (“PEP”) 130.
- The PDP 120 can be a logical component residing within a Policy Server, and the PEP 130 can be a logical component, usually residing in a network device.
- Other components of a policy management system can include a policy management console (“PMC”) 100 to provide a human interface to the policy system and a policy repository (“PR”) 110 to store the policy.
- The PMC 100 can be used to generate policies for storage in the policy repository 110 and to administer the distribution of policies across various PDPs 120.
- Policies may also be imported into the system via other mechanisms. For example, they may be retrieved from a Lightweight Directory Access Protocol (“LDAP”) directory and stored directly into the policy repository 110 .
- From the PDP 120, policy rules may be installed in the network and implemented at one or more PEPs 130.
- Decisions regarding what policy rules are to be installed in the network devices can be the result of several different events. There are primarily two models of policy management (provisioning and outsourcing) that determine how and when policy decisions get made.
- Policy provisioning events occur at the PDP 120 and may cause the PDP 120 to install policy rules in the one or more PEPs 130. Examples of such events include human intervention (via the policy management console 100), signaling from an external application server, feedback about dynamic state changes in the devices that the PDP 120 is managing, and so forth.
- Policy outsourcing events can occur at the PEPs 130 and require a policy-based decision, which the PEP 130 can request from the PDP 120.
- An example of this type of event is the receipt of an RSVP message, or a message of some other network signaling protocol, containing policy information and a request for resource reservation.
- The PEP 130 sends a message to the PDP 120 requesting a decision, based on the policy information provided, whether to accept or deny the resource reservation request.
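- As a minimal sketch of this outsourcing model (the class names, rule fields, and values below are hypothetical, not the patent's implementation), a PEP can forward the policy information from a reservation request to a PDP and enforce the returned decision:

```python
# Illustrative sketch of the COPS-style outsourcing model: a PEP forwards
# policy information from a resource-reservation request to a PDP and
# enforces the returned decision. All names and rules here are invented.

class PDP:
    def __init__(self, rules):
        self.rules = rules  # e.g., allowed users and a bandwidth cap

    def decide(self, request):
        """Return 'ACCEPT' or 'DENY' for an outsourced policy request."""
        if request["user"] not in self.rules["allowed_users"]:
            return "DENY"
        if request["bandwidth_bps"] > self.rules["max_bandwidth_bps"]:
            return "DENY"
        return "ACCEPT"

class PEP:
    def __init__(self, pdp):
        self.pdp = pdp

    def on_reservation_request(self, request):
        # Outsourcing event: the PEP cannot decide locally, so it asks the PDP.
        return self.pdp.decide(request) == "ACCEPT"

pdp = PDP({"allowed_users": {"alice", "bob"}, "max_bandwidth_bps": 2_000_000})
pep = PEP(pdp)
print(pep.on_reservation_request({"user": "alice", "bandwidth_bps": 1_000_000}))  # True
print(pep.on_reservation_request({"user": "eve", "bandwidth_bps": 500_000}))      # False
```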
- FIG. 2 is a schematic diagram illustrating a policy architecture for admission control decisions.
- A network node 200 can include a PEP 210 and a Local Policy Decision Point (“LPDP”) 220.
- The PEP 210 can first use the LPDP 220 to reach a local partial decision.
- The partial decision and the original policy request next can be sent to a PDP 230 that renders a final decision (e.g., considering and approving or considering and overriding the LPDP 220).
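- A small sketch of this two-stage decision, under assumed local and domain-wide bandwidth limits (the function names and limits are illustrative only):

```python
# Hypothetical sketch of the two-stage admission decision of FIG. 2: the PEP
# consults a Local PDP for a partial decision, then the PDP renders a final
# decision, which may approve or override the local one.

def lpdp_partial_decision(request, local_limit_bps):
    # Local check against resources this node knows about.
    return "ACCEPT" if request["bandwidth_bps"] <= local_limit_bps else "DENY"

def pdp_final_decision(request, partial, domain_budget_bps, committed_bps):
    # The PDP renders the final decision; it may override the LPDP.
    if partial == "DENY":
        return "DENY"
    if committed_bps + request["bandwidth_bps"] > domain_budget_bps:
        return "DENY"  # override: locally fine, but domain-wide budget exceeded
    return "ACCEPT"

req = {"bandwidth_bps": 800_000}
partial = lpdp_partial_decision(req, local_limit_bps=1_000_000)
final = pdp_final_decision(req, partial, domain_budget_bps=5_000_000,
                           committed_bps=4_500_000)
print(partial, final)  # ACCEPT DENY -> the PDP overrides the local decision
```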
- FIG. 3 shows another configuration of a policy control architecture.
- A network node 300 can include a PEP 310 and a PDP 320.
- The policy management system can include feedback from a PEP (e.g., PEPs 130, 210, 310) to a PDP (e.g., PDPs 120, 220, 230, 320).
- The feedback can include information such as changes in the dynamic state of network resources, link failures and congestion, statistics related to installed policy, etc.
- The information supplied by the PEPs may be used by the PDP to make future policy-based decisions or to make changes to current decisions, regardless of the implemented policy management model.
- Policy protocols have been developed, such as COPS, that can provide this robust feedback mechanism for policy management applications.
- A PDP can receive feedback on a variety of parameters such as flow characteristics and performance.
- FIG. 4 is an illustration of a multipoint-to-point (“MPt-Pt”) Label Switched Path (“LSP”) traversing an MPLS network.
- An LSP in MPLS is typically a sink-based tree structure traversing a series of Label Switch Routers (“LSRs”) 451-453 between ingress and egress Edge Routers (“ERs”) 410-412.
- A merging function can be implemented at the LSRs.
- A merging function may not be supported by certain classes of equipment (e.g., legacy ATM switches), and Point-to-Point LSPs are a degenerate case of MPt-Pt LSPs where no merging is performed.
- Forwarding can be considered in terms of two functions. The first function can classify all possible packets into a set of Forwarding Equivalence Classes (“FECs”).
- The second function can map each FEC to a next hop.
- A particular router will typically consider two packets to be in the same FEC if there is some address prefix X in that router's routing tables such that X is the “longest match” for each packet's destination address.
- In conventional network layer forwarding, each hop in turn re-examines the packet and assigns it to a FEC.
- In MPLS, the assignment of a particular packet to a particular FEC can be done just once.
- Thereafter, there is no further analysis of the packet's network layer header, which has a number of advantages over conventional network layer forwarding, including, for example, the following.
- MPLS forwarding can be done by switches that are capable of doing label lookup and replacement (e.g., ATM switches).
- MPLS allows (but does not require) the class of service to be inferred from the label.
- The label represents the combination of a FEC and Quality of Service.
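- The following sketch illustrates longest-prefix-match FEC assignment performed once at the ingress; the prefixes, FEC names, and label values are invented for illustration:

```python
# A minimal sketch of ingress FEC assignment by longest prefix match, done
# once at the MPLS ingress; subsequent hops forward on the label alone.
import ipaddress

FEC_TABLE = {  # prefix -> (FEC name, label encoding FEC + class of service)
    "10.0.0.0/8":  ("fec-coarse", 100),
    "10.1.0.0/16": ("fec-finer", 200),
    "10.1.2.0/24": ("fec-finest", 300),
}

def classify(dst_ip):
    """Assign a packet to the FEC whose prefix is the longest match."""
    addr = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, binding in FEC_TABLE.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, binding)
    return best[1] if best else None

print(classify("10.1.2.7"))   # ('fec-finest', 300) -- /24 wins over /16 and /8
print(classify("10.9.9.9"))   # ('fec-coarse', 100)
```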
- MPLS also permits the use of labels in a hierarchical form in a process known as label stacking.
- FIG. 5 shows an example of the use of MPLS in a hierarchy.
- MPLS may operate in a hierarchy, for example, by using three transit routing domains such as domains 501 , 502 , and 503 .
- Domain Boundary Routers 511 - 512 , 521 - 522 , and 531 - 532 are shown in each domain and can be operating under the Border Gateway Protocol (“BGP”).
- Internal routers are not illustrated in domains 501 and 503.
- Internal routers 525-528 are illustrated within domain 502.
- The path between routers 521 and 522 follows the internal routers 525, 526, 527, and 528 within domain 502.
- The internal routers can operate an interior routing protocol such as Open Shortest-Path First (“OSPF”).
- The domain boundary routers 511-512, 521-522, and 531-532 can operate BGP to determine paths between routing domains 501, 502, and 503.
- MPLS allows label forwarding to be done independently at multiple levels.
- When an IP packet traverses domain 502, it can contain two labels encoded as a “label stack”.
- The higher-level label may be used between routers 521 and 522 and encapsulated inside a header specifying a lower-level label used within domain 502.
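- A toy sketch of this label stacking (label values and level names are invented):

```python
# Hypothetical sketch of hierarchical label stacking as in FIG. 5: entering
# domain 502, a lower-level label is pushed on top of the inter-domain label;
# it is popped on exit, leaving the higher-level label untouched.

def push_label(packet, label):
    packet["label_stack"].append(label)  # top of stack is the last element

def pop_label(packet):
    return packet["label_stack"].pop()

packet = {"payload": "ip-datagram", "label_stack": []}
push_label(packet, ("bgp-level", 17))   # label used between routers 521 and 522
push_label(packet, ("igp-level", 42))   # label used within domain 502
print(packet["label_stack"])            # [('bgp-level', 17), ('igp-level', 42)]
pop_label(packet)                       # leaving domain 502
print(packet["label_stack"])            # [('bgp-level', 17)]
```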
- A policy-enabled MPLS network includes Policy Information Base (“PIB”) elements that identify LSPs and policy actions that affect LSPs, such as, for example, admission of flows to LSPs, LSP life cycle operations such as creation/deletion of LSPs, and so on.
- Policy controls for MPLS can provide a rich environment for the creation of network services in an efficient manner. Operational advantages in a policy-based approach to the management and control of MPLS networks include the following:
- MPLS Abstraction: While MPLS can be controlled directly through relevant Management Information Bases (“MIBs”), the use of a higher abstraction level PIB provides a mechanism to abstract away some of the implementation options within MPLS and to focus on operational advantages such as, for example, those provided by explicit routing capabilities.
- Policy-based networking architecture can provide a mechanism to link service level objectives of the network to specific protocol actions within MPLS.
- FIG. 6 shows a schematic diagram of an intra-network architecture of a policy-based network management system.
- The Edge Label Switch Routers (“ELSRs”) 641, 643 become PEPs, as they are involved in the admission control of flows to the LSP.
- Intervening LSRs, such as LSR 642, may also be PEPs, for example, in the case of MPt-Pt LSPs.
- Embodiments can use a generic computing platform and leave the LSR as a Policy Ignorant Node (“PIN”), or consider them the same piece of equipment.
- Embodiments of the present invention relate to one or more of two main categories of policies for MPLS: (1) LSP Admission Policies that map traffic flows onto LSPs; and (2) LSP Life Cycle Policies affecting LSP creation, deletion, configuration, and monitoring.
- Mapping traffic flows onto LSPs involves a policy system setting up classifiers in the ingress LSR(s) of an LSP to identify which packets get admitted onto the LSP and to process the packets accordingly.
- Label switched paths can be associated with a Forwarding Equivalence Class (“FEC”) that specifies which packets are to be sent onto the LSP.
- Classifiers from the policy server can define the characteristics of the FEC, and packets/flows that match these characteristics are sent over the LSP.
- The FEC that gets mapped onto an LSP can be defined according to a number of flow characteristics such as application, source/destination/subnet address, user, DiffServ code point on the incoming packet, and so on.
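- As an illustration of such a classifier (the field names, subnet, DSCP value, and label below are assumptions, not taken from the patent), flows matching the FEC characteristics receive the LSP's label while all other traffic takes the default path:

```python
# Sketch of an admission classifier installed by a policy server at an
# ingress LSR: flows matching the FEC characteristics are mapped onto the
# LSP's label; everything else takes the default path.
import ipaddress

def make_fec_classifier(src_subnet=None, dscp=None, dst_port=None):
    subnet = ipaddress.ip_network(src_subnet) if src_subnet else None
    def matches(flow):
        if subnet and ipaddress.ip_address(flow["src_ip"]) not in subnet:
            return False
        if dscp is not None and flow.get("dscp") != dscp:
            return False
        if dst_port is not None and flow.get("dst_port") != dst_port:
            return False
        return True
    return matches

LSP_LABEL = 4711
classifier = make_fec_classifier(src_subnet="192.0.2.0/24", dscp=46)

def admit(flow):
    return LSP_LABEL if classifier(flow) else None  # None -> default forwarding

print(admit({"src_ip": "192.0.2.10", "dscp": 46}))   # 4711 -> sent over the LSP
print(admit({"src_ip": "198.51.100.1", "dscp": 46}))  # None
```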
- Configuring LSPs involves the creation and deletion of LSPs in the network according to some QoS or other criteria. This can be achieved in a number of ways, such as manual creation or invoking one of the label distribution mechanisms that support this (CR-LDP, RSVP). After a label switched path is created, it can be monitored for performance to ensure that the service it provides continues to behave as expected. For example, LSP MIB counters, such as a count of packets dropped in a particular LSP, can be used to gauge performance.
- The LSRs can provide feedback to the policy system to perform this monitoring. For example, an LSP performance table can track incoming and outgoing statistics related to octets, packets, drops, and discards on MPLS trunks. Using this information, the LSR can notify the server when performance levels fall below some threshold based on the available statistics. The server would then have the ability to enhance the current LSP or create alternatives.
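- A sketch of this threshold-based feedback, assuming a hypothetical per-LSP counter row similar in spirit to the MPLS LSR MIB performance tables (the threshold and field names are illustrative):

```python
# The LSR-side check: notify the policy server when the drop ratio for an
# LSP crosses a configured threshold. Counter names are invented.

DROP_RATIO_THRESHOLD = 0.01  # assumed policy parameter: notify above 1% drops

def check_lsp_performance(perf_row, notify):
    """perf_row: dict of counters for one LSP, e.g. from a performance table."""
    sent = perf_row["out_packets"]
    dropped = perf_row["drops"] + perf_row["discards"]
    total = sent + dropped
    if total and dropped / total > DROP_RATIO_THRESHOLD:
        notify({"lsp_id": perf_row["lsp_id"], "drop_ratio": dropped / total})

def notify_policy_server(event):
    # In a real system this could be a COPS Report or similar; printed here.
    print("notify PDP:", event)

check_lsp_performance(
    {"lsp_id": "lsp-7", "out_packets": 9_800, "drops": 150, "discards": 50},
    notify_policy_server,
)  # drop ratio 2% > 1% -> notify PDP: {'lsp_id': 'lsp-7', 'drop_ratio': 0.02}
```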
- The admission criteria may include, for example: (a) a DiffServ marking as one of the potential classification mechanisms; (b) authentication, for example, for access to an LSP-based Virtual Private Network (“VPN”); or (c) traffic engineering policies related to architectures other than DiffServ (e.g., Int-Serv).
- An MPLS framework can consider classification in terms of establishing a flow with a specific granularity. These granularities can be a base set of criteria for classification policies (a sketch of per-granularity key extraction follows the lists below), such as the following examples of unicast traffic granularities:
- PQ Port Quadruples: same IP source address prefix, destination address prefix, TTL, IP protocol and TCP/UDP source/destination ports;
- PQT Port Quadruples with TOS: same IP source address prefix, destination address prefix, TTL, IP protocol and TCP/UDP source/destination ports and same IP header TOS field (including Precedence and TOS bits);
- HP Host Pairs: same specific IP source and destination address (32 bit);
- NP Network Pairs: same IP source and destination address prefixes (variable length);
- DN Destination Network: same IP destination network address prefix (variable length);
- ER Egress Router: same egress router ID (e.g., OSPF);
- NAS Next-hop AS: same next-hop AS number (BGP);
- DAS Destination AS: same destination AS number (BGP);
- The MPLS framework also can include the following multicast traffic granularities:
- SMT Shared Multicast Tree
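- As an illustration of flow granularity, the following sketch derives grouping keys for two of the unicast granularities above (PQ and HP); the packet field names are hypothetical:

```python
# Illustrative key-extraction functions: grouping packets by equal keys
# yields flows at the chosen granularity.

def pq_key(pkt):
    # Port Quadruples: source/destination prefixes, TTL, protocol, ports.
    return (pkt["src_prefix"], pkt["dst_prefix"], pkt["ttl"],
            pkt["proto"], pkt["src_port"], pkt["dst_port"])

def hp_key(pkt):
    # Host Pairs: the specific 32-bit source and destination addresses.
    return (pkt["src_ip"], pkt["dst_ip"])

pkt = {"src_prefix": "10.0.0.0/8", "dst_prefix": "192.0.2.0/24", "ttl": 64,
       "proto": "tcp", "src_port": 5060, "dst_port": 5060,
       "src_ip": "10.1.1.1", "dst_ip": "192.0.2.9"}
print(pq_key(pkt))
print(hp_key(pkt))
```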
- The calculations may involve other traffic characteristics relating to buffer occupancy and scheduling resource decisions. These may include parameters such as: (a) burstiness measures (e.g., Path MTU size or packet size); or (b) inferred or signaled bandwidth requirements.
- MPLS permits a range of LSP creation/deletion modes, from relatively static, manually provisioned LSPs, to dynamic LSPs initiated in response to routing topology information, to data-driven LSP generation. Policy impacts can vary depending on the LSP creation/deletion mode. MPLS supports a variety of mechanisms for the creation/deletion of LSPs, such as manual provisioning, LDP, CR-LDP, RSVP, BGP, etc. In an embodiment, the policy should be independent of the underlying mechanism.
- The role of policy may be to restrict the range of authorized users that can create or delete LSPs, or the range of addresses that can be connected by LSPs (e.g., intra-domain, intra-VPN, and so on).
- For topology-driven LSP setup, there may be policy constraints on the speed of re-establishment of LSPs or on the number of LSPs.
- For data-driven LSP establishment, there can be policies related to the data characteristics that trigger the creation or deletion of an LSP.
- When created, LSPs may have certain attributes. For example, traffic-engineering policies may be applied to reserve network resources such as bandwidth on specific links for an LSP.
- LSPs in general are sink-based tree structures.
- The merge points of the LSP may have policies such as, for example, policies associated with the buffer management at the merge point.
- The characteristics or attributes of an LSP may be impacted by different policy considerations. They can be impacted at the time of LSP creation or may be altered for an existing LSP.
- A policy-enabled MPLS system can include the following features and/or functions: (a) a label distribution protocol that supports the specification of QoS constraints; (b) LSPs established as administratively specified explicit paths, where the route is specified either entirely or partially at the time the path is established; and (c) COPS and PIBs used for the policy protocol between a policy server (e.g., a PDP) and LSRs (e.g., PEPs).
- The policy-enabled MPLS system can include three phases: (a) LSP setup; (b) LSP admission control; and (c) LSP monitoring.
- A PDP determines that an LSP is to be established. Possible choices for how the PDP gets signaled to make this determination include: human input at the network management console (e.g., manually provisioned LSP), receipt of a trigger from an ingress LSR as a result of receiving a particular type of data packet, or observation of a particular performance level deficiency (e.g., data-driven LSP provisioning).
- An initial policy can be implemented in the LSR specifying what types of data packets to look for that can trigger an LSP. In some respects, this can appear to be similar to RSVP QoS policy, where the decision to permit the resource reservation is outsourced to the PDP.
- The outsourced decision is not just to accept or deny the request, but involves a separate step of initiating the LSP session, as described below.
- An LSP may be required, in an embodiment, to support a specific service or set of services in the network. This may imply traffic characteristics for the LSP such as, for example, peak data rate, committed data rate, burst size, etc.
- The PDP can determine the specific LSRs that are to be part of the path.
- The LSP may be partially explicit, specifying some specific LSRs that must be included, with the remainder of the LSP left to the routing protocols.
- An intelligent PDP may use feedback information from the LSRs to determine if they currently have sufficient resources free to support the resource requirements of the LSP.
- The LSP creation could instead use a topology-driven method where the path is determined by the routing protocol (and the underlying label distribution protocol processing). In such an embodiment, the LSP creation is initiated with a specification of the traffic requirements. However the LSP is routed, any traffic constraint requirements are met by all LSRs that get included in the LSP.
- The PDP can issue a policy message to the ingress LSR of the LSP, including the explicit route information (if applicable), strict or loose route preferences, traffic parameters (constraint requirements), etc.
- In an embodiment, this is a COPS Decision (COPS-PR, probably using a <cops-mpls> client type in the PEP) that includes MPLS PIBs describing the CR-LDP constraints.
- The MPLS policy client in the LSR can take the message and initiate an LSP session.
- If CR-LDP is used, for example, this is done by sending a Label Request message containing the necessary CR-LDP Type Length Values (“TLVs”) (e.g., Explicit Route TLV, Traffic TLV, CR-LSP FEC, etc.).
- If RSVP is used, a Path message containing the constraint information is sent from the ingress LSR to the egress LSR.
- The LSP establishment is similar, from a policy point of view, regardless of the label distribution protocol used.
- Use of CR-LDP is described here, but based on the written description herein, the use of RSVP in an embodiment is apparent to one of skill in the art.
- The Label Request is propagated downstream and gets processed as usual according to CR-LDP procedures (e.g., downstream on demand label advertisement).
- When the egress LSR processes the Label Request, it issues a Label Mapping message that propagates back upstream, establishing label mappings between MPLS peers for the LDP.
- The ingress LSR receives back a Label Mapping message from the next-hop LSR and notifies the PDP of the LSPID and of the label it received, which is to be used when forwarding packets to the next hop on this LSP. If the path could not be established, for example due to errors, insufficient resources, or other issues, the error notification gets sent to the PDP.
- When COPS is used as the policy protocol, this is done with a COPS Report message containing the MPLS label and referencing the Decision message that initiated the CR-LDP session.
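- The setup sequence above can be summarized by the following schematic trace (the message structures and values are invented; a real system would use actual COPS and CR-LDP encodings):

```python
# Heavily simplified trace of the setup sequence: PDP Decision -> ingress LSR
# Label Request -> downstream processing -> Label Mapping propagating back
# upstream -> COPS Report returned to the PDP.

def pdp_issue_decision():
    return {"type": "COPS-Decision",
            "explicit_route": ["LSR-A", "LSR-B", "LSR-C"],
            "traffic": {"peak_bps": 10_000_000}}

def ingress_initiate_lsp(decision):
    return {"type": "Label-Request", "route": decision["explicit_route"],
            "traffic": decision["traffic"]}

def downstream_process(label_request):
    # The egress answers with a Label Mapping that propagates back upstream.
    return {"type": "Label-Mapping", "label": 4711, "lspid": "lsp-42"}

def ingress_report(mapping):
    return {"type": "COPS-Report", "label": mapping["label"],
            "lspid": mapping["lspid"]}

decision = pdp_issue_decision()
mapping = downstream_process(ingress_initiate_lsp(decision))
print(ingress_report(mapping))
# {'type': 'COPS-Report', 'label': 4711, 'lspid': 'lsp-42'}
```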
- The PDP can issue policies to specify which packets/flows get mapped onto the LSP, i.e., which packets belong to the FEC for the LSP.
- In the COPS and PIB example, this is done in a similar manner to the way packets get mapped to DiffServ Per Hop Behaviors (“PHBs”) in the ingress routers of a DiffServ network.
- A COPS Decision message can be issued containing PIB table entries, for example, for: the classifier that specifies the FEC, a profile for policing and admission control to the LSP, the label to put on the packets that match the classifier, and what to do with packets that match but are out of profile.
- The MPLS policy is enforced, and packets are matched against the FEC classification and profile.
- The metering capability allows the PDP to specify a profile for policing so that admission control can be performed on the packets utilizing the LSP resources.
- The policy installed by the PDP for the FEC can specify an MPLS Action table entry (e.g., of a PIB) for certain data packet types that might be admitted onto the LSP, to authenticate the policy information about the packet with the PDP. This action is quite similar to the way COPS-RSVP works, where the PDP returns an accept/deny decision to indicate whether the packet is allowed access to the LSP or not.
- Packets that match the FEC classification, are in-profile, and have valid policy information (if applicable) get the label associated with the LSP for that FEC. This can involve pushing the label onto the top of a label stack if the packet already has a label for another LSP. This is handled according to MPLS label processing rules.
- The PDP can monitor the performance of the LSP to ensure that the packets being mapped to the LSP receive the intended service.
- Information such as that specified in the MPLS LSR MIB, the in-segment performance table, the out-segment performance table, and so on may be used for this purpose (other data/statistics may also be suited for this purpose).
- As the PDP gathers this feedback information, it makes decisions regarding the creation/deletion/changing of LSPs and the packets that get mapped onto them.
- Actions taken by the PDP as a result of performance feedback analysis may include re-directing existing LSPs to route traffic around high congestion areas of the network, changing traffic parameters associated with an LSP to reserve more resources for the FEC, adding a new LSP to handle overflow traffic from an existing path, tearing down an LSP no longer in use, and so on.
- A policy system can help to secure the MPLS system by providing appropriate controls on the LSP life cycle. Conversely, if the security of the policy system is compromised, then this may impact any MPLS systems controlled by that policy system. The MPLS network is not expected to impact the security of the policy system.
- Embodiments of the present invention can include policy systems related to one or more of policy-based load balancing in traffic-engineered MPLS networks and traffic engineering of load distribution.
- An overview of load balancing and load distribution is first described, and then an embodiment of the present invention related to load balancing, which can be a specific sub-problem within load distribution, is described.
- Three mapping features can be considered: (a) mapping traffic to FECs; (b) mapping FECs to LSPs; and (c) mapping LSPs to physical topology.
- The first two features are discussed in greater detail herein as part of describing MPLS as an interesting subset of IP protocols, load balancing as a traffic engineering objective, and policy-based approaches for describing the objectives and constraints of the traffic engineering optimization.
- Load balancing in MPLS networks concerns the allocation of traffic between two or more LSPs which can have the same origin and destination.
- A pair of LSRs may be connected by several (e.g., parallel) links.
- Link Bundling: From an MPLS traffic engineering point of view, for the purpose of scalability, it may be desirable to treat all these links as a single IP link, in an operation known as Link Bundling.
- In load balancing, the load to be balanced is spread across multiple LSPs, which in general does not require physical topology adjacency for the LSRs.
- The techniques can be complementary.
- Link bundling typically provides a local optimization that is particularly suited for aggregating low speed links.
- Load Balancing generally is targeted at larger scale network optimizations.
- While load balancing is often considered to apply between edge LSRs, it can be applied in an embodiment at any LSR that provides the requisite multiple LSP tunnels with common endpoints.
- The Policy Enforcement Point is the LSR at the source end of the set of LSPs with common endpoints.
- The arriving traffic to be load balanced may be from non-MPLS interfaces or MPLS interfaces.
- The source end of an LSP may act as a merge point for multiple input streams of traffic.
- The set of LSPs over which the load is to be balanced can be pre-defined, and the relevant load balancing policies are then applied to these LSPs.
- LSPs can be created and deleted in response to policies with load balancing objectives.
- Best effort LSPs are considered here, which can simplify the admission control considerations of a load balancing process.
- While load balancing on a best effort network can be viewed as a simple case, the basic methodologies have a wider applicability when applied to QoS-based LSP selection. Indeed, the load balancing case for best effort only traffic has similar problems to that of load balancing a particular traffic class, such as that with a particular DiffServ PHB. Bandwidth sharing among classes of service can raise some more complex issues that also apply to the placement of traffic into ER-LSPs. As the traffic of a particular class to a particular destination exceeds the capacity of the LSP for that traffic, an action can be taken to get more bandwidth or to control access to the LSP. The PEP can perform an action per traffic class, with a likely result that the best effort traffic on the network will become squeezed in favor of higher priority traffic.
- Lending of bandwidth between LSPs can be implemented as a policy.
- The location of the network congestion can have a bearing on a solution, and a policy server can initiate a new LSP and map certain flows to this new LSP to avoid the congestion point, thereby improving the performance of those flows and reducing the congestion problem. This can, however, require a congestion detection methodology and inter-PDP communication.
- A policy provides a rule of the form: IF <condition> THEN <action>.
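- A minimal representation of this rule form, evaluated against a network-state snapshot (the condition, action, and threshold below are illustrative):

```python
# A minimal IF <condition> THEN <action> policy rule; the condition and
# action are callables over a network-state snapshot.

class PolicyRule:
    def __init__(self, condition, action):
        self.condition = condition  # callable: state -> bool
        self.action = action        # callable: state -> None

    def evaluate(self, state):
        if self.condition(state):
            self.action(state)

rule = PolicyRule(
    condition=lambda s: s["link_utilization"] > 0.8,
    action=lambda s: print("action: shift flows to alternate LSP"),
)
rule.evaluate({"link_utilization": 0.93})  # condition holds -> action fires
```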
- Policy-based networking is one of a number of mechanisms that can be used in achieving traffic engineering objectives. While traffic engineering may be considered an optimization issue, policy approaches provide considerable flexibility in the specification of the network optimization objectives and constraints.
- Policies may be: (a) dependent on time or network state (e.g., either local or global); (b) based on algorithms executed offline or online; (c) stored centrally (e.g., in a directory) or distributed to an engineerable number of policy decision points; (d) prescriptive or descriptive; and (e) designed for open-loop or closed-loop network control.
- Network feedback can be an important part of policy-based networking. While network configuration (e.g., provisioning) can be performed in an open-loop manner, in general, policy-based networking can imply a closed-loop mechanism. Distribution and performance of the policy system can require adequate resources that are provisioned to meet the required policy update frequency and so on.
- A traffic engineering framework can identify process model components for (a) measurement; (b) modeling, analysis, and simulation; and (c) optimization. Policies may be used to identify relevant measurements available through the network and to trigger appropriate actions.
- The available traffic metrics for determining the policy trigger conditions can be constrained, e.g., by generic IP traffic metrics.
- Policies can provide an abstraction of network resources, e.g., a model that can be designed to achieve traffic engineering objectives. Policies can provide a degree of analysis by identifying network problems through correlation of various measurements of network state. A set of policies can be designed to achieve an optimization of network performance through appropriate network provisioning actions.
- Load balancing can be an example of traffic mapping.
- The relative simplicity of load balancing algorithms can illustrate approaches to traffic engineering in the context of MPLS networks. While load balancing optimizations have been proposed for various routing protocols, such approaches typically complicate existing routing protocols and tend to optimize towards a fairly limited set of load balancing objectives. Extending these towards more flexible/dynamic load balancing objectives can be overly complicated. Hence, building on a policy-based networking architecture can provide mechanisms specifically designed to support flexible and dynamic administration.
- Online traffic load distribution for a single class of service is known, based in part on extensions to an Interior Gateway Protocol (“IGP”) that can provide loading information to network nodes.
- A control mechanism for provisioning bandwidth according to a policy can be provided. Identified and described in this load distribution overview and herein are: (a) mechanisms that affect load distribution, and the controls for those mechanisms, to enable policy-based traffic engineering of the load distribution to be performed; and (b) the use of load distribution mechanisms in the context of an IP network administration.
- The traffic load that an IP network supports may be distributed in various ways within the constraints of the topology of the network (e.g., avoiding routing loops).
- A default mechanism for load distribution is to rely on an IGP (e.g., Intermediate System to Intermediate System (“IS-IS”), OSPF, etc.) to identify a single “shortest” path between any two endpoints of the network.
- “Shortest” is typically defined in terms of a minimization of an administrative weight (e.g., hop count) assigned to each link of the network topology. Having identified a single shortest path, all traffic between those endpoints then follows that path until the IGP detects a topology change. While often called dynamic routing (e.g., because it changes in response to topology changes), it can be better characterized as topology driven route determination.
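- For illustration, the following sketch computes such a single shortest path by administrative weight using Dijkstra's algorithm over an invented topology (this is a standard algorithm sketch, not a method prescribed by the patent):

```python
# Topology-driven route determination: a single "shortest" path by
# administrative weight, computed with Dijkstra's algorithm.
import heapq

def shortest_path(graph, src, dst):
    """graph: {node: [(neighbor, admin_weight), ...]}"""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

topology = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
print(shortest_path(topology, "A", "C"))  # ['A', 'B', 'C'] -- weight 2 beats 5
```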
- This default IGP mechanism works well in a wide variety of operational contexts. Nonetheless, there are operational environments in which network operators may wish to use additional controls to affect the distribution of traffic within their networks. These may include: (a) service-specific routing (e.g., voice service may utilize delay-sensitive routing, but best effort service may not); (b) customer-specific routing (e.g., VPNs); (c) tactical route changes where peak traffic demands exceed single link capacity; and (d) tactical route changes for fault avoidance. These provide a rationale for greater control of the load distribution than that provided by the default mechanisms.
- Load Distribution: Traffic load distribution may be considered on a service-specific basis or aggregated across multiple services. In considering the load distribution, one can also distinguish between a snapshot of the network's state (e.g., a measurement) and an estimated (e.g., hypothetical) network state that may be based on estimated (e.g., projected) traffic demand.
- Load distribution can have two main components: (1) identification of routes over which traffic flows; and (2) in the case of multipath routing configurations (e.g., where multiple acyclic paths exist between common endpoints), the classification of flows that determines the distribution of flows among those routes.
- Traffic load can be a link measurement, although node constraints (e.g., packet forwarding capacity) may also apply.
- Traffic load can be measured in units of network capacity, and network capacity is typically measured in units of bandwidth (e.g., with a magnitude dimensioned in bits/second or packets/second).
- Bandwidth can be considered a vector quantity providing both a magnitude and a direction.
- Bandwidth magnitude measurements are typically made at some specific (but often implicit) point in the network where traffic is flowing in a specific direction (e.g., between two points of a unicast transmission). A significant distinction arises between bandwidth measurements made on a link basis and bandwidth demands between end-points of a network.
- A snapshot of the current load distribution may be identified through relevant measurements available on the network.
- The available traffic metrics for determining the load distribution include, for example, generic IP traffic metrics.
- The measurements of network capacity utilization can be combined with the information from the routing database to provide an overall perspective on the traffic distribution within the network. This information may be combined at the routers (and then reported back) or aggregated in the management system for dynamic traffic engineering.
- A peak demand value of the traffic load magnitude may be used for network capacity planning purposes.
- Traffic loads are often asymmetric, for example due to asymmetric host interfaces (e.g., Asymmetric Digital Subscriber Line (“ADSL”)) and client-server application software architectures.
- The physical topology (e.g., links and nodes) can be considered fixed while considering the traffic engineering options for affecting the distribution of a traffic load over that topology.
- The addition of new nodes and links can be considered a network capacity planning issue.
- Fundamental load-affecting mechanisms include: (1) identification of suitable routes; and (2) in the case of multipath routing, allocation of traffic to a specific path.
- The control mechanisms available can impact either of these mechanisms.
- Control of the load distribution can be considered in the context of the TE Framework.
- Control of load distribution may be: (a) dependent on time or network state (either local or global), e.g., based on IGP topology information; (b) based on algorithms executed offline or online; (c) impacted by open- or closed-loop network control; (d) centralized or distributed control of the distributed route set and traffic classification functions; or (e) prescriptive (i.e., a control function) rather than simply descriptive of network state.
- Network feedback can be an important part of the dynamic control of load distribution within the network. While offline algorithms to compute a set of paths between ingress and egress points in an administrative domain may rely on historic load data, online adjustments to the traffic engineered paths typically will rely in part on the load information reported by the nodes.
- The traffic engineering framework identifies process model components for: (a) measurement; (b) modeling, analysis, and simulation; and (c) optimization.
- Traffic load distribution measurement has already been described herein.
- Modeling, analysis, and simulation of the load distribution expected in the network is typically performed offline.
- Such analyses typically produce individual results of limited scope (e.g., valid for a specific demanded traffic load, fault condition, etc.).
- The accumulation of a number of such results can provide an indication of the robustness of a particular network configuration.
- Load distribution optimization objectives may include: (a) elimination of overload conditions on links/nodes; and (b) equalization of load on links/nodes.
- A variety of load distribution constraints may be derived from equipment, network topology, operational practices, service agreements, etc.
- Load distribution constraints may include: (a) current topology/route database; (b) current planned changes to topology/route database; (c) capacity allocations for planned traffic demand; (d) capacity allocations for network protection purposes; and (e) service level agreements (“SLAs”) for bandwidth and delay sensitivity of flows.
- Control of the load distribution can be a core capability for enabling traffic engineering of the network.
- Route Determination: Routing protocols are well known, and this description of route determination focuses on specific operational aspects of controlling those routing protocols towards a traffic-engineered load distribution.
- A traffic-engineered load distribution typically relies on something other than a default IGP route set and typically requires support for multiple path configurations.
- The set of routes deployed for use within a network is not necessarily monolithic. Not all routes in the network may be determined by the same system. Routes may be static or dynamic. Routes may be determined by: (1) a topology-driven IGP; (2) explicit specification; (3) capacity constraints (e.g., link/node/service bandwidth); or (4) constraints on other desired route characteristics (e.g., delay, diversity/affinity with other routes, etc.). Combinations of the methods are possible; for example, partial explicit routes may be determined where some of the links are selected by the topology-driven IGP, some routes may be automatically generated by the IGP, and others may be explicitly set by some management system.
- Explicit routes are not necessarily static. Explicit routes may be generated periodically by an offline traffic engineering tool and provisioned into the network. MPLS provides efficient mechanisms for explicit routing and bandwidth reservation. Link capacity may be reserved for a variety of protection strategies as well as for planned traffic load demands and in response to signaled bandwidth requests (e.g., RSVP). When allocating capacity, there may be sequencing issues regarding how capacity on specific routes is to be allocated, affecting the overall traffic load capacity. It can be important during path selection to choose paths that have a minimal effect on future path setups. Aggregate capacity required for some paths may exceed the capacities of one or more links along the path, forcing the selection of an alternative path for that traffic. Constraint-based routing approaches may also provide mechanisms to support additional constraints (e.g., other than capacity-based constraints).
- Routers can report relevant network state information (e.g., raw and/or processed) directly to the management system.
- Controls over the determination of routes form an important aspect of traffic engineering for load distribution. Since the routing can operate over a specific topology, any control of the topology abstraction used provides some control of the set of possible routes.
- Hierarchical routing provides a mechanism to abstract portions of the network in order to simplify the topology over which routes are being selected.
- Hierarchical routing examples in IP networks include: (a) use of an Exterior Gateway Protocol (“EGP”) (e.g. BGP) and an IGP (e.g., IS-IS); and (b) MPLS Label stacks.
- Such hierarchies can provide both a simplified topology and a coarse classification of traffic.
- Operational controls over route determination are another example.
- The default topology-driven IGP typically provides the least administrative control over route determination.
- The main control available is the ability to modify the administrative weights.
- A route set composed entirely of completely specified explicit routes is the opposite extreme, i.e., complete offline operational control of the routing.
- a disadvantage of using explicit routes is the administrative burden and potential for human induced errors from using this approach on a large scale.
- With management systems (e.g., policy-based management), explicit route specification is feasible, and a finer-grained approach is possible for classification, including service differentiation.
- Traffic Classification in Multipath Routing Configurations: With multiple paths between two endpoints, there is a choice to be made as to which traffic to send down a particular path. The choice can be impacted by: (1) traffic source preferences (e.g., expressed as marking with Differentiated Services Code Points (“DSCPs”)); (2) traffic destination preferences (e.g., peering arrangements); (3) network operator preferences (e.g., time-of-day routing, scheduled facility maintenance, policy); and (4) network state (e.g., link congestion avoidance).
- The choice of traffic classification algorithm can be delegated to the network (e.g., load balancing, which may be done based on some hash of packet headers and/or random numbers).
- This approach is taken in Equal Cost Multipath Protocol (“ECMP”) and Optimized Multipath Protocol (“OMP”).
- A policy-based approach has the advantage of permitting greater flexibility in the packet classification and path selection. This flexibility can be used for more sophisticated load balancing algorithms, or to accommodate churn in the network optimization objectives from new service requirements.
- Multipath routing, in the absence of explicit routes, can be difficult to traffic engineer, as it devolves to the problem of adjusting the administrative weights.
- MPLS networks provide a convenient and realistic context for multipath classification examples using explicit routes.
- One LSP could be established along the default IGP path.
- An additional LSP could be provisioned (in various ways) to meet different traffic engineering objectives.
- Load balancing can be analyzed as a specific sub-problem within the topic of load distribution. Load balancing essentially provides a partition of the traffic load across the multiple paths in the MPLS network.
- FIG. 7 illustrates a generic policy-based network architecture in the context of an MPLS network.
- Two LSPs are established: LSP A, which follows the path of routers 741, 742, and 743, and LSP B, which follows the path of routers 741, 744, and 743.
- LSPs may be established via policy mechanisms (e.g., using COPS push, and so on).
- A load balancing operation is performed at the LSR containing the ingress of the LSPs to be load balanced.
- LSR 741 is acting as the Policy Enforcement Point for load-balancing policies related to LSPs 751-752.
- The load balancing encompasses the selection of suitable policies to control the admission of flows to both LSPs 751-752.
- The admission decision for an LSP can be reflected in the placement of that LSP as the Next Hop Label Forwarding Entry (“NHLFE”) within the appropriate routing tables within the LSR.
- The conditions for the policies applying to the set of LSPs to be load balanced can be consistent. For example, if the condition used to allocate flows between LSPs is the source address range, then the set of policies applied to the set of LSPs can account for the disposition of the entire source address range.
- Traffic engineering policies also can utilize, for both conditions and actions, the parameters available in the standard MPLS MIBs, such as the MPLS Traffic Engineering MIB, the MPLS LSR MIB, the MPLS Packet Classifier MIB, and other MIB elements for additional traffic metrics.
- The load-balancing operation may be considered as redefining the FECs to send traffic along the appropriate path. Rather than sending all the traffic along a single LSP, the load balancing policy operation results in the creation of new FECs which effectively partition the traffic flow among the LSPs in order to achieve some load balance objective.
- Two simple point-to-point LSPs with the same source and destination can have an aggregate FEC (z) load balanced.
- The aggregate FEC (z) is the union of FEC (a) and FEC (b).
- The load balancing policy may adjust the FEC (a) and FEC (b) definitions such that the aggregate FEC (z) is preserved.
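- A sketch of this partition-preserving adjustment (keying flows by source-address last octet is purely illustrative):

```python
# The load-balancing policy moves the split point between FEC (a) and
# FEC (b) while the aggregate FEC (z) = FEC (a) U FEC (b) stays unchanged.

AGGREGATE = set(range(256))  # FEC (z): all flows, keyed here by last octet

def partition(split):
    fec_a = {o for o in AGGREGATE if o < split}   # mapped onto LSP A
    fec_b = AGGREGATE - fec_a                     # mapped onto LSP B
    assert fec_a | fec_b == AGGREGATE             # aggregate FEC (z) preserved
    return fec_a, fec_b

a, b = partition(128)        # even split
print(len(a), len(b))        # 128 128
a, b = partition(64)         # policy shifts load toward LSP B
print(len(a), len(b))        # 64 192
```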
- A Point-to-Point LSP that simply transits an LSR at the interior of an MPLS domain does not have an LSP ingress at this transit LSR.
- Merge points of a Multipoint-to-Point LSP may be considered as ingress points for the next link of the LSP.
- A label stacking operation may be considered as an ingress point to a new LSP. The above conditions, which put multiple LSPs onto different LSPs, may require balancing at the interior node.
- The FEC of an incoming flow may be inferred from its label.
- Load-balancing policies may operate based on incoming labels to segregate traffic, rather than requiring the ability to walk up the incoming label stack to the packet header in order to reclassify the packet.
- The result is a coarse load balancing of incoming LSPs onto one of a number of LSPs from the LSR to the egress LSR.
- the MPLS Architecture identifies that the NHLFE may have multiple entries for one FEC. Multiple NHLFEs may be present to represent: (a) the Incoming FEC/label set is to be multicast; and (b) when route selection based on the EXPansion (“EXP”) field in addition to the label is required. If both multicast and load balancing functions are required, it can be necessary to disambiguate the scope of the operations.
- the load balancing operation can partition a set of input traffic (e.g., defined as FECs or Labels) across a set of output LSPs. One or more of the arriving FECs may be multicast to both the set of load balanced LSPs as well as other LSPs.
- the packet replication (multicast) function occurs before the load balancing.
- when the route selection is based on the EXP field, it can be a special case of the policy-based load-balancing approach.
- replicating NHLFEs for this purpose can be deprecated, and the more generic policy-based approach can be used to specify an FEC/label space partition based on the EXP field.
- the load balancing function can be considered as part of the classification function and allows preserving a mapping of a FEC into one NHLFE for unicast. While classification of incoming flows into FECs is often thought of as an operation on some tuple of packet headers, this is not the only basis for classification because router state can also be used.
- An example of a tuple is a set of protocol header fields such as source address, destination address, and protocol ID.
- the source port of a flow may be a useful basis on which to discriminate flows.
- a “random number” generated within the router may be attractive as the basis for allocating flows for a load balancing objective.
- An algorithm within the router, which may include some hash function on the packet headers, may generate the “random number.”
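- The following is a minimal sketch of such a router-generated “random number,” assuming a hash over an illustrative 5-tuple of header fields; the field choice and the modulo mapping onto LSPs are assumptions:

```python
# Hypothetical sketch of a router-internal "random number" for load
# balancing: a hash over selected packet header fields, reduced to an
# LSP choice. Field names are illustrative.
import zlib

def flow_hash(src, dst, proto, sport, dport):
    """Deterministic per-flow value so packets of one flow stay together."""
    key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
    return zlib.crc32(key)

def pick_lsp(headers, lsps):
    """Allocate a flow to one of the parallel LSPs by hashing its 5-tuple."""
    return lsps[flow_hash(*headers) % len(lsps)]

print(pick_lsp(("10.0.0.1", "10.1.0.9", 6, 1234, 80), ["LSP A", "LSP B"]))
```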
- MPLS load balancing partitions an incoming stream of traffic across multiple LSPs.
- the load balancing policy, as well as the ingress LSR where the policy is enforced, can be able to distinctly identify LSPs.
- the PDP that installs the load balancing policy has knowledge of the existing LSPs and is able to identify them in policy rules. One way to achieve this is through the binding of a label to an LSP.
- An example of an MPLS load-balancing policy may state for the simple case of balancing across two LSPs: IF traffic matches classifier, THEN forward on LSP 1 , ELSE forward on LSP 2 .
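- Expressed as executable pseudocode (a sketch only; the destination-port classifier is an assumed example, not a prescribed condition), the two-LSP policy above might look like:

```python
# A minimal sketch of the two-LSP balancing policy stated above:
# IF traffic matches classifier THEN forward on LSP 1 ELSE forward on LSP 2.
def classifier(packet):
    """Assumed example condition: admit SIP signaling traffic to LSP 1."""
    return packet.get("dst_port") == 5060

def forward(packet):
    return "LSP 1" if classifier(packet) else "LSP 2"

print(forward({"dst_port": 5060}))  # -> LSP 1
print(forward({"dst_port": 80}))    # -> LSP 2
```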
- Classification can be done on a number of parameters such as packet header fields, incoming labels, etc.
- the classification conditions of an MPLS load-balancing policy are thus effectively constrained to be able to specify the FEC in terms that can be resolved into MPLS packet classification MIB parameters.
- Forwarding traffic on an LSP can be achieved by tagging the traffic with the appropriate label corresponding to the LSP.
- MPLS load-balancing policy actions typically result in the definition of a new aggregate FEC to be forwarded down a specific LSP. This would typically be achieved by appropriate provisioning of the FEC and routing tables (e.g., FTN and ILM), e.g., via the appropriate MIBs.
- the basis for partitioning the traffic can be static or dynamic. Dynamic load balancing can be based on a dynamic administrative control (e.g., time of day), or it can form a closed control loop with some measured network parameter. In an embodiment, “voice trunk” LSP bandwidths can be adjusted periodically based on expected service demand (e.g., voice call intensity, voice call patterns, and so on). Static partitioning of the load can be based on information carried within the packet header (e.g., source/destination addresses, source/destination port numbers, packet size, protocol ID, etc.). Static partitioning can also be based on other information available at the LSR (e.g., the arriving physical interface). However, if the load partition is truly static, or at least very slowly changing (e.g., less than one change/day), then the need for a policy-based control of this provisioning information may be debatable, and a direct manipulation of the LSR MIB may suffice.
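- A minimal sketch of a time-of-day administrative control of this kind, assuming illustrative business-hour split values:

```python
# Hypothetical sketch of a dynamic administrative control: a time-of-day
# condition that selects the traffic split across two voice-trunk LSPs.
from datetime import datetime

def current_split(now=None):
    """Return the LSP shares based on expected call intensity (assumed values)."""
    hour = (now or datetime.now()).hour
    if 8 <= hour < 18:          # business hours: assumed heavier demand
        return {"LSP A": 0.8, "LSP B": 0.2}
    return {"LSP A": 0.5, "LSP B": 0.5}

print(current_split(datetime(2001, 1, 2, 10)))  # -> {'LSP A': 0.8, 'LSP B': 0.2}
```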
- a control-loop based load-balancing scheme can seek to balance the load close to some objective, subject to error in the measurements and delays in the feedback loop.
- the objective may be based on a fraction of the input traffic to be sent down a link (e.g., 20% down a first LSP and 80% down a second LSP) in which case some measurement of the input traffic is required.
- the objective may also be based on avoiding congestive loss in which case some loss metric is required.
- the metrics required for control loop load balancing may be derived from information available locally at the upstream LSR, or may be triggered by events distributed elsewhere in the network. In the latter case, the metrics can be delivered to the Policy Decision Point. Locally derived trigger conditions can be expected to avoid the propagation delays, etc., associated with the general distributed case. Frequent notification of the state of these metrics increases network traffic and can be undesirable.
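- A hedged sketch of such a closed control loop, assuming the 20%/80% objective mentioned above and an illustrative gain constant; a real deployment would need to account for the measurement error and feedback delay just noted:

```python
# Hypothetical closed-loop sketch: nudge the traffic split toward a target
# fraction using a measured input-traffic fraction. The gain value and the
# sample values are assumptions.
def rebalance(share, measured_fraction, target_fraction=0.2, gain=0.1):
    """Move the first LSP's share toward the objective (e.g., 20%/80%)."""
    error = target_fraction - measured_fraction
    return min(1.0, max(0.0, share + gain * error))

share = 0.2
for sample in (0.35, 0.30, 0.24):        # measured LSP-1 traffic fractions
    share = rebalance(share, sample)
    print(f"LSP 1 share -> {share:.3f}")
```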
- a single large flow is load balanced across a set of links.
- policies based solely on the packet headers may be inadequate and some other approach (e.g. based on a random number generated within the router) may be required.
- the sequence integrity of the aggregate FEC forwarded over a set of load balancing LSPs may not be preserved under such a regime.
- Equal Cost Multi-Path (“ECMP”) and Optimized Multi-Path (“OMP”) approaches can embed the load balancing optimization problem in the IGP implementation. This may be appropriate in the context of a single service if the optimization objectives and constraints can be established. ECMP approaches split traffic across equal cost routes, but do not provide guidance on allocating load between routes with different capacities. OMP attempts a network wide routing optimization (considering capacities) but assumes that all network services can be reduced to a single dimension of capacity. For networks requiring greater flexibility in the optimization objectives and constraints, policy-based approaches may be appropriate.
- the policy system provides a mechanism to configure the LSPs within LSRs.
- a system that can be configured can also be incorrectly configured with potentially disastrous results.
- the policy system can help to secure the MPLS system by providing appropriate controls on the LSP life cycle.
- Use of the COPS protocol within the policy system between the PEP/PDP allows the use of message level security for authentication, replay protection, and message integrity.
- Existing protocols such as IPSEC (e.g., a collection of IP security measures that comprise an optional tunneling protocol for IPv6) can also be used to authenticate and secure the channel.
- the COPS protocol also provides a reliable transport mechanism with a session keep-alive.
- FIG. 8 shows an illustration of policy-based management with scaling by automation.
- Configuration management data 800 can include business and service level policies that are part of a PMC. The policies can be communicated to a configuration/data translation point 805 , which is coupled to network devices such as device A 821 and device N 827 .
- Device A 821 can communicate status information to network status point 810, and device N 827 can communicate state information to network topology point 815.
- Each of network status point 810 and network topology point 815 can communicate information to configuration/data translation point 805 so that closed loop policies triggered by network state can automate network response to failures, congestion, demand changes, and so on. Accordingly, traffic engineering functions can move online.
- FIG. 9 shows an illustration of policy-based management with scaling by roles without closed loop policies triggered by network state.
- Policy-based management can automate the configuration translation functions to reduce errors and speed operations.
- Coherent policy can be applied across multiple device instances and device types using higher level abstractions such as roles.
- FIG. 10 shows an illustration of a large metropolitan scale voice service architecture based in part on an MPLS network.
- a central office 1010 includes class 5 central office equipment 1011 and trunk gateways 1012 .
- the central office can include line gateways, service gateways, and so on.
- the central office 1010 is coupled to an MPLS network 1020 providing logical metropolitan connectivity and corresponding to a physical metro topology 1025 .
- 1-5 million voice lines can be concentrated via 80-150 offices to attach via trunk gateways to the MPLS network 1020.
- Each LSP of the MPLS network 1020 for the voice lines can have a low megabyte/second average bandwidth.
- a full mesh interconnect with bi-directional LSPs can require 10,000-20,000 LSPs per metro for voice service.
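- As an illustrative check on this estimate: a full mesh of N offices requires N(N-1)/2 bidirectional interconnections, so for N between 80 and 150 offices this yields roughly 3,200 to 11,200 LSP pairs, or about 6,400 to 22,400 LSPs if each bidirectional path is realized as a pair of unidirectional LSPs, consistent with the range above.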
- MPLS networks can be scaled by service and across a multi-state region, e.g., a multi-state region of a regional Bell operating company (“RBOC”).
- aggregating the total number of LSPs implies greater than 100,000 LSPs across the region for voice service. More LSPs can be required for other services, and twice as many LSPs can be required for protected services.
- For metro/LATA (local access and transport area) interconnect (e.g., long distance voice service), a full mesh of LATAs would require 1,000-2,000 LSPs for inter-LATA voice service interconnection.
- FIG. 11 shows an illustration of a policy-based management architecture for voice services over an MPLS network.
- a call control complex 1260 can be coupled to a core MPLS network 1250 and a SS7/AIN network 1270 that provides PSTN voice services.
- the call control complex 1260 can send voice service traffic data to network administration 1280 .
- the network administration 1280 can include traffic management information 1281 (e.g., bandwidth broker policies, routing policies, etc.) and device provisioning changes 1282 (e.g., explicit routes, QoS parameters, etc.).
- Network administration 1280 can thereby provide voice service LSP provisioning information (e.g., policies) to the core MPLS network 1250 .
- network administration 1280 can receive an estimate of traffic demand (e.g., from call control complex 1260 , from elements of the SS7/AIN network 1270 , and so on) to dimension (e.g., dynamically) the capacity of voice trunks in the MPLS network 1250 .
- the MPLS network 1250 includes one or more VPNs that are set up to handle particular types of traffic.
- one or more VPNs can be provisioned to carry voice traffic across the MPLS network 1250.
- VPNs can be provisioned to carry traffic from particular classes of customers, e.g., business customer traffic can be carried by one or more VPNs to provide a better quality of service, consumer customer traffic can be carried by one or more other VPNs to provide a lower quality of service than business customer traffic receives, and so on.
- Policy-based control can configure the LSRs of the MPLS network 1250 so that, for example, voice traffic is set up with an appropriate quality of service level and data traffic is likewise set up with an appropriate quality of service level.
- “Coupled” encompasses a direct connection, an indirect connection, or a combination thereof.
- two devices that are coupled can engage in direct communications, in indirect communications, or a combination thereof.
- Embodiments of the present invention relate to data communications via one or more networks (e.g., MPLS networks).
- the data communications can be carried by one or more communications channels of the one or more networks.
- a network can include wired communication links (e.g., coaxial cable, copper wires, optical fibers, a combination thereof, and so on), wireless communication links (e.g., satellite communication links, terrestrial wireless communication links, satellite-to-terrestrial communication links, a combination thereof, and so on), or a combination thereof.
- a communications link can include one or more communications channels, where a communications channel carries communications.
- a communications link can include multiplexed communications channels, such as time division multiplexing (“TDM”) channels, frequency division multiplexing (“FDM”) channels, code division multiplexing (“CDM”) channels, wave division multiplexing (“WDM”) channels, a combination thereof, and so on.
- instructions adapted to be executed by a processor to perform a method are stored on a computer-readable medium.
- the computer-readable medium can be a device that stores digital information.
- a computer-readable medium includes a compact disc read-only memory (CD-ROM) as is known in the art for storing software.
- the computer-readable medium is accessed by a processor suitable for executing instructions adapted to be executed.
- “Instructions adapted to be executed” and “instructions to be executed” are meant to encompass any instructions that are ready to be executed in their present form (e.g., machine code) by a processor, or that require further manipulation (e.g., compilation, decryption, or being provided with an access code, etc.) to be ready to be executed by a processor.
Abstract
Embodiments of the present invention relate to systems and methods for policy-based management of a multiprotocol label switching (“MPLS”) network. In an embodiment, a system includes a policy-based network administration system, and the policy-based network administration system includes a plurality of policies. The system also includes an MPLS network, which is coupled to the policy-based network administration system.
Description
- This application claims the benefit of U.S. Provisional Patent Application Ser. No. 60/241,374 filed Oct. 19, 2000, which is herein incorporated by reference in its entirety.
- 1. Field of the Invention
- Embodiments of the present invention relate to communications networks. More particularly, embodiments of the present invention relate to systems and methods for policy-enabled communications networks.
- 2. Background Information
- Known policy controls enable improved administrative control of network capabilities to meet, for example, service objectives. For example, a policy control can specify that a data packet received by a network element of a communications network from a particular source is to be routed through the network in a specific way instead of the default way. A policy control can also be role-based and apply to certain network elements (e.g., edge routers) instead of other network elements (e.g., internal routers). Multi-protocol label switching (“MPLS”) networks can provide efficient and/or explicit routing capabilities for Internet Protocol (“IP”) networks, which may be a key element in the traffic engineering of those IP networks. In view of the foregoing, it can be appreciated that a substantial need exists for systems and methods that can advantageously provide for policy-enabled MPLS networks.
- Embodiments of the present invention relate to systems and methods for policy-based management of a multiprotocol label switching network. In an embodiment, a system includes a policy-based network administration system, and the policy-based network administration system includes a plurality of policies. The system also includes an MPLS network, which is coupled to the policy-based network administration system.
- FIG. 1 is a schematic diagram of the general architecture of an embodiment of a policy-based network management system.
- FIG. 2 is a schematic diagram illustrating a policy architecture for admission control decisions.
- FIG. 3 shows another configuration of a policy control architecture.
- FIG. 4 shows an illustration of a multipoint-to-point Label Switched Path traversing an MPLS network.
- FIG. 5 shows an example of the use of MPLS in a hierarchy.
- FIG. 6 shows a schematic diagram of an intra-network architecture of a policy-based network management system.
- FIG. 7 illustrates a generic policy-based network architecture in the context of an MPLS network.
- FIG. 8 shows an illustration of policy-based management with scaling by automation.
- FIG. 9 shows an illustration of policy-based management with scaling by roles without closed loop policies triggered by network state.
- FIG. 10 shows an illustration of a large metro scale voice service architecture based in part on an MPLS network.
- FIG. 11 shows an illustration of a policy-based management architecture for voice services over an MPLS network.
- According to an embodiment of the present invention, a system for policy management for an MPLS network includes life cycle management (e.g., creating, deleting, monitoring, and so forth) of Label Switched Paths (“LSPs”) through the MPLS network. In an embodiment, the policy management includes controlling access (e.g., LSP Admission Control) to the life cycle managed resources for traffic on the MPLS network.
- MPLS can support explicit traffic engineering via a number of specifications that allow LSPs to be managed based on Quality-of-Service (“QoS”) and other constraints, such as, for example, Constraint-based Routing Label Distribution Protocol (“CR-LDP”), Resource Reservation Protocol (“RSVP”), and so on. MPLS can also be used with implicit traffic engineering of LSP Quality of Service. Specific QoS mechanisms (e.g., DiffServ, Int-Serv, etc.) can be used. The policy management architecture used to control traffic engineering functionality can be independent of the MPLS mechanisms used and can provide consistent, predictable network services. In an embodiment, MPLS policy control is intra-domain and can be based on using Common Open Policy Service (“COPS”) to implement policy management.
- FIG. 1 is a schematic diagram of the general architecture of an embodiment of a policy-based network management system. In an embodiment, policy-based networking provides an infrastructure for management of networks with a rich set of management capabilities. The basic components of a policy-based management system can include a Policy Decision Point (“PDP”) 120 and a Policy Enforcement Point (“PEP”) 130. The PDP 120 can be a logical component residing within a Policy Server, and the PEP 130 can be a logical component, usually residing in a network device. Other components of a policy management system can include a policy management console (“PMC”) 100 to provide a human interface to the policy system and a policy repository (“PR”) 110 to store the policy. The PMC 100 can be used to generate policies for storage in the policy repository 110 and to administer the distribution of policies across various PDPs 120. Policies may also be imported into the system via other mechanisms. For example, they may be retrieved from a Lightweight Directory Access Protocol (“LDAP”) directory and stored directly into the policy repository 110. From the PDP 120, policy rules may be installed in the network and implemented at one or more PEPs 130.
- Decisions regarding what policy rules are to be installed in the network devices can be the result of several different events. There are at least two primary models of policy management that determine how and when policy decisions get made: policy provisioning and policy outsourcing. In policy provisioning, events occur at the PDP 120 that may cause the PDP 120 to install policy rules in the one or more PEPs 130. Examples of such events include human intervention (via the policy management console 100), signaling from an external application server, feedback about dynamic state changes in the devices that the PDP 120 is managing, and so forth. In policy outsourcing, events can occur at the PEPs 130 that require a policy-based decision, and the PEP 130 can request the decision from the PDP 120. An example of this type of event is the receipt of an RSVP message, or a message of some other network signaling protocol, containing policy information and a request for resource reservation. The PEP 130 sends a message to the PDP 120 requesting a decision, based on the policy information provided, whether to accept or deny the resource reservation request.
- FIG. 2 is a schematic diagram illustrating a policy architecture for admission control decisions. A network node 200 can include a PEP 210 and a Local Policy Decision Point (“LPDP”) 220. The PEP 210 can first use the LPDP 220 to reach a local partial decision. The partial decision and the original policy request next can be sent to a PDP 230 that renders a final decision (e.g., considering and approving or considering and overriding the LPDP 220). FIG. 3 shows another configuration of a policy control architecture. A network node 300 can include a PEP 310 and a PDP 320. The policy management system can include feedback from a PEP (e.g., PEPs 130, 210, 310) to a PDP (e.g., PDPs 120, 230, 320).
- FIG. 4 is an illustration of a multipoint-to-point (“MPt-Pt”) Label Switched Path (“LSP”) traversing an MPLS network. An LSP in MPLS is typically a sink-based tree structure traversing a series of Label Switch Routers (“LSRs”) 451-453 between ingress and egress Edge Routers (“ERs”) 410-412. In an embodiment, a merging function can be implemented at the LSRs. In another embodiment, a merging function may not be supported by certain classes of equipment (e.g., legacy ATM switches), and Point-to-Point LSPs are a degenerate case of MPt-Pt LSPs where no merging is performed.
- In MPLS networks, choosing the next hop can be based at least upon two functions. The first function can classify all possible packets into a set of Forwarding Equivalence Classes (“FECs”). The second function can map each FEC to a next hop. In conventional IP forwarding, a particular router will typically consider two packets to be in the same FEC if there is some address prefix X in that router's routing tables such that X is the “longest match” for each packet's destination address. As the packet traverses the network, each hop in turn re-examines the packet and assigns it to a FEC. In MPLS, the assignment of a particular packet to a particular FEC can be done just once. At subsequent hops along the LSP, there is no further analysis of the packet's network layer header, which has a number of advantages over conventional network layer forwarding including, for example, the following.
- (a) MPLS forwarding can be done by switches that are capable of doing label lookup and replacement (e.g., ATM switches).
- (b) The considerations that determine how a packet is assigned to a FEC can become ever more and more complicated without impact on the routers that merely forward labeled packets. Since a packet is classified into an FEC when it enters the network, the ingress edge router may use any information it has about the packet, even if that information cannot be gleaned from the network layer header. For example, packets arriving on different ports or at different routers may be assigned to different FECs.
- (c) Sometimes it is desirable to force a packet to follow an explicit route, rather than having its route chosen by the normal dynamic routing algorithm as the packet travels through the network. This may be done as a matter of policy, or to support traffic-engineering objectives such as load balancing.
- (d) MPLS allows (but does not require) the class of service to be inferred from the label. In this case, the label represents the combination of a FEC and Quality of Service.
- (e) MPLS also permits the use of labels in a hierarchical form in a process known as label stacking.
- FIG. 5 shows an example of the use of MPLS in a hierarchy. MPLS may operate in a hierarchy, for example, by using three transit routing domains, one of which is domain 502. In the hierarchy illustrated in FIG. 5, there are two levels of routing taking place. For example, Open Shortest-Path First (“OSPF”) may be used for routing within domain 502, so that the path between the boundary routers of domain 502 traverses its internal routers. The domain boundary routers 511-512, 521-522, and 531-532 can operate BGP to determine paths between the routing domains. While a packet transits domain 502, it can contain two labels encoded as a “label stack”: the higher level label may be used between the boundary routers of domain 502, and the lower level label may be used within domain 502.
- According to an embodiment of the present invention, a policy-enabled MPLS network includes Policy rules Information Base (“PIB”) elements that identify LSPs and policy actions that affect LSPs, such as, for example, admission of flows to LSPs, LSP life cycle operations such as creation/deletion of LSPs, and so on. Policy controls for MPLS can provide a rich environment for the creation of network services in an efficient manner. Operational advantages in a policy-based approach to the management and control of MPLS networks include the following:
- (a) MPLS Abstraction. While MPLS can be controlled directly through relevant Management Information Bases (“MIBs”), the use of a higher abstraction level PIB provides a mechanism to abstract away some of the implementation options within MPLS and to focus on operational advantages such as, for example, those provided by explicit routing capabilities.
- (b) Controllability of LSP Life Cycle. While MPLS may be operated in an autonomous fashion (e.g., with topology-driven LSP establishment), the autonomous operation does not necessarily provide the explicit routes and QoS required for traffic engineering. While manual establishment of explicit route LSPs with associated QoS parameters may be feasible, issues of scale and consistency when applied in large networks can arise.
- (c) Consistency with other techniques. The need for MPLS and DiffServ to interact appropriately and the work for policy controls for DiffServ networks are known. In an embodiment, policy controls can be applied to MPLS networks that may, but do not necessarily, implement DiffServ.
- (d) Flexibility in LSP Admission Control. The set of flows admitted to an LSP may change over time. Policy provides a mechanism to simplify the administration of dynamic LSP admission criteria in order to optimize network performance. For example, LSP admission control policies may be established to vary the set of admitted flows to match projected time-of-day sensitive traffic demands.
- (e) Integration with Network Service Objectives. Policy-based networking architecture can provide a mechanism to link service level objectives of the network to specific protocol actions within MPLS.
- FIG. 6 shows a schematic diagram of an intra-network architecture of a policy-based network management system. Applying the policy-based network architecture to the MPLS network, the Edge Label Switch Routers (“ELSRs”) 641, 643 become the PEPs, as they are involved in the admission control of flows to the LSP. Intervening LSRs, such as LSR 642, may also be PEPs, for example, in the case of MPt-Pt LSPs. Embodiments can run the policy decision function on a generic computing platform and leave the LSR as a Policy Ignorant Node (“PIN”), or can consider them the same piece of equipment.
- Embodiments of the present invention relate to one or more of two main categories of policies for MPLS: (1) LSP Admission Policies that map traffic flows onto LSPs; and (2) LSP Life Cycle Policies affecting LSP creation, deletion, configuration, and monitoring. Mapping traffic flows onto LSPs involves a policy system setting up classifiers in the ingress LSR(s) of an LSP to identify which packets get admitted onto the LSP and to process the packets accordingly. In MPLS, label switched paths can be associated with a Forwarding Equivalence Class (FEC) that specifies which packets are to be sent onto the LSP. Classifiers from the policy server can define the characteristics of the FEC, and packets/flows that match these characteristics are sent over the LSP. In this way, the FEC that gets mapped onto an LSP can be defined according to a number of flow characteristics such as application, source/destination/subnet address, user, DiffServ code point on incoming packet, and so on. Configuring LSPs involves the creation and deletion of LSPs in the network according to some QoS or other criteria. This can be achieved in a number of ways, such as manual creation or invoking one of the label distribution mechanisms that support this (CR-LDP, RSVP). After a label switched path is created, it can be monitored for performance to ensure that the service it provides continues to behave as expected. For example, LSP MIB counters, such as a count of packets dropped in a particular LSP, can be used to gauge performance. If the configured resources along the LSP become insufficient for the traffic requests for resources, or if the requirements change, a new path may be necessary or an existing one changed according to a new set of constraints. As part of the policy-based management of MPLS, the LSRs can provide feedback to the policy system to perform this monitoring. For example, an LSP performance table can track incoming and outgoing statistics related to octets, packets, drops, and discards on MPLS trunks. Using this information, the LSR can notify the server when performance levels fall below some threshold based on the available statistics. The server would then have the ability to enhance the current LSP or create alternatives.
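- The threshold-based notification just described might be sketched as follows (a hypothetical example; the counter names loosely follow the style of the MPLS LSR MIB performance tables but are not actual MIB object identifiers):

```python
# Hypothetical sketch of monitoring feedback: compare LSP performance
# counters against a drop-rate threshold and notify the policy server.
def check_lsp(stats, drop_threshold=0.01):
    """stats: dict with packet and drop counters for one LSP (assumed names)."""
    total = stats["out_packets"] + stats["drops"]
    drop_rate = stats["drops"] / total if total else 0.0
    if drop_rate > drop_threshold:
        notify_pdp(stats["lsp_id"], drop_rate)

def notify_pdp(lsp_id, drop_rate):
    """Stand-in for the LSR-to-server notification path."""
    print(f"report to PDP: LSP {lsp_id} drop rate {drop_rate:.2%}")

check_lsp({"lsp_id": 7, "out_packets": 9_500, "drops": 500})
```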
- LSP Admission Policies. While an LSP can be configured for use with best effort traffic services, there are often operational reasons and service class reasons for restricting the traffic that may enter a specific LSP. Classification can result in admission to the FEC associated with a specific LSP. The admission criteria may include, for example, the following criteria: (a) a DiffServ marking as one of the potential classification mechanisms; (b) authentication, for example, for access to an LSP-based Virtual Private Network (“VPN”); or (c) traffic engineering policies related to architectures other than DiffServ (e.g., Int-Serv).
- An MPLS framework can consider classification in terms of establishing a flow with a specific granularity. These granularities can be a base set of criteria for classification policies, such as the following examples of unicast traffic granularities:
- PQ (Port Quadruples): same IP source address prefix, destination address prefix, TTL, IP protocol and TCP/UDP source/destination ports;
- PQT (Port Quadruples with TOS): same IP source address prefix, destination address prefix, TTL, IP protocol and TCP/UDP source/destination ports and same IP header TOS field (including Precedence and TOS bits);
- HP (Host Pairs): same specific IP source and destination address (32 bit);
- NP (Network Pairs): same IP source and destination address prefixes (variable length);
- DN (Destination Network): same IP destination network address prefix (variable length);
- ER (Egress Router): same egress router ID (e.g. OSPF);
- NAS (Next-hop AS): same next-hop AS number (BGP);
- DAS (Destination AS): same destination AS number (BGP);
- The MPLS framework also can include the following multicast traffic granularities:
- SST (Source Specific Tree): same source address and multicast group
- SMT (Shared Multicast Tree): same multicast group address.
- For LSP admission decisions based on QoS criteria, the calculations may involve other traffic characteristics relating to buffer occupancy and scheduling resource decisions. These may include parameters such as: (a) burstiness measures (e.g., path MTU size or packet size); or (b) inferred or signaled bandwidth requirements.
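- A minimal sketch of how two of the unicast granularities listed above might be expressed as classification keys (the field names are illustrative assumptions, not part of the MPLS framework itself):

```python
# Hypothetical sketch: two of the unicast granularities expressed as
# classification keys; packets with equal keys belong to the same flow
# at that granularity.
def pq_key(pkt):
    """PQ (Port Quadruples): prefix pair, TTL, protocol, and ports."""
    return (pkt["src_prefix"], pkt["dst_prefix"], pkt["ttl"],
            pkt["proto"], pkt["sport"], pkt["dport"])

def hp_key(pkt):
    """HP (Host Pairs): specific 32-bit source and destination addresses."""
    return (pkt["src_ip"], pkt["dst_ip"])

pkt = {"src_prefix": "10.0.0.0/8", "dst_prefix": "10.1.0.0/16", "ttl": 64,
       "proto": 6, "sport": 1234, "dport": 80,
       "src_ip": "10.0.0.1", "dst_ip": "10.1.0.9"}
print(pq_key(pkt), hp_key(pkt))
```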
- LSP Life Cycle Policies. MPLS permits a range of LSP creation/deletion modes, from relatively static, manually provisioned LSPs, to dynamic LSPs initiated in response to routing topology information, to data driven LSP generation. Policy impacts can vary depending on the LSP creation/deletion mode. MPLS supports a variety of mechanisms for the creation/deletion of LSPs, such as manual provisioning, LDP, CR-LDP, RSVP, BGP, etc. In an embodiment, the policy should be independent of the underlying mechanism.
- For example, with manually provisioned LSPs, the role of policy may be to restrict the range of authorized users that can create or delete LSPs, or the range of addresses that can be connected by LSPs (e.g., Intra-Domain, intra-VPN, and so on). With topology driven LSP setup, there may be policy constraints on speed of re-establishment of LSPs or the number of LSPs. With data driven LSP establishment, there can be policies related to the data characteristics that trigger the creation or deletion of an LSP.
- When created, LSPs may have certain attributes. For example, traffic-engineering policies may be applied to reserve network resources such as bandwidth on specific links for an LSP. LSPs in general are sink-based tree structures. The merge points of the LSP may have policies such as, for example, policies associated with the buffer management at the merge point. The characteristics or attributes of an LSP may be impacted by different policy considerations. They can be impacted at the time of LSP creation or may be altered for an existing LSP.
- In an embodiment, a policy-enabled MPLS system can include the following features and/or functions: (a) a label distribution protocol that supports the specification of QoS constraints; (b) LSPs are established as administratively specified explicit paths where the route is specified either entirely or partially at the time the path is established; and (c) COPS and PIBs are used for policy protocol between a policy server (e.g., a PDP) and LSRs (e.g., PEPs). The policy-enabled MPLS system can include three phases: (a) LSP setup; (b) LSP admission control; and (c) LSP monitoring.
- LSP Setup. In an embodiment, a PDP determines that an LSP is to be established. Possible choices for how the PDP gets signaled to make this determination include: human input at the network management console (e.g., manually provisioned LSP), receipt of a trigger from an ingress LSR as a result of receiving a particular type of data packet, or observing a particular performance level deficiency (e.g., data-driven LSP provisioning). In the case of data-driven LSP establishment, an initial policy can be implemented in the LSR specifying what types of data packets to look for that can trigger an LSP. In some respects, this can appear to be similar to RSVP QoS policy where the decision to permit the resource reservation is outsourced to the PDP. In an MPLS network in accordance with an embodiment of the present invention, however, the outsourced decision is not just to accept or deny the request, but involves a separate step of initiating the LSP session, as described below.
- For example, an LSP may be required, in an embodiment, to support a specific service or set of services in the network. This may imply traffic characteristics for the LSP such as, for example, peak data rate, committed data rate, burst size, etc. If explicit routes are used, the PDP can determine the specific LSRs that are to be part of the path. The LSP may be partially explicit, specifying some specific LSRs that must be included, and the remainder of the LSP left to the routing protocols. An intelligent PDP may use feedback information from the LSRs to determine if they currently have sufficient resources free to support the resource requirements of the LSP. Alternatively, the LSP creation could use a topology-driven method where the path is determined by the routing protocol (and the underlying label distribution protocol processing). In such an embodiment, the LSP creation is initiated with specification of the traffic requirements. However the LSP is routed, any traffic constraint requirements are to be met by all LSRs that get included in the LSP.
- The PDP can issue a policy message to the ingress LSR of the LSP, including the explicit route information (if applicable), strict or loose route preferences, traffic parameters (constraint requirements), etc. In the COPS+PIB example, this is done via a COPS Decision (cops-pr, probably using a <cops-mpls> client type in the PEP) that includes MPLS PIBs describing the CR-LDP constraints.
- The MPLS policy client in the LSR can take the message and initiate an LSP session. When CR-LDP is used, for example, this is done by sending a Label Request message containing the necessary CR-LDP Type Length Values (“TLVs”) (e.g., Explicit Route TLV, Traffic TLV, CR-LSP FEC, etc.). When RSVP is used, a path message containing the constraint information is sent from the ingress LSR to the egress LSR. The LSP establishment is similar, from a policy point of view, regardless of the label distribution protocol used. In an embodiment as described herein, use of CR-LDP is described, but based on the written description herein the use of RSVP in an embodiment is apparent to one of skill in the art. The Label Request is propagated downstream and gets processed as usual according to CR-LDP procedures (e.g., downstream on demand label advertisement). When the egress LSR processes the Label Request, it issues a Label Mapping message that propagates back upstream, establishing label mappings between MPLS peers for the LSP. Eventually the ingress LSR receives back a Label Mapping message from the next-hop LSR, and it notifies the PDP of the label it received, to be used when forwarding packets to the next-hop on this LSP, and of the LSPID. If the path could not be established, for example due to errors or insufficient resources or other issues, the error notification gets sent to the PDP. When COPS is used as the policy protocol, this is done with a COPS Report message, containing the MPLS label and referencing the Decision message that initiated the CR-LDP session.
- LSP Admission Control. With the LSP established and the label to be used for sending packets to the next-hop on the LSP known, the PDP can issue policies to specify which packets/flows get mapped onto the LSP, i.e., which packets belong to the FEC for the LSP. Using the COPS and PIB example, this is done in a similar manner to the way packets get mapped to DiffServ Per Hop Behaviors (“PHB”) in ingress routers of a DiffServ network. A COPS Decision message can be issued containing PIB table entries, for example, for: the classifier that specifies the FEC, a profile for policing and admission control to the LSP, the label to put on the packets that match the classifier, and what to do with packets that match but are out of profile.
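- A sketch of the decision payload described above (the dictionary fields are illustrative stand-ins for PIB table entries, not a normative encoding):

```python
# Hypothetical sketch of an admission-control policy for one FEC:
# classifier, policing profile, label binding, and out-of-profile action.
admission_policy = {
    "classifier": {"dst_prefix": "10.1.0.0/16", "dscp": 46},   # defines the FEC
    "profile": {"rate_bps": 2_000_000, "burst_bytes": 16_000},  # policing profile
    "label": 1001,                     # label bound to the LSP for this FEC
    "out_of_profile": "drop",          # action for non-conforming packets
}

def admit(offered_rate_bps, policy=admission_policy):
    """Return the label to push if in profile, else the exception action."""
    if offered_rate_bps <= policy["profile"]["rate_bps"]:
        return ("push_label", policy["label"])
    return ("action", policy["out_of_profile"])

print(admit(1_500_000))  # -> ('push_label', 1001)
```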
- As packets come into the ingress LSR, the MPLS policy is enforced and packets are matched against the FEC classification and profile. The metering capability allows the PDP to specify a profile for policing so that admission control can be performed on the packets utilizing the LSP resources. Also, the policy installed by the PDP for the FEC can specify an MPLS Action table entry (e.g., of a PIB) for certain data packet types that might be admitted onto the LSP, to authenticate the policy information about the packet with the PDP. This action is quite similar to the way COPS-RSVP works, where the PDP returns an accept/deny decision to indicate whether the packet is allowed access to the LSP or not. Packets that match the FEC classification, are in-profile, and have valid policy information (if applicable) get the label associated with the LSP for that FEC. This can involve pushing the label onto the top of a label stack if the packet already has a label for another LSP. This is handled according to MPLS label processing rules.
- LSP Monitoring. The PDP can monitor the performance of the LSP to ensure the packets that are being mapped to the LSP receive the intended service. Information such as that specified in the MPLS LSR MIB, the in-segment performance table, the out-segment performance table, and so on may be used for this purpose (other data/statistics may also be better suited for this purpose). As the PDP gathers this feedback information, it makes decisions regarding the creation/deletion/changing of LSPs and the packets that get mapped onto them. Actions taken by the PDP as a result of performance feedback analysis may include re-directing existing LSPs to route traffic around high congestion areas of the network, changing traffic parameters associated with an LSP to reserve more resources for the FEC, adding a new LSP to handle overflow traffic from an existing path, tearing down an LSP no longer in use, and so on.
- In an embodiment, a policy system can help to secure the MPLS system by providing appropriate controls on the LSP life cycle. Conversely, if the security of the policy system is compromised, then this may impact any MPLS systems controlled by that policy system. The MPLS network is not expected to impact the security of the policy system.
- Embodiments of the present invention can include policy systems related to one or more of policy-based load balancing in traffic-engineered MPLS networks and traffic engineering of load distribution. An overview of load balancing and load distribution is first described, and then an embodiment of the present invention related to load balancing, which can be a specific sub-problem within load distribution, is described.
- Load Balancing Overview
- At least three fundamental features related to traffic engineering over MPLS networks are known: (a) mapping traffic to FECs; (b) mapping FECs to LSPs; and (c) mapping LSPs to physical topology. The first two features are discussed in greater detail herein as part of describing MPLS as an interesting subset of IP protocols, load balancing as a traffic engineering objective, and policy-based approaches for describing the objectives and constraints of the traffic engineering optimization.
- Load balancing in MPLS networks concerns the allocation of traffic between two or more LSPs which can have the same origin and destination. In certain embodiments of the present invention, a pair of LSRs may be connected by several (e.g., parallel) links. From an MPLS traffic engineering point of view, for the purpose of scalability, it may be desirable to treat all these links as a single IP link in an operation known as Link Bundling. With load balancing, the load to be balanced is spread across multiple LSPs, which in general does not require physical topology adjacency of the LSRs. The techniques can be complementary. Link bundling typically provides a local optimization that is particularly suited for aggregating low speed links. Load balancing generally is targeted at larger scale network optimizations.
- While load balancing is often considered to apply between edge LSRs, it can be applied in an embodiment at any LSR that provides the requisite multiple LSP tunnels with common endpoints. The Policy Enforcement Point is the LSR at the source end of the set of LSPs with common endpoints. The arriving traffic to be load balanced may be from non-MPLS interfaces or MPLS interfaces. In general, the source end of an LSP may act as a merge point for multiple input streams of traffic.
- The set of LSPs over which the load is to be balanced can be pre-defined, and the relevant load balancing policies are then applied to these LSPs. In another embodiment, LSPs can be created and deleted in response to policies with load balancing objectives. According to an embodiment of the present invention, best effort LSPs are considered, which can simplify the admission control considerations of a load balancing process. When LSPs are established with QoS constraints, it can be necessary to determine if the traffic flow sent over the LSP as a result of load balancing fits the profile of the constraints, which can add complexity to the load balancing policy as well as to the processing of the LSR performing the load balancing.
- While load balancing on a best effort network can be viewed as a simple case, the basic methodologies have a wider applicability when applied to QoS-based LSP selection. Indeed, the load balancing case for best effort only traffic has similar problems to that of load balancing a particular traffic class such as that with a particular DiffServ PHB. Bandwidth sharing among classes of service can raise some more complex issues that also apply to the placement of traffic into ER-LSPs. As the required capacity for a particular traffic class to a particular destination exceeds the capacity of the LSP for that traffic, an action can be taken to get more bandwidth or to control access to the LSP. The PEP can perform an action per traffic class, with a likely result that the best effort traffic on the network will become squeezed in favor of higher priority traffic. Lending of bandwidth between LSPs can be implemented as a policy. In an embodiment, the location of the network congestion can have a bearing on a solution, and a policy server can initiate a new LSP and map certain flows to this new LSP to avoid the congestion point, thereby improving the performance of those flows and reducing the congestion problem. This can, however, require a congestion detection methodology and inter-PDP communication.
- In general, a policy provides a rule of the form: IF <condition> THEN <action>. Policy-based networking is one of a number of mechanisms that can be used in achieving traffic engineering objectives. While traffic engineering may be considered an optimization issue, policy approaches provide considerable flexibility in the specification of the network optimization objectives and constraints.
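- A minimal sketch of this rule form, assuming conditions and actions are plain callables (the names and the congestion condition are illustrative):

```python
# Hypothetical sketch of the rule form IF <condition> THEN <action>.
class PolicyRule:
    def __init__(self, condition, action):
        self.condition = condition  # callable returning True/False
        self.action = action        # callable applied when condition holds

    def evaluate(self, context):
        if self.condition(context):
            self.action(context)

rule = PolicyRule(
    condition=lambda ctx: ctx["link_utilization"] > 0.9,
    action=lambda ctx: print(f"shift traffic off link {ctx['link']}"),
)
rule.evaluate({"link": "LSR1-LSR2", "link_utilization": 0.95})
```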
- Engineering Framework. Within the Traffic Engineering (“TE”) framework's taxonomy of traffic engineering systems, policies may be: (a) dependent on time or network state (e.g., either local or global); (b) based on algorithms executed offline or online; (c) stored centrally (e.g., in a directory) or distributed to an engineerable number of policy decision points; (d) prescriptive or descriptive; and (e) designed for open loop or closed loop network control. Network feedback can be an important part of policy-based networking. While network configuration (e.g., provisioning) can be performed in an open-loop manner, in general, policy-based networking can imply a closed-loop mechanism. Distribution and performance of the policy system can require adequate resources that are provisioned to meet the required policy update frequency and so on.
- A traffic engineering framework can identify process model components for (a) measurement; (b) modeling, analysis, and simulation; and (c) optimization. Policies may be used to identify relevant measurements available through the network and trigger appropriate actions. The available traffic metrics for determining the policy trigger conditions can be constrained, e.g., by generic IP traffic metrics.
- Policies can provide an abstraction of network resources, e.g., a model that can be designed to achieve traffic engineering objectives. Policies can provide a degree of analysis by identifying network problems through correlation of various measurements of network state. A set of policies can be designed to achieve an optimization of network performance through appropriate network provisioning actions.
- Policy-based Load Balancing. In general, load balancing can be an example of traffic mapping. In an embodiment, a relative simplicity of load balancing algorithms can illustrate approaches to traffic engineering in the context of MPLS networks. While load balancing optimizations have been proposed for various routing protocols, such approaches typically complicate existing routing protocols and tend to optimize towards a fairly limited set of load balancing objectives. Extending these towards more flexible/dynamic load balancing objectives can be overly complicated. Hence, building on a policy-based networking architecture can provide mechanisms specifically designed to support flexible and dynamic administration.
- Load Distribution Overview
- Online traffic load distribution for a single class of service is known, based in part on extensions to the Interior Gateway Protocol (“IGP”) that can provide loading information to network nodes. To perform traffic engineering of load distribution for multi-service networks, or offline traffic engineering of single service networks, a control mechanism for provisioning bandwidth according to a policy can be provided. Identified and described in this load distribution overview are: (a) the mechanisms that affect load distribution, and the controls for those mechanisms, to enable policy-based traffic engineering of the load distribution; and (b) the use of load distribution mechanisms in the context of an IP network administration.
- Introduction. The traffic load that an IP network supports may be distributed in various ways within the constraints of the topology of the network (e.g., avoiding routing loops). In an embodiment, a default mechanism for load distribution is to rely on an IGP (e.g., Intermediate System to Intermediate System (“IS-IS”), OSPF, etc.) to identify a single “shortest” path between any two endpoints of the network. “Shortest” is typically defined in terms of a minimization of an administrative weight (e.g., hop count) assigned to each link of the network topology. Having identified a single shortest path, all traffic between those endpoints then follows that path until the IGP detects a topology change. While often called dynamic routing (e.g., because it changes in response to topology changes), it can be better characterized as topology driven route determination.
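- The topology-driven route determination just described can be sketched with Dijkstra's algorithm over per-link administrative weights (the example topology and the weight values are illustrative):

```python
# A minimal sketch of "shortest path" selection by minimizing the sum of
# per-link administrative weights, as an IGP would.
import heapq

def shortest_path(graph, src, dst):
    """graph: {node: [(neighbor, admin_weight), ...]}"""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, weight in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(queue, (cost + weight, nbr, path + [nbr]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
print(shortest_path(graph, "A", "C"))  # -> (2, ['A', 'B', 'C'])
```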
- This default IGP mechanism works well in a wide variety of operational contexts. Nonetheless, there are operational environments in which network operators may wish to use additional controls to affect the distribution of traffic within their networks. These may include: (a) service specific routing (e.g., voice service may utilize delay sensitive routing, but best effort service may not); (b) customer specific routing (e.g., VPNs); (c) tactical route changes where peak traffic demands exceed single link capacity; and (d) tactical route changes for fault avoidance. In an embodiment, a rationale for greater control of the load distribution than that provided by the default mechanisms is included.
- Load Distribution. Traffic load distribution may be considered on a service-specific basis or aggregated across multiple services. In considering the load distribution, one can also distinguish between a snapshot of the network's state (e.g., a measurement) and an estimated (e.g., hypothetical) network state that may be based on estimated (e.g., projected) traffic demand. Load distribution can have two main components: (1) identification of routes over which traffic flows; and (2) in the case of multipath routing configurations (e.g., where multiple acyclic paths exist between common endpoints), the classification of flows determines the distribution of flows among those routes.
- Traffic Load Definition and Measurement. With modern node equipment supporting wire speed forwarding, traffic load can be a link measurement. In other cases, node constraints (e.g., packet forwarding capacity) may be more relevant. Traffic load can be measured in units of network capacity, and network capacity is typically measured in units of bandwidth (e.g., with a magnitude dimensioned in bits/second or packets/second). However, bandwidth can be considered a vector quantity providing both a magnitude and a direction. Bandwidth magnitude measurements are typically made at some specific (but often implicit) point in the network where traffic is flowing in a specific direction (e.g., between two points of a unicast transmission). A significant distinction therefore arises between bandwidth measurements made on a link basis and bandwidth demands between end-points of a network.
- A snapshot of the current load distribution may be identified through relevant measurements available on the network. The available traffic metrics for determining the load distribution include, for example, generic IP traffic metrics. The measurements of network capacity utilization can be combined with the information from the routing database to provide an overall perspective on the traffic distribution within the network. This information may be combined at the routers (and then reported back) or aggregated in the management system for dynamic traffic engineering.
- A peak demand value of the traffic load magnitude (e.g., over some time interval, in the context of a specific traffic direction) may be used for network capacity planning purposes. Considering the increasing deployment of asymmetric host interfaces (e.g. Asymmetric Digital Subscriber Line (“ADSL”)) and application software architectures (e.g. client-server), traffic load distribution is not necessarily symmetric between the opposite directions of transmission for any two endpoints of the network.
- Load Distribution Controls. For a traffic engineering process to impact the network, there can be adequate controls within the network to implement the results of the offline traffic engineering processes. In an embodiment, the physical topology (e.g., links and nodes) can be fixed while considering the traffic engineering options for affecting the distribution of a traffic load over that topology. In another embodiment, new nodes and links can be added and considered a network capacity planning issue.
- Fundamental load-affecting mechanisms include: (1) identification of suitable routes; and (2) in the case of multipath routing, allocation of traffic to a specific path. For traffic engineering purposes, the control mechanisms available can impact either of these mechanisms.
- Control of the Load Distribution in the Context of the TE Framework. When there is a need for control of the load distribution, the values of control parameters are unlikely to be static. Within the TE Framework's taxonomy of traffic engineering systems, control of load distribution may be: (a) dependent on time or network state (either local or global), e.g., based on IGP topology information; (b) based on algorithms executed offline or online; (c) impacted by open or closed loop network control; (d) centralized or distributed control of the distributed route set and traffic classification functions; or (e) prescriptive (i.e., a control function) rather than simply descriptive of network state.
- Network feedback can be an important part of the dynamic control of load distribution within the network. While offline algorithms to compute a set of paths between ingress and egress points in an administrative domain may rely on historic load data, online adjustments to the traffic engineered paths typically will rely in part on the load information reported by the nodes.
- The traffic engineering framework identifies process model components for: (a) measurement; (b) modeling, analysis, and simulation; and (c) optimization. Traffic load distribution measurement has already been described herein. Modeling, analysis, and simulation of the load distribution expected in the network is typically performed offline. Such analyses typically produce individual results of limited scope (e.g., valid for a specific demanded traffic load, fault condition, etc.). However, the accumulation of a number of such results can provide an indication of the robustness of a particular network configuration.
- The notion of optimization of the load distribution can imply the existence of some objective optimization criteria and constraints. Load distribution optimization objectives may include: (a) elimination of overload conditions on links/nodes; and (b) equalization of load on links/nodes. A variety of load distribution constraints may be derived from equipment, network topology, operational practices, service agreements, etc. Load distribution constraints may include: (a) current topology/route database; (b) current planned changes to topology/route database; (c) capacity allocations for planned traffic demand; (d) capacity allocations for network protection purposes; and (e) service level agreements (“SLAs”) for bandwidth and delay sensitivity of flows. Within the context of the traffic-engineering framework, control of the load distribution can be a core capability for enabling traffic engineering of the network.
- Route Determination. Routing protocols are well known, and this description of route determination focuses on specific operational aspects of controlling those routing protocols towards a traffic-engineered load distribution. A traffic engineered load distribution typically relies on something other than a default IGP route set, and typically requires support for multiple path configurations. In an embodiment, the set of routes deployed for use within a network is not necessarily monolithic. Not all routes in the network may be determined by the same system. Routes may be static or dynamic. Routes may be determined by: (1) a topology driven IGP; (2) explicit specification; (3) capacity constraints (e.g., link/node/service bandwidth); or (4) constraints on other desired route characteristics (e.g., delay, diversity/affinity with other routes, etc.). Combinations of the methods are possible, for example, determining partial explicit routes where some of the links are selected by the topology driven IGP, some routes may be automatically generated by the IGP, and others may be explicitly set by some management system.
- Explicit routes are not necessarily static. Explicit routes may be generated periodically by an offline traffic engineering tool and provisioned into the network. MPLS provides efficient mechanisms for explicit routing and bandwidth reservation. Link capacity may be reserved for a variety of protection strategies as well as for planned traffic load demands and in response to signaled bandwidth requests (e.g., RSVP). When allocating capacity, the sequence in which capacity is allocated to specific routes may affect the overall traffic load capacity. It can be important during path selection to choose paths that have a minimal effect on future path setups. The aggregate capacity required for some paths may exceed the capacity of one or more links along the path, forcing the selection of an alternative path for that traffic. Constraint-based routing approaches may also provide mechanisms to support additional constraints (e.g., other than capacity based constraints).
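- The sequencing concern can be made concrete with a small sketch. In the following Python fragment, the topology, demands, and the heuristic of placing the most constrained LSP first are all illustrative assumptions; allocating the flexible LSP first would exhaust the only path available to the inflexible one:

```python
# Illustrative capacity-constrained path selection, showing that allocation
# order affects whether all demands fit.

residual = {("A", "B"): 10, ("B", "D"): 10, ("A", "C"): 10, ("C", "D"): 10}

candidate_paths = {
    "LSP-1": [[("A", "B"), ("B", "D")], [("A", "C"), ("C", "D")]],  # two options
    "LSP-2": [[("A", "B"), ("B", "D")]],                            # one option
}

def try_allocate(path, demand):
    """Reserve `demand` units on every link of `path` if all links can fit it."""
    if all(residual[link] >= demand for link in path):
        for link in path:
            residual[link] -= demand
        return True
    return False

# Heuristic: place LSPs with the fewest alternatives first, so earlier
# allocations have minimal effect on future path setups.
for lsp in sorted(candidate_paths, key=lambda name: len(candidate_paths[name])):
    for path in candidate_paths[lsp]:
        if try_allocate(path, demand=8):
            print(lsp, "routed over", path)
            break
```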
- There are known IGP (e.g. IS-IS, OSPF) enhancement proposals to support additional network state information for traffic engineering purposes (e.g., available link capacity). Alternatively, routers can report relevant network state information (e.g., raw and/or processed) directly to the management system.
- In networks other than MPLS (e.g., PSTN), there can be some symmetry in the routing of traffic flows and aggregate demand. For the Internet, symmetry is unlikely to be achieved in routing (e.g., due to peering policies sending responses to different peering points than queries).
- Controls over the determination of routes form an important aspect of traffic engineering for load distribution. Since the routing can operate over a specific topology, any control of the topology abstraction used provides some control of the set of possible routes.
- Control of the topology abstraction. There are at least two major controls available on topology abstraction: hierarchical routing and link bundling. Hierarchical routing provides a mechanism to abstract portions of the network in order to simplify the topology over which routes are being selected. Hierarchical routing examples in IP networks include: (a) use of an Exterior Gateway Protocol (“EGP”) (e.g., BGP) together with an IGP (e.g., IS-IS); and (b) MPLS label stacks. Such hierarchies can provide both a simplified topology and a coarse classification of traffic. Operational controls over route determination are another example. The default topology driven IGP typically provides the least administrative control over route determination; the main control available is the ability to modify the administrative weights, which has network wide effects and may result in unanticipated traffic shifts. A route set composed entirely of completely-specified explicit routes is the opposite extreme, i.e., complete offline operational control of the routing. A disadvantage of using explicit routes is the administrative burden and the potential for human induced errors when this approach is used on a large scale. Management systems (e.g., policy-based management) may be deployed to ease these operational concerns while still providing more precise control over the routes deployed in the network. In MPLS enabled networks, explicit route specification is feasible, and a finer grained approach to classification, including service differentiation, is possible.
- Traffic Classification in Multipath Routing Configurations. With multiple paths between two endpoints, there is a choice to be made as to which traffic to send down a particular path. The choice can be impacted by: (1) traffic source preferences (e.g., expressed as marking—Differentiated Services Code Points (“DSCP”)); (2) traffic destination preferences (e.g., peering arrangements); (3) network operator preferences (e.g., time of day routing, scheduled facility maintenance, policy); and (4) network state (e.g., link congestion avoidance). There are a number of potential issues related to the use of multi-path routing including: (a) variable path Maximum Transmission Unit (“MTU”); (b) variable latencies; (c) increased difficulty in debugging; and (d) sequence integrity. These issues may be of particular concern when traffic from a single “flow” is routed over multiple paths or during the transition of traffic flow between paths. Known efforts have been made to consider these effects in the development of hashing algorithms for use in multipath routing. However, the transient effects of flow migration for other than best-effort flows have not been resolved.
- The choice of traffic classification algorithm can be delegated to the network (e.g., load balancing, which may be done based on some hash of packet headers and/or random numbers). This approach is taken in Equal Cost Multipath Protocol (“ECMP”) and Optimized Multipath Protocol (“OMP”). Alternatively, a policy-based approach has the advantage of permitting greater flexibility in the packet classification and path selection. This flexibility can be used for more sophisticated load balancing algorithms, or to accommodate churn in the network optimization objectives as new service requirements arise.
- Multipath routing, in the absence of explicit routes, can be difficult to traffic engineer because it devolves to the problem of adjusting the administrative weights. MPLS networks provide a convenient and realistic context for multipath classification examples using explicit routes. One LSP could be established along the default IGP path. An additional LSP could be provisioned (in various ways) to meet different traffic engineering objectives.
- Traffic Engineered Load Distribution in Multipath MPLS networks. Load balancing can be analyzed as a specific sub-problem within the topic of load distribution. Load-balancing essentially provides a partition of the traffic load across the multiple paths in the MPLS network.
-
FIG. 7 illustrates a generic policy-based network architecture in the context of an MPLS network. In this embodiment, two LSPs are established: LSP A and LSP B, each following a different path of routers through the network. - A load balancing operation is performed at the LSR containing the ingress of the LSPs to be load balanced.
LSR 741 is acting as the Policy Enforcement Point for load-balancing policies related to LSPs 751-752. The load-balancing encompasses the selection of suitable policies to control the admission of flows to both LSPs 751-752. - The admission decision for an LSP can be reflected in the placement of that LSP as the Next Hop Label Forwarding Entry (“NHLFE”) within the appropriate routing tables within the LSR. Normally, there is only one NHLFE corresponding to each FEC; however, there are some circumstances where multiple NHLFEs may exist for an FEC.
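- A minimal sketch of this table-driven admission follows; the dictionary layout, field names, and label values are readability assumptions and do not reflect any standard MIB structure:

```python
# Illustrative FTN (FEC-to-NHLFE) table: admitting a FEC to an LSP is
# reflected as the placement of that LSP's NHLFE for the FEC.

ftn_table = {
    # FEC (here a destination prefix) -> NHLFE entry
    "10.1.0.0/16": {"out_label": 1001, "next_hop": "LSR-2", "lsp": "LSP-A"},
    "10.2.0.0/16": {"out_label": 1002, "next_hop": "LSR-3", "lsp": "LSP-B"},
}

def admit(fec, lsp, out_label, next_hop):
    """Record the admission decision for `fec` by installing its NHLFE."""
    ftn_table[fec] = {"out_label": out_label, "next_hop": next_hop, "lsp": lsp}

admit("10.3.0.0/16", "LSP-A", 1001, "LSR-2")
print(ftn_table["10.3.0.0/16"])
```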
- The conditions for the policies applying to the set of LSPs to be load balanced can be consistent. For example, if the condition used to allocate flows between LSPs is the source address range, then the set of policies applied to the set of LSPs can account for the disposition of the entire source address range.
- For policy-based MPLS networks, traffic engineering policies can also utilize, in both conditions and actions, the parameters available in the standard MPLS MIBs, such as the MPLS Traffic Engineering MIB, the MPLS LSR MIB, the MPLS Packet Classifier MIB, and other MIB elements for additional traffic metrics.
- Load Balancing at Edge of MPLS Domain. Flows admitted to an LSP at the edge of an MPLS domain can be described by the set of Forwarding Equivalence Classes (FECs) that are mapped to the LSPs in the FEC to NHLFE (“FTN”) table. The load-balancing operation may be considered as redefining the FECs to send traffic along the appropriate path. Rather than sending all the traffic along a single LSP, the load balancing policy operation results in the creation of new FECs which effectively partition the traffic flow among the LSPs in order to achieve some load balance objective. As an example, two simple point-to-point LSPs with the same source and destination can have an aggregate FEC (z) load balanced. The aggregate FEC (z) is the union of FEC (a) and FEC (b). The load balancing policy may adjust the FEC (a) and FEC (b) definitions such that the aggregate FEC (z) is preserved.
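- The FEC (z) example can be sketched directly; the prefix and the even split point below are illustrative assumptions, and a real policy could move the boundary to rebalance the load:

```python
import ipaddress

# Illustrative partition of an aggregate FEC (z) into FEC (a) and FEC (b)
# such that the aggregate is preserved: z = a U b, with a and b disjoint.

fec_z = ipaddress.ip_network("10.0.0.0/23")        # aggregate FEC (z)
fec_a, fec_b = fec_z.subnets(prefixlen_diff=1)     # FEC (a), FEC (b)

# Every address in z falls in exactly one of a, b: removing a from z leaves b.
assert list(fec_z.address_exclude(fec_a)) == [fec_b]

lsp_of_fec = {fec_a: "LSP-1", fec_b: "LSP-2"}      # the load balance mapping
print({str(fec): lsp for fec, lsp in lsp_of_fec.items()})
```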
- Load Balancing at Interior of MPLS Domain. Flows admitted to an LSP at the interior of an MPLS domain can be described by the set of labels that are mapped to the LSPs in the Incoming Label Map (“ILM”). A Point-to-Point LSP that simply transits an LSR at the interior of an MPLS domain does not have an LSP ingress at this transit LSR. Merge points of a Multipoint-to-Point LSP may be considered as ingress points for the next link of the LSP. A label stacking operation may be considered as an ingress point to a new LSP. The above conditions, which map multiple incoming LSPs onto different outgoing LSPs, may require balancing at the interior node. The FEC of an incoming flow may be inferred from its label. Hence load-balancing policies may operate based on incoming labels to segregate traffic, rather than requiring the ability to walk up the incoming label stack to the packet header in order to reclassify the packet. The result is a coarse load balancing of incoming LSPs onto one of a number of outgoing LSPs from the LSR to the egress LSR.
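- A short sketch of this label-keyed balancing follows; the label values and LSP names are hypothetical:

```python
# Illustrative interior load balancing keyed on incoming labels (ILM).
# The FEC is inferred from the label alone, so no reclassification of the
# underlying packet headers is needed.

ilm = {
    # incoming label -> outgoing LSP toward the egress LSR
    101: "LSP-X",
    102: "LSP-Y",
    103: "LSP-X",   # several incoming LSPs may be steered onto one outgoing LSP
}

def forward(incoming_label):
    """Coarse balancing: pick the outgoing LSP from the incoming label."""
    return ilm.get(incoming_label, "default-LSP")

print(forward(102))   # LSP-Y
```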
- Load Balancing with Multiple NHLFEs. The MPLS Architecture identifies that the NHLFE may have multiple entries for one FEC. Multiple NHLFEs may be present when: (a) the incoming FEC/label set is to be multicast; or (b) route selection based on the experimental (“EXP”) field in addition to the label is required. If both multicast and load balancing functions are required, it can be necessary to disambiguate the scope of the operations. The load balancing operation can partition a set of input traffic (e.g., defined as FECs or labels) across a set of output LSPs. One or more of the arriving FECs may be multicast to the set of load balanced LSPs as well as to other LSPs. This can imply that the packet replication (multicast) function occurs before the load balancing. When the route selection is based on the EXP field, it can be treated as a special case of the policy-based load-balancing approach. In an embodiment, replicating NHLFEs for this purpose can be deprecated, and the more generic policy-based approach can be used to specify an FEC/label space partition based on the EXP field.
- The load balancing function can be considered as part of the classification function, which allows preserving a mapping of an FEC into one NHLFE for unicast. While classification of incoming flows into FECs is often thought of as an operation on some tuple of packet headers, this is not the only basis for classification, because router state can also be used. An example of a tuple is a set of protocol header fields such as source address, destination address, and protocol ID. In an embodiment, the source port of a flow may be a useful basis on which to discriminate flows. As another example, a “random number” generated within the router may be attractive as the basis for allocating flows for a load balancing objective. An algorithm within the router, which may include some hash function on the packet headers, may generate the “random number.”
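- One plausible realization of such a router-generated “random number” is a hash over a header tuple, sketched below; the tuple choice, hash function, and two-way split are illustrative assumptions:

```python
import hashlib

# Illustrative tuple-hash classification for load balancing. The hash is
# stable per flow (helping sequence integrity within a flow) while spreading
# distinct flows across paths.

def flow_bucket(src, dst, proto, n_paths=2):
    """Map a (source, destination, protocol) tuple to one of n_paths."""
    key = f"{src}|{dst}|{proto}".encode()
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

print(flow_bucket("10.0.0.1", "10.1.0.9", 6))    # path index 0 or 1
print(flow_bucket("10.0.0.2", "10.1.0.9", 17))   # a different flow may differ
```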
- MPLS Policies for Load Balancing. MPLS load balancing partitions an incoming stream of traffic across multiple LSPs. The load balancing policy, as well as the ingress LSR where the policy is enforced, can be required to distinctly identify LSPs. In an embodiment, the PDP that installs the load balancing policy has knowledge of the existing LSPs and is able to identify them in policy rules. One way to achieve this is through the binding of a label to an LSP. An example of an MPLS load-balancing policy, for the simple case of balancing across two LSPs, may state: IF traffic matches classifier, THEN forward on LSP 1, ELSE forward on LSP 2. Classification can be done on a number of parameters, such as packet header fields, incoming labels, etc. The classification conditions of an MPLS load-balancing policy are thus effectively constrained to specifying the FEC in terms that can be resolved into MPLS packet classification MIB parameters.
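- The quoted two-LSP rule can be rendered as a minimal sketch; the classifier predicate and LSP identifiers are illustrative assumptions:

```python
# Illustrative evaluation of: IF traffic matches classifier,
# THEN forward on LSP 1, ELSE forward on LSP 2.

def classifier(packet):
    """A condition resolvable into packet-classification terms,
    here a hypothetical source-address prefix match."""
    return packet["src"].startswith("10.0.")

def load_balance_policy(packet):
    return "LSP-1" if classifier(packet) else "LSP-2"

print(load_balance_policy({"src": "10.0.3.7"}))    # LSP-1
print(load_balance_policy({"src": "192.0.2.5"}))   # LSP-2
```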
- Forwarding traffic on an LSP can be achieved by tagging the traffic with the appropriate label corresponding to the LSP. MPLS load-balancing policy actions typically result in the definition of a new aggregate FEC to be forwarded down a specific LSP. This would typically be achieved by appropriate provisioning of the FEC and routing tables (e.g., FTN and ILM), e.g., via the appropriate MIBs.
- The basis for partitioning the traffic can be static or dynamic. Dynamic load balancing can be based on a dynamic administrative control (e.g., time of day), or it can form a closed control loop with some measured network parameter. In an embodiment, “voice trunk” LSP bandwidths can be adjusted periodically based on expected service demand (e.g., voice call intensity, voice call patterns, and so on). Static partitioning of the load can be based on information carried within the packet header (e.g., source/destination addresses, source/destination port numbers, packet size, protocol ID, etc.). Static partitioning can also be based on other information available at the LSR (e.g., the arriving physical interface). However, if the load partition is truly static, or at least very slowly changing (e.g., less than one change per day), then the need for policy-based control of this provisioning information may be debatable, and direct manipulation of the LSR MIB may suffice.
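- A time-of-day control of the kind described can be sketched as follows; the business-hours window and the two split values are illustrative assumptions:

```python
from datetime import datetime, time

# Illustrative dynamic administrative control: the fraction of traffic
# allocated to a "voice trunk" LSP varies with the time of day.

BUSINESS_HOURS = (time(8, 0), time(18, 0))   # hypothetical peak window

def voice_trunk_fraction(now=None):
    """Return the traffic fraction for the voice trunk LSP at `now`."""
    current = (now or datetime.now()).time()
    start, end = BUSINESS_HOURS
    return 0.8 if start <= current < end else 0.3   # heavier during peak hours

print(voice_trunk_fraction(datetime(2001, 9, 20, 10, 0)))   # 0.8 (daytime)
print(voice_trunk_fraction(datetime(2001, 9, 20, 23, 0)))   # 0.3 (off-peak)
```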
- A control-loop based load-balancing scheme can seek to balance the load close to some objective, subject to error in the measurements and delays in the feedback loop. The objective may be based on a fraction of the input traffic to be sent down a link (e.g., 20% down a first LSP and 80% down a second LSP), in which case some measurement of the input traffic is required. The objective may also be based on avoiding congestive loss, in which case some loss metric is required.
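- The following Python sketch illustrates such a control loop for the 20%/80% example above; the gain constant, the measurement sequence, and the damping strategy are illustrative assumptions rather than part of any described embodiment:

```python
# Illustrative closed-loop adjustment toward a 20%/80% split objective.
# Real deployments must tolerate measurement error and feedback delay.

TARGET_FRACTION_LSP1 = 0.20   # objective: 20% of input traffic on the first LSP
GAIN = 0.5                    # damped correction, to avoid chasing noisy readings

split = 0.20  # fraction of new flows currently directed to the first LSP

def adjust(measured_fraction_lsp1):
    """Nudge the split toward the objective, based on measured input traffic."""
    global split
    error = TARGET_FRACTION_LSP1 - measured_fraction_lsp1
    split = min(1.0, max(0.0, split + GAIN * error))
    return split

# Simulated measurement feedback: the first LSP is observed to carry too much.
for measured in (0.30, 0.24, 0.21):
    print(round(adjust(measured), 3))   # split is reduced toward the objective
```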
- The metrics required for control loop load balancing may be derived from information available locally at the upstream LSR, or may be triggered by events distributed elsewhere in the network. In the latter case, the metrics can be delivered to the Policy Decision Point. Locally derived trigger conditions can be expected to avoid the propagation delays, etc., associated with the general distributed case. Frequent notification of the state of these metrics increases network traffic and can be undesirable.
- In an embodiment, a single large flow is load balanced across a set of links. In this case policies based solely on the packet headers may be inadequate and some other approach (e.g. based on a random number generated within the router) may be required. The sequence integrity of the aggregate FEC forwarded over a set of load balancing LSPs may not be preserved under such a regime.
- ECMP and OMP can embed the load balancing optimization problem in the IGP implementation. This may be appropriate in the context of a single service if the optimization objectives and constraints can be established. ECMP approaches apply equal cost routes, but do not provide guidance on allocating load between routes with different capacities. OMP attempts a network wide routing optimization (considering capacities) but assumes that all network services can be reduced to a single dimension of capacity. For networks requiring greater flexibility in the optimization objectives and constraints, policy-based approaches may be appropriate.
- Security Considerations. In an embodiment, the policy system provides a mechanism to configure the LSPs within LSRs. A system that can be configured can also be incorrectly configured with potentially disastrous results. The policy system can help to secure the MPLS system by providing appropriate controls on the LSP life cycle. Use of the COPS protocol within the policy system between the PEP/PDP allows the use of message level security for authentication, replay protection, and message integrity. Existing protocols such as IPSEC (e.g., a collection of IP security measures that comprise an optional tunneling protocol for IPv6) can also be used to authenticate and secure the channel. The COPS protocol also provides a reliable transport mechanism with a session keep-alive.
-
FIG. 8 shows an illustration of policy-based management with scaling by automation. Configuration management data 800 can include business and service level policies that are part of a PMC. The policies can be communicated to a configuration/data translation point 805, which is coupled to network devices such as device A 821 and device N 827. Device A 821 can communicate status information to network status point 810, and device N 827 can communicate state information to network topology point 815. Each of network status point 810 and network topology point 815 can communicate information to configuration/data translation point 805 so that closed loop policies triggered by network state can automate network response to failures, congestion, demand changes, and so on. Accordingly, traffic engineering functions can move online. -
FIG. 9 shows an illustration of policy-based management with scaling by roles without closed loop policies triggered by network state. Policy-based management, however, can automate the configuration translation functions to reduce errors and speed operations. Coherent policy can be applied across multiple device instances and device types using higher level abstractions such as roles. -
FIG. 10 shows an illustration of a large metropolitan scale voice service architecture based in part on an MPLS network. A central office 1010 includes class 5 central office equipment 1011 and trunk gateways 1012. In another embodiment, the central office can include line gateways, service gateways, and so on. The central office 1010 is coupled to an MPLS network 1020 providing logical metropolitan connectivity and corresponding to a physical metro topology 1025. In an embodiment, 1-5 million voice lines can be concentrated via 80-150 offices that attach via trunk gateways to the MPLS network 1020. Each LSP of the MPLS network 1020 for the voice lines can have a low megabyte/second average bandwidth. In an embodiment, a full mesh interconnect with bi-directional LSPs can require 10-20,000 LSPs per metro for voice service. - In an embodiment, MPLS networks can be scaled by service and across a multi-state region, e.g., a multi-state region of a regional Bell operating company (“RBOC”). For example, in a nine state region having 38 local access and transport areas (“LATAs”), aggregating the total number of LSPs implies greater than 100,000 LSPs across the region for voice service. More LSPs can be required for other services, and twice as many LSPs can be required for protected services. To provide metro/LATA interconnect (e.g., long distance voice service), a full mesh of LATAs would require 1-2000 LSPs for inter-LATA voice service interconnection.
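- The full-mesh figures above follow from simple combinatorics: n nodes form n(n-1)/2 node pairs. The short Python check below reproduces the cited orders of magnitude; the office and LATA counts are taken from the ranges in the text, and the choice between counting a bi-directional LSP once or as two one-way LSPs is an assumption:

```python
# Back-of-the-envelope check of the full-mesh LSP counts cited above.

def full_mesh_lsps(nodes, unidirectional=False):
    pairs = nodes * (nodes - 1) // 2          # node pairs in a full mesh
    return 2 * pairs if unidirectional else pairs

for offices in (80, 150):
    print(offices, "offices ->", full_mesh_lsps(offices), "bi-directional LSPs")
# 80 offices -> 3,160; 150 offices -> 11,175 (22,350 if counted one-way),
# bracketing the 10-20,000 LSPs per metro cited above.

print(38, "LATAs ->", full_mesh_lsps(38, unidirectional=True), "one-way LSPs")
# 38 LATAs -> 1,406 one-way LSPs, matching the 1-2000 inter-LATA figure.
```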
-
FIG. 11 shows an illustration of a policy-based management architecture for voice services over an MPLS network. A call control complex 1260 can be coupled to a core MPLS network 1250 and a SS7/AIN network 1270 that provides PSTN voice services. The call control complex 1260 can send voice service traffic data to network administration 1280. The network administration 1280 can include traffic management information 1281 (e.g., bandwidth broker policies, routing policies, etc.) and device provisioning changes 1282 (e.g., explicit routes, QoS parameters, etc.). Network administration 1280 can thereby provide voice service LSP provisioning information (e.g., policies) to the core MPLS network 1250. In an embodiment, network administration 1280 can receive an estimate of traffic demand (e.g., from call control complex 1260, from elements of the SS7/AIN network 1270, and so on) to dimension (e.g., dynamically) the capacity of voice trunks in the MPLS network 1250. - In an embodiment of the present invention, the
MPLS network 1250 includes one or more VPNs that are set up to handle particular types of traffic. For example, one or more VPNs can be provisioned to carry voice traffic across the MPLS network 1250. As another example, VPNs can be provisioned to carry traffic from particular classes of customers, e.g., business customer traffic can be carried by one or more VPNs to provide a better quality of service, consumer customer traffic can be carried by one or more other VPNs to provide a lower quality of service than business customer traffic receives, and so on. Policy-based control can configure the LSRs of the MPLS network 1250 so that, for example, voice traffic is set up with an appropriate quality of service level and data traffic is likewise set up with an appropriate quality of service level. - As used to describe embodiments of the present invention, the term “coupled” encompasses a direct connection, an indirect connection, or a combination thereof. Moreover, two devices that are coupled can engage in direct communications, in indirect communications, or a combination thereof.
- Embodiments of the present invention relate to data communications via one or more networks (e.g., MPLS networks). The data communications can be carried by one or more communications channels of the one or more networks. A network can include wired communication links (e.g., coaxial cable, copper wires, optical fibers, a combination thereof, and so on), wireless communication links (e.g., satellite communication links, terrestrial wireless communication links, satellite-to-terrestrial communication links, a combination thereof, and so on), or a combination thereof. A communications link can include one or more communications channels, where a communications channel carries communications. For example, a communications link can include multiplexed communications channels, such as time division multiplexing (“TDM”) channels, frequency division multiplexing (“FDM”) channels, code division multiplexing (“CDM”) channels, wavelength division multiplexing (“WDM”) channels, a combination thereof, and so on.
- In accordance with an embodiment of the present invention, instructions adapted to be executed by a processor to perform a method are stored on a computer-readable medium. The computer-readable medium can be a device that stores digital information. For example, a computer-readable medium includes a compact disc read-only memory (CD-ROM) as is known in the art for storing software. The computer-readable medium is accessed by a processor suitable for executing instructions adapted to be executed. The terms “instructions adapted to be executed” and “instructions to be executed” are meant to encompass any instructions that are ready to be executed in their present form (e.g., machine code) by a processor, or require further manipulation (e.g., compilation, decryption, or provided with an access code, etc.) to be ready to be executed by a processor.
- Embodiments of systems and methods for policy-enabled communications networks have been described. In the foregoing description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the present invention may be practiced without these specific details. In other instances, structures and devices are shown in block diagram form. Furthermore, one skilled in the art can readily appreciate that the specific sequences in which methods are presented and performed are illustrative and it is contemplated that the sequences can be varied and still remain within the spirit and scope of the present invention.
- In the foregoing detailed description, systems and methods in accordance with embodiments of the present invention have been described with reference to specific exemplary embodiments. Accordingly, the present specification and figures are to be regarded as illustrative rather than restrictive.
- In the Specification
- Please delete the paragraph [0001] and insert the following paragraph below the title:
- The present application is a continuation of U.S. application Ser. No. 09/956,002, entitled, SYSTEMS AND METHODS FOR POLICY-ENABLED COMMUNICATIONS NETWORKS and filed on Sep. 20, 2001.
Claims (42)
1. A system for policy-based management of a label switching network, the system comprising:
a policy-based network administration system, the policy-based network administration system including a plurality of policies; and
the label switching network, the label switching network being coupled to the policy-based network administration system.
2. The system of claim 1, wherein the plurality of policies include a plurality of network operation policies.
3. The system of claim 2, wherein the plurality of network operation policies include a virtual private network policy.
4. The system of claim 2, wherein the plurality of network operation policies include a voice traffic policy.
5. The system of claim 2, wherein the plurality of network operation policies include a first quality of service policy and a second quality of service policy, the first quality of service policy being different from the second quality of service policy.
6. The system of claim 2, wherein:
the policy-based management system includes a policy decision point; and
the label switching network includes a policy enforcement point, the policy enforcement point being coupled to the policy decision point.
7. The system of claim 6, wherein the policy-based management system includes a policy repository, the policy repository coupled to the policy decision point.
8. The system of claim 7, wherein the policy-based management system includes a policy management console, the policy management console coupled to one or more of the policy decision point and the policy repository.
9. The system of claim 6, wherein the policy enforcement point is a label switch router.
10. The system of claim 9, wherein the label switch router is an edge label switch router.
11. The system of claim 10, wherein the edge label switch router is part of a label switched path.
12. A system for policy-based control of a label switching network carrying one or more of voice and data traffic, the system comprising:
a central office, the central office including a trunk gateway;
the label switching network, being coupled to the trunk gateway of the central office; and
a network administration system, the network administration system including a plurality of policies, each policy of at least a subset of the plurality of policies to control at least in part operation of the label switching network.
13. The system of claim 12, wherein the gateway is one of a line gateway, a trunk gateway, and a service gateway.
14. The system of claim 12, wherein the central office includes class five central office equipment.
15. The system of claim 12, wherein the network administration system includes one or more of traffic management information and device provisioning information.
16. The system of claim 12, wherein the plurality of policies include a plurality of traffic management policies.
17. The system of claim 12, further comprising a call control complex, the call control complex coupled to the network administration system and the label switching network.
18. The system of claim 17, wherein the call control complex is to send voice service traffic data to the network administration system.
19. The system of claim 18, wherein the network administration system is to send voice service label switched path provisioning policies to the label switching network.
20. The system of claim 18, wherein the network administration system is to send virtual private network provisioning policies to the label switching network.
21. The system of claim 17, wherein the label switching network is a multiprotocol label switching (“MPLS”) network, the system further comprising an SS7/AIN network coupled to the call control complex.
22. The system of claim 21, wherein the SS7/AIN network and the MPLS network are part of a regional telecommunications company network.
23. A method for policy-based control of a label switching network, the method comprising:
storing a policy to control operation of at least a portion of the label switching network;
retrieving the policy in response to a control input;
sending the policy to the label switching network; and
operating the label switching network based at least in part on the policy.
24. The method of claim 23, wherein the policy is a network operation policy.
25. The method of claim 24, wherein the network operation policy is a voice traffic policy.
26. The method of claim 24, wherein the network operation policy is a virtual private network policy.
27. The method of claim 24, wherein the network operation policy is a quality of service policy.
28. The method of claim 23, wherein storing the policy to control operation of at least a portion of the label switching network includes storing the policy in a policy repository.
29. The method of claim 23, wherein the control input is received from a network device of the label switching network.
30. The method of claim 23, wherein the control input is received from a policy management console.
31-36. (canceled)
37. A system for policy-based management of a label switching network, the system comprising:
the label switching network; and
means for policy-based management of the label switching network, the means for policy-based management of the label switching network coupled to the label switching network.
38. The system of claim 37, wherein:
the means for policy-based management of the label switching network includes a policy decision point; and
the label switching network includes a policy enforcement point.
39. The system of claim 37, wherein the means for policy-based management of the label switching network includes a policy repository.
40. The system of claim 37, wherein the means for policy-based management of the label switching network includes one or more of a voice traffic policy, a virtual private network policy, and a quality of service policy.
41-42. (canceled)
43. A computer-readable medium storing a plurality of instructions to be executed by a processor for policy-based control of a label switching network, the plurality of instructions comprising instructions to:
store a policy to control operation of at least a portion of the label switching network;
retrieve the policy in response to a control input; and
send the policy to the label switching network.
44. The computer-readable medium of claim 43, wherein the policy is selected from the group consisting of a voice traffic management policy, a virtual private network management policy, and a quality of service management policy.
45. The computer-readable medium of claim 43, wherein the plurality of instructions further comprise instructions to monitor operations of the label switching network, the label switching network operating based at least in part on the policy.
46. The computer-readable medium of claim 43, wherein the label switching network is a multiprotocol label switching (“MPLS”) network.
47. The computer-readable medium of claim 43, wherein the plurality of instructions further comprise instructions for operating at least a portion of the label switching network based at least in part on a first policy by operating at least a portion of the label switching network as one or more voice trunks based at least in part on a first voice traffic policy.
48. The computer-readable medium of claim 43, wherein the plurality of instructions further comprise instructions for operating at least a portion of the label switching network based at least in part on a first policy by operating at least a portion of the label switching network as one or more virtual private networks based at least in part on a first virtual private network policy.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/250,076 US20060039364A1 (en) | 2000-10-19 | 2005-10-13 | Systems and methods for policy-enabled communications networks |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US24137400P | 2000-10-19 | 2000-10-19 | |
US09/956,002 US7082102B1 (en) | 2000-10-19 | 2001-09-20 | Systems and methods for policy-enabled communications networks |
US11/250,076 US20060039364A1 (en) | 2000-10-19 | 2005-10-13 | Systems and methods for policy-enabled communications networks |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/956,002 Continuation US7082102B1 (en) | 2000-10-19 | 2001-09-20 | Systems and methods for policy-enabled communications networks |
Publications (1)
Publication Number | Publication Date |
---|---|
US20060039364A1 true US20060039364A1 (en) | 2006-02-23 |
Family
ID=36687110
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/956,002 Expired - Lifetime US7082102B1 (en) | 2000-10-19 | 2001-09-20 | Systems and methods for policy-enabled communications networks |
US11/250,076 Abandoned US20060039364A1 (en) | 2000-10-19 | 2005-10-13 | Systems and methods for policy-enabled communications networks |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/956,002 Expired - Lifetime US7082102B1 (en) | 2000-10-19 | 2001-09-20 | Systems and methods for policy-enabled communications networks |
Country Status (1)
Country | Link |
---|---|
US (2) | US7082102B1 (en) |
Families Citing this family (102)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7693976B2 (en) * | 2000-07-11 | 2010-04-06 | Ciena Corporation | Granular management of network resources |
US7657629B1 (en) | 2000-09-26 | 2010-02-02 | Foundry Networks, Inc. | Global server load balancing |
US9130954B2 (en) | 2000-09-26 | 2015-09-08 | Brocade Communications Systems, Inc. | Distributed health check for global server load balancing |
US7454500B1 (en) | 2000-09-26 | 2008-11-18 | Foundry Networks, Inc. | Global server load balancing |
US7120156B2 (en) * | 2001-07-16 | 2006-10-10 | Telefonaktiebolaget Lm Ericsson (Publ) | Policy information transfer in 3GPP networks |
US8271672B1 (en) | 2001-08-31 | 2012-09-18 | Juniper Networks, Inc. | Guaranteed bandwidth memory apparatus and method |
US7698454B1 (en) | 2001-11-26 | 2010-04-13 | Juniper Networks, Inc. | Interfacing with streams of differing speeds |
FR2837337B1 (en) * | 2002-03-15 | 2004-06-18 | Cit Alcatel | NETWORK SERVICE MANAGEMENT DEVICE USING THE COPS PROTOCOL FOR CONFIGURING A VIRTUAL PRIVATE NETWORK |
WO2003079614A1 (en) * | 2002-03-18 | 2003-09-25 | Nortel Networks Limited | Resource allocation using an auto-discovery mechanism for provider-provisioned layer-2 and layer-3 virtual private networks |
US7512702B1 (en) * | 2002-03-19 | 2009-03-31 | Cisco Technology, Inc. | Method and apparatus providing highly scalable server load balancing |
US7480283B1 (en) | 2002-03-26 | 2009-01-20 | Nortel Networks Limited | Virtual trunking over packet networks |
US20050152270A1 (en) * | 2002-04-12 | 2005-07-14 | Gerardo Gomez Paredes | Policy-based qos management in multi-radio access networks |
US7352747B2 (en) * | 2002-07-31 | 2008-04-01 | Lucent Technologies Inc. | System and method of network traffic assignment on multiple parallel links between IP/MPLS routers |
US7086061B1 (en) | 2002-08-01 | 2006-08-01 | Foundry Networks, Inc. | Statistical tracking of global server load balancing for selecting the best network address from ordered list of network addresses based on a set of performance metrics |
US7676576B1 (en) | 2002-08-01 | 2010-03-09 | Foundry Networks, Inc. | Method and system to clear counters used for statistical tracking for global server load balancing |
US7574508B1 (en) | 2002-08-07 | 2009-08-11 | Foundry Networks, Inc. | Canonical name (CNAME) handling for global server load balancing |
EP1398907B1 (en) * | 2002-09-10 | 2010-12-08 | Siemens Aktiengesellschaft | Method of control of transmission resource in a packetized network when topology changes occur |
US7376086B1 (en) * | 2002-09-12 | 2008-05-20 | Nortel Networks Limited | Constraint based routing with non-transitive exceptions |
US20040095888A1 (en) * | 2002-11-15 | 2004-05-20 | International Business Machines Corporation | Apparatus and methods for network connected information handling systems devices |
US7669234B2 (en) * | 2002-12-31 | 2010-02-23 | Broadcom Corporation | Data processing hash algorithm and policy management |
US7872991B2 (en) * | 2003-02-04 | 2011-01-18 | Alcatel-Lucent Usa Inc. | Methods and systems for providing MPLS-based layer-2 virtual private network services |
CN1283079C (en) * | 2003-02-20 | 2006-11-01 | 华为技术有限公司 | IP network service quality assurance method and system |
US20040202197A1 (en) * | 2003-04-08 | 2004-10-14 | Docomo Communications Laboratories Usa, Inc. | Mobile terminal and method of providing cross layer interaction in a mobile terminal |
JP4222184B2 (en) * | 2003-04-24 | 2009-02-12 | 日本電気株式会社 | Security management support system, security management support method and program |
US7437458B1 (en) * | 2003-06-13 | 2008-10-14 | Juniper Networks, Inc. | Systems and methods for providing quality assurance |
US20050008014A1 (en) * | 2003-07-07 | 2005-01-13 | Debasis Mitra | Techniques for network traffic engineering |
US20050166260A1 (en) * | 2003-07-11 | 2005-07-28 | Christopher Betts | Distributed policy enforcement using a distributed directory |
US7616632B2 (en) * | 2003-07-25 | 2009-11-10 | Kanchei Loa | System and method of implementing contacts of small worlds in packet communication networks |
JP4587446B2 (en) * | 2003-08-07 | 2010-11-24 | キヤノン株式会社 | NETWORK SYSTEM, SWITCH DEVICE, ROUTE MANAGEMENT SERVER, ITS CONTROL METHOD, COMPUTER PROGRAM, AND COMPUTER-READABLE STORAGE MEDIUM |
US9584360B2 (en) | 2003-09-29 | 2017-02-28 | Foundry Networks, Llc | Global server load balancing support for private VIP addresses |
US20050083858A1 (en) * | 2003-10-09 | 2005-04-21 | Kanchei Loa | System and method of utilizing virtual ants in small world infrastructure communication networks |
US8024437B2 (en) * | 2003-10-30 | 2011-09-20 | Paul Unbehagen | Autodiscovery for virtual networks |
US8312145B2 (en) * | 2003-12-22 | 2012-11-13 | Rockstar Consortium US L.P. | Traffic engineering and bandwidth management of bundled links |
US20050141523A1 (en) * | 2003-12-29 | 2005-06-30 | Chiang Yeh | Traffic engineering scheme using distributed feedback |
US7577359B2 (en) * | 2004-05-03 | 2009-08-18 | At&T Intellectual Property I, L.P. | System and method for SONET transport optimization (S-TOP) |
US7496651B1 (en) | 2004-05-06 | 2009-02-24 | Foundry Networks, Inc. | Configurable geographic prefixes for global server load balancing |
US7584301B1 (en) * | 2004-05-06 | 2009-09-01 | Foundry Networks, Inc. | Host-level policies for global server load balancing |
JP2008502234A (en) * | 2004-06-07 | 2008-01-24 | ▲ホア▼▲ウェイ▼技術有限公司 | How to achieve route forwarding in a network |
US7463584B2 (en) * | 2004-08-03 | 2008-12-09 | Nortel Networks Limited | System and method for hub and spoke virtual private network |
US20060028981A1 (en) * | 2004-08-06 | 2006-02-09 | Wright Steven A | Methods, systems, and computer program products for managing admission control in a regional/access network |
US7423977B1 (en) | 2004-08-23 | 2008-09-09 | Foundry Networks Inc. | Smoothing algorithm for round trip time (RTT) measurements |
US20060047758A1 (en) * | 2004-08-26 | 2006-03-02 | Vivek Sharma | Extending and optimizing electronic messaging rules |
US7536448B2 (en) * | 2004-09-02 | 2009-05-19 | Cisco Technology, Inc. | Auto-generation of configuration and topology models |
US7646719B2 (en) * | 2004-12-02 | 2010-01-12 | Cisco Technology, Inc. | Inter-domain TE-LSP selection |
US7593405B2 (en) * | 2004-12-09 | 2009-09-22 | Telefonaktiebolaget Lm Ericsson (Publ) | Inter-domain traffic engineering |
US7535926B1 (en) * | 2005-01-07 | 2009-05-19 | Juniper Networks, Inc. | Dynamic interface configuration for supporting multiple versions of a communication protocol |
US7768910B2 (en) * | 2005-02-04 | 2010-08-03 | Neidhardt Arnold L | Calculations for admission control |
US20060206606A1 (en) * | 2005-03-08 | 2006-09-14 | At&T Corporation | Method and apparatus for providing dynamic traffic control within a communications network |
CN100428699C (en) * | 2005-03-30 | 2008-10-22 | Huawei Technologies Co., Ltd. | Method for notifying and configuring multi-protocol label switching performance monitoring capability |
US7356539B2 (en) * | 2005-04-04 | 2008-04-08 | Research In Motion Limited | Policy proxy |
US7724728B2 (en) * | 2005-04-19 | 2010-05-25 | Cisco Technology, Inc. | Policy-based processing of packets |
US7668969B1 (en) | 2005-04-27 | 2010-02-23 | Extreme Networks, Inc. | Rule structure for performing network switch functions |
US7860006B1 (en) * | 2005-04-27 | 2010-12-28 | Extreme Networks, Inc. | Integrated methods of performing network switch functions |
JP4606249B2 (en) * | 2005-05-18 | 2011-01-05 | Fujitsu Limited | Information processing method and router |
US7870265B2 (en) * | 2005-06-30 | 2011-01-11 | Oracle International Corporation | System and method for managing communications sessions in a network |
US7889711B1 (en) * | 2005-07-29 | 2011-02-15 | Juniper Networks, Inc. | Filtering traffic based on associated forwarding equivalence classes |
US8027684B2 (en) * | 2005-08-22 | 2011-09-27 | Infosys Technologies, Ltd. | System for performing a task in a communication network and methods thereof |
CN101346634B (en) * | 2005-11-04 | 2012-10-24 | Oracle International Corporation | System and method for a gatekeeper in a communications network |
US20070115916A1 (en) * | 2005-11-07 | 2007-05-24 | Samsung Electronics Co., Ltd. | Method and system for optimizing a network based on a performance knowledge base |
US7519624B2 (en) * | 2005-11-16 | 2009-04-14 | International Business Machines Corporation | Method for proactive impact analysis of policy-based storage systems |
US7580974B2 (en) | 2006-02-16 | 2009-08-25 | Fortinet, Inc. | Systems and methods for content type classification |
JP4682068B2 (en) * | 2006-03-17 | 2011-05-11 | Fujitsu Limited | Quality assurance service information notification method, communication apparatus, and interdomain information transmission apparatus |
US7983299B1 (en) * | 2006-05-15 | 2011-07-19 | Juniper Networks, Inc. | Weight-based bandwidth allocation for network traffic |
US8001250B2 (en) * | 2006-05-16 | 2011-08-16 | Oracle International Corporation | SIP and HTTP convergence in network computing environments |
US8112525B2 (en) | 2006-05-16 | 2012-02-07 | Oracle International Corporation | Engine near cache for reducing latency in a telecommunications environment |
US8171466B2 (en) | 2006-05-16 | 2012-05-01 | Oracle International Corporation | Hitless application upgrade for SIP server architecture |
US8219697B2 (en) | 2006-05-17 | 2012-07-10 | Oracle International Corporation | Diameter protocol and SH interface support for SIP server architecture |
US7599290B2 (en) * | 2006-08-11 | 2009-10-06 | Latitude Broadband, Inc. | Methods and systems for providing quality of service in packet-based core transport networks |
EP1921806A1 (en) * | 2006-09-26 | 2008-05-14 | Nokia Siemens Networks Gmbh & Co. Kg | Method for managing network resource usage |
US7661027B2 (en) * | 2006-10-10 | 2010-02-09 | Bea Systems, Inc. | SIP server architecture fault tolerance and failover |
US8705374B1 (en) * | 2006-10-31 | 2014-04-22 | At&T Intellectual Property Ii, L.P. | Method and apparatus for isolating label-switched path impairments |
US7831108B2 (en) * | 2006-12-13 | 2010-11-09 | Adobe Systems Incorporated | Universal front end for masks, selections, and paths |
US20080147551A1 (en) * | 2006-12-13 | 2008-06-19 | Bea Systems, Inc. | System and Method for a SIP Server with Online Charging |
US9667430B2 (en) * | 2006-12-13 | 2017-05-30 | Oracle International Corporation | System and method for a SIP server with offline charging |
US7787381B2 (en) * | 2006-12-13 | 2010-08-31 | At&T Intellectual Property I, L.P. | Methods and apparatus to manage network transport paths in accordance with network policies |
US8127133B2 (en) * | 2007-01-25 | 2012-02-28 | Microsoft Corporation | Labeling of data objects to apply and enforce policies |
US7843856B2 (en) * | 2007-01-31 | 2010-11-30 | Cisco Technology, Inc. | Determination of available service capacity in dynamic network access domains |
US7765312B2 (en) * | 2007-03-12 | 2010-07-27 | Telefonaktiebolaget L M Ericsson (Publ) | Applying policies for managing a service flow |
US8472325B2 (en) * | 2007-05-10 | 2013-06-25 | Futurewei Technologies, Inc. | Network availability enhancement technique for packet transport networks |
US8442384B2 (en) * | 2007-07-16 | 2013-05-14 | Michael Bronstein | Method and apparatus for video digest generation |
EP2020781A1 (en) * | 2007-07-31 | 2009-02-04 | Nokia Siemens Networks Oy | Method and device for processing an MPLS network by a policy decision function and communication system comprising such device |
US20090086754A1 (en) * | 2007-10-02 | 2009-04-02 | Futurewei Technologies, Inc. | Content Aware Connection Transport |
US7831701B2 (en) * | 2007-10-27 | 2010-11-09 | At&T Mobility Ii Llc | Cascading policy management deployment architecture |
US8631470B2 (en) * | 2008-02-20 | 2014-01-14 | Bruce R. Backa | System and method for policy based control of NAS storage devices |
US8549654B2 (en) * | 2008-02-20 | 2013-10-01 | Bruce Backa | System and method for policy based control of NAS storage devices |
US8155028B2 (en) * | 2008-03-17 | 2012-04-10 | Alcatel Lucent | Method and apparatus for providing full logical connectivity in MPLS networks |
US20110208779A1 (en) * | 2008-12-23 | 2011-08-25 | Backa Bruce R | System and Method for Policy Based Control of NAS Storage Devices |
US7944857B2 (en) * | 2009-01-12 | 2011-05-17 | Hewlett-Packard Development Company, L.P. | Method and system for deriving tunnel path information in MPLS networks |
US7948986B1 (en) | 2009-02-02 | 2011-05-24 | Juniper Networks, Inc. | Applying services within MPLS networks |
US8681654B2 (en) * | 2009-10-14 | 2014-03-25 | At&T Intellectual Property I, L.P. | Methods and apparatus to design a survivable internet protocol link topology |
EP2383955B1 (en) | 2010-04-29 | 2019-10-30 | BlackBerry Limited | Assignment and distribution of access credentials to mobile communication devices |
CN102263688B (en) * | 2010-05-26 | 2015-11-25 | Huawei Technologies Co., Ltd. | Method and network device for selecting a label forwarding path |
US8549148B2 (en) | 2010-10-15 | 2013-10-01 | Brocade Communications Systems, Inc. | Domain name system security extensions (DNSSEC) for global server load balancing |
TWI459768B (en) | 2011-12-30 | 2014-11-01 | Ind Tech Res Inst | Communication system and method for assisting transmission of TCP packets |
US8769633B1 (en) | 2012-12-12 | 2014-07-01 | Bruce R. Backa | System and method for policy based control of NAS storage devices |
US10142172B2 (en) * | 2015-07-22 | 2018-11-27 | Facebook, Inc. | Internet service provider management platform |
US9800433B2 (en) | 2015-12-16 | 2017-10-24 | At&T Intellectual Property I, L.P. | Method and apparatus for providing a point-to-point connection over a network |
US10033709B1 (en) | 2017-11-20 | 2018-07-24 | Microsoft Technology Licensing, Llc | Method and apparatus for improving privacy of communications through channels having excess capacity |
JP6927135B2 (en) * | 2018-04-24 | 2021-08-25 | Nippon Telegraph and Telephone Corporation | Traffic estimation device, traffic estimation method and program |
US11863445B1 (en) * | 2019-09-25 | 2024-01-02 | Juniper Networks, Inc. | Prefix range to identifier range mapping |
US11245611B2 (en) * | 2020-05-12 | 2022-02-08 | Arista Networks, Inc. | Analysis of routing policy application to routes |
US20220086190A1 (en) * | 2020-09-16 | 2022-03-17 | Salesforce.Com, Inc. | Correlation of security policy input and output changes |
- 2001
  - 2001-09-20 US US09/956,002 patent/US7082102B1/en not_active Expired - Lifetime
- 2005
  - 2005-10-13 US US11/250,076 patent/US20060039364A1/en not_active Abandoned
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6341127B1 (en) * | 1997-07-11 | 2002-01-22 | Kabushiki Kaisha Toshiba | Node device and method for controlling label switching path set up in inter-connected networks |
US6856676B1 (en) * | 1998-10-15 | 2005-02-15 | Alcatel | System and method of controlling and managing voice and data services in a telecommunications network |
US6466548B1 (en) * | 1998-10-28 | 2002-10-15 | Cisco Technology, Inc. | Hop by hop quality of service measurement system |
US6614781B1 (en) * | 1998-11-20 | 2003-09-02 | Level 3 Communications, Inc. | Voice over data telecommunications network architecture |
US6778494B1 (en) * | 1999-03-10 | 2004-08-17 | Nortel Networks Limited | Label switched media gateway and network |
US6466984B1 (en) * | 1999-07-02 | 2002-10-15 | Cisco Technology, Inc. | Method and apparatus for policy-based management of quality of service treatments of network data traffic flows by integrating policies with application programs |
US6882643B1 (en) * | 1999-07-16 | 2005-04-19 | Nortel Networks Limited | Supporting multiple services in label switched networks |
US6633635B2 (en) * | 1999-12-30 | 2003-10-14 | At&T Corp. | Multiple call waiting in a packetized communication system |
US6665273B1 (en) * | 2000-01-11 | 2003-12-16 | Cisco Technology, Inc. | Dynamically adjusting multiprotocol label switching (MPLS) traffic engineering tunnel bandwidth |
US20020019879A1 (en) * | 2000-05-15 | 2002-02-14 | Mark Jasen | Method and system for prioritizing network services |
US20020156914A1 (en) * | 2000-05-31 | 2002-10-24 | Lo Waichi C. | Controller for managing bandwidth in a communications network |
US6611863B1 (en) * | 2000-06-05 | 2003-08-26 | Intel Corporation | Automatic device assignment through programmable device discovery for policy based network management |
Cited By (235)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100098000A1 (en) * | 1997-12-31 | 2010-04-22 | Irwin Gerszberg | Hybrid fiber twisted pair local loop network service architecture |
US8355365B2 (en) * | 1997-12-31 | 2013-01-15 | At&T Intellectual Property Ii, L.P. | Hybrid fiber twisted pair local loop network service architecture |
US20040243670A1 (en) * | 2001-07-10 | 2004-12-02 | Jochen Grimminger | Method for the optimized use of SCTP (Stream Control Transmission Protocol) in MPLS (Multi-Protocol Label Switching) networks |
US7882226B2 (en) * | 2001-12-31 | 2011-02-01 | Samsung Electronics Co., Ltd. | System and method for scalable and redundant COPS message routing in an IP multimedia subsystem |
US20050071455A1 (en) * | 2001-12-31 | 2005-03-31 | Samsung Electronics Co., Ltd. | System and method for scalable and redundant COPS message routing in an IP multimedia subsystem |
US7477657B1 (en) * | 2002-05-08 | 2009-01-13 | Juniper Networks, Inc. | Aggregating end-to-end QoS signaled packet flows through label switched paths |
US20040177087A1 (en) * | 2002-06-27 | 2004-09-09 | Haitao Wu | Self-adaptive scheduling method and network element |
US7917648B2 (en) * | 2002-06-27 | 2011-03-29 | Nokia Corporation | Self-adaptive scheduling method and network element |
US20040076127A1 (en) * | 2002-10-18 | 2004-04-22 | Porte David John | Handling of wireless communications |
US7769873B1 (en) | 2002-10-25 | 2010-08-03 | Juniper Networks, Inc. | Dynamically inserting filters into forwarding paths of a network device |
US8078758B1 (en) | 2003-06-05 | 2011-12-13 | Juniper Networks, Inc. | Automatic configuration of source address filters within a network device |
US8989181B2 (en) * | 2003-06-25 | 2015-03-24 | Fujitsu Limited | Method and system for multicasting data packets in an MPLS network |
US20080056258A1 (en) * | 2003-06-25 | 2008-03-06 | Fujitsu Limited | Method and System for Multicasting Data Packets in an MPLS Network |
US20050044268A1 (en) * | 2003-07-31 | 2005-02-24 | Enigmatec Corporation | Self-managed mediated information flow |
US9525566B2 (en) * | 2003-07-31 | 2016-12-20 | Cloudsoft Corporation Limited | Self-managed mediated information flow |
US7313095B1 (en) * | 2003-11-06 | 2007-12-25 | Sprint Communications Company L.P. | Method for estimating telecommunication network traffic using link weight changes |
US7873724B2 (en) * | 2003-12-05 | 2011-01-18 | Microsoft Corporation | Systems and methods for guiding allocation of computational resources in automated perceptual systems |
US20050132378A1 (en) * | 2003-12-05 | 2005-06-16 | Horvitz Eric J. | Systems and methods for guiding allocation of computational resources in automated perceptual systems |
US8194546B2 (en) * | 2004-02-12 | 2012-06-05 | Cisco Technology, Inc. | Traffic flow determination in communications networks |
US20060250964A1 (en) * | 2004-02-12 | 2006-11-09 | Cisco Technology, Inc. | Traffic flow determination in communications networks |
US8151000B1 (en) | 2004-04-09 | 2012-04-03 | Juniper Networks, Inc. | Transparently providing layer two (L2) services across intermediate computer networks |
US8880727B1 (en) | 2004-04-09 | 2014-11-04 | Juniper Networks, Inc. | Transparently providing layer two (L2) services across intermediate computer networks |
US7856509B1 (en) | 2004-04-09 | 2010-12-21 | Juniper Networks, Inc. | Transparently providing layer two (L2) services across intermediate computer networks |
US7606235B1 (en) * | 2004-06-03 | 2009-10-20 | Juniper Networks, Inc. | Constraint-based label switched path selection within a computer network |
US8630295B1 (en) * | 2004-06-03 | 2014-01-14 | Juniper Networks, Inc. | Constraint-based label switched path selection within a computer network |
US7643425B2 (en) * | 2004-07-23 | 2010-01-05 | Ericsson Ab | LSP path selection |
US20060018326A1 (en) * | 2004-07-23 | 2006-01-26 | Marconi Communications, Inc. | LSP path selection |
US7567512B1 (en) | 2004-08-27 | 2009-07-28 | Juniper Networks, Inc. | Traffic engineering using extended bandwidth accounting information |
US7889652B1 (en) | 2004-08-27 | 2011-02-15 | Juniper Networks, Inc. | Traffic engineering using extended bandwidth accounting information |
US7564806B1 (en) | 2004-08-30 | 2009-07-21 | Juniper Networks, Inc. | Aggregate multicast trees for multicast virtual private networks |
US7990963B1 (en) | 2004-08-30 | 2011-08-02 | Juniper Networks, Inc. | Exchange of control information for virtual private local area network (LAN) service multicast |
US8160076B1 (en) | 2004-08-30 | 2012-04-17 | Juniper Networks, Inc. | Auto-discovery of multicast virtual private networks |
US7570605B1 (en) | 2004-08-30 | 2009-08-04 | Juniper Networks, Inc. | Multicast data trees for multicast virtual private networks |
US8625465B1 (en) | 2004-08-30 | 2014-01-07 | Juniper Networks, Inc. | Auto-discovery of virtual private networks |
US7558219B1 (en) | 2004-08-30 | 2009-07-07 | Juniper Networks, Inc. | Multicast trees for virtual private local area network (LAN) service multicast |
US8121056B1 (en) | 2004-08-30 | 2012-02-21 | Juniper Networks, Inc. | Aggregate multicast trees for multicast virtual private networks |
US8111633B1 (en) | 2004-08-30 | 2012-02-07 | Juniper Networks, Inc. | Multicast trees for virtual private local area network (LAN) service multicast |
US7590115B1 (en) | 2004-08-30 | 2009-09-15 | Juniper Networks, Inc. | Exchange of control information for virtual private local area network (LAN) service multicast |
US8068492B1 (en) | 2004-08-30 | 2011-11-29 | Juniper Networks, Inc. | Transport of control and data traffic for multicast virtual private networks |
US7558263B1 (en) | 2004-08-30 | 2009-07-07 | Juniper Networks, Inc. | Reliable exchange of control information for multicast virtual private networks |
US7804790B1 (en) | 2004-08-30 | 2010-09-28 | Juniper Networks, Inc. | Aggregate multicast trees for virtual private local area network (LAN) service multicast |
US7570604B1 (en) | 2004-08-30 | 2009-08-04 | Juniper Networks, Inc. | Multicast data trees for virtual private local area network (LAN) service multicast |
US7983261B1 (en) | 2004-08-30 | 2011-07-19 | Juniper Networks, Inc. | Reliable exchange of control information for multicast virtual private networks |
US7519010B1 (en) | 2004-08-30 | 2009-04-14 | Juniper Networks, Inc. | Inter-autonomous system (AS) multicast virtual private networks |
US7957386B1 (en) | 2004-08-30 | 2011-06-07 | Juniper Networks, Inc. | Inter-autonomous system (AS) multicast virtual private networks |
US7933267B1 (en) | 2004-08-30 | 2011-04-26 | Juniper Networks, Inc. | Shared multicast trees for multicast virtual private networks |
US7522599B1 (en) | 2004-08-30 | 2009-04-21 | Juniper Networks, Inc. | Label switching multicast trees for multicast virtual private networks |
US7522600B1 (en) | 2004-08-30 | 2009-04-21 | Juniper Networks, Inc. | Transport of control and data traffic for multicast virtual private networks |
US8155125B1 (en) * | 2004-09-17 | 2012-04-10 | Cisco Technology, Inc. | Apparatus and method for utilizing aggregate network links for multicast switching |
US8279754B1 (en) | 2004-10-26 | 2012-10-02 | Juniper Networks, Inc. | RSVP-passive interfaces for traffic engineering peering links in MPLS networks |
US7558199B1 (en) | 2004-10-26 | 2009-07-07 | Juniper Networks, Inc. | RSVP-passive interfaces for traffic engineering peering links in MPLS networks |
US20060114838A1 (en) * | 2004-11-30 | 2006-06-01 | Mandavilli Swamy J | MPLS VPN fault management using IGP monitoring system |
US8572234B2 (en) * | 2004-11-30 | 2013-10-29 | Hewlett-Packard Development, L.P. | MPLS VPN fault management using IGP monitoring system |
US20060155532A1 (en) * | 2004-12-14 | 2006-07-13 | Nam Hyun S | Apparatus and method for managing quality of a label switched path in a convergence network |
US7599310B2 (en) * | 2004-12-14 | 2009-10-06 | Electronics And Telecommunications Research Institute | Apparatus and method for managing quality of a label switched path in a convergence network |
US7602702B1 (en) | 2005-02-10 | 2009-10-13 | Juniper Networks, Inc | Fast reroute of traffic associated with a point to multi-point network tunnel |
US20090175274A1 (en) * | 2005-07-28 | 2009-07-09 | Juniper Networks, Inc. | Transmission of layer two (l2) multicast traffic over multi-protocol label switching networks |
US7990965B1 (en) | 2005-07-28 | 2011-08-02 | Juniper Networks, Inc. | Transmission of layer two (L2) multicast traffic over multi-protocol label switching networks |
US9166807B2 (en) | 2005-07-28 | 2015-10-20 | Juniper Networks, Inc. | Transmission of layer two (L2) multicast traffic over multi-protocol label switching networks |
US7940698B1 (en) * | 2005-08-29 | 2011-05-10 | Juniper Networks, Inc. | Point to multi-point label switched paths with label distribution protocol |
US7564803B1 (en) * | 2005-08-29 | 2009-07-21 | Juniper Networks, Inc. | Point to multi-point label switched paths with label distribution protocol |
US20070104194A1 (en) * | 2005-11-04 | 2007-05-10 | Ijsbrand Wijnands | In-band multicast signaling using LDP |
US20120195312A1 (en) * | 2005-11-04 | 2012-08-02 | Ijsbrand Wijnands | Automation fallback to P2P LSPs for mLDP built multipoint-trees |
US7852841B2 (en) | 2005-11-04 | 2010-12-14 | Cisco Technology, Inc. | In-band multicast signaling using LDP |
US8948170B2 (en) * | 2005-11-04 | 2015-02-03 | Cisco Technology, Inc. | Automation fallback to P2P LSPs for MLDP built multipoint-trees |
US20070124433A1 (en) * | 2005-11-30 | 2007-05-31 | Microsoft Corporation | Network supporting centralized management of QoS policies |
US7979549B2 (en) | 2005-11-30 | 2011-07-12 | Microsoft Corporation | Network supporting centralized management of QoS policies |
US20070124485A1 (en) * | 2005-11-30 | 2007-05-31 | Microsoft Corporation | Computer system implementing quality of service policy |
US20070133540A1 (en) * | 2005-12-08 | 2007-06-14 | Kyung Gyu Chun | Method for measuring performance of MPLS LSP |
US7561524B2 (en) * | 2005-12-08 | 2009-07-14 | Electronics And Telecommunications Research Institute | Method for measuring performance of MPLS LSP |
US20070136397A1 (en) * | 2005-12-09 | 2007-06-14 | Interdigital Technology Corporation | Information life-cycle management architecture for a device with infinite storage capacity |
US8170021B2 (en) | 2006-01-06 | 2012-05-01 | Microsoft Corporation | Selectively enabled quality of service policy |
US9112765B2 (en) | 2006-01-06 | 2015-08-18 | Microsoft Technology Licensing, Llc | Selectively enabled quality of service policy |
US20070160079A1 (en) * | 2006-01-06 | 2007-07-12 | Microsoft Corporation | Selectively enabled quality of service policy |
US7903584B2 (en) * | 2006-01-06 | 2011-03-08 | Cisco Technology, Inc. | Technique for dynamically splitting MPLS TE-LSPs |
US20070160061A1 (en) * | 2006-01-06 | 2007-07-12 | Jean-Philippe Vasseur | Technique for dynamically splitting MPLS TE-LSPs |
US20070177594A1 (en) * | 2006-01-30 | 2007-08-02 | Juniper Networks, Inc. | Forming equal cost multipath multicast distribution structures |
US20070177593A1 (en) * | 2006-01-30 | 2007-08-02 | Juniper Networks, Inc. | Forming multicast distribution structures using exchanged multicast optimization data |
US7839850B2 (en) | 2006-01-30 | 2010-11-23 | Juniper Networks, Inc. | Forming equal cost multipath multicast distribution structures |
US8270395B2 (en) | 2006-01-30 | 2012-09-18 | Juniper Networks, Inc. | Forming multicast distribution structures using exchanged multicast optimization data |
US20070204018A1 (en) * | 2006-02-24 | 2007-08-30 | Cisco Technology, Inc. | Method and system for obviating redundant actions in a network |
US8065393B2 (en) * | 2006-02-24 | 2011-11-22 | Cisco Technology, Inc. | Method and system for obviating redundant actions in a network |
WO2007106639A3 (en) * | 2006-02-24 | 2008-10-09 | Cisco Tech Inc | Method and system for obviating redundant actions in a network |
US8107473B2 (en) * | 2006-03-16 | 2012-01-31 | Cisco Technology, Inc. | Automation fallback to P2P LSPs for mLDP built multipoint-trees |
US20070217428A1 (en) * | 2006-03-16 | 2007-09-20 | Ijsbrand Wijnands | Automation fallback to P2P LSPs for mLDP built multipoint-trees |
US7957375B2 (en) * | 2006-04-26 | 2011-06-07 | Huawei Technologies Co., Ltd. | Apparatus and method for policy routing |
EP2003821A2 (en) * | 2006-04-26 | 2008-12-17 | Huawei Technologies Co., Ltd. | A strategic routing device and method |
US20090046718A1 (en) * | 2006-04-26 | 2009-02-19 | Huawei Technologies Co., Ltd. | Apparatus and method for policy routing |
EP2003821B2 (en) † | 2006-04-26 | 2020-04-15 | Huawei Technologies Co., Ltd. | A strategic provider edge router |
EP2003821A4 (en) * | 2006-04-26 | 2009-08-26 | Huawei Tech Co Ltd | A strategic routing device and method |
US7787380B1 (en) | 2006-06-30 | 2010-08-31 | Juniper Networks, Inc. | Resource reservation protocol with traffic engineering point to multi-point label switched path hierarchy |
US8488614B1 (en) | 2006-06-30 | 2013-07-16 | Juniper Networks, Inc. | Upstream label assignment for the label distribution protocol |
US8767741B1 (en) * | 2006-06-30 | 2014-07-01 | Juniper Networks, Inc. | Upstream label assignment for the resource reservation protocol with traffic engineering |
US8462635B1 (en) | 2006-06-30 | 2013-06-11 | Juniper Networks, Inc. | Resource reservation protocol with traffic engineering point to multi-point label switched path hierarchy |
US7742482B1 (en) * | 2006-06-30 | 2010-06-22 | Juniper Networks, Inc. | Upstream label assignment for the resource reservation protocol with traffic engineering |
US7839862B1 (en) | 2006-06-30 | 2010-11-23 | Juniper Networks, Inc. | Upstream label assignment for the label distribution protocol |
US7860104B1 (en) * | 2006-06-30 | 2010-12-28 | Juniper Networks, Inc. | Upstream label assignment for the resource reservation protocol with traffic engineering |
US20080025309A1 (en) * | 2006-07-31 | 2008-01-31 | Cisco Technology, Inc. | Technique for multiple path forwarding of label-switched data traffic |
US8718060B2 (en) * | 2006-07-31 | 2014-05-06 | Cisco Technology, Inc. | Technique for multiple path forwarding of label-switched data traffic |
WO2008016558A3 (en) * | 2006-07-31 | 2009-04-16 | Cisco Tech Inc | Technique for multiple path forwarding of label-switched data traffic |
US20100043079A1 (en) * | 2006-09-07 | 2010-02-18 | France Telecom | Code securing for a personal entity |
US9135322B2 (en) | 2006-09-18 | 2015-09-15 | Emc Corporation | Environment classification |
US20080071726A1 (en) * | 2006-09-18 | 2008-03-20 | Emc Corporation | Cascaded discovery of information environment |
US8832246B2 (en) | 2006-09-18 | 2014-09-09 | Emc Corporation | Service level mapping method |
US11846978B2 (en) | 2006-09-18 | 2023-12-19 | EMC IP Holding Company LLC | Cascaded discovery of information environment |
US8938457B2 (en) | 2006-09-18 | 2015-01-20 | Emc Corporation | Information classification |
US20080071727A1 (en) * | 2006-09-18 | 2008-03-20 | Emc Corporation | Environment classification |
US9361354B1 (en) | 2006-09-18 | 2016-06-07 | Emc Corporation | Hierarchy of service areas |
US10394849B2 (en) | 2006-09-18 | 2019-08-27 | EMC IP Holding Company LLC | Cascaded discovery of information environment |
US20080104693A1 (en) * | 2006-09-29 | 2008-05-01 | Mcalister Donald | Transporting keys between security protocols |
US8046820B2 (en) * | 2006-09-29 | 2011-10-25 | Certes Networks, Inc. | Transporting keys between security protocols |
US8305996B2 (en) * | 2006-11-14 | 2012-11-06 | Cisco Technology, Inc. | Access point profile for a mesh access point in a wireless mesh network |
US20120087281A1 (en) * | 2006-11-14 | 2012-04-12 | Rahman Shahriar I | Access point profile for a mesh access point in a wireless mesh network |
US8559610B2 (en) * | 2006-12-13 | 2013-10-15 | Qualcomm Incorporated | Method and apparatus for allocating network resources in a group communication system |
US20080144525A1 (en) * | 2006-12-13 | 2008-06-19 | Crockett Douglas M | Method and apparatus for allocating network resources in a group communication system |
US20080225716A1 (en) * | 2007-03-13 | 2008-09-18 | Lange Andrew S | Quality of service admission control network |
US20080225712A1 (en) * | 2007-03-13 | 2008-09-18 | Lange Andrew S | Policy enforcement points |
US8446845B2 (en) * | 2007-03-13 | 2013-05-21 | Alcatel Lucent | Advanced bandwidth management audit functions |
US8274983B2 (en) * | 2007-03-13 | 2012-09-25 | Alcatel Lucent | Low-impact call connection request denial |
US8320381B2 (en) * | 2007-03-13 | 2012-11-27 | Alcatel Lucent | Application-aware policy enforcement |
US8320245B2 (en) * | 2007-03-13 | 2012-11-27 | Alcatel Lucent | Policy enforcement points |
US8320380B2 (en) * | 2007-03-13 | 2012-11-27 | Alcatel Lucent | Under-assigning resources to video in triple-play virtual topologies to protect data-class traffic |
US20080225707A1 (en) * | 2007-03-13 | 2008-09-18 | Lange Andrew S | Advanced bandwidth management audit functions |
US20080225709A1 (en) * | 2007-03-13 | 2008-09-18 | Lange Andrew S | Advanced bandwidth management |
US20080225857A1 (en) * | 2007-03-13 | 2008-09-18 | Lange Andrew S | Low-impact call connection request denial |
US8374082B2 (en) * | 2007-03-13 | 2013-02-12 | Alcatel Lucent | Advanced bandwidth management |
US8385194B2 (en) * | 2007-03-13 | 2013-02-26 | Alcatel Lucent | Quality of service admission control network |
US20080225708A1 (en) * | 2007-03-13 | 2008-09-18 | Lange Andrew S | Application-aware policy enforcement |
US20080225706A1 (en) * | 2007-03-13 | 2008-09-18 | Lange Andrew S | Under-assigning resources to video in triple-play virtual topologies to protect data-class traffic |
WO2008111028A2 (en) * | 2007-03-13 | 2008-09-18 | Alcatel Lucent | Application-aware policy enforcement |
WO2008111028A3 (en) * | 2007-03-13 | 2009-01-15 | Alcatel Lucent | Application-aware policy enforcement |
US20080259797A1 (en) * | 2007-04-18 | 2008-10-23 | Aladdin Knowledge Systems Ltd. | Load-Balancing Bridge Cluster For Network Nodes |
US20090010647A1 (en) * | 2007-07-06 | 2009-01-08 | Jenkins David W | Method and apparatus for routing communications in a mesh network |
US8819212B1 (en) * | 2007-09-28 | 2014-08-26 | Emc Corporation | Delegation of data classification using common language |
US9461890B1 (en) | 2007-09-28 | 2016-10-04 | Emc Corporation | Delegation of data management policy in an information management system |
US8868720B1 (en) | 2007-09-28 | 2014-10-21 | Emc Corporation | Delegation of discovery functions in information management system |
US9323901B1 (en) | 2007-09-28 | 2016-04-26 | Emc Corporation | Data classification for digital rights management |
US9141658B1 (en) | 2007-09-28 | 2015-09-22 | Emc Corporation | Data classification and management for risk mitigation |
US20090100162A1 (en) * | 2007-10-15 | 2009-04-16 | Microsoft Corporation | Sharing Policy and Workload among Network Access Devices |
US8125926B1 (en) | 2007-10-16 | 2012-02-28 | Juniper Networks, Inc. | Inter-autonomous system (AS) virtual private local area network service (VPLS) |
US8566453B1 (en) * | 2007-11-19 | 2013-10-22 | Juniper Networks, Inc. | COPS-PR enhancements to support fast state synchronization |
US20090190467A1 (en) * | 2008-01-25 | 2009-07-30 | At&T Labs, Inc. | System and method for managing fault in a multi protocol label switching system |
US8607304B2 (en) * | 2008-03-07 | 2013-12-10 | At&T Mobility Ii Llc | System and method for policy-enabled mobile service gateway |
US20090228954A1 (en) * | 2008-03-07 | 2009-09-10 | At&T Mobility Ii Llc | System and method for policy-enabled mobile service gateway |
US7936780B1 (en) | 2008-03-12 | 2011-05-03 | Juniper Networks, Inc. | Hierarchical label distribution protocol for computer networks |
US9996572B2 (en) | 2008-10-24 | 2018-06-12 | Microsoft Technology Licensing, Llc | Partition management in a partitioned, scalable, and available structured storage |
US20100106934A1 (en) * | 2008-10-24 | 2010-04-29 | Microsoft Corporation | Partition management in a partitioned, scalable, and available structured storage |
US8886796B2 (en) * | 2008-10-24 | 2014-11-11 | Microsoft Corporation | Load balancing when replicating account data |
US7940784B2 (en) | 2008-11-03 | 2011-05-10 | At&T Intellectual Property I, L.P. | Methods and apparatus to advertise network routes to implement a hybrid network topology |
US20100124231A1 (en) * | 2008-11-14 | 2010-05-20 | Juniper Networks, Inc. | Summarization and longest-prefix match within mpls networks |
US20110194561A1 (en) * | 2008-11-14 | 2011-08-11 | Juniper Networks, Inc. | Summarization and longest-prefix match within mpls networks |
US8363667B2 (en) | 2008-11-14 | 2013-01-29 | Juniper Networks, Inc. | Summarization and longest-prefix match within MPLS networks |
US7929557B2 (en) | 2008-11-14 | 2011-04-19 | Juniper Networks, Inc. | Summarization and longest-prefix match within MPLS networks |
US8917729B1 (en) | 2008-12-10 | 2014-12-23 | Juniper Networks, Inc. | Fast reroute for multiple label switched paths sharing a single interface |
US11968234B2 (en) | 2009-01-28 | 2024-04-23 | Headwater Research Llc | Wireless network service interfaces |
US11985155B2 (en) | 2009-01-28 | 2024-05-14 | Headwater Research Llc | Communications device with secure data path processing agents |
US11923995B2 (en) | 2009-01-28 | 2024-03-05 | Headwater Research Llc | Device-assisted services for protecting network capacity |
US20110058558A1 (en) * | 2009-09-08 | 2011-03-10 | Electronics And Telecommunications Research Institute | Network control device and network control method |
US20120263072A1 (en) * | 2009-12-29 | 2012-10-18 | Zte Corporation | Ethernet traffic statistics and analysis method and system |
US20120300783A1 (en) * | 2009-12-30 | 2012-11-29 | Zte Corporation | Method and system for updating network topology in multi-protocol label switching system |
US8619587B2 (en) * | 2010-01-05 | 2013-12-31 | Futurewei Technologies, Inc. | System and method to support enhanced equal cost multi-path and link aggregation group |
US20110164503A1 (en) * | 2010-01-05 | 2011-07-07 | Futurewei Technologies, Inc. | System and Method to Support Enhanced Equal Cost Multi-Path and Link Aggregation Group |
US8422514B1 (en) | 2010-02-09 | 2013-04-16 | Juniper Networks, Inc. | Dynamic configuration of cross-domain pseudowires |
US8310957B1 (en) | 2010-03-09 | 2012-11-13 | Juniper Networks, Inc. | Minimum-cost spanning trees of unicast tunnels for multicast distribution |
EP2587741A4 (en) * | 2010-06-23 | 2014-01-15 | Nec Corp | Communication system, control apparatus, node control method and program |
US9049150B2 (en) * | 2010-06-23 | 2015-06-02 | Nec Corporation | Communication system, control apparatus, node controlling method and node controlling program |
EP2587741A1 (en) * | 2010-06-23 | 2013-05-01 | Nec Corporation | Communication system, control apparatus, node control method and program |
US20130100951A1 (en) * | 2010-06-23 | 2013-04-25 | Nec Corporation | Communication system, control apparatus, node controlling method and node controlling program |
US9246838B1 (en) | 2011-05-27 | 2016-01-26 | Juniper Networks, Inc. | Label switched path setup using fast reroute bypass tunnel |
US9100213B1 (en) | 2011-06-08 | 2015-08-04 | Juniper Networks, Inc. | Synchronizing VPLS gateway MAC addresses |
US10833989B2 (en) | 2011-10-31 | 2020-11-10 | At&T Intellectual Property I, L.P. | Methods, apparatus, and articles of manufacture to provide a multicast virtual private network (MVPN) |
US10313239B2 (en) * | 2011-10-31 | 2019-06-04 | At&T Intellectual Property I, L.P. | Methods, apparatus, and articles of manufacture to provide a multicast virtual private network (MVPN) |
US9071541B2 (en) | 2012-04-25 | 2015-06-30 | Juniper Networks, Inc. | Path weighted equal-cost multipath |
US8787400B1 (en) | 2012-04-25 | 2014-07-22 | Juniper Networks, Inc. | Weighted equal-cost multipath |
US8837479B1 (en) | 2012-06-27 | 2014-09-16 | Juniper Networks, Inc. | Fast reroute between redundant multicast streams |
US10097481B2 (en) * | 2012-06-29 | 2018-10-09 | Juniper Networks, Inc. | Methods and apparatus for providing services in distributed switch |
US20140003433A1 (en) * | 2012-06-29 | 2014-01-02 | Juniper Networks, Inc. | Methods and apparatus for providing services in distributed switch |
US10129182B2 (en) | 2012-06-29 | 2018-11-13 | Juniper Networks, Inc. | Methods and apparatus for providing services in distributed switch |
US9049148B1 (en) | 2012-09-28 | 2015-06-02 | Juniper Networks, Inc. | Dynamic forwarding plane reconfiguration in a network device |
US9444712B2 (en) * | 2012-11-21 | 2016-09-13 | Cisco Technology, Inc. | Bandwidth on-demand services in multiple layer networks |
US20140143409A1 (en) * | 2012-11-21 | 2014-05-22 | Cisco Technology, Inc. | Bandwidth On-Demand Services in Multiple Layer Networks |
US10250459B2 (en) | 2012-11-21 | 2019-04-02 | Cisco Technology, Inc. | Bandwidth on-demand services in multiple layer networks |
US10411998B1 (en) * | 2012-12-27 | 2019-09-10 | Sitting Man, Llc | Node scope-specific outside-scope identifier-equipped routing methods, systems, and computer program products |
US10498642B1 (en) * | 2012-12-27 | 2019-12-03 | Sitting Man, Llc | Routing methods, systems, and computer program products |
US10411997B1 (en) * | 2012-12-27 | 2019-09-10 | Sitting Man, Llc | Routing methods, systems, and computer program products for using a region scoped node identifier |
US10404583B1 (en) * | 2012-12-27 | 2019-09-03 | Sitting Man, Llc | Routing methods, systems, and computer program products using multiple outside-scope identifiers |
US10404582B1 (en) * | 2012-12-27 | 2019-09-03 | Sitting Man, Llc | Routing methods, systems, and computer program products using an outside-scope identifier |
US10397101B1 (en) * | 2012-12-27 | 2019-08-27 | Sitting Man, Llc | Routing methods, systems, and computer program products for mapping identifiers |
US10721164B1 (en) * | 2012-12-27 | 2020-07-21 | Sitting Man, Llc | Routing methods, systems, and computer program products with multiple sequences of identifiers |
US12058042B1 (en) * | 2012-12-27 | 2024-08-06 | Morris Routing Technologies, Llc | Routing methods, systems, and computer program products |
US9124652B1 (en) * | 2013-03-15 | 2015-09-01 | Google Inc. | Per service egress link selection |
US9537770B1 (en) | 2013-03-15 | 2017-01-03 | Google Inc. | Per service egress link selection |
US10841159B2 (en) | 2013-03-18 | 2020-11-17 | International Business Machines Corporation | Robust service deployment |
US20140325042A1 (en) * | 2013-03-18 | 2014-10-30 | International Business Machines Corporation | Robust Service Deployment |
US10091060B2 (en) * | 2013-03-18 | 2018-10-02 | International Business Machines Corporation | Robust service deployment |
US8953500B1 (en) | 2013-03-29 | 2015-02-10 | Juniper Networks, Inc. | Branch node-initiated point to multi-point label switched path signaling with centralized path computation |
US9577925B1 (en) | 2013-07-11 | 2017-02-21 | Juniper Networks, Inc. | Automated path re-optimization |
US9967191B2 (en) * | 2013-07-25 | 2018-05-08 | Cisco Technology, Inc. | Receiver-signaled entropy labels for traffic forwarding in a computer network |
US20150029849A1 (en) * | 2013-07-25 | 2015-01-29 | Cisco Technology, Inc. | Receiver-signaled entropy labels for traffic forwarding in a computer network |
US10686752B2 (en) * | 2013-09-30 | 2020-06-16 | Orange | Methods for configuring and managing an IP network, corresponding devices and computer programs |
US20160219016A1 (en) * | 2013-09-30 | 2016-07-28 | Orange | Methods for configuring and managing an ip network, corresponding devices and computer programs |
US9698994B2 (en) | 2013-11-05 | 2017-07-04 | Cisco Technology, Inc. | Loop detection and repair in a multicast tree |
US10516612B2 (en) | 2013-11-05 | 2019-12-24 | Cisco Technology, Inc. | System and method for identification of large-data flows |
US10382345B2 (en) | 2013-11-05 | 2019-08-13 | Cisco Technology, Inc. | Dynamic flowlet prioritization |
US9985794B2 (en) | 2013-11-05 | 2018-05-29 | Cisco Technology, Inc. | Traceroute in a dense VXLAN network |
US12120037B2 (en) | 2013-11-05 | 2024-10-15 | Cisco Technology, Inc. | Boosting linked list throughput |
US11888746B2 (en) | 2013-11-05 | 2024-01-30 | Cisco Technology, Inc. | System and method for multi-path load balancing in network fabrics |
US10182496B2 (en) | 2013-11-05 | 2019-01-15 | Cisco Technology, Inc. | Spanning tree protocol optimization |
US20150124642A1 (en) * | 2013-11-05 | 2015-05-07 | Cisco Technology, Inc. | Running link state routing protocol in clos networks |
US10606454B2 (en) | 2013-11-05 | 2020-03-31 | Cisco Technology, Inc. | Stage upgrade of image versions on devices in a cluster |
US9667431B2 (en) | 2013-11-05 | 2017-05-30 | Cisco Technology, Inc. | Method and system for constructing a loop free multicast tree in a data-center fabric |
US11625154B2 (en) | 2013-11-05 | 2023-04-11 | Cisco Technology, Inc. | Stage upgrade of image versions on devices in a cluster |
US10164782B2 (en) | 2013-11-05 | 2018-12-25 | Cisco Technology, Inc. | Method and system for constructing a loop free multicast tree in a data-center fabric |
US11528228B2 (en) | 2013-11-05 | 2022-12-13 | Cisco Technology, Inc. | System and method for multi-path load balancing in network fabrics |
US9654300B2 (en) | 2013-11-05 | 2017-05-16 | Cisco Technology, Inc. | N-way virtual port channels using dynamic addressing and modified routing |
US10778584B2 (en) | 2013-11-05 | 2020-09-15 | Cisco Technology, Inc. | System and method for multi-path load balancing in network fabrics |
US9634846B2 (en) * | 2013-11-05 | 2017-04-25 | Cisco Technology, Inc. | Running link state routing protocol in CLOS networks |
US9854001B1 (en) * | 2014-03-25 | 2017-12-26 | Amazon Technologies, Inc. | Transparent policies |
US9680872B1 (en) | 2014-03-25 | 2017-06-13 | Amazon Technologies, Inc. | Trusted-code generated requests |
US11489874B2 (en) | 2014-03-25 | 2022-11-01 | Amazon Technologies, Inc. | Trusted-code generated requests |
US10666684B2 (en) | 2014-03-25 | 2020-05-26 | Amazon Technologies, Inc. | Security policies with probabilistic actions |
US10511633B2 (en) | 2014-03-25 | 2019-12-17 | Amazon Technologies, Inc. | Trusted-code generated requests |
US11870816B1 (en) | 2014-03-25 | 2024-01-09 | Amazon Technologies, Inc. | Trusted-code generated requests |
US9983911B2 (en) * | 2014-08-19 | 2018-05-29 | Nec Corporation | Analysis controller, analysis control method and computer-readable medium |
US20160055037A1 (en) * | 2014-08-19 | 2016-02-25 | Nec Corporation | Analysis controller, analysis control method and computer-readable medium |
US9935854B2 (en) * | 2014-09-23 | 2018-04-03 | Uila Networks, Inc. | Infrastructure performance monitoring |
US9806895B1 (en) | 2015-02-27 | 2017-10-31 | Juniper Networks, Inc. | Fast reroute of redundant multicast streams |
US20180146031A1 (en) * | 2015-07-20 | 2018-05-24 | Huawei Technologies Co., Ltd. | Life Cycle Management Method and Apparatus |
US10701139B2 (en) * | 2015-07-20 | 2020-06-30 | Huawei Technologies Co., Ltd. | Life cycle management method and apparatus |
US20220368625A1 (en) * | 2019-10-09 | 2022-11-17 | Curated Networks | Multipath routing in communication networks |
US11700205B2 (en) | 2020-08-04 | 2023-07-11 | Gigamon Inc. | Optimal control of network traffic visibility resources and distributed traffic processing resource control system |
WO2022031757A1 (en) * | 2020-08-04 | 2022-02-10 | Gigamon Inc. | Optimal control of network traffic visibility resources and distributed traffic processing resource control system |
US11743774B2 (en) | 2021-01-08 | 2023-08-29 | Cisco Technology, Inc. | Reliable and available wireless forwarding information base (FIB) optimization |
US11463916B2 (en) * | 2021-01-08 | 2022-10-04 | Cisco Technology, Inc. | Reliable and available wireless forwarding information base (FIB) optimization |
US20220225171A1 (en) * | 2021-01-08 | 2022-07-14 | Cisco Technology, Inc. | Reliable and available wireless forwarding information base (fib) optimization |
Also Published As
Publication number | Publication date |
---|---|
US7082102B1 (en) | 2006-07-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7082102B1 (en) | Systems and methods for policy-enabled communications networks | |
US8463916B2 (en) | Traffic engineering and bandwidth management of bundled links | |
Awduche et al. | Overview and principles of Internet traffic engineering | |
Wang et al. | An overview of routing optimization for internet traffic engineering | |
US6976087B1 (en) | Service provisioning methods and apparatus | |
JP4828865B2 (en) | Efficient and robust routing independent of traffic pattern variability | |
EP1776813A2 (en) | Method for forwarding traffic having a predetermined category of transmission service in a connectionless communications network | |
Awduche et al. | RFC3272: Overview and principles of Internet traffic engineering | |
Trimintzios et al. | An architectural framework for providing QoS in IP differentiated services networks | |
Bryskin et al. | Policy-enabled path computation framework | |
Filsfils et al. | Engineering a multiservice IP backbone to support tight SLAs | |
Rabbat et al. | Traffic engineering algorithms using MPLS for service differentiation | |
Farrel | Overview and principles of Internet traffic engineering | |
Farrel | RFC 9522: Overview and Principles of Internet Traffic Engineering | |
Chaieb et al. | Generic architecture for MPLS-TE routing | |
Lai | Capacity Engineering of IP-based Networks with MPLS | |
Chatzaki et al. | Resource allocation in multiservice MPLS | |
Asrat | Improving Quality of Service of Border Gateway Protocol Multi protocol Label Switching Virtual Private Network of EthioTelecom Service Level Agreements | |
Hodzic et al. | Online constraint-based routing as support for MPLS Traffic Engineering | |
Elwalid et al. | Overview and Principles of Internet Traffic Engineering | |
Zubairi et al. | MPLS: Managing the New Internet | |
Varga et al. | DISCMAN–Differentiated Services–Network Configuration and Management | |
Khan et al. | MPLS VPNs with DiffServ: A QoS Performance Study | |
Şenol | Design and Analysis of Multi-Protocol Label Switching Networks | |
Fan | Providing differentiated services using MPLS and traffic engineering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: AT&T INTELLECTUAL PROPERTY I, L.P., NEVADA Free format text: CHANGE OF NAME;ASSIGNOR:AT&T DELAWARE INTELLECTUAL PROPERTY, INC.;REEL/FRAME:023448/0441 Effective date: 20081024 |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |