WO2006102398A2 - System and methods for identifying network path performance - Google Patents

System and methods for identifying network path performance

Info

Publication number
WO2006102398A2
WO2006102398A2 (PCT/US2006/010379; US2006010379W)
Authority
WO
WIPO (PCT)
Prior art keywords
network
paths
path
performance
messages
Application number
PCT/US2006/010379
Other languages
French (fr)
Other versions
WO2006102398A3 (en)
Inventor
James N. Guichard
Jean-Philippe Vasseur
Thomas D. Nadeau
David D. Ward
Original Assignee
Cisco Technology, Inc.
Application filed by Cisco Technology, Inc.
Priority to EP06739254A (EP1861963B1)
Priority to AT06739254T (ATE511726T1)
Priority to CN200680004006XA (CN101151847B)
Publication of WO2006102398A2
Publication of WO2006102398A3

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/12 Discovery or management of network topologies
    • H04L 41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L 41/5003 Managing SLA; Interaction between SLA and QoS
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/50 Testing arrangements
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/12 Shortest path evaluation
    • H04L 45/121 Shortest path evaluation by minimising delays
    • H04L 45/123 Evaluation of link metrics
    • H04L 45/124 Shortest path evaluation using a combination of metrics
    • H04L 45/22 Alternate routing
    • H04L 45/26 Route discovery packet
    • H04L 45/302 Route determination based on requested QoS
    • H04L 45/42 Centralised routing
    • H04L 45/50 Routing or path finding of packets in data switching networks using label swapping, e.g. multi-protocol label switch [MPLS]

Definitions

  • a business or enterprise connects multiple remote sites, such as Local Area Networks (LANs) or other subnetworks, as an integrated virtual entity which provides seamless security and transport such that each user appears local to each other user.
  • the set of subnetworks interconnect via one or more common public access networks operated by a service provider.
  • Such a subnetwork interconnection is typically known as a core network, and includes service providers having a high speed backbone of routers and trunk lines.
  • Each of the subnetworks and the core network have entry points known as edge routers, through which traffic ingressing and egressing from the network travels.
  • the core network has ingress/egress points handled by nodes known as provider edge (PE) routers, while the subnetworks have ingress/egress points known as customer edge (CE) routers, discussed further in Internet Engineering Task Force (IETF) RFC 2547bis, concerning Virtual Private Networks (VPNs).
  • An interconnection between the subnetworks of a VPN therefore, typically includes one or more core networks.
  • Each of the core networks is usually one or many autonomous systems (AS), meaning that it employs and enforces a common routing policy among the nodes (routers) included therein.
  • the nodes of the core networks often employ a protocol operable to provide high-volume transport with path-based routing, meaning that the protocol does not merely specify a destination (as in TCP/IP), but rather implements an addressing strategy that allows for unique identification of end points, and also allows specification of a particular routing path through the core network.
  • One such protocol is the Multiprotocol Label Switching (MPLS) protocol, defined in Internet Engineering Task Force (IETF) RFC 3031.
  • MPLS is a protocol that combines the label-based forwarding of ATM networks, with the packet-based forwarding of IP networks and then builds applications upon this infrastructure.
  • MPLS has greatly simplified this operation by basing the forwarding decision on a simple label, via a so-called Label Switch Router (LSR) mechanism. Therefore, another major feature of MPLS is its ability to place IP traffic on a particular defined path through the network as specified by the label. Such path specification capability is generally not available with conventional IP traffic.
  • MPLS provides bandwidth guarantees and other differentiated service features for a specific user application (or flow).
  • Current IP-based MPLS networks are emerging for providing advanced services such as bandwidth-based guaranteed service (i.e. Quality of Service, or QOS), priority-based bandwidth allocation, and preemption services.
  • MPLS networks are particularly suited to VPNs because of their amenability to high speed routing and security over service provider networks, or so called Carrier's Carrier interconnections.
  • Such MPLS networks, therefore, perform routing decisions based on path-specific criteria, designating not only a destination but also the intermediate routers (hops), rather than the source/destination specification in IP, which leaves routing decisions to the various nodes and routing logic at each "hop" through the network.
  • an interconnection of routers defines a path through the core network from edge routers denoting the ingress and egress points.
  • Provider edge (PE) routers at the edge of the core network connect to customer edge (CE) routers at the ingress/egress to a customer network, such as a LAN subnetwork.
  • the path through the core network may include many "hops" through provider (P) routers in the core from an ingress PE router to the egress PE router.
  • the first is path verification in terms of basic connectivity that is detailed in copending U.S. Patent Application No. 11/048,077, filed on February 1, 2005, entitled "SYSTEM AND METHODS FOR NETWORK PATH DETECTION" (Atty. Docket No. CIS04-52 (10418)), incorporated herein by reference.
  • the second group of characteristics of interest to a customer of a network-based VPN fall under the umbrella of "real-time” statistics.
  • This can be loosely defined as the ability for a customer edge router (CE) to obtain real-time statistics related to a particular path used by that CE to carry its traffic across the core of the network-based VPN provider.
  • attribute properties include (but are not limited to) delay (one way and round trip), jitter, and error rate (i.e.: packet loss/error).
  • Conventional approaches may be able to provide information to the client of a network-based VPN service on an end-to-end basis, e.g. from customer site to customer site.
  • Constantly updating (up-to-the-minute) values for various path characteristics such as delay and jitter may be required in order to qualify a particular path on a real-time basis so as to ease troubleshooting should some path characteristics such as the delay be detected as abnormally high, make instantaneous repairs to broken paths, or in order to choose alternate ones (i.e.: change routing behavior so as to obscure the network defect from the customer), or simply to obtain information as to whether the requested path attributes are being delivered by the core network at any given point in time.
  • Conventional network path verification by the customer between their customer edge routers typically can only verify the end-to-end path using IP protocol packets.
  • configurations discussed herein substantially overcome such aspects of conventional path analysis by providing a system and method for aggregating performance characteristics for core network paths to allow computation of message traffic performance over each of the available candidate paths through the core for identifying an optimal core network path.
  • Particular network traffic, or messages include attributes indicative of performance, such as transport time, delay, jitter, and drop percentage over individual hops along the candidate path.
  • the diagnostic processor parses these messages to identify the attributes corresponding to performance, and analyzes the resulting parsed routing information to compute an expected performance, such as available bandwidth (e.g. transport rate) over the path.
  • Messages including such attributes may include link state attribute (LSA) messages, diagnostic probe messages specifically targeted to enumerate such attributes, or other suitable network traffic.
  • the messages may be Path Verification Protocol (PVP) messages, discussed further in copending U.S. patent application No. 11/001,149, filed December 1, 2004, entitled “SYSTEM AND METHODS FOR DETECTING NETWORK FAILURE” (Atty. Docket No. CIS04-40(10083)), incorporated herein by reference.
  • Each of the attributes is typically indicative of performance characteristics between one or more hops through the core network. Accordingly, routing information gathered from the attributes is stored according to the particular hop to which it corresponds. Multiple instances of attributes across a particular hop (i.e. between two routers) are employed to compute performance characteristics for that hop (e.g. averaging the transport time of several messages between the nodes). Computation of performance of a particular path is achieved by aggregating, or summing, the performance characteristics for each hop along the path. For example, a timestamp attribute gathered from three successive messages transported between particular nodes may be averaged to provide an indication of typical or expected transport time between the nodes. Other attributes may be aggregated by averaging or otherwise computing deterministic performance characteristics from routing information representing a series of transmissions across a particular hop.
  • the gathered routing information may be obtained from traffic packets, or from administrative messages such as Link State Attribute/Label Switched Path (LSA/LSP) messages employed by the path verification protocol, discussed above (CIS04-40).
  • Such a series of hops defines a path through the network, and identifies favorable performance characteristics to enable routers to perform routing decisions to select an optimal path, or route, across which to send a particular packet or set of packets (messages).
  • the routing information is gathered from messages or packets having attributes indicative of the performance characteristics, including but not limited to transport time, delay, packet loss and jitter, to name several exemplary performance characteristics.
  • the method of identifying network routing paths disclosed in exemplary configurations below includes gathering network routing information indicative of performance characteristics between network nodes, and aggregating the identified routing information according to at least one performance characteristic.
  • a diagnostic processor applies the aggregated routing information to routing decisions for network paths between the network nodes by identifying the network paths corresponding to favorable performance characteristics, in which the network paths are defined by a plurality of the network nodes.
  • Aggregating the routing information includes identifying messages having attributes indicative of performance characteristics, and parsing the attributes to extract the routing information corresponding to the performance characteristics.
  • Such routing information typically corresponds to characteristics between a particular node and at least one other node, i.e. a network hop.
  • the diagnostic processor stores or otherwise makes available the extracted routing information according to the performance characteristics between the respective nodes for use in subsequent routing decisions made by the router.
  • the routing information is applied to routing operations by first identifying a plurality of "important" network paths as candidate paths between a source and a destination, such as bottlenecks and ingress/egress points subject to heavy demand.
  • the diagnostic processor computes, from the extracted routing information, and for each of the candidate paths, an aggregate performance indicative of message traffic performance between the source and destination. Typically an average or mean expectation based on several samplings of a performance characteristic provides an expectation of future performance.
  • the diagnostic processor then denotes a particular candidate path as an optimal path based on the computed aggregate performance. Having identified particular paths operable to transport significant message traffic, the diagnostic processor examines, on the identified particular paths, messages having the attributes indicative of performance characteristics, and scans (parses) the examined messages to retrieve the attributes.
  • Configurations discussed herein may employ Quality of Service (QOS) criteria in routing decisions, in which applying the performance characteristics further includes specifying the attributes to be measured according to predetermined QOS criteria.
  • the router then routes network traffic on paths having performance characteristics consistent with particular QOS criteria, in which the performance characteristics typically include at least one of transport time, packet loss, packet delay and jitter.
  • Configurations concerned with QOS or other guaranteed delivery obligations may enumerate a set of quality of service (QOS) tier levels, the QOS levels being indicative of an expected throughput performance, and associate each of the paths through the core network with a QOS level.
  • the paths are then benchmarked or designated to compare the computed performance attributes to the associated QOS level to selectively route message traffic over a particular path.
  • the messages employ the path verification protocol referenced above, in which the messages are diagnostic probe messages adapted to gather and report routing information.
  • the diagnostic processor sends a set of diagnostic probe messages to the identified particular paths, in which the diagnostic probe messages are operable to trigger sending of a probe reply, and analyzes the probe reply to determine performance attributes of the particular path. Further, such probes allow the diagnostic processor to conclude, if the probe reply is not received, that there is a connectivity issue along the identified path. Otherwise the diagnostic processor organizes the received probe replies according to the node from which each was received, in which each of the nodes defines a hop along the path, and analyzes the organized probe replies corresponding to the sent diagnostic probe messages to compute routing characteristics of the hops along the path. In this manner, the diagnostic processor computes, based on a set of successively identified messages, expected performance between the respective nodes.
  • gathering network routing information includes receiving Link State Advertisement (LSA) messages, in which the LSA messages have attributes indicative of routing information, accumulating the gathered network routing information in a repository, and analyzing the network routing information to identify path characteristics.
  • LSA Link State Advertisement
  • Particular configurations tend to focus on transport time, or speed, as a performance characteristic.
  • Such configurations identify a plurality of network paths as candidate paths between a source and a destination, and apply the network routing information to the plurality of paths between nodes to compute a propagation time between the selected nodes.
  • the diagnostic processor computes, for each of the candidate paths, an aggregate transport time indicative of message traffic performance between the source and destination, and denotes a particular candidate path as an optimal path based on the aggregate transport time.
  • Alternate configurations of the invention include a multiprogramming or multiprocessing computerized device such as a workstation, handheld or laptop computer or dedicated computing device or the like configured with software and/or circuitry (e.g., a processor as summarized above) to process any or all of the method operations disclosed herein as embodiments of the invention.
  • Still other embodiments of the invention include software programs such as a Java Virtual Machine and/or an operating system that can operate alone or in conjunction with each other with a multiprocessing computerized device to perform the method embodiment steps and operations summarized above and disclosed in detail below.
  • One such embodiment comprises a computer program product that has a computer-readable medium including computer program logic encoded thereon that, when performed in a multiprocessing computerized device having a coupling of a memory and a processor, programs the processor to perform the operations disclosed herein as embodiments of the invention to carry out data access requests.
  • Such arrangements of the invention are typically provided as software, code and/or other data (e.g., data structures) arranged or encoded on a computer readable medium such as an optical medium (e.g., CD-ROM), floppy or hard disk or other medium such as firmware or microcode in one or more ROM or RAM or PROM chips, field programmable gate arrays (FPGAs) or as an Application Specific Integrated Circuit (ASIC).
  • the software or firmware or other such configurations can be installed onto the computerized device (e.g., during operating system or execution environment installation) to cause the computerized device to perform the techniques explained herein as embodiments of the invention.
  • Fig. 1 is a context diagram of a network communications environment depicting a Virtual Private Network (VPN) over an MPLS core network;
  • Fig. 2 is a flowchart of applying performance characteristics to compute an optimal path;
  • Fig. 3 is an example of applying the performance characteristics to compute an optimal path in the network of Fig. 1;
  • Figs. 4-8 are a flowchart in further detail of applying performance characteristics.
  • Network routing diagnostics in conventional IP networks are typically based on endpoint connectivity. Accordingly, conventional IP routing mechanisms are unable to take advantage of the label switch routing allowing specification of a particular path. Further, determination of an optimal path from among available paths may be unavailable in conventional label switch path (LSP) routing. Accordingly, configurations of the invention are based, in part, on the observation that conventional routers do not identify an optimal path through the core network from the ingress PE router to the egress PE router. Determination of paths that satisfy a QOS or other delivery speed/bandwidth guarantee may be difficult or unavailable. Therefore, it can be problematic to perform routing decisions for guaranteed delivery thresholds such as QOS based traffic.
  • a router or other connectivity device employing a diagnostic processor as defined herein employs a set of mechanisms allowing control over which customer subnetworks have rights to request such information, as well as over the polling rate, so as to protect the PE from unreasonable overhead.
  • Particular network traffic, or messages include attributes indicative of performance, such as transport time, delay, jitter, and drop percentage.
  • the diagnostic processor parses these messages to identify the attributes corresponding to performance, and analyzes the resulting parsed routing information to compute an expected performance, such as available bandwidth (e.g. transport rate) over the path.
  • Messages including such attributes may include link state attribute (LSA) messages, diagnostic probe messages specifically targeted to enumerate such attributes, or other suitable network traffic.
  • Configurations discussed further below are directed to techniques for gathering the significant path characteristics for a network-based IP VPN. In particular, methods herein disclose how path jitter, packet loss and packet delay can be gathered by a customer of that service.
  • Fig. 1 is a context diagram of a network communications environment 100 depicting a Virtual Private Network (VPN) between subnetworks over a MPLS core network 140.
  • the environment 100 includes a local VPN subnetwork 110 (i.e. LAN) and a remote VPN subnetwork 120 interconnected by a core network 140.
  • Each of the subnetworks 110, 120 serves a plurality of users 114-1..114-6 coupled to one or more prefixes 112, 122 within the subnetworks 110, 120, respectively.
  • the subnetworks 110, 120 include customer edge routers CE1..CE4 (CEn, generally) connected to provider edge routers PE1..PE3 (PEn, generally) denoting ingress and egress points to the core network 140.
  • the core network 140 includes provider routers P1..P3 (Pn, generally) defining one or more paths 160-1..160-2 (160, generally) through the core network 140. Note that while the exemplary paths 160 identify a PE to PE route, methods disclosed herein are also applicable to PE-CE and CE-CE paths in alternate configurations.
  • CE refers to a customer edge (i.e.: customer premises-based) router.
  • PE denotes a provider edge router, which demarks the edge of the provider network from that of the customer's network.
  • Typically, many CEs are attached to a single PE router, which takes on an aggregation function for many CEs.
  • Each CE is attached to the provider network by at least one network link, and is often attached in multiple places forming a redundant or "multi-homed" configuration, although sometimes simply two network links provided by different "last mile” carriers may be used to attach the CE to the same PE.
  • the "P” routers are the provider network's core routers. These routers comprise the provider's core network infrastructure. Collectively the various routers 130-1..130- 10 define a plurality of nodes in the context of the MPLS environment. Such an MPLS network typically begins and ends at the PE routers. Typically a dynamic routing protocol such as Border Gateway Protocol 4 (BGP-4), or static routing is used to route packets between the CE and PE. However, it is possible to run MPLS between the CE and PE devices. A simple MPLS topology illustrating this terminology is as follows:
  • the CE-PE links are running some non-MPLS protocol.
  • the CE-PE links are running the MPLS, such as either the Label Distribution Protocol (LDP) or BGP with label distribution, to distribute labels between each other.
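For illustration, a minimal topology consistent with this terminology might be drawn as follows; the specific wiring of CE, PE and P nodes is hypothetical and simply reuses node names in the style of Fig. 1:

```
CE1 --- PE1 --- P1 --- P2 --- PE3 --- CE3
(CE)    (PE)    (P)    (P)    (PE)    (CE)
        |<----- MPLS core ----->|
```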
  • Fig. 2 is a flowchart for applying performance characteristics to compute an optimal path 160 in the network of Fig. 1.
  • the method of identifying network routing paths 160 includes gathering network routing information indicative of performance characteristics between network nodes 130, as depicted at step 200.
  • the routing information includes performance characteristics computed from the attributes of one or more messages 150, such as transport time, packet delay, packet jitter and packet loss.
  • attributes are obtainable as responses to diagnostic probe messages 148, also known as path verification messages according to the path verification protocol (PVP).
  • a client has the ability to identify a set of "important" destinations for which the gathering of the path attributes is required on a real-time basis (because of the necessity to measure the performance of a particular path). Note that the term "real-time" does not refer to the frequency at which path attributes are retrieved but is used to illustrate the fact that such information is gathered upon an explicit request of an authorized CE.
  • Having identified "important," or significant, prefixes (which can be equal to the entire set of routes or just a subset of them), a client has the ability to trigger, upon expiration of a jittered configurable timer or upon manual trigger, an end-to-end data plane path attributes check for these prefixes.
  • the receiving router 130 aggregates the identified routing information according to at least one performance characteristic, such as transport time or packet loss, to consolidate the attributes and allow a deterministic criteria to be drawn (i.e. to compare apples to apples), as depicted at step 201.
  • a performance characteristic such as the propagation delay from node A to B is concerned with messages having a timestamp attribute denoting transmission from node A and arrival at node B.
  • a diagnostic probe message, i.e. a PVP message, is employed.
  • the client can initialize a request PVP message to the PE listing the set of path attributes to be measured.
  • the PVP protocol is defined further in copending U.S. patent application No. 11/001,149 cited above.
  • Subsequent scheduling by a router 130 then applies the aggregated routing information to perform routing decisions for network paths 160 between the network nodes 130 by identifying the network paths corresponding to favorable performance characteristics, in which the network paths are defined by a plurality of the network nodes, as shown at step 202.
  • the gathered attributes generally indicate performance characteristics between particular nodes 130.
  • a path 160 across the core network 140 typically spans at least several nodes 130 and possibly many. Accordingly, routing information corresponding to each "hop" along a path 160 is employed to compute the expected performance of a given path 160 by accumulating all the hops included in the path.
  • the CE starts a dynamic timer T. Upon expiration, if no PVP reply has been received for the PVP request, a further PVP request may be sent up to a maximum number N of consecutive attempts. This timer will be dynamically computed by the CE and will be based on the specific application requiring the path 160 characteristics.
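A minimal sketch of this CE-side retry behavior is given below; the send and wait primitives are hypothetical placeholders rather than an actual PVP implementation, and the timer value stands in for the dynamically computed timer T:

```python
# Sketch of the CE-side request behavior: send a PVP request, wait for a reply
# for T seconds, and retry up to a maximum of N consecutive attempts.
# send_pvp_request() and wait_for_reply() are hypothetical placeholders.
def request_path_attributes(send_pvp_request, wait_for_reply,
                            attributes, timer_seconds, max_attempts=3):
    for _ in range(max_attempts):
        send_pvp_request(attributes)                  # PVP request listing attributes to measure
        reply = wait_for_reply(timeout=timer_seconds) # dynamic timer T, computed by the CE
        if reply is not None:
            return reply                              # PVP reply carrying the measured attributes
    return None                                       # no reply after N attempts
```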
  • On receipt of a PVP request 148, a PE should first verify whether the CE is authorized to send such a request. If the CE request is not legal, a PVP error message is returned to the requesting CE. Then the PE should use the information contained within the request to obtain the relevant set of information (if possible). The PE achieves this by sending test traffic to the destination PE which is the next-hop exit point for a given VPN destination, and measures the attributes in question. For example, if measuring packet loss, the PE should send several messages 148 and count how many were replied to. In the case of jitter and delay, the PE should incorporate time stamp information from the test packets 148 as well as local information to keep track of time. In all cases, if the backbone of the network-based VPN service utilizes MPLS as its forwarding mechanism, it is preferable that MPLS-specific tools be used to measure these path 160 characteristics so as to provide an accurate measure of the data plane.
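The measurement step on the PE might look roughly like the sketch below; the probe send/receive primitives and the reply object are placeholders rather than a real PVP or MPLS OAM interface, and the sample count is arbitrary:

```python
# Hypothetical sketch: derive loss, delay and jitter from a batch of test
# messages sent toward the destination PE.
import statistics

def measure_path(send_probe, receive_reply, probe_count=10, timeout=1.0):
    delays = []
    for seq in range(probe_count):
        t_sent = send_probe(seq)                     # returns the local send timestamp
        reply = receive_reply(seq, timeout=timeout)  # None if no reply arrived in time
        if reply is not None:
            delays.append(reply.receive_time - t_sent)   # round-trip delay sample
    loss = 1.0 - len(delays) / probe_count           # fraction of probes never answered
    avg_delay = statistics.mean(delays) if delays else None
    # jitter taken here as the mean absolute difference between successive delays
    jitter = (statistics.mean(abs(b - a) for a, b in zip(delays, delays[1:]))
              if len(delays) > 1 else 0.0)
    return {"loss": loss, "delay": avg_delay, "jitter": jitter}
```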
  • the result should be provided by means of a PVP reply to the CE client (note that the PVP server process is expected to be stateless and should delete the computed values after a predetermined time threshold). If the PVP server process at the PE cannot get the information then a PVP error message 150 should be returned along with an error code specifying the error root cause. Furthermore, the PE should also monitor the rate at which such requests are received on a per-PE basis and potentially silently drop the requests in excess, pace such requests, or return a PVP error code message and dampen any further requests.
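One simple way to monitor and bound the request rate is a sliding-window limiter; keying it by the requesting CE, and the window size and threshold below, are illustrative assumptions rather than requirements of the text:

```python
# Illustrative per-requester rate limiter for incoming PVP requests, along the
# lines of the drop/pace behaviour described above. Thresholds are arbitrary.
import time

class PvpRequestLimiter:
    def __init__(self, max_requests=5, per_seconds=60.0):
        self.max_requests = max_requests
        self.per_seconds = per_seconds
        self.history = {}                        # requester identifier -> request times

    def allow(self, requester_id):
        now = time.monotonic()
        window = self.history.setdefault(requester_id, [])
        # discard requests that have aged out of the window
        window[:] = [t for t in window if now - t < self.per_seconds]
        if len(window) >= self.max_requests:
            return False                         # drop, pace, or answer with a PVP error code
        window.append(now)
        return True
```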
  • one performance characteristic which is often carefully scrutinized is the transport time between nodes 130.
  • multiple routers 130 in a network synchronize their corresponding time clocks amongst themselves based on use of a synchronizing protocol such as the Network Time Protocol (NTP).
  • the routers 130 flood the network 140 with network configuration messages 148 such as those based on the LSA/LSP to advertise status information of a network configuration change to other routers.
  • When originating a respective network configuration message 148, a respective router generates a timestamp based on use of its synchronized clock for inclusion in a field of the network configuration message.
  • Other routers 130 receiving the network configuration message identify a travel time attribute associated with the network configuration message over the network 140 by comparing a timestamp attribute (e.g., origination time) of a received network configuration message to their own time clock (e.g., the receiving router's time clock) to calculate a transmission time value indicating how long the network configuration message took to be conveyed over the network from the originator to a corresponding receiving node 130.
  • each router 130 receiving a respective network configuration message 148 identifies a travel time (or flooding time) associated with the network configuration message by comparing a respective timestamp (e.g., origination time) of the network configuration message to its own respective time clock (e.g., the receiving router's time clock) to calculate a transmission time value indicating how long the network configuration message took to be conveyed over the network from the originator router to a corresponding receiving router.
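As a minimal worked sketch, and assuming the clocks are indeed NTP-synchronized, the transit time is just the difference between the local receive time and the origination timestamp, and jitter can be taken as the average variation between successive transit samples:

```python
# One-way transit time of a flooded configuration message, assuming the
# originating and receiving routers share NTP-synchronized clocks.
def transit_time(origination_timestamp, local_receive_time):
    return local_receive_time - origination_timestamp

# Jitter derived from a series of transit-time samples across the same hop.
def jitter(transit_samples):
    diffs = [abs(b - a) for a, b in zip(transit_samples, transit_samples[1:])]
    return sum(diffs) / len(diffs) if diffs else 0.0
```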
  • Fig. 3 is an example of applying the performance characteristics to compute an optimal path in the network of Fig. 1.
  • the VPN environment 100 of Fig. 1 is shown in greater detail including a plurality of messages 150-1..150-10 having performance attributes 152.
  • the messages 150 may be sent in response to a variety of triggers.
  • diagnostic probe messages 148 specifically for eliciting the messages 150 and corresponding attributes 152 are employed.
  • Such diagnostic probe messages 148 may be part of a path verification protocol (PVP), as discussed further in the copending patent application discussed above.
  • Such messages 150 may be Link Status (LSA/LSP) messages, or other message traffic which includes performance attributes 152.
  • Exemplary router PE1 includes an interface 132 having a plurality of ports 134 for receiving and forwarding message traffic through the network 140 in the normal course of routing operations.
  • the router PE1 also includes a diagnostic processor 140 for performing path diagnostics and validation as discussed herein.
  • the diagnostic processor 140 includes an attribute sniffer 142 operable to identify messages 150 having attributes relevant to performance, and also operable to retrieve the attributes 152 in a non-destructive manner which does not affect extraneous routing operations.
  • the diagnostic processor 140 also includes a characteristic aggregator 146, for analyzing the attributes of multiple messages 150 to identify trends, and a path scheduler 144 for applying the path characteristics to routing decisions based on criteria such as QOS guarantees.
  • the path scheduler 144 may perform routing decisions to employ that path 160 for message traffic associated with a QOS guarantee of, say, 210ms. Therefore, optimal routing decisions which route traffic on a path sufficient to satisfy such QOS requirements are obtained, yet the scheduler 144 need not route such traffic on a 100ms path 160 which may be needed for more critical traffic.
  • the attribute sniffer 142 gathers attributes 152 for storage in the repository 170.
  • the repository 170 stores the attributes 152 as routing information 172 according to normalized criteria such as paths, hops, and routers 130, as applicable to the performance characteristic in question.
  • An exemplary set of performance characteristics 174 is shown in Table I, which stores transport times between various nodes 130.
  • successive trials of performance characteristics 174 (i.e. attributes) obtained from the various hops between nodes, via the messages 150-1..150-10, are stored in Table I along with the attribute values, such as transport time in the given example.
  • the characteristic aggregator 146 employs the performance characteristics 174 to compute path diagnostics 176 (characteristics), representing the deterministic expectations computed from the available attributes 152. As shown in Table II, an expected transport time for each hop is computable by averaging the gathered attributes 152 obtained for a series of transmissions between two particular nodes 130. The aggregate performance of a path 160 is computed by summing the average for each hop along the path 160, discussed further below.
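A sketch of this Table I to Table II style aggregation appears below; the sample values and node names are hypothetical stand-ins, since the tables themselves are not reproduced in this text:

```python
# Raw per-hop transport-time samples are averaged into an expected time per hop,
# and a path estimate is the sum of the hop averages along that path.
from collections import defaultdict

def average_per_hop(samples):
    """samples: iterable of ((from_node, to_node), transport_time_ms) trials."""
    by_hop = defaultdict(list)
    for hop, value in samples:
        by_hop[hop].append(value)
    return {hop: sum(vals) / len(vals) for hop, vals in by_hop.items()}

def path_estimate(hop_averages, path):
    """path: ordered node list, e.g. ['PE1', 'P1', 'PE3']; returns the summed estimate."""
    return sum(hop_averages[(a, b)] for a, b in zip(path, path[1:]))

samples = [(("PE1", "P1"), 40.0), (("PE1", "P1"), 44.0), (("P1", "PE3"), 60.0)]
estimate = path_estimate(average_per_hop(samples), ["PE1", "P1", "PE3"])  # 102.0 ms
```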
  • Figs. 4-8 are a flowchart of applying performance characteristics in further detail using the exemplary network of Fig. 3 and the routing information of Tables I and II, above.
  • the diagnostic processor 140, configured in router PE1, is operable for performing path diagnostics as discussed further below.
  • Such a diagnostic processor 140 is also applicable to the other routers 130-1..130-10, however it is illustrated from the perspective of router PE1 alone for simplicity. Accordingly, the diagnostic processor 140 identifies a plurality of network paths as candidate paths 160 between a source and a destination, such as the paths 160-1, 160-2 between PE1 and PE3 (130-3, 130-8), as depicted at step 300.
  • such a path 160 denotes a PE-PE interconnection across the core network 140 and therefore involves identifying a plurality of nodes 130 along one or more candidate paths 160 through the core network 140 for monitoring and analysis.
  • the diagnostic processor 140 identifies particular paths 160 operable to transport significant message traffic, as depicted at step 301, therefore avoiding the burden of including low volume or an excessive number of non-contentious router connections.
  • the attribute sniffer 142, or other process operable to receive and examine network messages (packets) 150, examines, on the identified particular paths 160, messages 150 having the attributes 152 indicative of performance characteristics, as shown at step 302.
  • the attribute sniffer 142 identifies packets 150 having network routing information indicative of performance characteristics between network nodes 130, as disclosed at step 303.
  • attributes include performance related metrics or variables such as the propagation (i.e. transport) time, delay, loss and jitter, to name several.
  • Such identification may be by virtue of a diagnostic protocol such as PVP, or by other parsing or scanning mechanism such as identifying protocol control sequences employed by the underlying network protocol (i.e. MPLS or TCP/IP).
  • Routers 130 enabled with such diagnostic probe capability (i.e. PVP enabled) send diagnostic probe messages 148 along the identified candidate paths 160.
  • Probe messages 148 evoke a responsive probe reply 150 from the router 130 on the candidate path 160, and accordingly, the diagnostic processor 140 concludes, if the probe reply is not received, a connectivity issue along the identified particular path 160, as shown at step 307. Otherwise, the characteristic aggregator 146, responsive to the attribute sniffer 142, identifies the incoming messages 150 having attributes indicative of performance characteristics, as shown at step 308. As the incoming messages 150 may be probe replies, LSA, or other attribute bearing messages, gathering the routing information may result in the characteristic aggregator 146 receiving various messages such as Link State Advertisement (LSA) messages, probe messages, or router traffic messages, in which the messages include the attributes indicative of routing information, as shown at step 309.
  • the characteristic aggregator 146 scans the examined messages to retrieve the attributes 152, as depicted at step 310. Scanning involves parsing the attributes to extract the routing information corresponding to the performance characteristics, as shown at step 309. Since the attributes are gathered by the messages 150 as they traverse the nodes 130 of the network, the attributes contain routing information corresponding to characteristics between a particular node 130 and one or more other nodes 130. The aggregator 146 accumulates the gathered network routing information from the parsed attributes, as shown at step 312.
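A rough sketch of this parsing step appears below; the message layout is invented purely for illustration and does not correspond to an actual LSA or PVP wire format:

```python
# Hypothetical parsing of an attribute-bearing message into per-hop records of
# (hop, performance characteristic, value). The dict layout is illustrative only.
def parse_attributes(message):
    """message example:
       {"from": "P1", "to": "P2",
        "attributes": {"transport_time_ms": 42, "loss_pct": 0.1}}"""
    hop = (message["from"], message["to"])
    return [(hop, name, value) for name, value in message["attributes"].items()]
```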
  • Accumulating in this manner includes aggregating the identified routing information according to one or more of the performance characteristics, as depicted at step 312, and organizing the received attributes 152, such as attributes parsed from diagnostic probe replies 150, according to the node 130 from which they were received, each of the nodes defining a hop along the path 160, as depicted at step 313.
  • a series of messages 150 results in an aggregation of attributes arranged by performance characteristics 174 and hops, enabling further processing on a path basis, as shown in Table I.
  • the aggregator 146 analyzes the attributes 152 from the organized probe replies 150 corresponding to the sent diagnostic probe messages 148 to compute routing characteristics of the hops along the path, as shown at step 314.
  • the aggregator 146 then analyzes the performance characteristics 174 specified by the attributes 152 to determine performance attributes of a particular path 160, as depicted at step 315.
  • the aggregator 146 analyzes the network routing information 172 to identify path 160 characteristics applicable to an entire path through the core 140, thus encompassing the constituent hops included in that path, as disclosed at step 316.
  • the aggregator stores the extracted, aggregated routing information 172 as path diagnostics 176 (Table II) according to the performance characteristics 174, between the respective nodes 130, in the repository 170 as depicted at step 317.
  • the path scheduler 144 applies the aggregated routing information 172 to routing decisions for network paths 160 between the network nodes 130 by identifying the network paths 160 corresponding to favorable performance characteristics, in which the network paths 160 are each defined by a plurality of the network nodes 130, as disclosed at step 318. Therefore, the performance characteristics of each of the internodal hops of Table I, for example, determine the optimal path by adding or summing the characteristics of the respective hops.
  • the scheduler 144 applies the routing information 172 by computing, based on a set of successively identified messages 150, expected performance between the respective nodes 130, as depicted at step 319.
  • the scheduler 144 computes, from the extracted routing information 172, for each of the candidate paths 160, an aggregate performance indicative of message traffic performance between a particular source and destination (i.e. typically PE routers 130) for each of the candidate paths 160, as shown at step 320.
  • the path scheduler 144 computes, for each of the candidate paths 160-1 and 160-2, an aggregate transport time indicative of message traffic performance between the source and destination (i.e. PE1 to PE3) for each of the candidate paths 160.
  • the path scheduler 144 applies the network routing information 172 to the plurality of paths 160 between nodes to compute a propagation time between the selected nodes PE1 and PE3.
  • guaranteed delivery parameters are applied by specifying the attributes to be measured according to predetermined QOS criteria, as depicted at step 322.
  • the QOS criteria indicate which performance characteristics are applied and the particular performance values required, such as transport time.
  • the path scheduler 144 enumerates a set of quality of service (QOS) tier levels, in which the QOS levels are indicative of an expected throughput performance, as shown at step 323, and associates each of the candidate paths 160 with a QOS level, as depicted at step 324.
  • the path attributes allow the path scheduler 144 to qualify the paths as satisfying a particular QOS level, such as a transport time from PE1 to PE3 in 100ms, for example.
  • the path scheduler compares the computed performance attributes to the associated QOS level to selectively route message traffic over a particular path 160, as disclosed at step 325.
  • the path scheduler 144 may then route network traffic on paths 160 having performance characteristics consistent with a particular QOS criteria, the performance characteristics including at least one of transport time, packet loss, packet delay and jitter, as depicted at step 326. Therefore, in the example in Fig. 3, the path scheduler 144 denotes a particular candidate path 160 as an optimal path based on the aggregate transport time, as depicted at step 327. For example, the path 160-2 would be chosen for the QOS traffic requiring 100ms transport from PE1-PE3 because path 160-1 exhibits path diagnostics of 120ms and cannot support such performance.
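A small sketch of this selection step is shown below; the 120ms figure for path 160-1 follows the example above, while the figure for 160-2 and the tie-breaking rule (pick the fastest qualifying path) are assumptions for illustration:

```python
# Choose a candidate path whose aggregate transport time satisfies the QOS tier.
def select_path(path_estimates_ms, qos_limit_ms):
    qualifying = {p: t for p, t in path_estimates_ms.items() if t <= qos_limit_ms}
    if not qualifying:
        return None                              # no candidate meets the QOS level
    return min(qualifying, key=qualifying.get)   # fastest path among those that qualify

best = select_path({"160-1": 120.0, "160-2": 95.0}, qos_limit_ms=100.0)  # -> "160-2"
```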
  • the programs and methods for identifying network routing paths as defined herein are deliverable to a processing device in many forms, including but not limited to a) information permanently stored on non-writeable storage media such as ROM devices, b) information alterably stored on writeable storage media such as floppy disks, magnetic tapes, CDs, RAM devices, and other magnetic and optical media, or c) information conveyed to a computer through communication media, for example using baseband signaling or broadband signaling techniques, as in an electronic network such as the Internet or telephone modem lines.
  • the operations and methods may be implemented in a software executable object or as a set of instructions embedded in a carrier wave.
  • Alternatively, the operations and methods may be embodied in whole or in part using hardware components, such as Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), state machines, controllers or other hardware components or devices, or a combination of hardware, software, and firmware components.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Telephonic Communication Services (AREA)

Abstract

A system and method for aggregating performance characteristics for core network paths allows computation of message traffic performance over each of the available candidate paths through the core for identifying an optimal core network path. Particular network traffic, or messages, include attributes indicative of performance, such as transport time, delay, jitter, and drop percentage, over individual hops along the candidate path. A diagnostic processor parses these messages to identify the attributes corresponding to performance, and analyzes the resulting parsed routing information to compute an expected performance, such as available bandwidth (e.g. transport rate) over the path. Messages including such attributes may include link state attribute (LSA) messages, diagnostic probe messages specifically targeted to enumerate such attributes, or other suitable network traffic. In a particular configuration, the messages may be Path Verification Protocol (PVP) messages.

Description

SYSTEM AND METHODS FOR IDENTIFYING NETWORK PATH PERFORMANCE
BACKGROUND
In a Virtual Private Networking (VPN) environment, a business or enterprise connects multiple remote sites, such as Local Area Networks (LANs) or other subnetworks, as an integrated virtual entity which provides seamless security and transport such that each user appears local to each other user. In a conventional VPN, the set of subnetworks interconnect via one or more common public access networks operated by a service provider. Such a subnetwork interconnection is typically known as a core network, and includes service providers having a high speed backbone of routers and trunk lines. Each of the subnetworks and the core network have entry points known as edge routers, through which traffic ingressing and egressing from the network travels. The core network has ingress/egress points handled by nodes known as provider edge (PE) routers, while the subnetworks have ingress/egress points known as customer edge (CE) routers, discussed further in Internet Engineering Task Force (IETF) RFC 2547bis, concerning Virtual Private Networks (VPNs). An interconnection between the subnetworks of a VPN, therefore, typically includes one or more core networks. Each of the core networks usually comprises one or more autonomous systems (AS), meaning that it employs and enforces a common routing policy among the nodes (routers) included therein. Accordingly, the nodes of the core networks often employ a protocol operable to provide high-volume transport with path-based routing, meaning that the protocol does not merely specify a destination (as in TCP/IP), but rather implements an addressing strategy that allows for unique identification of end points, and also allows specification of a particular routing path through the core network. One such protocol is the Multiprotocol Label Switching (MPLS) protocol, defined in Internet Engineering Task Force (IETF) RFC 3031. MPLS is a protocol that combines the label-based forwarding of ATM networks with the packet-based forwarding of IP networks and then builds applications upon this infrastructure.
Traditional MPLS, and more recently Generalized MPLS (G-MPLS) networks as well, extend the suite of IP protocols to expedite the forwarding scheme used by conventional IP routers, particularly through core networks employed by service providers (as opposed to end-user connections or taps). Conventional routers typically employ complex and time-consuming route lookups and address matching schemes to determine the next hop for a received packet, primarily by examining the destination address in the header of the packet. MPLS has greatly simplified this operation by basing the forwarding decision on a simple label, via a so-called Label Switch Router (LSR) mechanism. Therefore, another major feature of MPLS is its ability to place IP traffic on a particular defined path through the network as specified by the label. Such path specification capability is generally not available with conventional IP traffic. In this way, MPLS provides bandwidth guarantees and other differentiated service features for a specific user application (or flow). Current IP-based MPLS networks are emerging for providing advanced services such as bandwidth-based guaranteed service (i.e. Quality of Service, or QOS), priority-based bandwidth allocation, and preemption services.
Accordingly, MPLS networks are particularly suited to VPNs because of their amenability to high speed routing and security over service provider networks, or so called Carrier's Carrier interconnections. Such MPLS networks, therefore, perform routing decisions based on path-specific criteria, designating not only a destination but also the intermediate routers (hops), rather than the source/destination specification in IP, which leaves routing decisions to the various nodes and routing logic at each "hop" through the network.
SUMMARY
In a core network such as an MPLS network supporting a VPN environment, an interconnection of routers defines a path through the core network from edge routers denoting the ingress and egress points. Provider edge (PE) routers at the edge of the core network connect to customer edge (CE) routers at the ingress/egress to a customer network, such as a LAN subnetwork. The path through the core network may include many "hops" through provider (P) routers in the core from an ingress PE router to the egress PE router. Further, there are typically multiple possible paths through the core network. Conventional IP routing mechanisms may be unable to take advantage of the label switch routing allowing specification of a particular path. However, determination of an optimal path from among available paths is unavailable in conventional label switch path (LSP) routing. Accordingly, configurations of the invention are based, in part, on the observation that conventional routers do not identify an optimal path through the core network from the ingress PE router to the egress PE router. Determination of paths that satisfy a QOS or other delivery speed/bandwidth guarantee may be difficult or unavailable. Therefore, it can be problematic to perform routing decisions for QOS based traffic. It would therefore be beneficial to compute performance characteristics of particular paths through the core network to allow identification of an optimal path for traffic subject to a particular delivery guarantee or expectation. Network performance attributes employed for core network diagnostics generally fall into two families of path characteristics, and the verification/diagnostics thereof, that are of interest when considering conventional network-based IP VPNs. The first is path verification in terms of basic connectivity that is detailed in copending U.S. Patent Application No. 11/048,077, filed on February 1, 2005, entitled "SYSTEM AND METHODS FOR NETWORK PATH DETECTION" (Atty. Docket No. CIS04-52 (10418)), incorporated herein by reference.
The second group of characteristics of interest to a customer of a network-based VPN fall under the umbrella of "real-time" statistics. This can be loosely defined as the ability for a customer edge router (CE) to obtain real-time statistics related to a particular path used by that CE to carry its traffic across the core of the network-based VPN provider. Such attribute properties include (but are not limited to) delay (one way and round trip), jitter, and error rate (i.e.: packet loss/error). Currently these types of statistics are provided by some service providers, but are based largely on average values that are insufficient to enable the customer to compute real-time path characterization. Conventional approaches may be able to provide information to the client of a network-based VPN service on an end-to-end basis e.g. from customer site to customer site. However, such conventional approaches may be unable to cover the computation of path jitter, delay and loss with the network-based VPN backbone network from the customer site perspective. This information must be obtained by the provider of the service and is usually delivered to the client by way of an average measurement over a given period of time, usually monthly.
Constantly updating (up-to-the-minute) values for various path characteristics such as delay and jitter may be required in order to qualify a particular path on a real-time basis so as to ease troubleshooting should some path characteristics such as the delay be detected as abnormally high, make instantaneous repairs to broken paths, or in order to choose alternate ones (i.e.: change routing behavior so as to obscure the network defect from the customer), or simply to obtain information as to whether the requested path attributes are being delivered by the core network at any given point in time. Conventional network path verification by the customer between their customer edge routers typically can only verify the end-to-end path using IP protocol packets. These provide important information about the overall end-to-end path, but do not provide any direct information about the core network paths between the provider's PE routers that actually carry the IP traffic between their sites. For this reason the customer may be unable to ascertain in which segment of the network a particular problem is located, or what specific path characteristics are being delivered at any particular point in time. Such information may, for example, be employed by a network-based IP customer to trigger appropriate QoS parameter setting adjustment on their PE to CE links, trigger a local link update and so on, should an SLA degradation cause be located on such links. Rather, such information is gathered by the Service Provider using MPLS-specific tools and algorithms to assure their accuracy and their efficiency when used to correct any defects detected by them. Disclosed herein is a method by which such MPLS-specific path characteristics may be gathered by the customers of a network-based IP VPN service. Accordingly, configurations discussed herein substantially overcome such aspects of conventional path analysis by providing a system and method for aggregating performance characteristics for core network paths to allow computation of message traffic performance over each of the available candidate paths through the core for identifying an optimal core network path. Particular network traffic, or messages, include attributes indicative of performance, such as transport time, delay, jitter, and drop percentage over individual hops along the candidate path. The diagnostic processor parses these messages to identify the attributes corresponding to performance, and analyzes the resulting parsed routing information to compute an expected performance, such as available bandwidth (e.g. transport rate) over the path. Messages including such attributes may include link state attribute (LSA) messages, diagnostic probe messages specifically targeted to enumerate such attributes, or other suitable network traffic. In a particular configuration, the messages may be Path Verification Protocol (PVP) messages, discussed further in copending U.S. patent application No. 11/001,149, filed December 1, 2004, entitled "SYSTEM AND METHODS FOR DETECTING NETWORK FAILURE" (Atty. Docket No. CIS04-40(10083)), incorporated herein by reference.
Each of the attributes is typically indicative of performance characteristics between one or more hops through the core network. Accordingly, routing information gathered from the attributes is stored according to the particular hop to which it corresponds. Multiple instances of attributes across a particular hop (i.e. between two routers) are employed to compute performance characteristics for that hop (e.g. averaging the transport time of several messages between the nodes). Computation of performance of a particular path is achieved by aggregating, or summing, the performance characteristics for each hop along the path. For example, a timestamp attribute gathered from three successive messages transported between particular nodes may be averaged to provide an indication of typical or expected transport time between the nodes. Other attributes may be aggregated by averaging or otherwise computing deterministic performance characteristics from routing information representing a series of transmissions across a particular hop.
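By way of a non-limiting sketch, the per-hop averaging described above might be expressed as follows; the data model and helper names are illustrative assumptions rather than part of the disclosure.

```python
from collections import defaultdict
from statistics import mean

# Raw attribute samples keyed by hop, e.g. ("PE1", "P1") -> [12.0, 14.0, 13.0].
# Values are hypothetical transport times in milliseconds parsed from message attributes.
hop_samples = defaultdict(list)

def record_sample(from_node: str, to_node: str, transport_ms: float) -> None:
    """Store one parsed attribute value for the hop between two nodes."""
    hop_samples[(from_node, to_node)].append(transport_ms)

def expected_hop_performance() -> dict:
    """Average the samples gathered per hop to estimate expected transport time."""
    return {hop: mean(samples) for hop, samples in hop_samples.items() if samples}
```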
The gathered routing information may be obtained from traffic packets, or from administrative messages such as Link State Attribute/Label Switched Path (LSA/LSP) messages employed by the path verification protocol discussed above (CIS04-40). Such a series of hops defines a path through the network, and identifies favorable performance characteristics to enable routers to perform routing decisions to select an optimal path, or route, across which to send a particular packet or set of packets (messages). In general, therefore, the routing information is gathered from messages or packets having attributes indicative of the performance characteristics, including but not limited to transport time, delay, packet loss and jitter, to name several exemplary performance characteristics. In further detail, the method of identifying network routing paths disclosed in exemplary configurations below includes gathering network routing information indicative of performance characteristics between network nodes, and aggregating the identified routing information according to at least one performance characteristic. A diagnostic processor applies the aggregated routing information to routing decisions for network paths between the network nodes by identifying the network paths corresponding to favorable performance characteristics, in which the network paths are defined by a plurality of the network nodes.
Aggregating the routing information includes identifying messages having attributes indicative of performance characteristics, and parsing the attributes to extract the routing information corresponding to the performance characteristics. Such routing information typically corresponds to characteristics between a particular node and at least one other node, i.e. a network hop. The diagnostic processor stores or otherwise makes available the extracted routing information according to the performance characteristics between the respective nodes for use in subsequent routing decisions made by the router.
The routing information is applied to routing operations by first identifying a plurality of "important" network paths as candidate paths between a source and a destination, such as paths through bottlenecks and ingress/egress points subject to heavy demand. The diagnostic processor computes, from the extracted routing information and for each of the candidate paths, an aggregate performance indicative of message traffic performance between the source and destination. Typically an average or mean expectation based on several samplings of a performance characteristic provides an expectation of future performance. The diagnostic processor then denotes a particular candidate path as an optimal path based on the computed aggregate performance. Having identified particular paths operable to transport significant message traffic, the diagnostic processor examines, on the identified particular paths, messages having the attributes indicative of performance characteristics, and scans (parses) the examined messages to retrieve the attributes.
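A minimal sketch of the aggregation and selection steps, assuming per-hop averages of the kind computed above and candidate paths expressed as node sequences (all names are illustrative):

```python
def path_performance(path, hop_averages):
    """Sum the expected per-hop transport times along a candidate path (a list of node names)."""
    return sum(hop_averages[(a, b)] for a, b in zip(path, path[1:]))

def optimal_path(candidate_paths, hop_averages):
    """Denote the candidate path with the lowest aggregate transport time as optimal."""
    return min(candidate_paths, key=lambda p: path_performance(p, hop_averages))
```

Other performance characteristics would aggregate with a different combining rule (loss compounds rather than sums, for example), so the combining function is a per-characteristic design choice.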
Configurations discussed herein may employ Quality of Service (QOS) criteria in routing decisions, in which applying the performance characteristics further includes specifying the attributes to be measured according to predetermined QOS criteria. The router then routes network traffic on paths having performance characteristics consistent with particular QOS criteria, in which the performance characteristics typically include at least one of transport time, packet loss, packet delay and jitter. Configurations concerned with QOS or other guaranteed delivery obligations may enumerate a set of quality of service (QOS) tier levels, the QOS levels indicative of an expected throughput performance, and associate each of the paths through the core network with a QOS level. The paths are then benchmarked, or designated, by comparing the computed performance attributes to the associated QOS level to selectively route message traffic over a particular path.
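One possible reading of the tiering and benchmarking step, with hypothetical tier names and thresholds (the disclosure fixes neither):

```python
# Hypothetical QOS tiers, each with a maximum acceptable aggregate transport time (ms).
QOS_TIERS = {"gold": 100.0, "silver": 200.0, "bronze": 400.0}

def benchmark_path(aggregate_ms: float) -> list:
    """Return the QOS tiers whose delivery guarantee this path's measured aggregate satisfies."""
    return [tier for tier, limit in QOS_TIERS.items() if aggregate_ms <= limit]

def route_for_tier(tier: str, path_aggregates: dict) -> str:
    """Select a path whose computed aggregate meets the requested tier's limit."""
    limit = QOS_TIERS[tier]
    eligible = {p: ms for p, ms in path_aggregates.items() if ms <= limit}
    if not eligible:
        raise LookupError(f"no candidate path satisfies the {tier} tier")
    # Prefer the slowest still-compliant path, reserving faster paths for stricter tiers,
    # in the spirit of the 200ms-path / 210ms-guarantee example discussed later.
    return max(eligible, key=eligible.get)
```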
In particular configurations, the messages employ the path verification protocol referenced above, in which the messages are diagnostic probe messages adapted to gather and report routing information. Accordingly, the diagnostic processor sends a set of diagnostic probe messages along the identified particular paths, in which the diagnostic probe messages are operable to trigger sending of a probe reply, and analyzes the probe reply to determine performance attributes of the particular path. Further, such probes allow concluding, if the probe reply is not received, a connectivity issue along the identified path. Otherwise the diagnostic processor organizes the received probe replies according to the node from which each was received, each of the nodes defining a hop along the path, and analyzes the organized probe replies corresponding to the sent diagnostic probe messages to compute routing characteristics of the hops along the path. In this manner, the diagnostic processor computes, based on a set of successively identified messages, expected performance between the respective nodes.
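The probe/reply handling might be sketched as below; send_probe and wait_for_reply stand in for the PVP transport, whose wire format is defined in the copending application rather than here, so their signatures are assumptions.

```python
def probe_path(path_id, hops, send_probe, wait_for_reply, timeout_s=1.0):
    """Probe the nodes along a path and organize replies by hop.

    A missing reply within the timeout is interpreted as a connectivity issue
    along the identified path, as described above.
    """
    replies_by_hop = {}
    for hop in hops:
        send_probe(path_id, hop)                         # assumed helper: emit a diagnostic probe
        reply = wait_for_reply(path_id, hop, timeout_s)  # assumed helper: returns None on timeout
        if reply is None:
            return {"status": "connectivity issue", "failed_hop": hop}
        replies_by_hop[hop] = reply
    return {"status": "ok", "replies": replies_by_hop}
```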
In alternate configurations, gathering network routing information includes receiving Link State Advertisement (LSA) messages, in which the LSA messages have attributes indicative of routing information, accumulating the gathered network routing information in a repository, and analyzing the network routing information to identify path characteristics.
Particular configurations, particularly those concerned with QOS-driven throughput, tend to focus on transport time, or speed, as a performance characteristic. Such configurations identify a plurality of network paths as candidate paths between a source and a destination, and apply the network routing information to the plurality of paths between nodes to compute a propagation time between the selected nodes. The diagnostic processor computes, for each of the candidate paths, an aggregate transport time indicative of message traffic performance between the source and destination, and denotes a particular candidate path as an optimal path based on the aggregate transport time. Alternate configurations of the invention include a multiprogramming or multiprocessing computerized device such as a workstation, handheld or laptop computer or dedicated computing device or the like configured with software and/or circuitry (e.g., a processor as summarized above) to process any or all of the method operations disclosed herein as embodiments of the invention. Still other embodiments of the invention include software programs such as a Java Virtual Machine and/or an operating system that can operate alone or in conjunction with each other with a multiprocessing computerized device to perform the method embodiment steps and operations summarized above and disclosed in detail below. One such embodiment comprises a computer program product that has a computer-readable medium including computer program logic encoded thereon that, when performed in a multiprocessing computerized device having a coupling of a memory and a processor, programs the processor to perform the operations disclosed herein as embodiments of the invention to carry out data access requests. Such arrangements of the invention are typically provided as software, code and/or other data (e.g., data structures) arranged or encoded on a computer readable medium such as an optical medium (e.g., CD-ROM), floppy or hard disk or other medium such as firmware or microcode in one or more ROM or RAM or PROM chips, field programmable gate arrays (FPGAs) or as an Application Specific Integrated Circuit (ASIC). The software or firmware or other such configurations can be installed onto the computerized device (e.g., during operating system or execution environment installation) to cause the computerized device to perform the techniques explained herein as embodiments of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other objects, features and advantages of the invention will be apparent from the following description of particular embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
Fig. 1 is a context diagram of a network communications environment depicting a Virtual Private Network (VPN) between subnetworks over an MPLS core network;
Fig. 2 is a flowchart of applying performance characteristics to compute an optimal path;
Fig. 3 is an example of applying the performance characteristics to compute an optimal path in the network of Fig. 1; and
Figs. 4-8 are a flowchart in further detail of applying performance characteristics.
DETAILED DESCRIPTION
Network routing diagnostics in conventional IP networks are typically based on endpoint connectivity. Accordingly, conventional IP routing mechanisms are unable to take advantage of label switch routing, which allows specification of a particular path. Further, determination of an optimal path from among available paths may be unavailable in conventional label switched path (LSP) routing. Accordingly, configurations of the invention are based, in part, on the observation that conventional routers do not identify an optimal path through the core network from the ingress PE router to the egress PE router. Determination of paths that satisfy a QOS or other delivery speed/bandwidth guarantee may be difficult or unavailable. Therefore, it can be problematic to perform routing decisions for guaranteed delivery thresholds such as QOS-based traffic. It would therefore be beneficial to compute performance characteristics of particular paths through the core network to allow identification of an optimal path for traffic subject to a particular delivery guarantee or expectation. Accordingly, configurations discussed herein substantially overcome such aspects of conventional path analysis by providing a system and method for aggregating performance characteristics for core network paths to allow computation of message traffic performance over each of the available candidate paths through the core for identifying an optimal core network path. Further, a router or other connectivity device employing a diagnostic processor as defined herein employs a set of mechanisms allowing control over which customer subnetworks have rights to request such information, as well as over the polling rate, so as to protect the PE from unreasonable overhead. Particular network traffic, or messages, include attributes indicative of performance, such as transport time, delay, jitter, and drop percentage. The diagnostic processor parses these messages to identify the attributes corresponding to performance, and analyzes the resulting parsed routing information to compute an expected performance, such as available bandwidth (e.g. transport rate) over the path. Messages including such attributes may include link state attribute (LSA) messages, diagnostic probe messages specifically targeted to enumerate such attributes, or other suitable network traffic. Configurations discussed further below are directed to techniques for gathering the significant path characteristics for a network-based IP VPN. In particular, methods herein disclose how path jitter, packet loss and packet delay can be gathered by a customer of that service.
Fig. 1 is a context diagram of a network communications environment 100 depicting a Virtual Private Network (VPN) between subnetworks over an MPLS core network 140. Referring to Fig. 1, the environment 100 includes a local VPN subnetwork 110 (i.e. LAN) and a remote VPN subnetwork 120 interconnected by a core network 140. Each of the subnetworks 110, 120 serves a plurality of users 114-1..114-6 coupled to one or more prefixes 112, 122 within the subnetworks 110, 120, respectively. The subnetworks 110, 120 include customer edge routers CE1..CE4 (CEn, generally) connected to provider edge routers PE1..PE3 (PEn, generally) denoting ingress and egress points to the core network 140. The core network 140 includes provider routers P1..P3 (Pn, generally) defining one or more paths 160-1..160-2 (160, generally) through the core network 140. Note that while the exemplary paths 160 identify a PE to PE route, methods disclosed herein are also applicable to PE-CE and CE-CE paths in alternate configurations.
In the context of an exemplary MPLS network serving a VPN, the following is a sample network topology used for the purposes of illustration. In the figure "CE" refers to a customer edge (i.e. customer premises-based) router. A "PE" denotes a provider edge router, which demarks the edge of the provider network from that of the customer's network. Typically many CEs are attached to a single PE router, which takes on an aggregation function for many CEs. Each CE is attached to the provider network by at least one network link, and is often attached in multiple places forming a redundant or "multi-homed" configuration, although sometimes simply two network links provided by different "last mile" carriers may be used to attach the CE to the same PE. The "P" routers are the provider network's core routers. These routers comprise the provider's core network infrastructure. Collectively the various routers 130-1..130-10 define a plurality of nodes in the context of the MPLS environment. Such an MPLS network typically begins and ends at the PE routers. Typically a dynamic routing protocol such as Border Gateway Protocol 4 (BGP-4), or static routing, is used to route packets between the CE and PE. However, it is possible to run MPLS between the CE and PE devices. A simple MPLS topology illustrating this terminology is as follows:
CE — PE - P - P - PE — CE
There are two basic scenarios in which the mechanism described below applies. In the first case, the CE-PE links are running some non-MPLS protocol. In the second case, the CE-PE links are running MPLS, using either the Label Distribution Protocol (LDP) or BGP with label distribution to distribute labels between each other. This type of configuration is typical when the customer is obtaining Carrier's Carrier services from the network-based VPN provider. The mechanism herein is applicable in either case.
Fig. 2 is a flowchart for applying performance characteristics to compute an optimal path 160 in the network of Fig. 1. Referring to Figs. 1 and 2, the method of identifying network routing paths 160 as disclosed in exemplary configurations herein includes gathering network routing information indicative of performance characteristics between network nodes 130, as depicted at step 200. The routing information includes performance characteristics computed from the attributes of one or more messages 150, such as transport time, packet delay, packet jitter and packet loss. In the exemplary configuration, as described above, attributes are obtainable as responses to diagnostic probe messages 148, also known as path verification messages according to the path verification protocol (PVP). Alternatively, routing information (i.e. attributes) is obtainable from other messages 150, such as link state (LSA/LSP) messages and other routing traffic.
In the exemplary configuration, discussed further below, a client has the ability to identify a set of "important" destinations for which the gathering of the path attributes is required on a real-time basis (because of the necessity to measure the performance of a particular path). Note that the term "real-time" does not refer to the frequency at which path attributes are retrieved, but is used to illustrate the fact that such information is gathered upon an explicit request of an authorized CE.
Having identified "important," or significant prefixes (which can be equal to the entire set of routes or just a subset of them), a client has the ability to trigger upon expiration of a jittered configurable timer or upon manual trigger an end-to-end data plane path attributes check for these prefixes.
The receiving router 130 aggregates the identified routing information according to at least one performance characteristic, such as transport time or packet loss, to consolidate the attributes and allow a deterministic comparison to be drawn (i.e. to compare apples to apples), as depicted at step 201. For example, a performance characteristic such as the propagation delay from node A to B is concerned with messages having a timestamp attribute denoting transmission from node A and arrival at node B. In particular configurations, a diagnostic probe message (i.e. a PVP message) is employed. Upon expiration of the timer, or upon manual trigger, the client can initialize a request PVP message to the PE listing the set of path attributes to be measured. The PVP protocol is defined further in copending U.S. patent application No. 11/001,149 cited above.
Subsequent scheduling by a router 130 then applies the aggregated routing information to perform routing decisions for network paths 160 between the network nodes 130 by identifying the network paths corresponding to favorable performance characteristics, in which the network paths are defined by a plurality of the network nodes, as shown at step 202. The gathered attributes generally indicate performance characteristics between particular nodes 130. However, as indicated above, a path 160 across the core network 140 typically spans at least several nodes 130 and possibly many. Accordingly, routing information corresponding to each "hop" along a path 160 is employed to compute the expected performance of a given path 160 by accumulating all the hops included in the path.
In the exemplary scenario, employing PVP, the CE starts a dynamic timer T. Upon expiration, if no PVP reply has been received for the PVP request, a further PVP request may be sent up to a maximum number N of consecutive attempts. This timer will be dynamically computed by the CE and will be based on the specific application requiring the path 160 characteristics.
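A minimal reading of that retry behavior is sketched below; the initial timer value and the doubling policy are placeholders, since the CE computes T dynamically from the requirements of the specific application.

```python
def request_path_attributes(send_request, wait_for_reply, initial_timer_s=2.0, max_attempts=3):
    """Send a PVP request and retry up to a maximum of N consecutive attempts."""
    timer = initial_timer_s
    for _ in range(max_attempts):
        send_request()                           # assumed helper that emits the PVP request
        reply = wait_for_reply(timeout_s=timer)  # assumed helper; None means the timer T expired
        if reply is not None:
            return reply
        timer *= 2                               # placeholder policy; the CE may compute T differently
    return None                                  # all N attempts expired without a PVP reply
```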
Alternatively, on receipt of a PVP request 148, a PE should first verify whether the CE is authorized to send such a request. If the CE request is not legitimate, a PVP error message is returned to the requesting CE. Then the PE should use the information contained within the request to obtain the relevant set of information (if possible). The PE achieves this by sending test traffic to the destination PE which is the next-hop exit point for a given VPN destination, and measures the attributes in question. For example, if measuring packet loss, the PE should send several messages 148 and count how many were replied to. In the case of jitter and delay, the PE should incorporate time stamp information from the test packets 148 as well as local information to keep track of time. In all cases, if the backbone of the network-based VPN service utilizes MPLS as its forwarding mechanism, it is preferable that MPLS-specific tools be used to measure these path 160 characteristics so as to provide an accurate measure of the data plane.
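A sketch of how the PE-side measurement might look; the probe transport is abstracted into an assumed send_test_packet helper, and jitter is approximated here as the standard deviation of the observed delays.

```python
from statistics import mean, pstdev

def measure_path(send_test_packet, count=20):
    """Send `count` test packets toward the destination PE and summarize the replies.

    send_test_packet() is assumed to return a round-trip time in milliseconds,
    or None if no reply was received for that packet.
    """
    rtts = [send_test_packet() for _ in range(count)]
    answered = [r for r in rtts if r is not None]
    loss = 1.0 - len(answered) / count                       # fraction of unanswered probes
    delay = mean(answered) if answered else None             # average round-trip delay
    jitter = pstdev(answered) if len(answered) > 1 else 0.0  # delay variation as a simple proxy
    return {"loss": loss, "delay_ms": delay, "jitter_ms": jitter}
```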
If the request 148 can be satisfied, the result should be provided by means of a PVP reply to the CE client (note that the PVP server process is expected to be stateless and should delete the computed values after a predetermined time threshold). If the PVP server process at the PE cannot get the information, then a PVP error message 150 should be returned along with an error code specifying the error root cause. Furthermore, the PE should also monitor the rate at which such requests are received on a per-PE basis and potentially silently drop the requests in excess, pace such requests, or return a PVP error code message and dampen any further requests.
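The rate policing could be as simple as a sliding-window counter; the window size, limit, and keying by requester are assumptions, since the disclosure leaves the exact policy open.

```python
import time
from collections import defaultdict, deque

class PvpRequestPolicer:
    """Track the rate of incoming PVP requests and silently drop (or reject) the excess."""

    def __init__(self, max_requests=10, window_s=60.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self.history = defaultdict(deque)  # requester identifier -> recent request timestamps

    def admit(self, requester_id: str) -> bool:
        now = time.monotonic()
        window = self.history[requester_id]
        while window and now - window[0] > self.window_s:
            window.popleft()               # expire timestamps that fall outside the window
        if len(window) >= self.max_requests:
            return False                   # excess request: drop, pace, or return a PVP error
        window.append(now)
        return True
```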
For example, one performance characteristic which is often carefully scrutinized is the transport time between nodes 130. In particular configurations, multiple routers 130 in a network synchronize their corresponding time clocks amongst themselves based on use of a synchronizing protocol such as the Network Time Protocol (NTP). The routers 130 flood the network 140 with network configuration messages 148 such as those based on the LSA/LSP to advertise status information of a network configuration change to other routers. When originating a respective network configuration message 148, a respective router generates a timestamp based on use of its synchronized clock for inclusion in a field of the network configuration message. Other routers 130 receiving the network configuration message identify a travel time attribute associated with the network configuration message over the network 140 by comparing a timestamp attribute (e.g., origination time) of a received network configuration message to their own time clock (e.g., the receiving router's time clock) to calculate a transmission time value indicating how long the network configuration message took to be conveyed over the network from the originator to a corresponding receiving node 130.
In this example, each router 130 receiving a respective network configuration message 148 identifies a travel time (or flooding time) associated with the network configuration message by comparing a respective timestamp (e.g., origination time) of the network configuration message to its own respective time clock (e.g., the receiving router's time clock) to calculate a transmission time value indicating how long the network configuration message took to be conveyed over the network from the originator router to a corresponding receiving router.
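With NTP-synchronized clocks, the flooding-time calculation reduces to a subtraction of timestamps; the field names below are illustrative.

```python
import time

def flooding_time_ms(origination_timestamp_s: float, receive_timestamp_s: float = None) -> float:
    """Transit time of a network configuration message, given synchronized clocks."""
    if receive_timestamp_s is None:
        receive_timestamp_s = time.time()   # the receiving router's own (NTP-disciplined) clock
    return (receive_timestamp_s - origination_timestamp_s) * 1000.0

# Example: an LSA whose origination timestamp lies 12 ms in the past yields roughly 12.0.
```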
Fig. 3 is an example of applying the performance characteristics to compute an optimal path in the network of Fig. 1. Referring to Fig. 3, the VPN environment 100 of Fig. 1 is shown in greater detail, including a plurality of messages 150-1..150-10 having performance attributes 152. As indicated above, the messages 150 may be sent in response to a variety of triggers. In the exemplary configuration, diagnostic probe messages 148 specifically for eliciting the messages 150 and corresponding attributes 152 are employed. Such diagnostic probe messages 148 may be part of a path verification protocol (PVP), as discussed further in the copending patent application discussed above. Also, such messages 150 may be Link State (LSA/LSP) messages, or other message traffic which includes performance attributes 152. In each of the above cases, attributes 152 indicative of performance characteristics are received by the router PE1 130-3. Exemplary router PE1 includes an interface 132 having a plurality of ports 134 for receiving and forwarding message traffic through the network 140 in the normal course of routing operations. The router PE1 also includes a diagnostic processor 140 for performing path diagnostics and validation as discussed herein. The diagnostic processor 140 includes an attribute sniffer 142 operable to identify messages 150 having attributes relevant to performance, and also operable to retrieve the attributes 152 in a non-destructive manner which does not affect other routing operations. The diagnostic processor 140 also includes a characteristic aggregator 146 for analyzing the attributes of multiple messages 150 to identify trends, and a path scheduler 144 for applying the path characteristics to routing decisions based on criteria such as QOS guarantees. For example, having identified a path 160 which provides transport across the core network 140 in 200ms, the path scheduler 144 (scheduler) may perform routing decisions to employ that path 160 for message traffic associated with a QOS guarantee of, say, 210ms. Therefore, optimal routing decisions which route traffic on a path sufficient to satisfy such QOS requirements are obtained, yet the scheduler 144 need not route such traffic on a 100ms path 160 which may be needed for more critical traffic.
TABLE I: PERFORMANCE CHARACTERISTICS (table data rendered as an image in the original publication)
The attribute sniffer 142 gathers attributes 152 for storage in the repository 170. The repository 170 stores the attributes 152 as routing information 172 according to normalized criteria such as paths, hops, and routers 130, as applicable to the performance characteristic in question. An exemplary set of performance characteristics 174 is shown in Table I, which stores transport times between various nodes 130. Thus, successive trials of performance characteristics 174 (i.e. attributes) obtained from the various hops between nodes 130-1..130-10 are stored in Table I along with the attribute values, such as transport time in the given example.
The characteristic aggregator 146 employs the performance characteristics 174 to compute path diagnostics 176 (characteristics), representing the deterministic expectations computed from the available attributes 152. As shown in Table II, an expected transport time for each hop is computable by averaging the gathered attributes 152 obtained from a series of transmissions across the hop between two particular nodes 130. The aggregate performance of a path 160 is computed by summing the average of each hop along the path 160, discussed further below.
TABLE II: PATH DIAGNOSTICS (table data rendered as an image in the original publication)
Figs. 4-8 are a flowchart of applying performance characteristics in further detail using the exemplary network of Fig. 3 and the routing information of Tables I and II, above. Referring to Figs. 1, 3 and 4-8, as well as Tables I and II, the diagnostic processor 140, configured in router PE1, is operable for performing path diagnostics as discussed further below. Such a diagnostic processor 140 is also applicable to the other routers 130-1..130-10, but is illustrated from the perspective of router PE1 alone for simplicity. Accordingly, the diagnostic processor 140 identifies a plurality of network paths as candidate paths 160 between a source and a destination, such as the paths 160-1, 160-2 between PE1 and PE3 130-3, 130-8, as depicted at step 300. In the exemplary configuration, such a path 160 denotes a PE-PE interconnection across the core network 140 and therefore involves identifying a plurality of nodes 130 along one or more candidate paths 160 through the core network 140 for monitoring and analysis. Typically, the diagnostic processor 140 identifies particular paths 160 operable to transport significant message traffic, as depicted at step 301, therefore avoiding the burden of including low-volume or an excessive number of non-contentious router connections. The attribute sniffer 142, or other process operable to receive and examine network messages (packets) 150, examines, on the identified particular paths 160, messages 150 having the attributes 152 indicative of performance characteristics, as shown at step 302. Accordingly, the attribute sniffer 142 identifies packets 150 having network routing information indicative of performance characteristics between network nodes 130, as disclosed at step 303. As indicated above, such attributes include performance-related metrics or variables such as the propagation (i.e. transport) time, delay, loss and jitter, to name several. Such identification may be by virtue of a diagnostic protocol such as PVP, or by another parsing or scanning mechanism such as identifying protocol control sequences employed by the underlying network protocol (i.e. MPLS or TCP/IP).
A check is performed at step 304 to determine whether a protocol such as PVP is in use. If PVP is in use, then the received (i.e. sniffed) messages are diagnostic probe messages adapted to gather and report routing information, as depicted at step 305. Routers 130 enabled with such diagnostic probe capability (i.e. PVP enabled) employ diagnostic probe messages 148 on the identified particular paths 160, in which the diagnostic probe messages 148 are operable to trigger sending of a probe reply 150 from other destination routers 130. Accordingly, the router PE1 sends a plurality of diagnostic probe messages 148 to at least one node 130 along the candidate path, as shown at step 306. Probe messages 148 evoke a responsive probe reply 150 from the router 130 on the candidate path 160, and accordingly, the diagnostic processor 140 concludes, if the probe reply is not received, a connectivity issue along the identified particular path 160, as shown at step 307. Otherwise, the characteristic aggregator 146, responsive to the attribute sniffer 142, identifies the incoming messages 150 having attributes indicative of performance characteristics, as shown at step 308. As the incoming messages 150 may be probe replies, LSA messages, or other attribute-bearing messages, gathering the routing information may result in the characteristic aggregator 146 receiving various messages such as Link State Advertisement (LSA) messages, probe messages, or router traffic messages, in which the messages include the attributes indicative of routing information, as shown at step 309.
Accordingly, the characteristic aggregator 146 scans the examined messages to retrieve the attributes 152, as depicted at step 310. Scanning involves parsing the attributes to extract the routing information corresponding to the performance characteristics, as shown at step 311. Since the attributes are gathered by the messages 150 as they traverse the nodes 130 of the network, the attributes contain routing information corresponding to characteristics between a particular node 130 and one or more other nodes 130. The aggregator 146 accumulates the gathered network routing information from the parsed attributes, as shown at step 312. Accumulating in this manner includes aggregating the identified routing information according to one or more of the performance characteristics, as depicted at step 312, and organizing the received attributes 152, such as attributes parsed from diagnostic probe replies 150, according to the node 130 from which they were received, each of the nodes defining a hop along the path 160, as depicted at step 313. A series of messages 150 results in an aggregation of attributes arranged by performance characteristics 174 and hops, enabling further processing on a path basis, as shown in Table I. The aggregator 146 analyzes the attributes 152 from the organized probe replies 150 corresponding to the sent diagnostic probe messages 148 to compute routing characteristics of the hops along the path, as shown at step 314.
The aggregator 146 then analyzes the performance characteristics 174 specified by the attributes 152 to determine performance attributes of a particular path 160, as depicted at step 315. The aggregator 146 analyzes the network routing information 172 to identify path 160 characteristics applicable to an entire path through the core 140, thus encompassing the constituent hops included in that path, as disclosed at step 316. The aggregator stores the extracted, aggregated routing information 172 as path diagnostics 176 (Table II) according to the performance characteristics 174 between the respective nodes 130, in the repository 170, as depicted at step 317.
The path scheduler 144 applies the aggregated routing information 172 to routing decisions for network paths 160 between the network nodes 130 by identifying the network paths 160 corresponding to favorable performance characteristics, in which the network paths 160 are each defined by a plurality of the network nodes 130, as disclosed at step 318. Therefore, the performance characteristics of each of the internodal hops of Table I, for example, determine the optimal path by adding or summing the characteristics of the respective hops.
Therefore, the scheduler 144 applies the routing information 172 by computing, based on a set of successively identified messages 150, expected performance between the respective nodes 130, as depicted at step 319. The scheduler 144 computes, from the extracted routing information 172 and for each of the candidate paths 160, an aggregate performance indicative of message traffic performance between a particular source and destination (i.e. typically PE routers 130), as shown at step 320. In the exemplary scenario in Fig. 3 and Tables I and II, the path scheduler 144 computes, for each of the candidate paths 160-1 and 160-2, an aggregate transport time indicative of message traffic performance between the source and destination (i.e. PE1 to PE3). In other words, using transport time as the performance characteristic, the path scheduler 144 applies the network routing information 172 to the plurality of paths 160 between nodes to compute a propagation time between the selected nodes PE1 and PE3. To compute the optimal path in a particular context (i.e. a guaranteed delivery scenario), guaranteed delivery parameters are applied by specifying the attributes to be measured according to predetermined QOS criteria, as depicted at step 322. The QOS criteria indicate which performance characteristics are applied and the particular performance values required, such as transport time. Accordingly, the path scheduler 144 enumerates a set of quality of service (QOS) tier levels, in which the QOS levels are indicative of an expected throughput performance, as shown at step 323, and associates each of the candidate paths 160 with a QOS level, as depicted at step 324. The path attributes allow the path scheduler 144 to qualify the paths as satisfying a particular QOS level, such as a transport time from PE1 to PE3 in 100ms, for example. The path scheduler compares the computed performance attributes to the associated QOS level to selectively route the message traffic over a particular path 160, as disclosed at step 325.
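Tying the steps together with numbers consistent with the example that follows (only the 120ms and 100ms path totals appear in the text; the per-hop breakdown here is hypothetical):

```python
# Hypothetical per-hop averages (ms) whose sums match the path totals discussed next.
hop_avg = {("PE1", "P1"): 40.0, ("P1", "P2"): 40.0, ("P2", "PE3"): 40.0,   # path 160-1 -> 120 ms
           ("PE1", "P3"): 50.0, ("P3", "PE3"): 50.0}                        # path 160-2 -> 100 ms

def aggregate(path):
    """Aggregate transport time of a path given the per-hop averages."""
    return sum(hop_avg[(a, b)] for a, b in zip(path, path[1:]))

paths = {"160-1": ["PE1", "P1", "P2", "PE3"], "160-2": ["PE1", "P3", "PE3"]}
qos_limit_ms = 100.0
eligible = {name: aggregate(p) for name, p in paths.items() if aggregate(p) <= qos_limit_ms}
# eligible == {"160-2": 100.0}: only path 160-2 can support the 100ms QOS guarantee.
```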
The path scheduler 144 may then route network traffic on paths 160 having performance characteristics consistent with a particular QOS criteria, the performance characteristics including at least one of transport time, packet loss, packet delay and jitter, as depicted at step 326. Therefore, in the example in Fig. 3, the path scheduler 144 denotes a particular candidate path 160 as an optimal path based on the aggregate transport time, as depicted at step 327. For example, the path 160-2 would be chosen for the QOS traffic requiring 100ms transport from PE1-PE3 because path 160-1 exhibits path diagnostics of 120ms and cannot support such performance. Those skilled in the art should readily appreciate that the programs and methods for identifying network routing paths as defined herein are deliverable to a processing device in many forms, including but not limited to a) information permanently stored on non-writeable storage media such as ROM devices, b) information alterably stored on writeable storage media such as floppy disks, magnetic tapes, CDs, RAM devices, and other magnetic and optical media, or c) information conveyed to a computer through communication media, for example using baseband signaling or broadband signaling techniques, as in an electronic network such as the Internet or telephone modem lines. The operations and methods may be implemented in a software executable object or as a set of instructions embedded in a carrier wave. Alternatively, the operations and methods disclosed herein may be embodied in whole or in part using hardware components, such as Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), state machines, controllers or other hardware components or devices, or a combination of hardware, software, and firmware components.
While the system and method for identifying network routing paths has been particularly shown and described with references to embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims. Accordingly, the present invention is not intended to be limited except by the following claims.

Claims

CLAIMS What is claimed is:
1. A method of identifying network routing paths comprising: gathering network routing information indicative of performance characteristics between network nodes; aggregating the identified routing information according to at least one performance characteristic; and applying the aggregated routing information to routing decisions for network paths between the network nodes by identifying the network paths corresponding to favorable performance characteristics, the network paths defined by a plurality of the network nodes.
2. The method of claim 1 wherein aggregating further comprises: identifying messages having attributes indicative of performance characteristics; parsing the attributes to extract the routing information corresponding to the performance characteristics, routing information corresponding to characteristics between a particular node and at least one other node; and storing the extracted routing information according to the performance characteristics between the respective nodes.
3. The method of claim 2 wherein applying further comprises: identifying a plurality of network paths as candidate paths between a source and a destination; computing, from the extracted routing information, for each of the candidate paths, an aggregate performance indicative of message traffic performance between the source and destination for each of the candidate paths; and
denoting a particular candidate path as an optimal path based on the computed aggregate performance.
4. The method of claim 3 wherein applying further comprises: specifying the attributes to be measured according to predetermined QOS criteria; and routing network traffic on paths having performance characteristics consistent with a particular QOS criteria, the performance characteristics including at least one of transport time, packet loss, packet delay and jitter.
5. The method of claim 1 wherein gathering further comprises: identifying particular paths operable to transport significant message traffic; examining, on the identified particular paths, messages having the attributes indicative of performance characteristics; and scanning the examined messages to retrieve the attributes.
6. The method of claim 5 wherein the messages are diagnostic probe messages adapted to gather and report routing information, further comprising: sending a set of diagnostic probe messages to the identified particular paths, the diagnostic probe messages operable to trigger sending of a probe reply; analyzing, if a probe reply is received, the probe reply to determine performance attributes of the particular path; and concluding, if the probe reply is not received, a connectivity issue along the identified particular path.
7. The method of claim 1 further comprising: identifying a plurality of nodes along a candidate path;
sending a plurality of diagnostic probe messages to at least one node along the candidate path; organizing the received probe replies according to the node from which it was received, each of the nodes defining a hop along the path; and analyzing the organized probe replies corresponding to the sent diagnostic probe messages to compute routing characteristics of the hops along the path.
8. The method of claim 7 further comprising computing, based on a set of successively identified messages, expected performance between the respective nodes.
9. The method of claim 8 wherein gathering network routing information further includes: receiving Link State Advertisement (LSA) messages, the LSA messages having attributes indicative of routing information; and accumulating the gathered network routing information; and analyzing the network routing information to identify path characteristics.
10. The method of claim 1 further comprising: identifying a plurality of network paths as candidate paths between a source and a destination; applying the network routing information to the plurality of paths between nodes to compute a propagation time between the selected nodes; computing, for each of the candidate paths, an aggregate transport time indicative of message traffic performance between the source and destination for each of the candidate paths; and denoting a particular candidate path as an optimal path based on the aggregate transport time.
11. The method of claim 10 wherein applying further comprises: enumerating a set of quality of service (QOS) tier levels, the QOS levels indicative of an expected throughput performance; associating each of the paths with a QOS level; and comparing the computed performance attributes to the associated QOS level to selectively route message traffic over a particular path.
12. A data communications device having a diagnostic processor for analyzing network routing paths comprising: an attribute sniffer operable to gather network routing information indicative of performance characteristics between network nodes; a characteristic aggregator operable to aggregate the identified routing information according to at least one performance characteristic; and a path scheduler operable to apply the aggregated routing information to routing decisions for network paths between the network nodes by identifying the network paths corresponding to favorable performance characteristics, the network paths defined by a plurality of the network nodes.
13. The data communications device of claim 12 wherein the characteristic aggregator is further operable to: parse the attributes to extract the routing information corresponding to the performance characteristics, routing information corresponding to characteristics between a particular node and at least one other node, further comprising a repository operable to store the extracted routing information according to the performance characteristics between the respective nodes.
14. The data communications device of claim 13 wherein the path scheduler is operable to: identify a plurality of network paths as candidate paths between a source and a destination; compute, from the extracted routing information, for each of the candidate paths, an aggregate performance indicative of message traffic performance between the source and destination for each of the candidate paths; and denote a particular candidate path as an optimal path based on the computed aggregate performance.
15. The data communications device of claim 14 further comprising a Quality of Service (QOS) specification indicative of QOS criteria, the path scheduler operable to: specify the attributes to be measured according to predetermined QOS criteria; and route network traffic on paths having performance characteristics consistent with a particular QOS criteria, the performance characteristics including at least one of transport time, packet loss, packet delay and jitter.
16. The data communications device of claim 12 wherein the messages are diagnostic probe messages according to a predetermined protocol adapted to gather and report routing information, wherein the diagnostic processor is further operable to: identify particular paths operable to transport significant message traffic; send a set of diagnostic probe messages to the identified particular paths, the diagnostic probe messages operable to trigger sending of a probe reply; analyze, if a probe reply is received, the probe reply to determine performance attributes of the particular path; and conclude, if the probe reply is not received, a connectivity issue along the identified particular path.
17. The data communications device of claim 12 wherein the diagnostic processor is further operable to: identify a plurality of nodes along a candidate path; send a plurality of diagnostic probe messages to at least one node along the candidate path; organize the received probe replies according to the node from which it was received, each of the nodes defining a hop along the path; analyze the organized probe replies corresponding to the sent diagnostic probe messages to compute routing characteristics of the hops along the path; and compute, based on a set of successively identified messages, expected performance between the respective nodes.
18. The data communications device of claim 12 wherein the diagnostic processor is further operable to: identify a plurality of network paths as candidate paths between a source and a destination; apply the network routing information to the plurality of paths between nodes to compute a propagation time between the selected nodes; compute, for each of the candidate paths, an aggregate transport time indicative of message traffic performance between the source and destination for each of the candidate paths; and denote a particular candidate path as an optimal path based on the aggregate transport time.
19. The data communications device of claim 18 wherein the characteristic aggregator is further operable to: enumerate a set of quality of service (QOS) tier levels, the QOS levels indicative of an expected throughput performance; associate each of the paths with a QOS level; compare the computed performance attributes to the associated QOS level to selectively route message traffic over a particular path; and perform routing decisions for routing the message traffic according to a QOS level attributable to the message traffic.
20. A computer program product having a computer readable medium operable to store computer program logic embodied in computer program code encoded thereon for identifying network routing paths comprising: computer program code for gathering network routing information indicative of performance characteristics between network nodes; computer program code for identifying particular paths operable to transport significant message traffic; computer program code for examining, on the identified particular paths, messages having the attributes indicative of performance characteristics; computer program code for scanning the examined messages to retrieve the attributes; computer program code for aggregating the identified routing information according to at least one performance characteristic; and computer program code for applying the aggregated routing information to routing decisions for the identified particular paths between the network nodes by identifying the network paths corresponding to favorable performance characteristics, the network paths defined by a plurality of the network nodes.
21. A data communications device having a diagnostic processor for analyzing network routing paths comprising: means for gathering network routing information indicative of performance characteristics between network nodes; means for aggregating the identified routing information according to at least one performance characteristic; means for computing, from the gathered routing information, for each of the candidate paths, an aggregate performance indicative of message traffic performance between the source and destination for each of the candidate paths; means for denoting a particular candidate path as an optimal path based on the computed aggregate performance; and means for applying the aggregated routing information to routing decisions for network paths between the network nodes by identifying the network paths corresponding to favorable performance characteristics, the network paths defined by a plurality of the network nodes.
PCT/US2006/010379 2005-03-22 2006-03-22 System and methods for identifying network path performance WO2006102398A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP06739254A EP1861963B1 (en) 2005-03-22 2006-03-22 System and methods for identifying network path performance
AT06739254T ATE511726T1 (en) 2005-03-22 2006-03-22 SYSTEM AND METHOD FOR IDENTIFYING NETWORK PATH PERFORMANCE
CN200680004006XA CN101151847B (en) 2005-03-22 2006-03-22 System and methods for identifying network path performance

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/086,007 US20060215577A1 (en) 2005-03-22 2005-03-22 System and methods for identifying network path performance
US11/086,007 2005-03-22

Publications (2)

Publication Number Publication Date
WO2006102398A2 true WO2006102398A2 (en) 2006-09-28
WO2006102398A3 WO2006102398A3 (en) 2007-11-22

Family

ID=37024560

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/010379 WO2006102398A2 (en) 2005-03-22 2006-03-22 System and methods for identifying network path performance

Country Status (5)

Country Link
US (1) US20060215577A1 (en)
EP (1) EP1861963B1 (en)
CN (1) CN101151847B (en)
AT (1) ATE511726T1 (en)
WO (1) WO2006102398A2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2244423A1 (en) * 2009-04-23 2010-10-27 Vodafone Group PLC Routing traffic in a cellular communication network
WO2011103913A1 (en) 2010-02-23 2011-09-01 Telefonaktiebolaget L M Ericsson (Publ) Summarisation in a multi-domain network
WO2013130330A1 (en) * 2012-02-28 2013-09-06 Google Inc. Identifying an egress point to a network location
EP2955885A1 (en) * 2014-04-14 2015-12-16 Huawei Technologies Co., Ltd. Method and apparatus for determining traffic forwarding path and communications system

Families Citing this family (107)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7189027B2 (en) * 2003-10-01 2007-03-13 Infiltrator Systems, Inc. Corrugated leaching chamber
US7583593B2 (en) * 2004-12-01 2009-09-01 Cisco Technology, Inc. System and methods for detecting network failure
US7990888B2 (en) * 2005-03-04 2011-08-02 Cisco Technology, Inc. System and methods for network reachability detection
US7675856B2 (en) * 2005-03-24 2010-03-09 Microsoft Corporation Bandwidth estimation in broadband access networks
US8189481B2 (en) * 2005-04-08 2012-05-29 Avaya, Inc QoS-based routing for CE-based VPN
US9197533B1 (en) 2005-05-09 2015-11-24 Cisco Technology, Inc. Technique for maintaining and enforcing relative policies with thresholds
US20070058660A1 (en) * 2005-07-22 2007-03-15 Interdigital Technology Corporation Wireless communication method and apparatus for controlling access to Aloha slots
WO2007067693A2 (en) * 2005-12-07 2007-06-14 Tektronix, Inc. Systems and methods for discovering sctp associations in a network
US7983174B1 (en) 2005-12-19 2011-07-19 Cisco Technology, Inc. Method and apparatus for diagnosing a fault in a network path
US7912934B1 (en) 2006-01-09 2011-03-22 Cisco Technology, Inc. Methods and apparatus for scheduling network probes
JP4583312B2 (en) * 2006-01-30 2010-11-17 富士通株式会社 Communication status determination method, communication status determination system, and determination device
US7852778B1 (en) 2006-01-30 2010-12-14 Juniper Networks, Inc. Verification of network paths using two or more connectivity protocols
US7835378B2 (en) * 2006-02-02 2010-11-16 Cisco Technology, Inc. Root node redundancy for multipoint-to-multipoint transport trees
US9094257B2 (en) 2006-06-30 2015-07-28 Centurylink Intellectual Property Llc System and method for selecting a content delivery network
US20080002711A1 (en) * 2006-06-30 2008-01-03 Bugenhagen Michael K System and method for access state based service options
US8184549B2 (en) 2006-06-30 2012-05-22 Embarq Holdings Company, LLP System and method for selecting network egress
US8717911B2 (en) 2006-06-30 2014-05-06 Centurylink Intellectual Property Llc System and method for collecting network performance information
US8194643B2 (en) 2006-10-19 2012-06-05 Embarq Holdings Company, Llc System and method for monitoring the connection of an end-user to a remote network
US8488447B2 (en) 2006-06-30 2013-07-16 Centurylink Intellectual Property Llc System and method for adjusting code speed in a transmission path during call set-up due to reduced transmission performance
US8289965B2 (en) 2006-10-19 2012-10-16 Embarq Holdings Company, Llc System and method for establishing a communications session with an end-user based on the state of a network connection
US8307065B2 (en) 2006-08-22 2012-11-06 Centurylink Intellectual Property Llc System and method for remotely controlling network operators
US8238253B2 (en) 2006-08-22 2012-08-07 Embarq Holdings Company, Llc System and method for monitoring interlayer devices and optimizing network performance
US7843831B2 (en) 2006-08-22 2010-11-30 Embarq Holdings Company Llc System and method for routing data on a packet network
US8223655B2 (en) 2006-08-22 2012-07-17 Embarq Holdings Company, Llc System and method for provisioning resources of a packet network based on collected network performance information
US8064391B2 (en) 2006-08-22 2011-11-22 Embarq Holdings Company, Llc System and method for monitoring and optimizing network performance to a wireless device
US9479341B2 (en) 2006-08-22 2016-10-25 Centurylink Intellectual Property Llc System and method for initiating diagnostics on a packet network node
US8576722B2 (en) * 2006-08-22 2013-11-05 Centurylink Intellectual Property Llc System and method for modifying connectivity fault management packets
US8743703B2 (en) 2006-08-22 2014-06-03 Centurylink Intellectual Property Llc System and method for tracking application resource usage
US8015294B2 (en) 2006-08-22 2011-09-06 Embarq Holdings Company, LP Pin-hole firewall for communicating data packets on a packet network
US8224255B2 (en) 2006-08-22 2012-07-17 Embarq Holdings Company, Llc System and method for managing radio frequency windows
US8537695B2 (en) 2006-08-22 2013-09-17 Centurylink Intellectual Property Llc System and method for establishing a call being received by a trunk on a packet network
US8531954B2 (en) 2006-08-22 2013-09-10 Centurylink Intellectual Property Llc System and method for handling reservation requests with a connection admission control engine
US8407765B2 (en) 2006-08-22 2013-03-26 Centurylink Intellectual Property Llc System and method for restricting access to network performance information tables
US8130793B2 (en) 2006-08-22 2012-03-06 Embarq Holdings Company, Llc System and method for enabling reciprocal billing for different types of communications over a packet network
US7684332B2 (en) 2006-08-22 2010-03-23 Embarq Holdings Company, Llc System and method for adjusting the window size of a TCP packet through network elements
US8189468B2 (en) 2006-10-25 2012-05-29 Embarq Holdings, Company, LLC System and method for regulating messages between networks
US8274905B2 (en) 2006-08-22 2012-09-25 Embarq Holdings Company, Llc System and method for displaying a graph representative of network performance over a time period
US8750158B2 (en) 2006-08-22 2014-06-10 Centurylink Intellectual Property Llc System and method for differentiated billing
US8619600B2 (en) 2006-08-22 2013-12-31 Centurylink Intellectual Property Llc System and method for establishing calls over a call path having best path metrics
US8144587B2 (en) 2006-08-22 2012-03-27 Embarq Holdings Company, Llc System and method for load balancing network resources using a connection admission control engine
US7904533B1 (en) 2006-10-21 2011-03-08 Sprint Communications Company L.P. Integrated network and customer database
US7751392B1 (en) 2007-01-05 2010-07-06 Sprint Communications Company L.P. Customer link diversity monitoring
US8355316B1 (en) 2009-12-16 2013-01-15 Sprint Communications Company L.P. End-to-end network monitoring
US7839796B2 (en) * 2007-03-14 2010-11-23 Cisco Technology, Inc. Monitor for multi-protocol label switching (MPLS) networks
US8289878B1 (en) 2007-05-09 2012-10-16 Sprint Communications Company L.P. Virtual link mapping
US8111627B2 (en) * 2007-06-29 2012-02-07 Cisco Technology, Inc. Discovering configured tunnels between nodes on a path in a data communications network
US7830816B1 (en) * 2007-08-13 2010-11-09 Sprint Communications Company L.P. Network access and quality of service troubleshooting
US7636789B2 (en) * 2007-11-27 2009-12-22 Microsoft Corporation Rate-controllable peer-to-peer data stream routing
US7831709B1 (en) 2008-02-24 2010-11-09 Sprint Communications Company L.P. Flexible grouping for port analysis
US8068425B2 (en) 2008-04-09 2011-11-29 Embarq Holdings Company, Llc System and method for using network performance information to determine improved measures of path states
US7904553B1 (en) 2008-11-18 2011-03-08 Sprint Communications Company L.P. Translating network data into customer availability
US8014275B1 (en) 2008-12-15 2011-09-06 AT&T Intellectual Property I, L.P. Devices, systems, and/or methods for monitoring IP network equipment
CN101494802B (en) * 2008-12-26 2011-04-20 China Mobile Group Sichuan Co., Ltd. Method for automatically reclaiming transmission network circuit resources
US8274914B2 (en) 2009-02-03 2012-09-25 Broadcom Corporation Switch and/or router node advertising
US8301762B1 (en) 2009-06-08 2012-10-30 Sprint Communications Company L.P. Service grouping for network reporting
US8458323B1 (en) 2009-08-24 2013-06-04 Sprint Communications Company L.P. Associating problem tickets based on an integrated network and customer database
US8644146B1 (en) 2010-08-02 2014-02-04 Sprint Communications Company L.P. Enabling user defined network change leveraging as-built data
US9112535B2 (en) * 2010-10-06 2015-08-18 Cleversafe, Inc. Data transmission utilizing partitioning and dispersed storage error encoding
US9385917B1 (en) 2011-03-31 2016-07-05 Amazon Technologies, Inc. Monitoring and detecting causes of failures of network paths
US8937946B1 (en) * 2011-10-24 2015-01-20 Packet Design, Inc. System and method for identifying tunnel information without frequently polling all routers for all tunnel information
US9014190B2 (en) 2011-11-11 2015-04-21 Itron, Inc. Routing communications based on node availability
US9305029B1 (en) 2011-11-25 2016-04-05 Sprint Communications Company L.P. Inventory centric knowledge management
US20130232193A1 (en) * 2012-03-04 2013-09-05 Zafar Ali Control-Plane Interface Between Layers in a Multilayer Network
EP2699040B1 (en) 2012-08-06 2015-04-08 Itron, Inc. Multi-media multi-modulation and multi-data rate mesh network
US8902780B1 (en) 2012-09-26 2014-12-02 Juniper Networks, Inc. Forwarding detection for point-to-multipoint label switched paths
US9306841B2 (en) 2012-11-05 2016-04-05 Cisco Technology, Inc. Enabling dynamic routing topologies in support of real-time delay traffic
US9258234B1 (en) 2012-12-28 2016-02-09 Juniper Networks, Inc. Dynamically adjusting liveliness detection intervals for periodic network communications
US8953460B1 (en) 2012-12-31 2015-02-10 Juniper Networks, Inc. Network liveliness detection using session-external communications
US9385933B2 (en) * 2013-02-05 2016-07-05 Cisco Technology, Inc. Remote probing for remote quality of service monitoring
CN104365061B (en) * 2013-05-30 2017-12-15 Huawei Technologies Co., Ltd. Scheduling method, apparatus, and system
US9553797B2 (en) * 2014-03-12 2017-01-24 International Business Machines Corporation Message path selection within a network
US9813259B2 (en) * 2014-05-13 2017-11-07 Cisco Technology, Inc. Probing available bandwidth along a network path
US10439909B2 (en) * 2014-05-16 2019-10-08 Cisco Technology, Inc. Performance monitoring in a multi-site environment
US9906425B2 (en) * 2014-07-23 2018-02-27 Cisco Technology, Inc. Selective and dynamic application-centric network measurement infrastructure
US10560314B2 (en) * 2014-09-16 2020-02-11 CloudGenix, Inc. Methods and systems for application session modeling and prediction of granular bandwidth requirements
US9769017B1 (en) 2014-09-26 2017-09-19 Juniper Networks, Inc. Impending control plane disruption indication using forwarding plane liveliness detection protocols
US10038601B1 (en) * 2014-09-26 2018-07-31 Amazon Technologies, Inc. Monitoring a multi-tier network fabric
WO2016070901A1 (en) * 2014-11-03 2016-05-12 Telefonaktiebolaget L M Ericsson (Publ) Multi-path time synchronization
US10402765B1 (en) 2015-02-17 2019-09-03 Sprint Communications Company L.P. Analysis for network management using customer provided information
US9629033B2 (en) * 2015-06-16 2017-04-18 Cisco Technology, Inc. System and method to facilitate service hand-outs using user equipment groups in a network environment
EP3357196B1 (en) 2015-09-30 2019-11-06 British Telecommunications public limited company Analysis of network performance
WO2017055227A1 (en) 2015-09-30 2017-04-06 British Telecommunications Public Limited Company Analysis of network performance
CN107852347B (en) * 2015-10-08 2019-02-15 British Telecommunications public limited company Method and apparatus for analysing the performance of a network comprising a plurality of network nodes
CN106878036A (en) * 2015-12-10 2017-06-20 China Telecom Corporation Limited Method, management server, and system for improving network resource efficiency
US10374936B2 (en) 2015-12-30 2019-08-06 Juniper Networks, Inc. Reducing false alarms when using network keep-alive messages
US10439956B1 (en) * 2016-06-23 2019-10-08 8x8, Inc. Network path selection for routing data
US10250450B2 (en) 2016-06-29 2019-04-02 Nicira, Inc. Distributed network troubleshooting using simultaneous multi-point packet capture
US10397085B1 (en) 2016-06-30 2019-08-27 Juniper Networks, Inc. Offloading heartbeat responses message processing to a kernel of a network device
US10230543B2 (en) 2016-07-20 2019-03-12 Cisco Technology, Inc. Reducing data transmissions in a virtual private network
US10243827B2 (en) 2016-09-26 2019-03-26 Intel Corporation Techniques to use a network service header to monitor quality of service
CN108540380B (en) * 2017-03-02 2021-08-20 Huawei Technologies Co., Ltd. Multi-sub-stream network transmission method and device
US20180331946A1 (en) * 2017-05-09 2018-11-15 vIPtela Inc. Routing network traffic based on performance
US20190320383A1 (en) * 2018-04-16 2019-10-17 General Electric Company Methods and apparatus for dynamic network evaluation and network selection
US10715409B2 (en) * 2018-06-27 2020-07-14 Microsoft Technology Licensing, Llc Heuristics for end to end digital communication performance measurement
US11750441B1 (en) 2018-09-07 2023-09-05 Juniper Networks, Inc. Propagating node failure errors to TCP sockets
CN109561028B (en) * 2019-01-07 2023-04-07 China United Network Communications Group Co., Ltd. Method and device for selecting a transmission path based on traffic engineering
CN111865781B (en) * 2019-04-25 2022-10-11 EMC IP Holding Company LLC Method, apparatus, and computer program product for path optimization
US11310135B2 (en) 2019-06-13 2022-04-19 Toyota Motor North America, Inc. Managing transport network data access
US20200396787A1 (en) 2019-06-13 2020-12-17 Toyota Motor North America, Inc. Managing transport network data access
CN113839874A (en) * 2019-08-02 2021-12-24 Huawei Technologies Co., Ltd. Method and device for obtaining a routing table entry
CN110597510B (en) 2019-08-09 2021-08-20 Huawei Technologies Co., Ltd. Dynamic interface layout method and device
CN112543144A (en) * 2019-09-20 2021-03-23 Beijing Huawei Digital Technologies Co., Ltd. Link attribute determination method, route calculation method, and device
TWI756998B (en) * 2019-12-20 2022-03-01 Niantic, Inc. Data hierarchy protocol for data transmission pathway selection
CN113300914A (en) * 2021-06-28 2021-08-24 Beijing Zitiao Network Technology Co., Ltd. Network quality monitoring method, device, system, electronic equipment, and storage medium
US12010012B2 (en) * 2021-11-05 2024-06-11 At&T Intellectual Property I, L.P. Application-aware BGP path selection and forwarding
US11929907B2 (en) 2022-03-08 2024-03-12 T-Mobile Usa, Inc. Endpoint assisted selection of routing paths over multiple networks
CN117135059B (en) * 2023-10-25 2024-02-09 Suzhou MetaBrain Intelligent Technology Co., Ltd. Network topology structure, construction method, routing algorithm, device, and medium

Family Cites Families (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9521831D0 (en) * 1995-10-25 1996-01-03 Newbridge Networks Corp Crankback and loop detection in ATM SVC routing
US6222824B1 (en) * 1998-04-24 2001-04-24 International Business Machines Corporation Statistical call admission control
JP3623680B2 (en) * 1999-01-11 2005-02-23 Hitachi, Ltd. Network system having path verification function, path management apparatus, and exchange
US6856627B2 (en) * 1999-01-15 2005-02-15 Cisco Technology, Inc. Method for routing information over a network
US6477522B1 (en) * 1999-06-10 2002-11-05 Gateway, Inc. Dynamic performance based server selection
US6813240B1 (en) * 1999-06-11 2004-11-02 Mci, Inc. Method of identifying low quality links in a telecommunications network
US6662223B1 (en) * 1999-07-01 2003-12-09 Cisco Technology, Inc. Protocol to coordinate network end points to measure network latency
US6621670B2 (en) * 1999-08-12 2003-09-16 Kabushiki Kaisha Sanyo Denki Seisakusho Ground fault protection circuit for discharge tube lighting circuit
JP2001217839A (en) * 2000-01-31 2001-08-10 Fujitsu Ltd Node device
JP3575381B2 (en) * 2000-03-24 2004-10-13 NEC Corporation Link state routing communication device and link state routing communication method
US20020093954A1 (en) * 2000-07-05 2002-07-18 Jon Weil Failure protection in a communications network
WO2002023934A1 (en) * 2000-09-15 2002-03-21 Mspect, Inc. Wireless network monitoring
US7146630B2 (en) * 2000-09-22 2006-12-05 Narad Networks, Inc. Broadband system with intelligent network devices
US7336613B2 (en) * 2000-10-17 2008-02-26 Avaya Technology Corp. Method and apparatus for the assessment and optimization of network traffic
US20020118636A1 (en) * 2000-12-20 2002-08-29 Phelps Peter W. Mesh network protection using dynamic ring
US20040179471A1 (en) * 2001-03-07 2004-09-16 Adisak Mekkittikul Bi-directional flow-switched ring
US20030055925A1 (en) * 2001-08-06 2003-03-20 Mcalinden Paul Discovering client capabilities
US7113485B2 (en) * 2001-09-04 2006-09-26 Corrigent Systems Ltd. Latency evaluation in a ring network
US6834139B1 (en) * 2001-10-02 2004-12-21 Cisco Technology, Inc. Link discovery and verification procedure using loopback
US7561517B2 (en) * 2001-11-02 2009-07-14 Internap Network Services Corporation Passive route control of data networks
US7330435B2 (en) * 2001-11-29 2008-02-12 Iptivia, Inc. Method and system for topology construction and path identification in a routing domain operated according to a link state routing protocol
US7099277B2 (en) * 2002-02-20 2006-08-29 Mitsubishi Electric Research Laboratories, Inc. Dynamic optimal path selection in multiple communications networks
US6744774B2 (en) * 2002-06-27 2004-06-01 Nokia, Inc. Dynamic routing over secure networks
US20050254429A1 (en) * 2002-06-28 2005-11-17 Takeshi Kato Management node device, node device, network configuration management system, network configuration management method, node device control method, management node device control method
US20040132409A1 (en) * 2002-08-28 2004-07-08 Siemens Aktiengesellschaft Test method for message paths in communications networks and redundant network arrangements
US7881214B2 (en) * 2002-10-25 2011-02-01 General Instrument Corporation Method for performing remote testing of network using IP measurement protocol packets
US7584298B2 (en) * 2002-12-13 2009-09-01 Internap Network Services Corporation Topology aware route control
CA2422258A1 (en) * 2003-03-14 2004-09-14 Alcatel Canada Inc. Ethernet route trace
US7394809B2 (en) * 2003-03-31 2008-07-01 Intel Corporation Method and apparatus for packet classification using a forest of hash tables data structure
CN1188984C (en) * 2003-07-11 2005-02-09 Tsinghua University Selection method based on path delay probability distribution
US7085290B2 (en) * 2003-09-09 2006-08-01 Harris Corporation Mobile ad hoc network (MANET) providing connectivity enhancement features and related methods
US7466655B1 (en) * 2003-09-16 2008-12-16 Cisco Technology, Inc. Ant-based method for discovering a network path that satisfies a quality of service metric
US7382738B2 (en) * 2003-11-24 2008-06-03 Nortel Networks Limited Method and apparatus for computing metric information for abstracted network links
CN1299478C (en) * 2004-03-26 2007-02-07 Tsinghua University Route searching based on node degree in a radio self-organizing network and maintenance method thereof
US7733856B2 (en) * 2004-07-15 2010-06-08 Alcatel-Lucent Usa Inc. Obtaining path information related to a virtual private LAN services (VPLS) based network
US7583593B2 (en) * 2004-12-01 2009-09-01 Cisco Technology, Inc. System and methods for detecting network failure

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4807705A (en) 1987-09-11 1989-02-28 Cameron Iron Works Usa, Inc. Casing hanger with landing shoulder seal insert
US20020145981A1 (en) 2001-04-10 2002-10-10 Eric Klinker System and method to assure network service levels with intelligent routing
US20020165957A1 (en) 2001-05-02 2002-11-07 Devoe Jiva Gandhara Intelligent dynamic route selection based on active probing of network operational characteristics
US11490406B2 (en) 2019-05-02 2022-11-01 Wilus Institute Of Standards And Technology Inc. Method, device, and system for downlink data reception and HARQ-ACK transmission in wireless communication system

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2244423A1 (en) * 2009-04-23 2010-10-27 Vodafone Group PLC Routing traffic in a cellular communication network
WO2011103913A1 (en) 2010-02-23 2011-09-01 Telefonaktiebolaget L M Ericsson (Publ) Summarisation in a multi-domain network
US9288111B2 (en) 2010-02-23 2016-03-15 Telefonaktiebolaget Lm Ericsson (Publ) Summarization in a multi-domain network
WO2013130330A1 (en) * 2012-02-28 2013-09-06 Google Inc. Identifying an egress point to a network location
US9143429B2 (en) 2012-02-28 2015-09-22 Google Inc. Identifying an egress point to a network location
EP2955885A1 (en) * 2014-04-14 2015-12-16 Huawei Technologies Co., Ltd. Method and apparatus for determining traffic forwarding path and communications system

Also Published As

Publication number Publication date
CN101151847A (en) 2008-03-26
US20060215577A1 (en) 2006-09-28
ATE511726T1 (en) 2011-06-15
EP1861963A4 (en) 2009-12-30
EP1861963B1 (en) 2011-06-01
EP1861963A2 (en) 2007-12-05
CN101151847B (en) 2012-10-10
WO2006102398A3 (en) 2007-11-22

Similar Documents

Publication Publication Date Title
EP1861963B1 (en) System and methods for identifying network path performance
US10644977B2 (en) Scalable distributed end-to-end performance delay measurement for segment routing policies
EP1891526B1 (en) System and methods for providing a network path verification protocol
US9769070B2 (en) System and method of providing a platform for optimizing traffic through a computer network with distributed routing domains interconnected through data center interconnect links
US7561517B2 (en) Passive route control of data networks
US8139475B2 (en) Method and system for fault and performance recovery in communication networks, related network and computer program product therefor
US7584298B2 (en) Topology aware route control
US7606160B2 (en) System and method to provide routing control of information over networks
US9654383B2 (en) Route optimization using measured congestion
US10637767B2 (en) Determination and use of link performance measures
US7222190B2 (en) System and method to provide routing control of information over data networks
US7269157B2 (en) System and method to assure network service levels with intelligent routing
US20070064611A1 (en) Method for monitoring packet loss ratio
KR20140088206A (en) Service assurance using network measurement triggers
US20210203596A1 (en) Measuring packet residency and travel time

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase
Ref document number: 200680004006.X
Country of ref document: CN

121 EP: the EPO has been informed by WIPO that EP was designated in this application

WWE Wipo information: entry into national phase
Ref document number: 2006739254
Country of ref document: EP

NENP Non-entry into the national phase
Ref country code: DE

NENP Non-entry into the national phase
Ref country code: RU

ENP Entry into the national phase
Ref document number: 1013241
Country of ref document: GB
Kind code of ref document: A
Free format text: PCT FILING DATE = 20090127