EP2008476A2 - Intelligent Ethernet device networking system - Google Patents

Intelligent Ethernet device networking system

Info

Publication number
EP2008476A2
Authority
EP
European Patent Office
Prior art keywords
network
node
traffic
path
nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP07734176A
Other languages
German (de)
English (en)
Other versions
EP2008476A4 (fr)
Inventor
Jim Arseneault
Brian Smith
Ken Young
Fabio Katz
Pablo Frank
Chris Barrett
Natalie Giroux
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gridpoint Systems Inc
Original Assignee
Gridpoint Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/446,316 (US8218445B2)
Priority claimed from US11/495,479 (US7729274B2)
Priority claimed from US11/500,052 (US8509062B2)
Priority claimed from US11/519,503 (US9621375B2)
Priority claimed from US11/706,756 (US8363545B2)
Application filed by Gridpoint Systems Inc filed Critical Gridpoint Systems Inc
Publication of EP2008476A2
Publication of EP2008476A4

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5003Managing SLA; Interaction between SLA and QoS
    • H04L41/5019Ensuring fulfilment of SLA
    • H04L41/5025Ensuring fulfilment of SLA by proactively reacting to service quality change, e.g. by reconfiguration after service quality degradation or upgrade
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/08Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0852Delays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/12Avoiding congestion; Recovering from congestion
    • H04L47/125Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/22Traffic shaping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2425Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2441Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/32Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/02Traffic management, e.g. flow control or congestion control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/50Testing arrangements

Definitions

  • The present invention generally relates to Ethernet access and, in particular, to bandwidth-efficient Ethernet grid networking systems and bandwidth-efficient Ethernet LANs with Service Level Agreements.
  • Ethernet is rapidly becoming the protocol of choice for consumer, enterprise and carrier networks. It is expected that most networks will evolve such that Ethernet will be the technology used to transport all the multimedia applications including, for example, triple-play, fixed-mobile-convergence (FMC), and IP multimedia sub-systems (IMS).
  • Telecommunications carriers are constantly looking for new revenue sources. They need to be able to rapidly deploy a wide-ranging variety of services and applications without the need to constantly modify the network infrastructure.
  • Ethernet is a promising technology that is able to support a variety of applications requiring different quality of service (QoS) from the network.
  • the technology is now being standardized to offer different types of services which have different combinations of quality objectives, such as loss, delay and bandwidth.
  • Bandwidth objectives are defined in terms of committed information rate (CIR) or excess information rate (EIR). The CIR guarantees bandwidth to a connection, while the EIR allows operation at higher bandwidth when available.
  • New high bandwidth wireless technology such as WiMAX or high speed RF technology allows the carrier to reach a new customer or a customer that is not currently serviced with high bandwidth without the high cost of deploying new fiber routes.
  • Instead of WiMAX, any point-to-point RF technology could be used.
  • Although WiMAX operates at higher speed, it is still important to maximize the use of its bandwidth since spectrum is a limited resource. But because the WiMAX radio 105 and the router 102 are separate, the router has no knowledge of the radio status, and it is difficult to make maximum use of the wireless bandwidth. WiMAX currently allows for multiple users to share a base station. If a subscriber does not reach the base station directly, it can tunnel through another subscriber which has connectivity. This architecture allows multiple subscribers to reach a base station which is connected to the wired network.
  • In IP networks, connections are set up by signaling protocols such as RSVP-TE. These protocols use shortest-path algorithms combined with non-real-time information on available QoS and bandwidth resources. Each node needs to maintain forwarding tables based on control traffic sent in the network. The paths available can be constrained by pruning links not meeting the bandwidth requirements. Bandwidth is wasted because of control messaging to establish and update forwarding tables. Protocols such as OSPF, LDP and RSVP are required to set up such paths, and these control protocols consume overhead bandwidth proportional to the size of the network and the number of connections. Pure Ethernet networks require spanning tree and broadcast messages to select a path.
  • Additive: the total value of an additive constraint for an end-to-end path is given by the sum of the individual link constraint values along the path (e.g.: delay, jitter, cost).
  • Non-additive: the total value of a non-additive constraint for an end-to-end path is determined by the value of that constraint at the bottleneck link (e.g.: bandwidth).
  • Non-additive constraints can be easily dealt with using a preprocessing step by pruning all links that do not satisfy these constraints. Multiple simultaneous additive constraints are more challenging.
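  • As a minimal Python sketch of the two constraint types (per-link values are invented for illustration), an additive path value is the sum over all links, while a non-additive value is set by the bottleneck link:

      # Illustrative only: per-link values are made up for this example.
      links = [
          {"delay_ms": 4, "jitter_ms": 1, "bandwidth_mbps": 100},
          {"delay_ms": 7, "jitter_ms": 2, "bandwidth_mbps": 40},
          {"delay_ms": 3, "jitter_ms": 1, "bandwidth_mbps": 100},
      ]

      # Additive constraints: summed along the path.
      path_delay = sum(l["delay_ms"] for l in links)            # 14 ms
      path_jitter = sum(l["jitter_ms"] for l in links)          # 4 ms

      # Non-additive constraint: determined by the bottleneck link.
      path_bandwidth = min(l["bandwidth_mbps"] for l in links)  # 40 Mbps

      print(path_delay, path_jitter, path_bandwidth)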
  • QoS or constraint-based routing consists of identifying a feasible route that satisfies multiple constraints (e.g.: bandwidth, delay, jitter) while simultaneously achieving efficient utilization of network resources.
  • Multi-constrained path selection, with or without optimization, is an NP-complete problem (i.e., it cannot be solved exactly in polynomial time unless P = NP) and is therefore computationally complex and expensive. Heuristics and approximation algorithms with polynomial-time complexities are necessary to solve the problem.
  • Traditional shortest-path algorithms take into account a single constraint for path computation, such as hop-count or delay.
  • Those routing algorithms are inadequate for multimedia applications (e.g., video or voice) which require multiple constraints to guarantee QoS, such as delay, jitter and loss.
  • Path computation algorithms for a single metric are well known; for example, Dijkstra's algorithm is efficient in finding the optimal path that maximizes or minimizes one single metric or constraint.
  • A single primitive parameter such as delay, however, is not sufficient to support the different types of services offered in the network.
  • A single metric can be derived from multiple constraints by combining them in a formula, such as: M = B / (D × J), where B is the available bandwidth, D the delay and J the jitter.
  • With this single metric, a composite constraint is a combination of various single constraints. In this case, a high value of the composite constraint is achieved if there is high available bandwidth, low delay and low jitter.
  • The path selected based on the single composite constraint most likely does not simultaneously optimize all three individual constraints (maximum bandwidth, minimal delay and loss probability), and thus QoS may not be guaranteed. Any of the constraints by itself may not even satisfy the original path requirement.
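  • A small sketch of why the composite metric can mislead (all values are invented for illustration): the path with the best composite score can violate an individual delay bound even though a feasible path exists:

      def composite(bw, delay, jitter):
          # High bandwidth, low delay and low jitter yield a high value.
          return bw / (delay * jitter)

      paths = {
          "A": {"bw": 100, "delay": 12, "jitter": 1},  # high bandwidth, too slow
          "B": {"bw": 30,  "delay": 8,  "jitter": 1},  # meets every bound
      }
      required = {"bw": 20, "delay": 10, "jitter": 2}

      best = max(paths, key=lambda p: composite(**paths[p]))
      print(best)  # "A", even though A violates the 10 ms delay bound

      feasible = [p for p, v in paths.items()
                  if v["bw"] >= required["bw"]
                  and v["delay"] <= required["delay"]
                  and v["jitter"] <= required["jitter"]]
      print(feasible)  # only "B" satisfies all constraints simultaneously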
  • Algorithms such as spanning trees are used to prevent loops in the data path in Ethernet networks because of their connectionless nature and the absence of a Time-To-Live (TTL) attribute, which can create infinite paths.
  • Such algorithms proactively remove links from being considered in a path in order to prevent loops. This artifact of the connectionless routing scheme is costly as it prevents the use of expensive links, which remain underutilized.
  • With an offline traffic engineering system, the state of all existing connection requests and the utilization of all network links are known before the new requested paths are computed. Using topology information, such as link capacities and a traffic demand matrix, a centralized server performs global optimization algorithms to determine the path for each connection request. Once a path design is completed, the connections are generally set up by a network management system. It is well known that an offline system with global optimization can achieve considerable improvement in resource utilization over an online system, if the traffic matrix accurately reflects the current load the network is carrying.
  • Existing traffic engineering systems do not keep in sync with the actual network or maintain real-time information on the bandwidth consumed while the network changes due to link failures, variations in the traffic generated by the applications, and unplanned link changes. The existing traffic engineering systems also do not take into account business policies such as limiting how much high-priority traffic is carried on a link.

PATH ASSOCIATION
  • bi-directional connections are set up using two uni-directional tunnels.
  • a concept of pseudo-wire has been standardized to pair the two tunnels at both end-points of the tunnels (see FIG. 12).
  • intermediate nodes are not aware of the pairing and treat the two tunnels independently.
  • The routing mechanism does not attempt to route both connections through the same path. It is therefore impossible for a carrier to use operation, administration and maintenance (OAM) packets to create loopbacks within the connection path to troubleshoot a connection without setting up out-of-service explicit paths. There is therefore a need for a mechanism to make a unidirectional path look like a bi-directional path.
  • Carriers need the ability to set up flexible Ethernet OAM paths in-service and out-of-service anywhere in the network in order to efficiently perform troubleshooting.
  • In order to provide reliable carrier-grade Ethernet services, Ethernet technology has to be able to support stringent protection mechanisms for each Ethernet point-to-point (E-LINE) link.
  • There are two main types of protection required by a carrier: link protection and path protection.
  • There are a number of standard link protection techniques in the marketplace, such as ring protection and bypass links, which protect against a node going down.
  • Connection-oriented protocols such as MPLS use path protection techniques.
  • Path protection techniques assume a routed network where the routes are dynamically configured and protected based on the resource requirements.
ZERO-LOSS PROTECTION SWITCHING

  • Some communication applications, such as medical and security applications, require a very reliable service. In these cases, a 50-ms switch-over time may be inadequate due to the critical data lost during this time period. For example, a 50-ms switch-over in a security monitoring application could be misconstrued as a "man-in-the-middle" attack, causing resources to be wasted resolving the cause of the "glitch."
  • To address access challenges, telecommunications carriers have selected Ethernet. They need to be able to rapidly deploy a wide-ranging variety of services and applications without the need to constantly modify the network infrastructure. Enterprises have long used Ethernet as the technology to support a variety of applications requiring different qualities of service (QoS) from the network. Carriers are leveraging this flexibility and are standardizing on this technology to offer data access services. Using this service definition, existing network elements which offer network access using Ethernet technology are not designed to make maximum use of the legacy network links existing at the edge of the carrier networks. Many access technologies such as DSL or WiMAX are prone to errors which affect the link speed. The network devices are unable to react to these errors to ensure that the service level agreements are met. The following inventions are focused on addressing these challenges.
  • A service level agreement is entered into with the customer, which defines the parameters of the network connection.
  • bandwidth objectives are defined in terms of Committed Information Rate (CIR) and Excess Information Rate (EIR).
  • the CIR guarantees bandwidth to a connection while the EIR allows the connection to send at higher bandwidth when available.
  • the telecommunications provider verifies the traffic from each connection for conformance at the access by using a traffic admission mechanism such as policing or traffic shaping.
  • The policing function can take action on the non-conforming packets, such as lowering the priority or discarding the packets. Policing is necessary because the service provider cannot rely on an end-point not under the control of the network provider to behave according to the traffic descriptor.
  • Policing does not take into account the reality of the application traffic flow and the dynamic modification encountered by a traffic flow as it moves through the network. As packets get multiplexed and demultiplexed to and from network links, their traffic characterization is greatly modified. Another issue with policing and static characterization is that it is extremely difficult to set these traffic descriptors (i.e., CIR, EIR and burst tolerance) to match a given application requirement. The needs of the application change with time in a very dynamic and unpredictable way. Traffic shaping, in turn, buffers the incoming traffic and transmits it into the network according to the contracted rate.
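  • A minimal sketch contrasting the two admission mechanisms (rates, sizes and class names are illustrative, not from the specification): a policer penalizes a non-conforming packet immediately, while a shaper buffers it and releases it later at the contracted rate:

      import time
      from collections import deque

      class TokenBucketPolicer:
          # Policing: packets exceeding the committed rate are marked
          # down or discarded rather than buffered.
          def __init__(self, cir_bps, cbs_bytes):
              self.rate = cir_bps / 8.0          # bytes per second
              self.capacity = cbs_bytes
              self.tokens = float(cbs_bytes)
              self.last = time.monotonic()

          def admit(self, size_bytes):
              now = time.monotonic()
              self.tokens = min(self.capacity,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now
              if self.tokens >= size_bytes:
                  self.tokens -= size_bytes
                  return "conform"               # forward at contracted priority
              return "exceed"                    # lower priority or discard

      class Shaper:
          # Shaping: non-conforming packets are delayed, not punished.
          def __init__(self, policer):
              self.policer = policer
              self.queue = deque()

          def send(self, size_bytes):
              if self.policer.admit(size_bytes) == "conform":
                  return "sent"
              self.queue.append(size_bytes)      # buffer for later release
              return "buffered"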
  • In a provider's network, sufficient bandwidth is allocated assuming the connections fully use the committed bandwidth, even though that is not always the case, leading to inefficiencies. In case of excess low-priority traffic, the provider generally over-provisions the network in order to ensure that sufficient traffic gets through such that the application performance does not deteriorate.
  • Another inefficiency currently encountered in Ethernet networks is that traffic that has traveled through many nodes and has almost reached its destination is treated the same as traffic just entering the network which has not consumed any resources.
  • Current Ethernet network implementations handle congestion locally where it occurs, by discarding overflow packets. This wastes bandwidth in the network in two ways:
  • bandwidth capacity is wasted as a result of retransmission of packets by higher layer protocols (e.g., TCP)
  • The Ethernet protocol includes a flow control mechanism referred to as Ethernet Pause. The problem with Ethernet Pause flow control is that it totally shuts off the transmission of the port rather than shaping and backing off traffic that it could handle. It is currently acceptable to do this at the edge of the network, but for a network link it would cause too much transmission loss, and overall throughput would suffer more than causing a retransmission due to dropping packets.
  • a traffic admission mechanism can be implemented using a policing function or a traffic shaper.
  • Traffic shaping has a number of benefits from both the application and the network point of view.
  • the shaper can delay the transmission of a packet into the network if the traffic sent by the application is very different from the configured traffic descriptors. It would be useful to make the shaper flexible to take into account the delays that a packet encounters so that different actions, such as lowering the priority or discarding, can be applied.
  • each customer application is required to characterize its traffic in terms of static traffic descriptors.
  • traffic patterns for videoconferencing, peer-to-peer communication, video streaming and multimedia sessions are very unpredictable and bursty in nature.
  • These applications can be confined to a set of bandwidth parameters, but usually that is to the detriment of the application's performance or else it would trigger underutilization of the network.
  • a customer's connections can carry traffic from multiple applications, and the aggregate behavior is impossible to predict.
  • the demand is dynamic since the number of new applications is growing rapidly, and their behavior is very difficult to characterize.
  • a service level agreement (SLA) is entered with the customer, defining the parameters of the network connections.
  • a bandwidth profile is defined in terms of Committed Information Rate (CIR), Committed Burst Size (CBS), Excess Information Rate (EIR) and Excess Burst Size (EBS).
  • the Service Provider allocates an amount of bandwidth, referred to herein as the Allocated Bandwidth (ABW), which is a function of the CIR, CBS, EIR, EBS and the user line rate.
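  • The specification states only that the ABW is a function of the CIR, CBS, EIR, EBS and the user line rate; one plausible illustration (the formula and the overbooking factor below are assumptions, not the patented method) guarantees the CIR in full plus a fraction of the excess rate:

      def allocated_bandwidth(cir, eir, line_rate, overbooking):
          # Assumed rule: full CIR plus an overbooked share of the excess
          # rate, capped by the spare capacity of the user line.
          excess = min(eir, line_rate - cir)
          return cir + overbooking * excess

      # CIR = 20 Mbps and EIR = 50 Mbps on a 100 Mbps line with a 0.2
      # factor reproduce the 30 Mbps ABW used in the E-LAN example below.
      print(allocated_bandwidth(20, 50, 100, overbooking=0.2))  # 30.0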
  • In a provider's network, sufficient bandwidth needs to be allocated to each connection to meet the QoS, even though it is not always used, leading to inefficiencies.
  • the service provider generally over-provisions the network in order to ensure that sufficient traffic gets through such that the application performance does not deteriorate.
  • An E-LAN can also provide different bandwidth profiles to different sites, where a site needs to send more or less information.
  • One site can talk at any time to another site (point-to-point or "pt-pt", by using the unicast address of the destination), or one site can talk at any time to many other sites (point-to-multipoint or "pt-mpt", by using an Ethernet multicast address).
  • One site can also send to all other sites (broadcast, by using the Ethernet broadcast address).
  • As an example, an E-LAN is provisioned among five customer sites 6101 (sites 1, 2, 3, 4 and 5) in a network consisting of six nodes 6102 (nodes A, B, C, D, E and F) connected to each other using physical links 6103.
  • the E-LAN can be implemented using VPLS technology (pseudowires with MPLS or L2TP) or traditional crossconnects with WAN links using PPP over DSC.
  • the E-LAN can also be implemented using GRE, PPP or L2TP tunnels.
  • The physical links are 100 Mbps.
  • The customer subscribes to an E-LAN to mesh its sites with a CIR of 20 Mbps and an EIR of 50 Mbps.
  • The SP needs to allocate a corresponding ABW of 30 Mbps between each possible pair of sites 104 such that if site 1 sends a burst to site 5 while site 4 sends a burst to site 5, they each receive the QoS for the 20 Mbps of traffic. Since any site can talk to any other site, there needs to be sufficient bandwidth allocated to account for all combinations.
  • (n-1)×ABW needs to be allocated on the links 6104 between B and C, where n is the number of sites in the E-LAN.
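  • The arithmetic of this example, as a short sketch (the figures come from the text; the over-subscription conclusion follows directly from the numbers):

      n_sites = 5          # sites 1..5 in the E-LAN
      abw_mbps = 30        # ABW per site pair
      link_mbps = 100      # physical link speed

      worst_case = (n_sites - 1) * abw_mbps
      print(worst_case, "Mbps required on the B-C links vs",
            link_mbps, "Mbps available")
      # 120 Mbps > 100 Mbps: allocating for all combinations over-subscribes
      # the link, motivating network-level flow control (cf. FIG. 36).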
  • One embodiment of this invention provides a system for making connections in a telecommunications system that includes a network for transporting communications between selected subscriber connections, and a wireless network for coupling connections to the network.
  • the network and wireless network are interfaced with a traffic management element and at least one radio controller shared by connections, with the traffic management element and the radio controller forming a single integrated network element. Connections are routed from the wireless network to the network via the single integrated network element.
  • a system for selecting connection paths in a telecommunications network having a multiplicity of nodes interconnected by a multiplicity of links.
  • the system identifies multiple constraints for connection paths through the network between source and destination nodes, and identifies paths that satisfy all of the constraints for a connection path between a selected source node and a selected destination node.
  • One particular implementation selects a node adjacent to the selected source node; determines whether the inclusion of a link from the source node to the adjacent node, in a potential path from the source node to the destination node, violates any of the constraints; adds to the potential path the link from the source node to the adjacent node, if all of the constraints are satisfied with that link added to the potential path; and iterates the selecting, determining and adding steps for a node adjacent to the downstream node of each successive added link, until a link to the destination node has been added.
  • a system for optimizing utilization of the resources of a telecommunications network having a multiplicity of nodes interconnected by a multiplicity of links.
  • the system identifies multiple constraints for connection paths through the network, between source and destination nodes; establishes connection paths through the network between selected source and destination nodes, the established connection paths satisfying the constraints; for each established connection path, determines whether other connection paths exist between the selected source and destination nodes, and that satisfy the constraints; and if at least one such other connection path exists, determines whether any such other connection path is more efficient than the established connection path and, if the answer is affirmative, switches the connection from the established connection path to the most efficient other connection path.
  • Another embodiment provides a telecommunications system comprising a network for transporting packets on a path between selected subscriber end points.
  • the network has multiple nodes connected by links, with each node (a) pairing the forward and backward paths of a connection and (b) allowing for the injection of messages in the backward direction of a connection from any node in the path without needing to consult a higher OSI layer.
  • each node switches to a backup path when one of the paired paths fails, and a new backup path is created after a path has switched to a backup path for a prescribed length of time.
  • a system for protecting connection paths for transporting data packets through an Ethernet telecommunications network having a multiplicity of nodes interconnected by a multiplicity of links.
  • Primary and backup paths are provided through the network for each of multiple connections, with each path including multiple links.
  • Data packets arriving at a first node common to the primary and backup paths are duplicated, and one of the duplicate packets is transported over the primary path, the other duplicate packet is transported over the backup path, and the duplicate packets are recombined at a second node common to the primary and backup paths.
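  • A minimal sketch of the duplicate-and-recombine idea (it assumes a per-connection sequence number, which the text does not spell out; all names are illustrative):

      class Recombiner:
          # Recombination point: the first copy of each sequence number is
          # delivered; the duplicate arriving on the other path is discarded.
          def __init__(self):
              self.delivered = set()

          def receive(self, seq, payload):
              if seq in self.delivered:
                  return None                 # duplicate from the other path
              self.delivered.add(seq)
              return payload                  # first arrival wins

      def duplicate(seq, payload):
          # At the first common node, send one copy on each path.
          return [("primary", seq, payload), ("backup", seq, payload)]

      r = Recombiner()
      for path, seq, data in duplicate(1, b"frame"):
          print(path, r.receive(seq, data))   # primary delivers; backup dropped
      # If the primary path fails, backup copies keep arriving, so the
      # receiver sees no gap at all rather than a 50-ms switch-over hole.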
  • Another embodiment of the present invention provides a method of controlling the flow of data-packet traffic through an Ethernet telecommunications network having a multiplicity of nodes interconnected by multiple network links.
  • Incoming data-packet traffic from multiple customer connections are received at a first node for entry into the network via the first node.
  • Flow control messages are generated to represent the states of the first node and, optionally, one or more network nodes upstream from the first node, and these states are used as factors in controlling the rate at which the incoming packets are admitted to the network.
  • the flow control messages may be used to control the rate at which packets generated by a client application are transmitted to the first node.
  • transit traffic is also received at the first node, from one or more other nodes of the network, and the flow control messages are used to control the rate at which the transit traffic is transmitted to the first node.
  • the transit traffic may be assigned a higher transmission priority than the incoming traffic to be admitted to the network at the first node.
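  • A minimal sketch of flow control driven by node state (the thresholds and the rate rule are assumptions; cf. the threshold-based mechanism of FIG. 23):

      ON_THRESHOLD = 0.8     # assumed: start slowing sources above 80% queue fill
      OFF_THRESHOLD = 0.5    # assumed: resume full rate below 50%

      def flow_control_message(queue_fill, currently_throttled):
          # Turn node state (queue occupancy) into a message toward sources.
          if queue_fill >= ON_THRESHOLD:
              return "slow_down"
          if queue_fill <= OFF_THRESHOLD and currently_throttled:
              return "resume"
          return None                    # no state change to report

      def admitted_rate(cir, eir, message):
          # Newly admitted traffic is pushed back toward its CIR under
          # congestion; transit traffic would keep its higher priority.
          return cir if message == "slow_down" else cir + eir

      print(admitted_rate(20, 50, flow_control_message(0.9, False)))  # 20
      print(admitted_rate(20, 50, flow_control_message(0.3, True)))   # 70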
  • Another embodiment provides a method of controlling the entry of data-packet traffic presented by a client application to the Ethernet telecommunications network.
  • the rate at which the incoming packets from the client application are admitted to the network is controlled with a traffic shaper that buffers incoming packets and controllably delays admission of the buffered packets into the network.
  • the delays may be controlled at least in part by multiple thresholds representing contracted rates of transmission and delays that can be tolerated by the client application.
  • the delays may also be controlled in part by the congestion state of the network and/or by prescribed limits on the percentage of certain types of traffic allowed in the overall traffic admitted to the network.
  • a further embodiment provides a method of controlling the flow of data-packet traffic in an Ethernet telecommunications network having a flow control mechanism and nodes that include legacy nodes.
  • Loopback control messages are inserted into network paths that include the legacy nodes. Then the congestion level of the paths is determined from the control messages, and the flow control mechanism is triggered when the congestion level reaches a predetermined threshold.
  • The control messages may be inserted, one for each priority of traffic, on the paths that include the legacy nodes.
  • the delay in a path is determined by monitoring incoming traffic and estimating the actual link occupancy from the actual traffic flow on a link. If nodes transmitting and receiving the control messages have clocks that are not synchronized, the congestion level may be estimated by the delay in the path traversed by a control message, determined as the relative delay using the clocks of the nodes transmitting and receiving the control messages.
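  • A short sketch of the relative-delay estimate with unsynchronized clocks (the baseline heuristic and the threshold are assumptions): the raw receive-minus-transmit timestamp difference contains an unknown clock offset, but its growth over the quietest observed sample approximates the queuing delay:

      class RelativeDelayEstimator:
          def __init__(self):
              self.baseline = None                 # lowest raw delay seen so far

          def observe(self, tx_ts, rx_ts):
              raw = rx_ts - tx_ts                  # includes unknown clock offset
              if self.baseline is None or raw < self.baseline:
                  self.baseline = raw              # quietest sample ~ empty path
              return raw - self.baseline           # estimated queuing delay

      est = RelativeDelayEstimator()
      for tx, rx in [(0.0, 5.010), (1.0, 6.012), (2.0, 7.045)]:  # ~5 s offset
          extra = est.observe(tx, rx)
          if extra > 0.030:                        # assumed congestion threshold
              print("trigger flow control, extra delay =", round(extra, 3), "s")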
  • Another embodiment provides a method of automatically renegotiating the contracted bandwidth of a client application presenting a flow of data-packet traffic to an Ethernet telecommunications network.
  • the actual bandwidth requirement of the client application is assessed on the basis of the actual flow of data-packet traffic to the network from the client application.
  • the actual bandwidth requirement is compared with the contracted bandwidth for the client application, and the customer is informed of an actual bandwidth requirement that exceeds the contracted bandwidth for the client application, to determine whether the customer wishes to increase the contracted bandwidth. If the customer's answer is affirmative, the contracted bandwidth is increased.
  • the contracted bandwidth corresponds to a prescribed quality of service, and the contracted bandwidth is increased or decreased by changing the contracted quality of service.
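  • A minimal sketch of the renegotiation loop (the 95th-percentile demand estimate and the measurement figures are assumptions, not from the specification):

      def assess_demand(samples_mbps):
          # Estimate the actual requirement as a high percentile of
          # observed rate samples.
          ordered = sorted(samples_mbps)
          return ordered[int(0.95 * (len(ordered) - 1))]

      def renegotiate(samples_mbps, contracted_mbps, customer_accepts):
          demand = assess_demand(samples_mbps)
          if demand <= contracted_mbps:
              return contracted_mbps               # contract already adequate
          if customer_accepts(demand):             # inform customer, await approval
              return demand                        # increased contracted bandwidth
          return contracted_mbps

      print(renegotiate([18, 22, 35, 40, 38], 25,
                        customer_accepts=lambda d: True))  # 38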
  • Yet another embodiment provides different sub-classes of service within a prescribed class of service in an Ethernet telecommunications network by setting different levels of loss or delay for different customer connections having a common contracted class of service, receiving incoming data-packet traffic from multiple customer connections and transmitting the traffic through the network to designated destinations, generating flow control messages representing the states of network nodes through which the traffic flows for each connection, and using the flow control messages to control the data-packet flow in different connections at different rates corresponding to the different levels of loss or delay set for the different connections.
  • the different rates vary with prescribed traffic descriptors (such as contracted CIR and EIR) and/or with preset parameters.
  • the connections in which the flow rates are controlled may be selected randomly, preferably with a weight that is preset or proportional to a contracted rate.
  • A method of controlling the flow of data packet traffic from a first point to at least two second points in an Ethernet telecommunications network having a multiplicity of nodes interconnected by multiple network links comprises monitoring the level of utilization of a link between the first and second points, generating flow control messages representing the level of utilization and transmitting the control messages to the first point, and using the states represented in the flow control messages as factors in controlling the rate at which the packets are transmitted from the first point to the second points.
  • a method of controlling the flow of data packet traffic through an Ethernet telecommunications network having a multiplicity of nodes interconnected by multiple network links comprises receiving incoming data packet traffic from multiple customer connections at a first node for entry into the network via the first node, the first node having an ingress trunk, and limiting the rate at which the incoming data packets are admitted to the network via the ingress trunk.
  • FIG. 1 is a diagram illustrating existing network architecture with separate WiMAX base station and routers.
  • FIG. 2 illustrates an integrated network element containing a WiMAX base station with an Ethernet switch networked and managed with a VMS.
  • FIG. 3 illustrates one implementation for performing the switching in the WiMAX switch in the integrated network element of FIG. 2.
  • FIG. 4 illustrates a logical view of the traffic management block.
  • FIG. 5 illustrates a radio impairment detection mechanism.
  • FIG. 6 illustrates one implementation of an algorithm that detects radio impairments.
  • FIG. 7 illustrates one implementation of the path selection algorithm.
  • FIG. 8 illustrates one implementation of the network pruning algorithm.
  • FIG. 9 illustrates one implementation of the path searching based on multiple constraints.
  • FIG. 10 illustrates one implementation of the path searching based on multiple constraints using heuristics.
  • FIG. 11 illustrates one implementation of a network resource optimization algorithm.
  • FIG. 12 illustrates a prior art network where both directions of the connections use different paths.
  • FIG. 13 illustrates an example where both directions of the connection use the same path.
  • FIG. 14 illustrates the pairing of the connection at each node and the use of hairpins for continuity checking.
  • FIG. 15 illustrates the use of hairpins for creating path snakes.
  • FIG. 15a illustrates the management of rules.
  • FIG. 16 illustrates the use of control messages to trigger protection switching.
  • FIG. 17 illustrates the ability to duplicate packets to ensure zero-loss for a path.
  • FIG. 18 illustrates one example of an implementation of a packet duplication algorithm.
  • FIG. 19 illustrates one example of a packet recombination algorithm.
  • FIG. 20 is a diagram of an Ethernet transport service connection.
  • FIG. 21 is a diagram of an Ethernet transport service switch.
  • FIG. 22 is a diagram of a logical view of the traffic management block.
  • FIG. 23 is a diagram of an example of a threshold-based flow control mechanism.
  • FIG. 24 is a diagram of an example of flow control elements.
  • FIG. 25 is a diagram of an example of flow control handling at interim nodes.
  • FIG. 26 is a diagram of one implementation of a flexible shaper mechanism.
  • FIG. 27 is a diagram of the use of control messages to estimate the behavior of non-participating elements.
  • FIG. 28 is a diagram of a typical delay curve as a function of utilization.
  • FIG. 29 is a diagram of the elements that can be involved in a bandwidth renegotiation process.
  • FIG. 30 is a diagram of one implementation of a bandwidth renegotiation mechanism.
  • FIG. 31 is a diagram of one implementation of a bandwidth renegotiation mechanism.
  • FIG. 32 is a diagram of one implementation of a bandwidth renegotiation mechanism.
  • FIG. 33 is a diagram of one implementation of a bandwidth renegotiation with a logical network.
  • FIG. 34 is a diagram of one implementation of a bandwidth renegotiation with real-time handling of client requests.
  • FIG. 35 is a diagram of an example network with existing E-LAN offering.
  • FIG. 36 is a diagram of an example network using network level flow control.
  • FIG. 37 is a diagram representing the terminology used in the description.
  • Ethernet over WiMAX solutions as illustrated in FIG. 1 require a separate packet switch 102, such as an Ethernet switch, MPLS switch or router, to switch traffic between WiMAX radios 105 and client interfaces 100.
  • The WiMAX (or other point-to-point RF) radio 105 receives Ethernet traffic from the Ethernet switch or IP router 102, converts it into the WiMAX standard, and transmits it over an antenna connector.
  • Integrating the switching and WiMAX radio functions reduces operational costs and improves monitoring and control of WiMAX radios.
  • Integration of the switching of service traffic among WiMAX radio links and access interfaces into a single network element, referred to as a WiMAX switch, is illustrated in FIG. 2. The switching can be accomplished by using Ethernet switching, MPLS label switching or routing technology.
  • One embodiment integrates only the control 107c of the radio, using an external radio controller, with the switching function, to prevent losing packets between the switch and the radio controller when the radio bandwidth degrades.
  • the external radio controller provides periodical information about the status of the wireless link and thus acts as an integrated radio controller.
  • the client application 100 connects to the switch 107 based on the service interface type 101 and is switched to the appropriate antenna connector 121 then to the antenna 123.
  • the configuration of the switch is done by a management device called the VMS 124.
  • FIG. 3 provides an example of an implementation for the switching.
  • The network element 107 includes one or more radio controllers 120 or external radio controllers 120a fully within the control of the network element, and it can add/drop traffic to/from different types of interfaces 101 including but not limited to any number of Ethernet, ATM or T1/E1 interfaces. Different types of network trunks can also be added using optical links or other types of high speed links 122.
  • the packet forwarding is connection-oriented and can be done using simple labels such as multi-protocol label switching (MPLS) labels, Ethernet VLAN-ID or 802.16 connection ID (CID) labels. Connections are established by a traffic engineering element referred to as the value management system (VMS) 124, which is a network management system.
  • the VMS manages all the connections such that the QoS and path protection requirements are met.
  • The WiMAX switch includes, amongst other components, a data plane 110, which includes packet forwarding 111 and traffic management 112.
  • The packet forwarding 111 receives packets and performs classification to select on which interface 101, trunk connector 116 or wired trunk connector 109 to queue the packet.
  • the traffic management 112 manages all the queues and the scheduling. It can also implement traffic shaping and flow control.
  • the network and link configurations are sent to the Process controller 113 and stored in persistent storage 114.
  • the Process controller configures the Packet Forwarding 111, the Traffic Management 112 and the Radio Controller 120 using the Control Bus 115.
  • One logical implementation is shown in FIG. 4.
  • the traffic shaper can be optionally set up to react to flow control information from the network.
  • the scheduler 132 is responsible for selecting which packet to transmit next from any of the connections that are ready to send on the outgoing connector (NNI, UNI or Trunk).
  • Intermediate queues 131 can be optionally used to store shaped packets that are awaiting transmission on the link. These queues can be subject to congestion and can implement flow control notification.
  • the radio controller is monitored via the process controller to be immediately notified of its state and operating speed.
  • a grid network topology can be implemented which permits the optimized use of the bandwidth as each subscriber's traffic is controlled from end-to-end. This topology alleviates the need for subscribers to tunnel through another subscriber and therefore removes the one-hop limitation.
  • the radio is integrated with the switching layer (FIG. 3).
  • the radio status information is conveyed on a regular basis to the process controller 113 which can evaluate impending degradation of the link and take proactive actions, such as priority discards, flow control, protection switching etc.
  • the objective is to avoid loss between the traffic management 112 and the radio controller 120 when the link speed is reduced due to performance degradations.
  • The scheduler 132, as seen in FIG. 4, matches any change in throughput as a result of expected changing transmission speeds (e.g., a drop from QAM 64 to QAM 16).
  • One algorithm that estimates the link performance is as follows:
  • When the link speed decreases, the scheduler 132 limits the rate of traffic forwarded to the radio controller 120 and buffers this traffic as necessary in queues 131 or 130 (FIG. 4).
  • When the link speed increases, the scheduler 132 increases the rate of traffic forwarded to the radio controller 120 and drains the traffic buffered in queues 131 or 130 (FIG. 4).
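  • A minimal sketch of the scheduler tracking the radio link speed (the modulation-to-rate table and the action names are illustrative assumptions):

      MODULATION_RATE_MBPS = {"QAM64": 75, "QAM16": 50, "QPSK": 25}

      class RadioScheduler:
          def __init__(self):
              self.offered_mbps = MODULATION_RATE_MBPS["QAM64"]

          def on_modulation_change(self, modulation):
              new_rate = MODULATION_RATE_MBPS[modulation]
              if new_rate < self.offered_mbps:
                  # Impending degradation: limit forwarding, buffer the excess,
                  # possibly trigger priority discards or upstream flow control.
                  action = "buffer_and_backpressure"
              else:
                  # Link recovered: raise the rate and drain buffered traffic.
                  action = "drain_queues"
              self.offered_mbps = new_rate
              return action

      s = RadioScheduler()
      print(s.on_modulation_change("QAM16"))  # buffer_and_backpressure
      print(s.on_modulation_change("QAM64"))  # drain_queues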
  • The radio controller 120 is responsible for switching traffic between the trunk connector 116 and the antenna connector 121.
  • The process includes three functions:
  • a radio media access controller 117 which controls how packets are transmitted over the radio. It performs access control and negotiation for transmission of packets.
  • a modulator 118 which prepares packets for transmission over the air. It converts packets into a set of symbols to be transmitted over the air. It also mitigates the "over-the-air" impairments.
  • an RF amplifier which takes the modulated symbols and passes them to the antenna 123 over the antenna connector 121.
  • The process controller 113 is responsible for handling the detection of radio performance 140. It starts by retrieving 141 performance statistics from elements in the radio controller 120. The process controller 113 needs to look at data from the media access controller 117, which includes radio grant times, retransmissions, packet drops, etc. From the modulator 118, the process controller retrieves the overall performance of the transmission of symbols across the air interface. The process controller 113 also looks at the RF layer 119 to examine the current signal-to-noise ratio and other radio parameters. Changes in these levels can indicate that changes in modulation are required. Once the process controller 113 has the current performance data, it is processed to produce the current trend data 142. Examples of these trends can be:
  • the process controller 113 stores this in persistent storage 114.
  • the process controller then retrieves the historical data 144 and compares the current trends to the historical trends 145.
  • The process controller 113 decides whether the current trends will result in a change in radio performance 146. If the radio will be impaired, the process controller 113 adjusts the scheduler 132 in traffic management 112 to reduce/increase the amount of traffic 150 supplied to the radio controller 120.
  • The process controller 113 retrieves the radio impairment policy 147 from persistent storage 114.
  • The process controller compares the current trends against the policy 148. If this is not considered a change in radio performance 149, the process ends 151. If this is considered a change in radio performance 149, the process controller 113 adjusts the scheduler 132 in traffic management 112 to reduce/increase 150 the amount of traffic supplied to the radio controller 120.
  • the effect of a reduction in the scheduler transmission may cause the queues 130 or 131 to grow. This can result in the execution of different local mechanisms such as priority discards, random early discards. It can also tie to end-to-end mechanisms such as flow control to reduce the rate of transmission at the source of the traffic.
  • The degradation can also reach a level where the process controller 113 triggers protection switching on some of the traffic going over the degraded link.
  • The effect of an increase in the scheduler transmission may cause the queues 130 or 131 to empty, thus underutilizing the link. This phenomenon can tie into an end-to-end mechanism such as flow control to increase the rate of transmission at the source of the traffic.
  • the establishment of the paths for the Ethernet connections is executed using an offline provisioning system (referred to herein as a Value Management System or VMS).
  • the system can set up paths using any networking technology, such as MPLS, GRE, L2TP or pseudowires.
  • The VMS provides all the benefits of an offline provisioning system, but it also understands business rules and is kept constantly in sync with the network.
  • the VMS is optimized to be used with connection oriented switching devices, such as the WiMAX switch described above, which implements simple low-cost switching without requiring complex dynamic routing and signaling protocols.
  • FIG. 7 depicts one implementation of a path selection algorithm.
  • The goal is to set up a main path and an optional backup path 3100.
  • a main path and an optional backup path are requested for a designated pair of end points (the source and the destination). These paths must satisfy designated customer requirements or SLA, including: a selected Class of Service (CoS), Bandwidth Profile (CIR, CBS, EIR, EBS), and QoS parameters (e.g., Delay, Delay Jitter, Loss Ratio, Availability).
  • the goal of the algorithm is to find a path that allows the provider to satisfy the subscriber's requirements in the most efficient way.
  • The most efficient path (from the provider's point of view) is the one that optimizes a combination of selected provider criteria such as cost, resource utilization or load balancing.
  • Step 3102 retrieves site-wide policies that capture non-path specific rules that the provider specifies for the entire network. These policies reflect the provider's concerns or priorities, such as:
  • Step 3103 retrieves path-specific policies which can override site-wide policies for the particular path being requested. For example:
  • Step 3104 retrieves network state and utilization parameters maintained by the VMS over time.
  • The VMS discovers nodes and queries their resources, keeps track of reserved resources as it sets up paths through the network (utilization), and keeps in sync with the network by processing updates from nodes.
  • the available information that can be considered when searching for paths includes:
  • Utilization - VMS keeps track of resource allocation (and availability) per node, per link, per CoS.
  • Step 3105 prunes invalid links and nodes from the network using a pruning subroutine, such as the one illustrated in FIG. 8 described below.
  • Step 3107 takes each additive constraint (delay, jitter, loss, availability) separately and finds the path with the optimal value for that constraint using, for example, Dijkstra's algorithm. For example, if the smallest path delay is higher than the requested delay, step 3108 determines that no path will be found to satisfy all constraints and sets a "Path setup failed" flag at step 3109.
  • the results of the single-constraint optimal paths can be saved for later use.
  • the algorithm can also gather information about nodes and links by performing a linear traversal of nodes/links to compute for example minimum, maximum and average values for each constraint.
  • the previously collected information can be used during the full multiple-constraint search to improve and speed up decisions if there is not enough time to perform an exhaustive search (e.g.: when to stop exploring a branch and move on to a different one). See SCALABLE PATH SEARCH ALGORITHM below.
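  • A sketch of the per-constraint pre-check of steps 3107-3109 (the graph representation is assumed): if even the single-constraint optimum found by Dijkstra's algorithm violates its bound, the full multi-constraint search cannot succeed and can fail fast:

      import heapq

      def dijkstra(graph, src, dst, metric):
          # Standard single-metric shortest path; graph[u] -> {v: {metric: value}}.
          dist = {src: 0.0}
          heap = [(0.0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == dst:
                  return d
              if d > dist.get(u, float("inf")):
                  continue
              for v, attrs in graph.get(u, {}).items():
                  nd = d + attrs[metric]
                  if nd < dist.get(v, float("inf")):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return float("inf")

      def feasibility_precheck(graph, src, dst, limits):
          # If even the per-constraint optimum violates its bound, no path
          # can satisfy all constraints together, so the search fails fast.
          for metric, limit in limits.items():      # e.g. delay, jitter, loss
              if dijkstra(graph, src, dst, metric) > limit:
                  return False                      # "Path setup failed" (3109)
          return True                               # full search still needed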
  • The multi-constraint search algorithm of FIG. 9 is then performed 3110, as described below. If no path is found that satisfies all the subscriber's requirements, the path setup fails 3112.
  • a main path is selected 3113 from the list of candidate paths. Any provider-specific policies such as cost, load-balancing, resource utilization can be taken into account to select the most efficient path from the list.
  • If a backup path is required 3114, it is selected from the list of candidate paths.
  • The system will select the path that is most distinct from the main path and that also optimizes the carrier-specific policies. Once both the main and backup paths have been selected, they can be provisioned.
  • FIG. 8 illustrates one implementation of the function used to prune the links and nodes from the network prior to searching for the paths 3200.
  • The algorithm starts with the entire network topology 3201 and excludes nodes/links based on explicit exclusion lists or exclusion rules.
  • For each node in the network, the steps within a control loop 3202 are executed.
  • the node exclusion list 3203 and the node exclusion policies 3204 are consulted. If the node is to be excluded 3205, it is removed 3206 from the set of nodes to be considered during the path search. As indicated by the control loop 3202, the steps 3203-3206 are repeated for each node in the network.
  • For each link in the network, the steps within a control loop 3207 are executed.
  • the link exclusion list and exclusion policies 3208 are consulted. If the link is to be excluded 3209, or violates non-additive or additive constraints 3210, it is removed 3211 from the set of links to be considered during the path search.
  • An example of a link violating a non-additive constraint is when its bandwidth is smaller than the requested path bandwidth.
  • An example of a link violating an additive constraint is when its delay is longer than the total delay requested for the entire path.
  • The steps 3208-3211 are repeated for each link in the network.
  • the links are pruned recursively and then nodes with no incoming or outgoing links are pruned. For each node still remaining the steps within a control loop 3212 are executed. If there are no links 3213 reaching the node, it is removed 3214. As indicated by the control loop 3212, the steps 3213-3214 are repeated for each node in the network. Then for each link still remaining the steps within a control loop 3215 are executed. If it does not reach any node 3216, it is removed 3217. As indicated by the control loop 3215, the steps 3216-3217 are repeated for each link in the network.
  • the steps 3212-3217 are repeated while at least a node or link has been removed 3218.
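  • A compact sketch of the pruning subroutine of FIG. 8 (data structures are illustrative): excluded nodes and constraint-violating links are dropped first, then orphaned nodes and dangling links are removed repeatedly until nothing changes:

      def prune(nodes, links, excluded_nodes, link_ok):
          # links: dict (u, v) -> attrs; link_ok: predicate over link attrs
          # (e.g. bandwidth >= requested, delay <= total path delay).
          nodes = set(nodes) - set(excluded_nodes)
          links = {e: a for e, a in links.items()
                   if e[0] in nodes and e[1] in nodes and link_ok(a)}
          changed = True
          while changed:                              # loop 3212-3218
              changed = False
              connected = {n for e in links for n in e}
              for n in list(nodes):
                  if n not in connected:              # no incoming/outgoing link
                      nodes.discard(n)
                      changed = True
              for e in list(links):
                  if e[0] not in nodes or e[1] not in nodes:
                      del links[e]                    # link reaches no node
                      changed = True
          return nodes, links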
  • FIG. 9 illustrates one example of a search algorithm that finds all paths satisfying a set of constraints 3300.
  • the algorithm sees the network as a graph of nodes (vertices) connected by links (edges).
  • a possible path is a sequence of links from the source node to the destination node such that all specified constraints are met.
  • As links are considered for a path, additive constraints are accumulated 3307 to ensure they do not exceed the requirements.
  • the algorithm checks both directions of the path to ensure that the constraints are met in both directions.
  • the algorithm starts by initializing the list of all candidate paths 3301, and the first path to explore starting at the source node 3302.
  • the algorithm traverses the network graph depth first 3304 looking at each potential end-to-end path from source to destination considering all constraints simultaneously in both directions. Other graph traversing techniques, such as breadth first, could also be used.
  • the steps within the depth first search control loop 3304 are executed.
  • One of the adjacent nodes is selected 3303 to explore a possible path to the destination. If the node is not yet in the path 3305, for each additive constraint the steps within a control loop 3306 are executed.
  • the path total for that constraint is updated 3307 by adding the value of that constraint for the link to the node being considered.
  • If the updated total exceeds the requirement for any constraint 3308, the node being considered is not added to the path.
  • The steps 3307-3308 are repeated for each additive constraint. If all constraints for the path are satisfied, the node is added to the path 3309 and one of its adjacent nodes will be considered next 3304 & 3305. If the node just added happens to be the destination node 3310, the path is a candidate, so it is added to the list of all candidate paths 3311. As indicated by the control loop 3304, the steps 3305-3311 are repeated until all nodes are traversed depth first.
  • the algorithm backtracks to the last node with adjacent nodes not yet explored 3304, thus following a depth first traversal order. Once the whole graph representing the relevant subset of the network has been explored, the set of all candidate paths is returned 3312.
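  • A runnable sketch of the depth-first multi-constraint search of FIG. 9 (one direction only for brevity; the described algorithm checks both directions of the path):

      def find_all_paths(graph, src, dst, limits):
          # graph[u] -> {v: {constraint: value}}; limits: constraint -> bound.
          candidates = []

          def dfs(node, path, totals):
              if node == dst:
                  candidates.append(list(path))              # step 3311
                  return
              for nxt, attrs in graph.get(node, {}).items():
                  if nxt in path:                            # loop avoidance (3305)
                      continue
                  new_totals = {m: totals[m] + attrs[m] for m in limits}
                  if any(new_totals[m] > limits[m] for m in limits):
                      continue                               # constraint violated (3308)
                  path.append(nxt)
                  dfs(nxt, path, new_totals)
                  path.pop()                                 # backtrack (3304)

          dfs(src, [src], {m: 0.0 for m in limits})
          return candidates                                  # step 3312

      g = {"S": {"A": {"delay": 3, "jitter": 1}, "B": {"delay": 1, "jitter": 3}},
           "A": {"D": {"delay": 3, "jitter": 1}},
           "B": {"D": {"delay": 1, "jitter": 3}},
           "D": {}}
      print(find_all_paths(g, "S", "D", {"delay": 7, "jitter": 5}))
      # [['S', 'A', 'D']]: S-B-D is pruned because its jitter total (6) exceeds 5.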
  • the standard depth first graph traversal goes through links in a fixed arbitrary order.
  • a "sort function” can be plugged into the depth first search graph to decide which link from the current node to explore next (FIG. 10).
  • the algorithm shown in FIG. 10 is based on the algorithm shown in FIG. 9 described above, with only two differences to highlight:
  • a new check is added to ensure the algorithm does not go over a specified time limit 3400.
  • A sort function 3401 is used to choose the order in which adjacent nodes are explored.
  • the sort function 3401 can range from simple random order, which might improve load balancing over time, all the way to a complex composite of multiple heuristic functions with processing order and relative weight dynamically adjusted based on past use and success rate.
  • A heuristic function could, for example, look at the geographical location of nodes in order to first explore links that point in the direction of the destination. Heuristic functions can also make use of the information gathered while running single-constraint searches using Dijkstra's algorithm, or by traversing the nodes and links and computing minimum, maximum and average values for various parameters. Another heuristic function could take into account performance stats collected over time to improve the chances of selecting a more reliable path. There is an unlimited variety of heuristic functions that can be used, and the network-wide view is available to the VMS to improve the chances of making better decisions along the path search. A practical limitation is that the sorting time should be much shorter than the path exploration time.
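  • One possible plug-in sort function, as a sketch (the coordinate table is an assumed input): neighbors that reduce the straight-line distance to the destination are explored first:

      import math

      def geographic_sort(dst, neighbors, coords):
          # Explore first the neighbors closest (straight line) to the
          # destination; ties keep the original order.
          def remaining(n):
              (x1, y1), (x2, y2) = coords[n], coords[dst]
              return math.hypot(x2 - x1, y2 - y1)
          return sorted(neighbors, key=remaining)

      coords = {"B": (5, 1), "C": (1, 4), "Z": (6, 0)}
      print(geographic_sort("Z", ["B", "C"], coords))  # ['B', 'C']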
  • Two passes of the multiple-constraint path search may be needed when looking for main and backup paths in a very large network. Even if the provider's criteria to optimize both main and backup paths are identical, there is one crucial difference: to achieve effective protection, the backup path must be as distinct from the main path as possible, overriding all other concerns. Since not all paths can be examined (time limit), choosing which ones to explore first is important. When looking for candidates for backup path the main path must already be known, so being distinct from it can be used as a selection criterion. This means that the first pass results in a list of candidates from which the main path is selected, and the second pass results in a list of candidate paths from which the backup path is selected, as distinct as possible from the main path.
  • the VMS runs through its path search algorithm to find candidate paths that satisfy the subscriber's requirements. Then it selects from those candidate paths the main and backup paths that optimize the provider's goals such as cost and load balancing.
  • This selection makes use of business policies and data available at the time a new service is requested, as reflected by the constraints used in the algorithms described above in connection with the flow charts in FIGs. 7-9. Those constraints are modified from time to time as the provider's policies change. Over time, more paths are requested, some paths are torn down, the network evolves (nodes and links may be added or removed), and the provider's concerns may change. The paths that remain active still satisfy the original subscriber's requirements, but the network resources may not be optimally utilized in the new context, i.e., new paths may exist that are more efficient under the current constraints established to reflect the provider's policies.
  • the VMS runs the same path search algorithm to find the optimal paths that would be allocated at the present time to satisfy existing services (end-to-end connections).
  • the VMS compares the currently provisioned paths with the new ones found. If the new paths are substantially better, the VMS suggests that those services be re-implemented with the new paths. The provider can then select which services to re-implement.
  • One example of an algorithm for optimizing utilization of the network resources is illustrated by the flow chart in FIG. 11. The algorithm starts at step 3501 with an empty list of services to optimize. For each existing service, the steps within a control loop 3502 are executed.
  • Step 3504 determines whether the new paths are more efficient than the ones currently provisioned. If the answer is affirmative, the service is added to the list of services worth optimizing at step 3505. If the answer is negative, the loop is completed for that particular service. As indicated by the control loop 3502, the steps 3503-3506 are repeated for each existing service.
  • Step 3507 determines whether the provider wants the service to be re-implemented using the newly found paths. If the answer is affirmative, the service is rerouted at step 3508. If the answer is negative, the loop is completed for that particular service. As indicated by the control loop 3506, the steps 3507 and 3508 are repeated for each of the services on the list generated at step 3505, and then the algorithm is completed at step 3509.
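  • A compact sketch of the FIG. 11 loop, with the cost function, the provider-approval step and the reroute action left as hypothetical hooks:

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    paths: list

def optimize_services(services, find_best_paths, cost, provider_approves, reroute):
    worth_optimizing = []                              # step 3501: empty list
    for svc in services:                               # control loop 3502
        new_paths = find_best_paths(svc)               # run the path search
        if cost(new_paths) < cost(svc.paths):          # step 3504: better?
            worth_optimizing.append((svc, new_paths))  # step 3505
    for svc, new_paths in worth_optimizing:            # control loop 3506
        if provider_approves(svc, new_paths):          # step 3507
            reroute(svc, new_paths)                    # step 3508
    # step 3509: done

# Toy demo: cost is total hop count and the provider approves everything.
svc = Service("S1", paths=[["A", "B", "C", "D"]])
optimize_services(
    [svc],
    find_best_paths=lambda s: [["A", "D"]],
    cost=lambda paths: sum(len(p) for p in paths),
    provider_approves=lambda s, p: True,
    reroute=lambda s, p: setattr(s, "paths", p),
)
print(svc.paths)  # [['A', 'D']]
```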
  • each network element (e.g., a WiMAX switch) is able to (1) pair each forward 4202 and backward 4203 path of a connection at each node in the path of a connection and (2) allow for injection of messages in the backward direction of a connection from any node in the path.
  • This capability, depicted in FIG. 14, is referred to herein as creating a "hairpin" 4303.
  • the knowledge of the pairs at each node 4201 allows creating loopbacks, and then using control packets at any point in a connection in order to perform troubleshooting. Loopbacks can also be created by the VMS or manually generated. The pairing is possible in this case because the VMS ensures that both paths of the connection (forward and backward) take the same route, which is not currently the case for Ethernet.
  • the hairpin allows nodes in the network to send messages (such as port-state) back to their ingress point by sending packets back along the hairpin path, without the need to hold additional information about the entire path, to consult higher-level functions outside of the datapath, or to involve the transit end of the path. If the path is already bi-directional, no hairpin is required for pairing.
  • ingress matching criteria 4407: a check to see whether the packet in question is to be acted upon or is to simply pass through the rule subsystem with no action.
  • action mechanism 4408: called when a packet meets the criteria of a packet to be acted upon.
  • An example of an action mechanism is one in which a rule is placed on an ingress interface looking for a prescribed bit-pattern within the packet. When the system receives a packet that matches the prescribed bit-pattern, the action mechanism is run.
  • This action mechanism may be one that directs the system to send this packet back out the interface at which it was received after altering it in some way. All other packets pass through the system unaffected.
  • Rules can be placed at each node along a path to use the hairpin to loop back one or more types of packet, or all packets crossing a port. Rules can also be activated by types of packets or by other rules, allowing complicated rules that activate other rules upon receiving an activation packet and deactivate rules upon receiving a deactivation packet. As exemplified in FIG. 14, the pairing 4302 allows the system to create flexible in-service or out-of-service continuity checking at any node 4201 in the path.
  • Rule checking points can be set along a path from ingress to egress to allow continuity checks 4304 at each hop along a path.
  • Each rule can consist of looking for a different pattern in a packet, and only hairpin traffic matching that bit pattern, as defined in the ingress matching criteria 4407 of each individual rule.
  • the pattern or ingress matching criteria can consist of a special pattern in the header or the body of a packet, or any way to classify the packet against a policy to identify that it should be turned around on the hairpin. This allows a network operator to check each hop while live traffic runs on the path unaffected (in-service loopback), or to provide an out-of-service loopback that sends all traffic back on the hairpin interfaces.
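  • A minimal sketch of such a rule subsystem, with ingress matching criteria and an action mechanism (the names, packet representation and magic pattern are hypothetical):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    matches: Callable[[bytes], bool]   # ingress matching criteria (4407)
    action: Callable[[bytes], bytes]   # action mechanism (4408)

def process_ingress(packet, rules, forward, hairpin):
    """First matching rule wins; packets matching no rule pass through
    the rule subsystem unaffected."""
    for rule in rules:
        if rule.matches(packet):
            hairpin(rule.action(packet))   # turn the packet around
            return
    forward(packet)

# In-service loopback: only packets carrying a magic pattern are hairpinned.
MAGIC = b"\xca\xfe"
rules = [Rule(matches=lambda p: p.startswith(MAGIC),
              action=lambda p: p)]   # the action could also alter the packet
process_ingress(MAGIC + b"probe", rules,
                forward=lambda p: print("forwarded", p),
                hairpin=lambda p: print("hairpinned", p))
process_ingress(b"live traffic", rules,
                forward=lambda p: print("forwarded", p),
                hairpin=lambda p: print("hairpinned", p))
```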
  • a snake of a path 4401 can be created using a number of rules, e.g., one rule causing a specific type of packet to be put on a path that exists on the node, and then other rules directing that packet to other paths or hairpins on the node to allow a network operator to "snake" packets across a number of paths to test connectivity of the network.
  • This also allows a test port with a diagnostic device 4402 to be inserted into a node to source (inject) and receive traffic that does not require insertion on the ingress or egress of a path.
  • a rule 4405 is placed on the ingress port for a path 4401 that sends all traffic, regardless of bit pattern, out a specific egress port 4404 towards a downstream node 4201b.
  • An additional rule 4403 placed on the node 4201b ingress port sends all traffic with a unique (but configurable) bit-pattern out the interface's hairpin back towards node 4201.
  • a final rule 4406 sends all traffic matching the aforementioned unique bit pattern out the interface connected to the test device 4402.
  • the hairpin is always available at each node for each connection. Rules can be enabled (and later disabled) to look for specific types of control messages (e.g., loop-back) and act on them. Hairpins can also be used for other mechanisms described below, such as protection switching, network migration and flow control.
  • E-LINE PROTECTION CONFIGURATION AND OPERATION One embodiment provides sub-50 msec path protection switching for Ethernet point-to-point path failures in order to meet the reliability requirements of the carriers without using a large amount of control messages. Furthermore, the back-up path is established and triggered based not only on available resources but also on business policies as described above. The back-up path is calculated by the VMS, which configures the switches' 4201 control plane with the protected path, rather than via typical signaling mechanisms. The back-up path is set up by the VMS and does not require use of routing protocols such as OSPF. Once the back-up path is set up, the VMS is not involved in the protection switching. The process is illustrated in FIG. 16.
  • When a node 4201 detects a link failure 501 (via any well-known method, such as loss of signal), it creates a control message 4504 and sends the message back along the system using the hairpin 4303 (as described above) to indicate to the source endpoint of each connection using the failed link that it needs to switch to the back-up path. The switchover to the back-up path 4505 is then done instantaneously. If the uni-directional paths are MPLS label-switched paths, the hairpin allows the system to send the message back to the path's origination point without the need to consult a higher-level protocol.
  • the node can use the same mechanisms to notify the sources that the primary path failure has been restored.
  • the connection can revert to the primary path.
  • the VMS is notified via messaging that the failure has occurred.
  • the VMS can be configured to make the current path the primary path and to recalculate a new back-up path for the connection after some predetermined amount of time has elapsed and the primary path was not restored (e.g., after 1 minute).
  • the information about the new back-up path is then sent down to the nodes without impact to the current data flow, and the old configuration (failed path) is removed from the configuration.
  • the VMS can also be configured to find a new primary path and send a notification for switch over.
  • the backup protection path remains as configured previously. If a User-Network-Interface (UNI) or Network-Network-Interface (NNI) at an end-point of a path fails, the endpoint can also use hairpins to send a control message to the traffic source to stop the traffic flow until the failure is restored or a new path to the destination can be created by the VMS, which is notified of the failure via messaging.
  • the Switch 4201 can create duplicate packet streams using the active and the backup paths. Sequence numbers are used to re- combine the traffic streams and provide a single copy to the server application. If the application does not provide native sequence numbers, they are added by the system.
  • One implementation of this behavior is shown in FIG. 17.
  • a client application 4600 has a stream of packets destined for a server application 4601.
  • a packet duplication routine 4610 creates two copies of the packets sourced by the client application 4600.
  • a sequence number is added to these duplicate packets, and one copy is sent out on an active link 4620 and another is sent out on a backup link 4621.
  • One example of a packet duplication routine is depicted in FIG. 18.
  • a packet is received at the packet duplication system 4701 from a Client Application 4600.
  • the packet is examined by the system 4702, which determines whether an appropriate sequence number is already contained in the packet (this is possible if the packet type is known to contain sequence numbers, such as TCP). If no well-known sequence number is contained in the packet, a sequence number is added by the packet duplication system 4703.
  • the packet is then duplicated by being sent out both links, 4620 and 4621, first on the active link at 4704 and then on the back-up link at 4705. If there is no need to add a sequence number at 4702, because the packet already contains such a number, the routine proceeds directly to duplicate the packet at 4704.
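  • The duplication logic of FIG. 18 can be sketched as follows (packets are modeled as dictionaries, and the native-sequence-number check is a simplified stand-in):

```python
import itertools

SEQ_COUNTER = itertools.count()

def has_native_sequence(packet):
    # Simplified stand-in: some packet types (e.g., TCP) already carry
    # usable sequence numbers.
    return "seq" in packet

def duplicate(packet, active_link, backup_link):
    """Add a sequence number only if needed (4702/4703), then send on
    the active link first and the backup link second (4704/4705)."""
    if not has_native_sequence(packet):
        packet = {**packet, "seq": next(SEQ_COUNTER), "seq_added": True}
    active_link(packet)
    backup_link(packet)

duplicate({"payload": "hello"},
          active_link=lambda p: print("active", p),
          backup_link=lambda p: print("backup", p))
```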
  • a packet recombination routine 4611 listens for the sequenced packets and provides a single copy to the server application 4601. It removes the sequence numbers if these are not natively provided by the client application 4600 data.
  • One example of a packet recombination routine is shown in FIG. 19.
  • a packet is received 4801 by the packet recombination system 4611 from the packet duplication system 4610.
  • the system examines the sequence number and determines whether it has received a packet with the same sequence number before 4802. If it has not, the fact that it has now received a packet with this sequence number is recorded. If the sequence number was added by the packet duplication system 4803, that sequence number is removed from the packet and the system sends the packet to the Server Application 4804. If the sequence number was not added by the packet duplication system, the packet is sent to the Server Application 4601 unchanged 4805. If a new packet is received by the packet recombination system 4802 with a sequence number that was recorded previously, the packet is immediately discarded, as it is known to be a duplicate 4806.
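  • A matching sketch of the recombination logic of FIG. 19 (a real system would also age old sequence numbers rather than remember them forever):

```python
def make_recombiner(deliver):
    """Deliver the first copy of each sequence number, discard the
    duplicate (4806), and strip sequence numbers that the duplication
    system added (4803/4804)."""
    seen = set()
    def receive(packet):
        if packet["seq"] in seen:
            return                                  # duplicate: discard
        seen.add(packet["seq"])
        if packet.get("seq_added"):                 # added by duplication system
            packet = {k: v for k, v in packet.items()
                      if k not in ("seq", "seq_added")}
        deliver(packet)
    return receive

receive = make_recombiner(deliver=print)
receive({"payload": "hello", "seq": 0, "seq_added": True})  # delivered
receive({"payload": "hello", "seq": 0, "seq_added": True})  # silently discarded
```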
  • This system performs the duplication at the more relevant packet level, as opposed to the bit level of previous implementations (data systems transport packets, not raw bit-streams); both streams are received and examined, and the duplicate packet is actively discarded after it has been received at the far end.
  • a switch or link failure does not result in corrupted packets while the system switches to the other stream, because the system simply stops receiving duplicated packets.
  • Ethernet transport services provide point-to-point connections.
  • the attributes of this service are defined using an SLA, which may define delay, jitter and loss objectives along with a bandwidth commitment that must be achieved by the telecommunication provider's network.
  • the service is carried using a connection-oriented protocol across the access network.
  • FIG. 20 illustrates the key attributes of an Ethernet transport service.
  • the telecommunications provider establishes a path between a client application 5100 and a server 5101.
  • the upstream path 5160 carries packets from the client application 5100 to the server application 5101 via switch 5120, switch 5140, sub-network 5150 and switch 5130.
  • Switch 5120 is the edge switch for the client application 5100. It is the entry point to the network.
  • Switch 5140 is a transit switch for the client application 5100.
  • the downstream path 5161 carries packets from the server 5101 to the client 5100 via the switch 5130, sub-network 5150, switch 5140 and switch 5120.
  • FIG. 21 illustrates the elements required in the switch 5120 to provide Ethernet transport services.
  • the switch 5120 contains a process controller 5121 which controls the behavior of the switch. All the static behavior (e.g., connection classification data and VLAN provisioning) is stored in a persistent storage 5122 to ensure that the switch 5120 can restore its behavior after a catastrophic failure.
  • the switch 5120 connects the client application 5100 to a sub-network 5150 via data plane 5124.
  • Client packets are received on a client link 5140 and passed to a packet forwarding engine 5125.
  • Based upon the forwarding policy (e.g., VLAN 5 on port 3 is forwarded on MPLS interface 5 using label 60) downloaded by the process controller 5121 from the persistent storage 5122 via the control bus 5123, the packet forwarding engine 5125 forwards the client application 5100 data to the network link 5141.
  • the behavior of the switch 5120 can be changed over time by a management application 5110 over a management interface 5124 to add, modify or delete Ethernet transport services or to change policies. These changes are stored in the persistent storage 5122 and downloaded to the data plane 5124.
  • a traffic admission mechanism 5401 (see FIG. 20) is required.
  • the traffic admission mechanism monitors the traffic on the client link 5140.
  • the switch 5120 classifies all the traffic in its packet forwarding engine 5125 and passes this to the traffic management block 5126.
  • the traffic management block 5126 manages all the queues and the scheduling for the network link 5141.
  • One implementation is shown in FIG. 22. Once a customer's traffic is classified, it is monitored using either a classic policing function or a traffic shaper in the traffic admission mechanism 5401 (FIG. 20).
  • the advantage of using a traffic shaper at the edge instead of a policing function is that it smooths the traffic sent by the client application to make it conform to the specified traffic descriptors, making the system more adaptive to the application's needs.
  • the traffic shaper is included in the nodes, and is therefore within the control of the network provider which can rely on its behavior.
  • per-customer queues 5405 are provided and are located where the traffic for a connection is admitted to the network.
  • a scheduler 5402 (FIG. 22) is responsible for selecting which packet to transmit next from any of the connections that are ready to send on the outgoing link 5403 (NNI, UNI or Trunk).
  • Each outgoing link 5403 requires a scheduler 5402 that is designed to prioritize traffic.
  • the prioritization takes into account the different CoS and QoS supported such that delay and jitter requirements are met.
  • the scheduler 5402 treats traffic that is entering the network at that node with lower priority than transit traffic 5406 that has already gone over a link, since the transit traffic has already consumed network resources, while still ensuring fairness at the network level.
  • There are many known schedulers capable of prioritizing traffic. However, the additional ability to know which traffic is entering the network at a given node is particularly useful, given the connection-oriented, centrally managed view of the system.
  • the scheduler 5402 can queue the traffic from each connection separately or combine traffic of multiple connections within a single intermediate queue 5404.
  • Multiple intermediate queues 5404 can be used to store packets that are awaiting transmission on the link. At this point in switch 5120, traffic is aggregated, and the rate at which traffic arrives at the queuing point may exceed the rate at which it can leave the queuing point. When this occurs, the intermediate queues 5404 can monitor their states and provide feedback to the traffic admission mechanism 5401.
  • FIG. 23 shows an example of how the queue state is monitored. For each queue, multiple sets of ON/OFF thresholds are configured. When the queue size reaches the ON1 threshold, a flow control message indicating that this level has been reached is sent to the traffic admission function for this connection. The state is stored for this connection to avoid a continuous flow of control messages to this connection. For each subsequent packet passed to this queue, if the queue state of the connection does not match the local queue state, a flow control message is transmitted back to its traffic admission function, and its local queue state is updated.
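  • A sketch of this threshold hysteresis (threshold values are illustrative; a flow control message is sent only when the computed level changes):

```python
def queue_level(depth, on_thresholds, off_thresholds, current_level):
    """ON thresholds raise the congestion level as the queue grows; OFF
    thresholds (set lower, for hysteresis) clear it as the queue drains.
    The caller sends a flow control message only on a level change."""
    level = current_level
    while level < len(on_thresholds) and depth >= on_thresholds[level]:
        level += 1                         # crossed the next ON threshold
    while level > 0 and depth <= off_thresholds[level - 1]:
        level -= 1                         # cleared via an OFF threshold
    return level

ON, OFF = [100, 200], [50, 150]            # ON1/ON2 and OFF1/OFF2 depths
state = 0
for depth in (120, 220, 160, 40):
    new = queue_level(depth, ON, OFF, state)
    if new != state:
        print(f"depth={depth}: send flow control, level {state} -> {new}")
        state = new
```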
  • Flow control messages are very small and are sent at the highest priority on the hairpin of the connection. The probability of losing a backward flow control message while the forward path is active is very low. Flow control messages are only sent to indicate different levels of congestion, providing information about the state of a given queuing point.
  • the traffic admission mechanism 5401 When a message is received by the traffic admission mechanism 5401, it reduces the rate at which the customer's traffic is admitted to the network. In general, this is accomplished by reducing the rate of EIR traffic admitted. For a policing function, more traffic is discarded at the ingress client link 5140 (FIG. 21). For a traffic shaper, packets are transmitted to the network at a reduced rate.
  • If an intermediate queue 5404 continues to grow beyond the ON2 threshold, another message is sent, and the traffic admission mechanism further reduces the customer's EIR.
  • When the queue drains below an OFF threshold, a control message is sent to indicate that this level is cleared, and the traffic admission mechanism starts to slowly ramp up.
  • More thresholds allow for a more granular control of the traffic shapers, but can lead to more control traffic on the network. Different threshold combinations can be used for different types of traffic (non-real-time vs. real-time).
  • One simplistic implementation of this technique is to generate control messages when packets are being discarded for a given connection, because the queue overflowed or some congestion control mechanism has triggered it.
  • the response of the traffic admission mechanism to a flow control message is engineered based on the technique used to generate the message.
  • the traffic admission mechanism steps down the transmission rate each time an ON message is received, and steps up the transmission rate each time an OFF message is received.
  • the size of the steps can be engineered.
  • the step down can be exponential while the step up is linear.
  • the step can also be proportional to the traffic descriptors to ensure fairness. The system slowly oscillates between the increase and decrease of the rates until some applications need less bandwidth. If the number of connections using the flow controlled queue is available to each traffic admission mechanism, the steps can be modified accordingly. With a larger number of connections, a smaller step is required since more connections are responsive to the flow control.
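  • One possible realization of this step policy, assuming exponential decrease and linear increase, with the up-step scaled by the traffic descriptors and damped by the number of connections (the base-step formula is an assumption):

```python
def adjust_rate(rate, cir, eir, message, n_connections=1):
    """Exponential step down on an ON message, linear step up on an OFF
    message; the up-step shrinks when more connections share the
    flow-controlled queue, since all of them respond."""
    step = (eir - cir) / (10 * n_connections)   # hypothetical base step
    if message == "ON":
        return max(cir, rate / 2)               # exponential decrease
    return min(eir, rate + step)                # linear increase

rate = 80.0                                     # Mbps, illustrative
for msg in ("ON", "OFF", "OFF"):
    rate = adjust_rate(rate, cir=10.0, eir=100.0, message=msg, n_connections=4)
    print(msg, round(rate, 2))                  # 40.0, 42.25, 44.5
```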
  • In order for the flow control mechanism to work end-to-end, it may be applied to all queuing points existing in the path. That is, the flow control mechanism is applied to all points where packets are queued and congestion is possible, unless non-participating nodes are handled using the network migration technique described below.
  • FIG. 24 illustrates the flow control mechanism described above.
  • Flow control messages 5171 from a queuing point 5404 in a switch 5130 in the path of a connection are created when different congestion levels are reached and relieved.
  • the flow control message 5171 is conveyed to the connection's traffic admission mechanism 5401 which is located where the connection's traffic enters the network.
  • the control messages 5171 can be sent directly in the backward path of the connection using a hairpin 5139 (as described above). This method minimizes the delay before the flow control information reaches the traffic admission mechanism 5401. The quicker the flow control information reaches the traffic admission mechanism 5401, the more efficient is the control loop.
  • the traffic admission mechanism 5401 keeps all the information, but responds to the most congested state. For example, when one node notifies an ON2 level and another node is at ON3, the traffic admission mechanism adjusts to the ON3 level until an OFF3 is received. If an OFF3 is received for that node before the other node, which was at the ON2 level, has sent an OFF2, then the traffic shaper remains at the ON2 level.
  • each interim node can aggregate its upstream queue states and announce the aggregate queue state downstream.
  • FIG. 25 depicts an example of this implementation.
  • Each connection using an intermediate queue 5154 or 5404 maintains a local queue state and a remote queue state. If a queue 5404 reaches the ON1 threshold, a flow control message is generated and sent downstream to the traffic admission mechanism 5401.
  • When a switch 5151 receives the flow control message, it updates the remote congestion state for the customer connection. If the local state of the connection is less than the remote connection state, the flow control message is forwarded to the traffic admission mechanism 5401. Subsequently, if the intermediate queue 5154 enters the ON2 state, the local connection state is higher than the remote connection state. As a result, an additional flow control message is communicated downstream.
  • For the congestion indication to clear, both queues need to clear their congestion state.
  • When the queue 5404 clears its congestion level, a flow control message is generated to indicate the new queue state.
  • the switch 5151 receives the flow control message and clears the remote queue state for the customer connection.
  • a flow control message is not forwarded, since the local queue state is still in the ON2 state.
  • when the local queue state changes, such as reaching OFF2, a flow control message is generated and sent to the traffic admission mechanism 5401, which affects the shaping rate.
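  • A sketch of this interim-node aggregation, assuming the aggregate is the worst (maximum) of the local and remote states (representation and names hypothetical):

```python
def on_flow_control(conn, remote_level, forward):
    """An interim node keeps a local and a remote congestion state per
    connection and forwards a flow control message downstream only when
    the worst-case (max) aggregate state changes."""
    before = max(conn["local"], conn["remote"])
    conn["remote"] = remote_level
    after = max(conn["local"], conn["remote"])
    if after != before:
        forward(after)

conn = {"local": 0, "remote": 0}
report = lambda lvl: print("forward level", lvl)
on_flow_control(conn, 1, report)   # forwarded: aggregate rises to 1
conn["local"] = 2                  # local queue has since reached ON2
on_flow_control(conn, 0, report)   # suppressed: local ON2 still dominates
```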
  • the rate at which the queue grows can be used to evaluate the need for flow control. If the growth rate is beyond a predetermined rate, then a flow control message indicating the growth rate is sent to the traffic admission mechanism 5401. When the growth rate is reduced below another predetermined rate, then another message indicating a reduction in the rate is sent to the traffic admission mechanism 5401.
  • multiple thresholds can be configured to create a more granular control loop. But the number of thresholds is directly proportional to the amount of traffic consumed by the control loop.
  • Another technique consists of having each queuing point calculate how much traffic each connection should be sending and periodically send control messages to the traffic shapers to adjust to the required amount. This technique is more precise and allows better network utilization, but it requires per-connection information at each queuing point, which can be expensive or difficult to scale.
  • When a new connection is established, there are different ways its traffic admission mechanism can join the flow control.
  • One approach is to have the traffic admission mechanism start at its minimum rate (CIR) and slowly attempt to increase the transmission rate until it reaches the EIR or until it receives a flow control message, at which point it continues to operate according to the flow control protocol.
  • Another, more aggressive approach is to start the rate at the EIR and wait until a congestion control message is received to reduce the rate to the level required by the flow control protocol.
  • a third approach consists of starting to send at the CIR and having the nodes programmed to send the actual link state when they first detect that a connection is transmitting data.
  • Each approach generates different behavior in terms of speed of convergence to the fair share of the available bandwidth.
  • the queuing point can include the number of connections sharing this queue when the flow control is triggered, which can help the traffic shaper establish a more optimal shaping rate.
  • the traffic admission mechanism can extend the flow control loop in FIG. 25 by conveying the status of the shaper, e.g., using an API, to the upper-layer application either in real-time or periodically, such that an application can be designed to optimize its flow based on the network status. Even if the network cannot trust the reaction of the application, the information can be used to avoid loss at the traffic shaper, preventing the resending of packets and therefore optimizing the network end-to-end.
  • the robust flow control mechanism meets several objectives, including:
  • fairness among all connections, where the fairness definition can be implemented in a variety of modes;
  • keeping the per-connection intelligence and complexity at the edge, minimizing the per-connection information required at each queuing point.
  • When a traffic shaper is used as the traffic admission mechanism, delay can be added to packets at the network edge.
  • a flexible traffic shaping algorithm can take delay into account when transmitting the packets into the network to ensure that SLA delay budgets are not violated.
  • An example of such a flexible traffic shaper algorithm is shown in FIG. 26.
  • the function is triggered when each packet reaches the front of a shaper queue 5101. At this point the time to wait before that packet would conform to CIR and EIR is calculated at 5102, in variables C and E, respectively. There are several known methods to perform these calculations.
  • If the packet can no longer be delivered in useful time, it is discarded at 5110, as it is deemed no longer useful for the client application. If C is lower than a predetermined threshold WaitForCIR, determined at 5104, then the shaper waits and sends the packet unmarked at 5109. Otherwise, if E is greater than another predetermined threshold WaitForEIR, determined at 5105, then the packet is discarded at 5110. If the difference in wait time between compliance to CIR and EIR is less than another predetermined threshold DIFF, determined at 5106, then the packet is sent as CIR after a delay of C at 5109. Otherwise the packet is sent, marked low priority, after a delay of E at 5107. In either case, once the packet is transmitted, the shaper timers are updated at 5108.
  • the settings of these thresholds can enable or disable the different behaviors of the algorithm. The threshold settings also affect the average delay for packets to get through the shapers and the number of marked packets sent into the network.
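  • The decision logic of FIG. 26 can be sketched as follows (the discard bound max_wait is a hypothetical stand-in for the initial discard test, which the text leaves implicit; threshold names follow the text):

```python
def shape_decision(c_wait, e_wait, wait_for_cir, wait_for_eir, diff, max_wait):
    """Returns the disposition of the packet at the head of the shaper
    queue, given the wait C until it conforms to CIR and the wait E
    until it conforms to EIR (step numbers refer to FIG. 26)."""
    if c_wait > max_wait:
        return ("discard", 0)              # no longer useful (5110)
    if c_wait < wait_for_cir:
        return ("send_unmarked", c_wait)   # wait, then send as CIR (5109)
    if e_wait > wait_for_eir:
        return ("discard", 0)              # (5110)
    if c_wait - e_wait < diff:
        return ("send_unmarked", c_wait)   # close enough: send as CIR (5109)
    return ("send_marked", e_wait)         # low priority after delay E (5107)

print(shape_decision(c_wait=2.0, e_wait=0.5, wait_for_cir=5.0,
                     wait_for_eir=10.0, diff=1.0, max_wait=50.0))
# ('send_unmarked', 2.0)
```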
  • the shaper can respond to flow control messages as described above (FIG. 24); the algorithm shown still applies, except that the actual sending of the packet might be delayed further, depending on the rate at which the network allows the shaper to send.
  • the traffic shaper can perform different congestion control actions depending upon the type of traffic that it is serving. For example, a deep packet inspection device could be placed upstream from the traffic shaper and use different traffic shapers for different types of traffic sent on a connection. For TCP/IP type traffic, the traffic shaper could perform head-of-the-line drop to more quickly notify the application that there is congestion in the network. Other types of congestion controls such as Random Early Discard could be applied for other types of traffic as configured by the operator. Another configuration could limit the overall amount of Ethernet multicast/broadcast traffic admitted by the traffic shaper. For example, the shaper could only allow 10% broadcast and 30% multicast traffic on a particular customer's connection over a pre-defined period.
  • NETWORK MIGRATION Network migration is a critical consideration when introducing systems that include an end-to-end flow control protocol into an existing network.
  • the flow control protocol must operate, even sub-optimally, if legacy (or non-participating) nodes in the subnetwork 5150 are included in the path (see FIG. 27).
  • the path across the sub-network 5150 can be established in a number of ways depending on the technology deployed.
  • the path can be established statically using a VLAN, an MPLS LSP or a GRE tunnel via a network management element.
  • the path can also be established dynamically using RSVP-TE or LDP protocol in an MPLS network, SIP protocol in an IP network or PPPoE protocol in an Ethernet Network.
  • Another approach is to multiplex paths into a tunnel which reserves an aggregate bandwidth across a sub-network 5150.
  • In MPLS, an MPLS-TE tunnel can be established using RSVP-TE.
  • In IP, an L2TP connection can be created between the switches 5120 and 5130. The paths are mapped into L2TP sessions.
  • In Ethernet, a VLAN can be reserved to connect traffic between switches 5120 and 5130. Paths can then use Q-in-Q tagging over this VLAN to transport traffic through the sub-network 5150.
  • switch 5130 uses its hairpin 5139 to determine the behavior of that path and estimate the congestion level and failures.
  • switch 5120 inserts a periodic timestamped control message 5170 in the path being characterized. The control message is sent at the same priority as the traffic. The switch 5120 does not need to insert control messages for each connection going from the downstream to the upstream node; only one for each priority of traffic is needed.
  • an analysis function 5138 calculates different metrics based on the timestamp.
  • the analysis function can calculate various metrics based on the timestamps, for example delay and jitter, and combine them to estimate the level of congestion.
  • the analysis function can also estimate the average growth in the delay, i.e., the slope of the delay curve, which provides an estimate as to when the non-participating elements are reaching the knee of the curve (FIG. 28).
  • the analysis function can also keep a history of delay and loss measurements based on different time-of-day periods. For example, during working hours the network may be generally more loaded but congestion builds more slowly, while in the evening the load on the network is lighter but congestion (e.g., due to simultaneous downloads) is immediate and more severe.
  • the analysis function 5138 estimates congestion on the sub-network 5150, assuming that the packet delay follows the usual trend as a function of network utilization shown in FIG. 28. Under this assumption, delays through a network rise sharply once utilization exceeds approximately 60-70%. The analysis function can estimate when the sub-network 5150 reaches different levels of utilization.
  • If the analysis function 5138 determines that the upstream path is becoming congested, the switch 5130 generates an indication to switch 5120, using a protocol consistent with the flow control implemented in the participating nodes. It can then trigger flow control notifications to the source end-point by sending a high-priority flow control message 5171 in the downstream path 5161, as per the flow control description above.
  • For absolute delay measurements, both nodes 5120 and 5130 need synchronized clocks, such that the timestamp provided by the upstream node 5120 can be compared to the clock of the downstream node 5130. If this capability is not available, the unsynchronized clocks of the upstream and downstream nodes can still be used, and only a relative delay value is measured. That is sufficient to estimate possible congestion or delay growth in the non-participating element.
  • Another technique is for the downstream node to look at the time it is expecting messages (e.g., if they are sent every 100 msec.) and compare that to the time it is actually receiving the messages. That also provides estimates on the delay, jitter and delay growth through the non-participating element.
  • the drift in clocks from both nodes is insignificant compared to the delay growth encountered in congestion.
  • This information can be used even for non-delay-sensitive connections, as it allows estimating the congestion in the non-participating elements. For delay-sensitive connections, the information can be used to trigger a reroute to a backup path when the QoS is violated. The analysis function is set up when the path is created. If the path is statically provisioned, this policy is provided to the switch 5130 using the management interface. If the path is dynamically established, this policy may be signaled in-band with the path-establishment messages.
  • If the analysis function detects that periodic control messages are no longer received, it can indicate to the source via control messages that the path in the non-participating element has failed. This mechanism is particularly useful when the path across sub-network 5150 is statically provisioned.
  • Sequence numbers can be added to the control message 5170 so that the analysis function can detect that some of the control messages are lost. The analysis function can then also estimate the loss probability on the path and take more aggressive flow control or protection switching actions in order to alleviate/minimize the loss.
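  • A sketch of such an analysis function, assuming periodic, timestamped, sequence-numbered probes and unsynchronized clocks (only relative delay values are used; the data structures are hypothetical):

```python
import statistics

def analyze_probes(send_times, arrival_times, seqs):
    """Relative delay, jitter, delay growth and loss estimated from
    periodic timestamped, sequence-numbered control messages received
    from the upstream node."""
    delays = [a - t for t, a in zip(send_times, arrival_times)]
    jitter = statistics.pstdev(delays)
    # average growth of delay between consecutive probes (delay slope)
    growth = statistics.mean(d2 - d1 for d1, d2 in zip(delays, delays[1:]))
    expected = seqs[-1] - seqs[0] + 1
    loss = 1 - len(seqs) / expected
    return {"jitter": jitter, "delay_growth": growth, "loss": loss}

# Five probes received (probe 3 was lost); each carries its send timestamp.
# Delay creeps upward as congestion builds in the non-participating element.
ts = [0.0, 0.1, 0.2, 0.4, 0.5]
rx = [t + 0.010 + 0.002 * i for i, t in enumerate(ts)]
print(analyze_probes(ts, rx, seqs=[0, 1, 2, 4, 5]))
```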
  • flow-controlled network elements can be deployed on a pay-as-you-grow basis around existing network nodes.
  • the network provides the ability to assess an application's bandwidth requirement dynamically.
  • service providers can leverage data available from the traffic shapers to enable new revenue streams.
  • A system which leverages the end-to-end flow control elements is shown in FIG. 29. This figure contains the elements required to establish a service between a client application 5100 and a server application 5101. Examples of these applications are: 1. A VoIP phone connecting to a registrar or proxy server via SIP. 2. A video set-top box registering with a video middleware server via HTTP. 3. A PC connecting to the Internet via PPPoE or DHCP.
  • the client application 5100 connects to an access network 5150 through a switch 5120, which operates as described above.
  • a network management element 5110 oversees all the switches in the sub-network 5150. It provides an abstraction layer for higher-level management elements to simplify the provisioning and maintenance of services implemented in the access network.
  • Access to the server application 5101 is controlled by a service signaling element 5130 and a client management system 5112.
  • the service signaling element 5130 processes requests from the client application 5100. It confers with the client management system 5112 to ensure that the client application 5100 can access the server application 5101.
  • the client management system 5112 can also initiate a billing record (i.e., a CDR) as these events occur.
  • the service management system 5111 oversees the network and client management systems 5110 and 5112 to provision and maintain new services. Both need to be updated to allow a new client to access the server application 5101.
  • One method to leverage flow control is for the service management system 5111 to be notified when a particular client's service demands continually exceed or underrun the service level agreement between the client and the service provider.
  • One possible method to implement this is depicted in FIG. 30, which leverages the network definitions of FIG. 29.
  • a process controller 5121 polls the data plane 5124 for client application 5100 statistics at 5200. These statistics are stored for future reference at 5201 and passed to the network management system 5110 at 5202. If the customer's demand exceeds the current service level agreement at 5203, the network management system 5110 informs the service management system 5111 at 5204. The service management system 5111 contacts the client management system 5112 at 5205. If customer management decides to contact the client application 5100 at 5206, the service management element 5111 contacts the customer at 5207. If the customer decides to change the service level agreement at 5208, the service management element 5111 contacts the network management system 5110 to increase the bandwidth at 5209. The network management system 5110 changes the bandwidth profile for the customer and informs the process controller 5121 in the switch 5120 at 5210. The process controller 5121 changes the provisioning of the customer in the traffic management element 5126 at 5211.
  • Information provided by the traffic shaper 5126 can include, for example:
  • average delay for packets in the traffic shaper queue;
  • % of time packets are marked by the traffic shaper, if applicable;
  • % of time packets are dropped at the head of the traffic shaper, if applicable.
  • the above information can be manipulated in different types of averaging periods and is sufficient to evaluate whether a connection's traffic descriptors match the applications' requirements for a given time period.
  • the information can also be used to figure out time-of-day and time-of-year usage patterns to optimize the network utilization.
  • the per-client statistics and the server application usage statistics can be aggregated by the service management system to create "Time-of-Day" and "Time-of-the-Year" usage patterns. These patterns can be used to "re-engineer" a network on demand to better handle the ongoing service demand patterns.
  • One possible method to implement this is depicted in FIG. 31.
  • the service management system 5111 decides to change the level of service for a set of customers at 5200 and 5201. For each customer in the list, the service management system 5111 contacts the client management system 5112 to retrieve the customer profile at 5203. The service management system 5111 programs the changes into network management at 5204, and these are passed to the process controller 5121 at 5205. The process controller 5121 changes the provisioning of the customer in traffic management 5126 at 5206. This process is repeated at 5207 and 5208 until all customers have been updated. For some applications, it is desirable to perform these changes in real-time and allow the changes to persist for a limited period of time. An example of an application of this nature is "on-line" gaming: the client requires a low-bandwidth, low-delay connection type to the server application. When the client logs into the server, the service signaling engine can tweak the access network to classify and provide the correct QoS treatment for this gaming traffic. One method to implement this is depicted in FIG. 32.
  • the client application 5100 initiates a service to the service application 5101 at 5200.
  • the switch 5120 passes this request through the packet network 5150 to the signaling server 5130 at 5201 and 5202.
  • the service signaling element 5130 validates the request using the client management system 5112 at 5203 and 5204. Assuming the request is valid, the service request is passed to the server application 5101 at 5205.
  • the server application 5101 decides to tweak the customer's profile and contacts the service management system 5111 to modify the client access link 5140 at 5206.
  • the service management system 5111 contacts the client management system 5112 to retrieve the customer profile at 5207 and programs the changes into the network management at 5208.
  • the change is passed to the process controller 5121 at 5209, which changes the provisioning of the customer in traffic management 5126 at 5210, and the classification of the customer's traffic in the packet forwarding block at 5211.
  • Network management also adjusts all other switches in the packet access network 5150 to ensure smooth service at 5212.
  • An alternative to handling these QoS changes in real-time is to allow the process controller 5121 to participate in the service signaling path between the client application 5100 and the server application 5101.
  • the service provider could create a logical network (i.e., a VLAN) to handle a particular application. Examples of these on-demand applications are:
  • VoIP signaled using SIP: the service provider can map this to a high-priority, low-latency path.
  • Peer-to-peer protocols using the BitTorrent protocol: the service provider can map this to a best-effort service.
  • the service management system 5111 can provision this logical network in the access network 5150.
  • the service management system 5111 decides to create, and instructs the network management system 5110 to implement, a new virtual LAN at 5200.
  • the network management system determines which customers are affected and which switches require the new virtual LAN at 5201. Since the client application 5100 is affected, the switch 5120 is modified to apply the new LAN at 5202.
  • the change is passed to the process controller 5121 at 5203 and stored in persistent storage at 5204 to ensure the behavior can be restored across reboots. Then the changes are provisioned in traffic management 5126 and the packet forwarding block at 5205 and 5206.
  • the process controller changes the classification of the customer's traffic in the packet forwarding block at 5211 to add the new virtual LAN.
  • the real-time handling of the client application request is effected as depicted in FIG. 34.
  • This process affects the behavior of the switch 5120 based upon service signaling.
  • the client application signals the server application 5101 to start a new session at 5200.
  • This packet arrives in the switch 5120 via the client link 5140 and is processed by the dataplane.
  • the packet forwarding block classifies the packet and sees that the packet matches the virtual LAN at 5201 and 5202.
  • the request is forwarded to the process controller which identifies the packet as a request for the new virtual LAN at 5203, 5204 and 5205.
  • This request is forwarded to server application 5101 via the access network 5150 at 5206.
  • the request is accepted and the response is forwarded back to the client application 5100 via the access network 5150 at 5207.
  • the packet forwarding block identifies the packet and forwards it to the process controller 5121 at 5209 and 5210.
  • the process controller 5121 notices that the client application's request has been accepted by the server application 5101, and changes are provisioned in traffic management 5126 and in the packet forwarding block at 5211 and 5212. Then the response is forwarded to the client application 5100 at 5213.
  • One method to differentiate SLAs with a particular class of service is to provision a flow control handling policy.
  • This policy can be unique for every path, providing different handling at each flow control congestion level. This flexibility, however, makes traffic engineering more difficult.
  • the policies can be defined as templates to reduce the complexity and limit the amount of system resources needed to store and implement these policies.
  • a connection with the larger weight increases its transmission rate more slowly than one with the smaller weight.
  • the use of a weight allows differentiating connections with the same traffic descriptors.
  • Another implementation, which does not require the use of an additional weight parameter, decreases and increases the transmission rate in proportion to the existing traffic descriptors, i.e., the weight is calculated as a function of CIR and EIR.
  • For example, a weight for connection i could be calculated as follows: W_i = (EIR_i − CIR_i) / AccessLinkRate_i. Using such a weight calculation, the connections that have the lower CIR have lower service weights and therefore trigger the flow control more aggressively. It is assumed in this example that such connections pay a lower fee for their service.
  • the nodes could randomly choose which connections to send flow control information to (to increase or decrease the rate) and use the service weights to increase or decrease the probability that a given type of connection receives a flow control message.
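  • A sketch of the weight formula and the randomized notification, under the assumed convention that a larger weight (a larger EIR share) makes a connection more likely to receive a back-off message (the text leaves the direction of the bias open):

```python
import random

def weight(cir, eir, access_link_rate):
    # W_i = (EIR_i - CIR_i) / AccessLinkRate_i, as in the text
    return (eir - cir) / access_link_rate

def pick_connections_to_notify(conns, k=1):
    """Randomly choose which connections receive a flow control
    message, with the service weight biasing the selection."""
    w = [weight(c["cir"], c["eir"], c["link"]) for c in conns]
    return random.choices(conns, weights=w, k=k)

conns = [
    {"name": "gold",   "cir": 50.0, "eir": 100.0, "link": 100.0},  # W = 0.50
    {"name": "bronze", "cir": 5.0,  "eir": 100.0, "link": 100.0},  # W = 0.95
]
print(pick_connections_to_notify(conns)[0]["name"])  # 'bronze' more often
```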
  • This sub-class differentiation can be implemented in several ways, such as, for example, having the nodes agnostic to the sub-class differentiation and triggering backoff messages to all the connections, with each connection reacting according to its sub-class's policy.
  • Another way is to have the nodes knowledgeable of the subclass differentiation and trigger the flow control based on each connection's policies. That implementation requires more information on a per connection basis at the node, along with multiple flow control triggers, but the nodal behavior is more predictable. These mechanisms allow a carrier to deploy many different types of sub-classes within one service type and charge different customers based on the preferential treatment their connections are receiving.
  • Ethernet services provide point-to-multipoint connections referred to as E-LAN or V-LAN.
  • In an E-LAN, any site can talk to any one or more sites at a given time.
  • the attributes of this service are defined using an SLA which defines a bandwidth profile and may include quality of service (QoS) objectives such as delay, jitter and loss which must be achieved by the service provider.
  • an E-LAN that includes multiple customer sites 101a, 101b, 101c, 101d and 101e is created using an Ethernet network having a number of Ethernet switching elements 6201a, 6201b, 6201c, 6201d and 6201e.
  • the Ethernet switching elements, which perform Media Access Control (MAC) switching, can be implemented using native Ethernet or VPLS (a layer 2 VPN over MPLS using pseudowires).
  • Transit nodes, such as the node 6202, performing physical encapsulation switching (e.g., label switching), are used to carry the trunks 6203a, 6203b, 6203c and 6203d to establish the E-LAN connectivity.
  • Trunks can be established using pseudowires (MPLS/L2TP), GRE (Generic Routing Encapsulation) or native Ethernet.
  • Each of the Ethernet switching elements 6201a-6201e has one or more subscriber interfaces, such as interfaces 6210a, 6210b, 6210c, 6210d and 6210e, and one or more trunk interfaces, such as interfaces 6212a, 6212b, 6212c, 6212d, 6214a, 6214b, 6214c and 6214d.
  • the transit nodes do not have trunks or subscriber ports; they provide physical connectivity between a subscriber and an Ethernet switch or between Ethernet switches.
  • the term “upstream” refers to the direction going back to the source of the traffic, while the term “downstream” refers to the direction going to the destination of the traffic, as depicted in FIG. 37.
  • the Ethernet switches 6201a-6201e and the transit node 6202 in the network perform flow control, at the subscriber and trunk levels, respectively, to adapt the transmission rates of the subscribers to the bandwidth available.
  • One way to evaluate the available bandwidth is for each node to monitor the queue sizes at the egress ports, or to analyze changes in the size of, and/or delay through, the queue (e.g., the queue 6204) that buffers traffic outgoing on a link.
  • Status control messages can include a status level for a link ranging from 0 to n (n ≥ 1) depending on the size of the queue, where level 0 indicates that there is no or little contention for the link and level n means the queue is nearly full.
  • at each subscriber interface there is a shaper 6206a, 6206b, 6206c, 6206d or 6206e that is responsible for keeping track of the status of the MCM and adapting the traffic rate accordingly.
  • at each trunk interface 6212a, 6212b, 6212c and 6212d there is a pair of trunk shapers 6207a, 6207b, 6207c, 6207d and 6208a, 6208b, 6208c, 6208d, which dynamically shape the traffic according to the TCM.
  • the trunk shapers modify the transmission rate between maximum and minimum trunk rates depending on the status control message. The modification can be done in multiple programmable steps using multiple status levels. The maximum can correspond to the physical line rate, or the configured LAN (CIR+EIR).
  • the minimum trunk rate can correspond to CIR.
  • the shaper is a rate-limiting function that can be implemented using traffic shaping functions or policing functions.
  • a subscriber shaper 6206a, 6206b, 6206c, 6206d or 6206e is located at each subscriber interface 6210a, 6210b, 6210c, 6210d and 6210e to shape the traffic between maximum and minimum subscriber rates depending on the status level indicated in the MCM.
  • the maximum and minimum subscriber rates can correspond to the (CIR+EIR) and CIR, respectively.
  • the subscriber shaper can also be located on the customer side of the UNI.
  • the MCM information can also be carried over the UNI to the application to extend the control loop.
  • Each transit node tracks the status of its queues and/or buffers and, if necessary, it sends the TCM according to the level of congestion and current state to the upstream nodes such that the corresponding shapers adapt to the status of the transit node.
  • Each Ethernet switch tracks the status of its queues and creates MCM notifications when necessary, and also tracks downstream MCM status for each destination MAC address. It also takes into account the TCM provided by the transit nodes.
  • the network can include one or more E-LANs with the same characteristics.
  • POINT-TO-POINT TRANSMISSION Based on the network described above (FIG. 36), an example of a flow-controlled point-to-point transmission is provided. Assume site 5 sends traffic to site 4, but there is contention on the link 103 at node F 6202 because of other traffic (not shown) sharing the same link 103.
  • the queue in node F 6204 grows, and a TCM is sent to node E 6201e, which maintains the status information and controls the trunk shaper 6207c.
  • the trunk shaper adapts the rate according to the TCM.
  • the trunk shaper queue grows, and node E enters the flow control mode at the MAC level and starts generating MCMs upstream.
  • Another embodiment sends the MCM upstream immediately upon receipt. Since traffic is originating from site 5 101e, node E sends the MCM notification to node C 6201c through interface E1 6214a.
  • Node E keeps track of the trunk status through the trunk shaper queue status and also keeps track of the MCM.
  • Each Ethernet switch maintains a table of the status of its links.
  • Node E notifies node C of the contention through MCM.
  • the subscriber shaper 6206d controlling the site 5 to site 4 transmissions is modified according to the MCM.
  • Node C updates its MAC status table to reflect the status of the path to site 4. It also includes information to indicate that the user-network interface (UNI) 6210d to site 5 has been notified.
  • An example of the table is as follows:
  • Each node can optionally keep track of congestion downstream, and only convey worst-case congestion upstream to the source in order to reduce overhead.
  • the node uses the table to set the shapers of any subscribers trying to transmit to any destination. Now, if site 2 6201a starts to send traffic to site 4, node E 6201e receives the traffic on interface E3 6214c. Since interface E3 is not in the list of interfaces notified in its status table, node E sends a MCM to node C with the current level as it starts to receive traffic. Node C 6201c updates its status table accordingly and modifies the shaper 6206b controlling the traffic of site 2. Node E updates its table accordingly:
  • node A already knows the status of the path and sets the shaper of site 1 accordingly, while updating the table as follows:
  • Congestion can happen not only on a transit node but at any egress link, including egress to a subscriber UNI and, depending on the node architecture, at intermediate points in the nodes.
  • When the congestion level at the queue 6204 reduces, the transit node 6202 indicates the lower level of congestion to node E 6201e via a TCM.
  • Node E updates the corresponding entry in the status table, updates its trunk shaper and sends a status control message with the lower level to all interfaces that had been notified of the status (in this case E1 and E3), such that node A and node C can also update their entries and control their shapers.
  • the entries in the tables are updated as follows:
  • When a node ages a MAC address because it has not been used by any of its interfaces for a predetermined amount of time, it clears the corresponding entry in the status table and sends a MCM to all the interfaces that had been using the address.
  • the node also indicates to the downstream node that it has aged the MAC address so that the downstream node can remove it from its list of notified interfaces.
  • When node E 6201e determines that site 5 101e has not sent traffic for a predetermined amount of time, node E ages the MAC address of site 5 and sends a notification to interface C1 clearing the status.
  • the node E status table becomes:
  • Another possibility is to have a shaper per congestion level.
  • a subscriber port can have stations speaking to many other stations in the LAN. While these conversations proceed, each of these stations can be experiencing different levels of congestion across the LAN. As such, the subscriber traffic is metered at different rates based upon the destination addresses, with each rate set according to the congestion level reported for the corresponding destination.
  • Ethernet switches maintain a table of which of its interfaces is used to reach a given MAC address.
  • for group addresses, the node maintains a table, per group address, of any of its interfaces that are used to reach all the destinations.
  • node E maintains the following table and notifies the source (site 1) of the worst-case congestion of all sites, in this case level 3:
  • This scheme applies to multicast, anycast and broadcast. Any type of multicast addressing can be used for these tables, including PIM, IGMPv2, IGMPv3 and MLD.
  • Node A uses the table to set the dynamic shaper to the rate corresponding to the worst-case status among the group's addresses, in this case site 4, and therefore slows down transmission to all destinations to account for the slowest leaf.
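  • A sketch of this worst-case-of-group metering, assuming the shaper rate is interpolated linearly between EIR at level 0 and CIR at the highest congestion level (the interpolation rule is an assumption; the text only requires shaping to the worst-case status):

```python
def shaper_rate(cir, eir, congestion_levels, n_levels=3):
    """Meter group-addressed traffic at the rate implied by the worst
    congestion among all destination leaves, interpolating linearly
    between EIR (level 0) and CIR (level n_levels)."""
    worst = max(congestion_levels.values())
    return eir - (eir - cir) * min(worst, n_levels) / n_levels

# Hypothetical status table at node A for one group: site 4 is worst.
status = {"site2": 0, "site3": 1, "site4": 3, "site5": 0}
print(shaper_rate(cir=10.0, eir=100.0, congestion_levels=status))  # 10.0 (CIR)
```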
  • EFFICIENT SLA-BASED RATE-LIMITED E-LAN The entire E-LAN bandwidth may be limited by rate-limiting the E-LAN to an E-LAN bandwidth profile.
  • the bandwidth profile defines the maximum rate that the trunk shapers 6208a, 6208b, 6208c and 6208d can achieve at the ingress to the E-LAN.
  • All the sites that are transmitting through the E-LAN are rate-limited by the trunk bandwidth such that the total E-LAN bandwidth consumed is controlled with limited or no packet loss in the E-LAN, allowing for deterministic engineering and reduced packet loss.
  • Using a rate-limited E-LAN allows a significant reduction of the bandwidth allocated to a single E-LAN while maintaining the SLA.
  • For example, each ingress trunk 6208a, 6208b, 6208c and 6208d is rate-limited to 10 Mbps over a link speed of 100 Mbps.
  • Each site transmitting on the ingress trunk sends within its bandwidth profile using the subscriber shaper, but it is further confined within the rate limited by the trunk, so the total traffic transmitted by all the sites on one node does not exceed the 10 Mbps.
  • With rate limiting at the ingress trunk shaper, MCM flow control can be triggered when multiple subscribers are sending simultaneously.
  • the trunks internal to the E-LAN 6207a, 6207b, 6207c and 6207d are also rate-limited to further control the amount of bandwidth that is consumed by the E-LAN within the network.

Abstract

A system is provided for establishing connections in a telecommunications system that comprises a network for transporting communications between selected subscriber connections, and a wireless network for coupling connections to the network. The network and the wireless network are interfaced with a traffic management element and at least one radio controller shared by connections, the traffic management element and the radio controller forming a single integrated network element. Connections are routed from the wireless network to the network via the single integrated network element. Also provided is a system for selecting connection paths in a telecommunications network having a multitude of nodes connected by a multitude of links. The system identifies several constraints on the connection paths between source nodes and destination nodes; it also identifies the paths that satisfy all the constraints for a connection path between a selected source node and a selected destination node.
EP07734176A 2006-03-31 2007-03-30 Smart ethernet edge networking system Withdrawn EP2008476A4 (fr)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US78839006P 2006-03-31 2006-03-31
US79649206P 2006-05-01 2006-05-01
US11/446,316 US8218445B2 (en) 2006-06-02 2006-06-02 Smart ethernet edge networking system
US11/495,479 US7729274B2 (en) 2006-03-31 2006-07-28 Smart ethernet mesh edge device
US11/500,052 US8509062B2 (en) 2006-08-07 2006-08-07 Smart ethernet edge networking system
US11/519,503 US9621375B2 (en) 2006-09-12 2006-09-12 Smart Ethernet edge networking system
US11/706,756 US8363545B2 (en) 2007-02-15 2007-02-15 Efficient ethernet LAN with service level agreements
PCT/IB2007/000855 WO2007113645A2 (fr) Smart ethernet edge networking system

Publications (2)

Publication Number Publication Date
EP2008476A2 true EP2008476A2 (fr) 2008-12-31
EP2008476A4 EP2008476A4 (fr) 2010-11-03

Family

ID=38564026

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07734176A Withdrawn EP2008476A4 (fr) Smart ethernet edge networking system

Country Status (3)

Country Link
EP (1) EP2008476A4 (fr)
CA (1) CA2648197A1 (fr)
WO (1) WO2007113645A2 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11973548B2 (en) 2022-02-03 2024-04-30 T-Mobile Usa, Inc. Adjusting a configuration of a wireless telecommunication network

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2925808A1 (fr) * 2007-12-21 2009-06-26 Thomson Licensing Sas Method of communication in a network comprising a primary network and a secondary network
US9426029B2 (en) 2008-11-12 2016-08-23 Teloip Inc. System, apparatus and method for providing improved performance of aggregated/bonded network connections with cloud provisioning
US10122829B2 (en) 2008-11-12 2018-11-06 Teloip Inc. System and method for providing a control plane for quality of service
US9929964B2 (en) 2008-11-12 2018-03-27 Teloip Inc. System, apparatus and method for providing aggregation of connections with a secure and trusted virtual network overlay
US9692713B2 (en) 2008-11-12 2017-06-27 Teloip Inc. System, apparatus and method for providing a virtual network edge and overlay
US20110041002A1 (en) 2009-08-12 2011-02-17 Patricio Saavedra System, method, computer program for multidirectional pathway selection
WO2011118470A1 (fr) 2010-03-26 2011-09-29 イーグル工業株式会社 Mechanical seal device
US8737214B2 (en) * 2010-07-12 2014-05-27 Teloip Inc. System, method and computer program for intelligent packet distribution
FI124649B (en) 2012-06-29 2014-11-28 Tellabs Oy Method and system for finding the lowest hop-per-bit rate
CN106922211A (zh) 2014-09-17 2017-07-04 特洛伊普公司 System, apparatus and method for providing improved performance of aggregated/bonded network connections with multiprotocol label switching
WO2017083975A1 (fr) * 2015-11-19 2017-05-26 Teloip Inc. System, apparatus, and method for providing a virtual network edge and overlay with virtual control plane
US10904142B2 (en) 2015-11-19 2021-01-26 Adaptiv Networks Inc. System, apparatus and method for providing a virtual network edge and overlay with virtual control plane
WO2018006163A1 (fr) * 2016-07-06 2018-01-11 Teloip Inc. System and method for providing a control plane for quality of service
CN107786456B (zh) * 2016-08-26 2019-11-26 南京中兴软件有限责任公司 Flow control method and system, packet switching device and user equipment
CN109525505B (zh) * 2017-09-19 2022-12-16 中国移动通信有限公司研究院 Data transmission method, electronic device and storage medium
CN110875862B (zh) 2018-08-31 2022-07-19 中兴通讯股份有限公司 Message transmission method and apparatus, and computer storage medium
EP3629548B1 (fr) * 2018-09-25 2021-07-07 Siemens Aktiengesellschaft Method for transmitting data in an industrial communication network, and communication device
CN114697772A (zh) * 2020-12-31 2022-07-01 华为技术有限公司 Service configuration method and apparatus
CN112996146A (zh) * 2021-02-05 2021-06-18 深圳市瀚晖威视科技有限公司 Video transmission system and method for wireless AP and AP networking
EP4064758A1 (fr) * 2021-03-23 2022-09-28 Deutsche Telekom AG Method for routing at least one data packet from a sender to a receiver via a plurality of nodes in a telecommunications network, telecommunications network or system, routing and relay node, relay node, node-load detector or node-load collector, computer program, and computer-readable medium
US20230291781A1 (en) * 2022-03-11 2023-09-14 Qualcomm Incorporated Techniques for multimedia uplink packet handling
CN115473855B (zh) * 2022-08-22 2024-04-09 阿里巴巴(中国)有限公司 Network system and data transmission method
CN116419363B (zh) * 2023-05-31 2023-08-29 深圳开鸿数字产业发展有限公司 Data transmission method, communication device and computer-readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001056233A1 (fr) * 2000-01-31 2001-08-02 Aeptec Microsystems Inc. Broadband communications access device
US7474616B2 (en) * 2002-02-19 2009-01-06 Intel Corporation Congestion indication for flow control
US7536723B1 (en) * 2004-02-11 2009-05-19 Airtight Networks, Inc. Automated method and system for monitoring local area computer networks for unauthorized wireless access
US20050220096A1 (en) * 2004-04-06 2005-10-06 Robert Friskney Traffic engineering in frame-based carrier networks
US20060099972A1 (en) * 2004-11-08 2006-05-11 Nair Sureshbabu P Method and apparatus for paging an idle mobile unit in a distributed network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1124356A2 (fr) * 2000-02-08 2001-08-16 Lucent Technologies Inc. Guaranteed type of service in a packet system
US6904286B1 (en) * 2001-07-18 2005-06-07 Cisco Technology, Inc. Method and system of integrated rate control for a traffic flow across wireline and wireless networks
WO2004057817A2 (fr) * 2002-12-19 2004-07-08 Koninklijke Philips Electronics N.V. Protection of real-time data in wireless networks
US20040170179A1 (en) * 2003-02-27 2004-09-02 Klas Johansson Radio resource management with adaptive congestion control
US20040170186A1 (en) * 2003-02-28 2004-09-02 Huai-Rong Shao Dynamic resource control for high-speed downlink packet access wireless channels

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO2007113645A2 *

Also Published As

Publication number Publication date
EP2008476A4 (fr) 2010-11-03
CA2648197A1 (fr) 2007-10-11
WO2007113645A3 (fr) 2008-04-10
WO2007113645A2 (fr) 2007-10-11

Similar Documents

Publication Publication Date Title
WO2007113645A2 (fr) Smart ethernet edge networking system
US10044593B2 (en) Smart ethernet edge networking system
US7046665B1 (en) Provisional IP-aware virtual paths over networks
EP3172876B1 (fr) Assurance d'équité pour une mise en forme de trafic dynamique
US8218445B2 (en) Smart ethernet edge networking system
US7082102B1 (en) Systems and methods for policy-enabled communications networks
US7920472B2 (en) Quality of service network and method
US7729274B2 (en) Smart ethernet mesh edge device
US20110205933A1 (en) Routing method and system
US8363545B2 (en) Efficient ethernet LAN with service level agreements
US8139485B2 (en) Logical transport resource traffic management
US7742416B2 (en) Control of preemption-based beat-down effect
WO2013059683A1 (fr) Routage étendu à multiples trajets pour encombrement et qualité de service dans des réseaux de communication
Jaron et al. QoS-aware multi-plane routing for future IP-based access networks
CN112714072B (zh) 一种调整发送速率的方法及装置
Nakayama et al. Path selection algorithm for shortest path bridging in access networks
Nakayama Rate-based path selection for shortest path bridging in access networks
Bakiras et al. Quality of service support in differentiated services packet networks
Nabeshima et al. Performance improvement of active queue management with per-flow scheduling
Priano et al. Exploring Routing and Quality of Service in Software-Defined Networks: Interactions with Legacy Systems and Flow Management
Yuzhong et al. Traffic engineering with constraint-based routing in DiffServ/MPLS network
Nakayama et al. Rate-based path selection based on link metric optimization in SPBM
Sabri QoS in MPLS and IP Networks
Beyene et al. Improving Quality of Service of Border
Stathopoulos et al. A Network-Driven Architecture for the Multicast Delivery of Layered Video and A Comparative Study

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20081029

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

RIN1 Information on inventor provided before grant (corrected)

Inventor name: GIROUX, NATALIE

Inventor name: BARRETT, CHRIS

Inventor name: FRANK, PABLO

Inventor name: KATZ, FABIO

Inventor name: YOUNG, KEN

Inventor name: SMITH, BRIAN

Inventor name: ARSENEAULT, JIM

A4 Supplementary search report drawn up and despatched

Effective date: 20100930

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20101001