US20140293791A1 - Ethernet differentiated services conditioning - Google Patents

Ethernet differentiated services conditioning

Info

Publication number
US20140293791A1
US20140293791A1 (application US14/302,995)
Authority
US
United States
Prior art keywords
frame
ethernet
priority
network device
forwarder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/302,995
Inventor
Sameh Rabie
Osama Aboul-Magd
Bashar Abdullah
Baghdad Barka
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
RPX Clearinghouse LLC
Original Assignee
Rockstar Consortium US LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US53748704P (U.S. Provisional Application Ser. No. 60/537,487)
Priority to US10/868,568 (US8804728B2)
Application filed by Rockstar Consortium US LP
Priority to US14/302,995 (US20140293791A1)
Publication of US20140293791A1
Assigned to RPX CLEARINGHOUSE LLC. Assignment of assignors' interest (see document for details). Assignors: BOCKSTAR TECHNOLOGIES LLC, CONSTELLATION TECHNOLOGIES LLC, MOBILESTAR TECHNOLOGIES LLC, NETSTAR TECHNOLOGIES LLC, ROCKSTAR CONSORTIUM LLC, ROCKSTAR CONSORTIUM US LP
Application status: Abandoned

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L 12/00 Data switching networks
            • H04L 12/28 Data switching networks characterised by path configuration, e.g. local area networks [LAN], wide area networks [WAN]
              • H04L 12/46 Interconnection of networks
                • H04L 12/4641 Virtual LANs, VLANs, e.g. virtual private networks [VPN]
          • H04L 41/00 Arrangements for maintenance or administration or management of packet switching networks
            • H04L 41/08 Configuration management of network or network elements
              • H04L 41/0893 Assignment of logical groupings to network elements; Policy based network management or configuration
            • H04L 41/50 Network service management, i.e. ensuring proper service fulfillment according to an agreement or contract between two parties, e.g. between an IT-provider and a customer
              • H04L 41/5019 Ensuring SLA
                • H04L 41/5022 Ensuring SLA by giving priorities, e.g. assigning classes of service
          • H04L 43/00 Arrangements for monitoring or testing packet switching networks
            • H04L 43/08 Monitoring based on specific metrics
              • H04L 43/0823 Errors
                • H04L 43/0829 Packet loss
              • H04L 43/0852 Delays
              • H04L 43/0876 Network utilization
                • H04L 43/0894 Packet rate
          • H04L 45/00 Routing or path finding of packets in data switching networks
            • H04L 45/74 Address processing for routing
          • H04L 47/00 Traffic regulation in packet switching networks
            • H04L 47/10 Flow control or congestion control
              • H04L 47/12 Congestion avoidance or recovery
              • H04L 47/20 Policing
              • H04L 47/22 Traffic shaping
              • H04L 47/24 Flow control or congestion control depending on the type of traffic, e.g. priority or quality of service [QoS]
                • H04L 47/2408 Different services, e.g. type of service [ToS]
                • H04L 47/2425 Service specification, e.g. SLA
                  • H04L 47/2433 Allocation of priorities to traffic types
                • H04L 47/2441 Flow classification
              • H04L 47/32 Packet discarding or delaying
          • H04L 49/00 Packet switching elements
            • H04L 49/90 Queuing arrangements
              • H04L 49/901 Storage descriptor, e.g. read or write pointers

Abstract

A network includes an edge node configured to define per-hop behaviors using a set of bits in an Ethernet header of a frame, and a core node configured to receive the frame and to forward the frame according to the per-hop behaviors. The network can also include a defined set of differentiated service classes, each differentiated service class associated with a set of per-hop behaviors indicated in the set of priority bits. The network classifies the Ethernet frame based on at least one of a set of priority bits or information in at least one protocol layer in the frame header of the Ethernet frame, and determines a per-hop behavior based on the classification.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This Application is a continuation of U.S. patent application Ser. No. 10/868,568, filed Jun. 15, 2004, entitled “ETHERNET DIFFERENTIATED SERVICES CONDITIONING”, which claims the benefit of U.S. Provisional Patent Application Ser. No. 60/537,487, filed Jan. 20, 2004, entitled “ETHERNET DIFFERENTIATED SERVICES”, the entire contents of both of which are hereby incorporated herein by reference.
  • BACKGROUND
  • This invention relates to quality of service support in Ethernet networks.
  • Ethernet is a widely installed local area network (LAN) technology. Ethernet technology can be cost effective, easy to configure, and is widely understood by network managers. Ethernet technology is increasingly being deployed in service provider metro and wide-area networks. Success of Ethernet in provider networks depends on the ability to provide service level agreements (SLAs) that can guarantee bandwidth, delay, loss, and jitter requirements to end-users. Service providers can offer multiple services with different quality-of-service (QoS) characteristics and performance guarantees.
  • The base Ethernet technology is specified in the IEEE 802.3 standard. Traditionally, Ethernet did not include QoS capabilities. More recently, the IEEE has introduced the user priority capability that enables the definition of up to eight classes of service (CoS). The user priority capability is often referred to as “the p-bits.” The p-bits are carried in the 802.1Q tag and are intended for use to identify different service classes.
  • An Ethernet network may include multiple customer edge (CE) devices, switches, and routers. These devices may communicate using the Ethernet protocols and/or other networking technologies and protocols.
  • SUMMARY
  • In one aspect, a method for conditioning Ethernet traffic includes receiving an Ethernet frame, classifying the frame based on a set of priority bits in a frame header of the Ethernet frame, and determining a per-hop behavior for the frame based on the classification.
  • Embodiments may include one or more of the following. The set of bits can include a set of p-bits in the Ethernet header. Setting the set of bits can include mapping the Ethernet per-hop behaviors to a set of bits in a frame according to a core network technology. Setting the set of bits can include mapping the Ethernet per-hop behaviors to a set of connections according to a core network technology.
  • The method can also include metering the frame. Metering the frame can include modifying the drop precedence and per-hop behavior of the frame. The method can also include determining a forwarding treatment for the frame based on the per-hop behavior or dropping the frame based on the per-hop behavior. The method can also include marking the frame based on the assigned PHB. The method can also include shaping the frame based on the assigned PHB.
  • The method can include scheduling the frame for delivery on the Ethernet network. Scheduling can include allocating a link bandwidth based on the PHBs. Scheduling can include allocating a link bandwidth among multiple virtual local area networks (VLANs), the VLANs including multiple E-Diff traffic classes and allocating portions of the allocated bandwidths for the multiple virtual local area networks among at least one VLAN class for the multiple local area networks based on the priority bits. Scheduling can include allocating a bandwidth among a set of service classes, allocating portions of the allocated bandwidths for the set of service classes among at least one particular service class, the service class including multiple VLAN classes, and allocating portions of the allocated bandwidths for the particular service classes among a particular VLAN class based on the priority bits.
  • The forwarding treatment can be based on an Ethernet differentiated services class. The Ethernet differentiated services class can include one or more of Ethernet expedited forwarding (E-EF), Ethernet assured forwarding (E-AF), Ethernet class selector (E-CS), and Ethernet default forwarding (E-DF). Determining a forwarding treatment can include defining additional per-hop behaviors based on networking or application needs.
  • The frame can include a canonical format indicator (CFI) bit, which can be used for CoS indication. Classifying the frame based on a set of predetermined criteria associated with combinations of the priority bits can include classifying the frame based on a set of predetermined criteria associated with combinations of the priority bits and the CFI bit. The priority bits can include a congestion indication. The congestion indication can include at least one of a forward and a backward congestion indication.
  • The above aspects or other aspects of the invention may provide one or more of the following advantages.
  • Aspects may provide a scalable Ethernet differentiated services architecture that is capable of supporting different services and performance characteristics. The architecture can accommodate a wide variety of services and provisioning policies. The Ethernet differentiated services architecture can allow for incremental deployment while permitting interoperability with non-Ethernet differentiated services compliant network nodes.
  • A variation of the architecture, in which Ethernet is used at the access and a different technology is used at the network core, provides the advantage of allowing differentiated services across heterogeneous networks.
  • Ethernet differentiated services domains are multiple enterprise and/or provider networks/segments that employ different Ethernet differentiated services methods and policies within each domain, such as different p-bits interpretations, number/type of PHBs, etc. Mapping or traffic conditioning can be used at the boundary nodes between different domains.
  • Ethernet class of service (CoS) bits identify nodal behavior (e.g., how an incoming frame should be handled at the queuing and scheduling levels based on the p-bits encoding) and allow frames to be forwarded according to the specified nodal behaviors. Ethernet per-hop-behaviors are determined or encoded by a specific assignment of the p-bits. The p-bits can also include congestion information to indicate network congestion.
  • The particular use of the 802.1Q VLAN Tag Control Information (e.g., p-bits) enables the introduction of the differentiated services to Ethernet technologies. The use of the p-bits allows the definition of a number of defined per-hop behaviors (PHBs) that determine the forwarding treatment of the Ethernet frames throughout the network.
  • The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
  • DESCRIPTION OF DRAWINGS
  • FIG. 1 is a block diagram of a tagged Ethernet frame.
  • FIG. 2 is a block diagram of an Ethernet differentiated services architecture.
  • FIG. 3 is a block diagram of a set of components included in a device at an edge node of a network.
  • FIG. 4 is a block diagram of an Ethernet differentiated services architecture.
  • FIG. 5 is a block diagram of Ethernet differentiated services per-hop behaviors.
  • FIG. 6 is a block diagram of a class-based scheduler using multiple queues.
  • FIG. 7 is a table of priority bit assignments.
  • FIG. 8 is a block diagram of a differentiated services network having multiple domains.
  • FIG. 9 shows an architecture for end-to-end service across multiple provider networks.
  • DETAILED DESCRIPTION
  • Referring to FIG. 1, an example of an Ethernet frame 10 is shown. The frame includes a header portion 12 and a data portion 14. The header 12 includes a destination address 16, a source address 18, an 802.1Q tag 20, and a protocol type 22. The Institute of Electrical and Electronics Engineers (IEEE) standard 802.1Q describes the 802.1Q tag 20. The 802.1Q tag in an Ethernet frame defines virtual LAN (VLAN) membership. Three bits of this tag, referred to as the priority bits 24, identify user priority. The three priority bits 24 provide eight combinations and describe up to eight levels of service. The three priority bits can be used to describe the per-hop behavior of a frame. Per-hop behaviors include, for example, the externally observable forwarding behavior applied to a frame by a frame forwarding device 20 in an Ethernet differentiated services architecture 30.
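  • As an illustration of the tag layout described above, the following Python sketch parses a raw 802.1Q-tagged Ethernet header and extracts the 3-bit user priority, the CFI bit, and the VLAN ID from the Tag Control Information. The field offsets follow the standard 802.1Q layout; the helper name and the assumption of a single VLAN tag are illustrative only.

```python
import struct

TPID_8021Q = 0x8100  # Tag Protocol Identifier for an 802.1Q VLAN tag


def parse_tagged_header(frame: bytes):
    """Return (dst, src, p_bits, cfi, vlan_id, ethertype) for an 802.1Q frame.

    A minimal sketch: assumes a single VLAN tag and performs no error
    handling beyond checking the TPID.
    """
    dst, src = frame[0:6], frame[6:12]
    tpid, tci, ethertype = struct.unpack("!HHH", frame[12:18])
    if tpid != TPID_8021Q:
        raise ValueError("frame is not 802.1Q tagged")
    p_bits = (tci >> 13) & 0x7      # 3 user-priority bits (the p-bits)
    cfi = (tci >> 12) & 0x1         # canonical format indicator
    vlan_id = tci & 0x0FFF          # 12-bit VLAN ID (VID)
    return dst, src, p_bits, cfi, vlan_id, ethertype
```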
  • Referring to FIG. 2, the Ethernet differentiated services architecture 30 is shown. This architecture 30 forwards frames based on the per-hop-behaviors defined by the p-bits 24 for the frames. One embodiment of the architecture 30 includes a frame forwarding device 20 that includes an ingress switch 34, a core switch 38, and an egress switch 46. The ingress switch 34 performs traffic conditioning functions and class-based forwarding functions. The core switch 38 includes a behavior aggregate (BA) classifier 40 and a class-based egress scheduler that uses multiple queues 44. The egress switch 46 may perform similar functions to either the ingress switch 34 or the core switch 38 (or a subset of those functions), depending on network configurations and policies. For example, if the egress switch 46 is connected to a customer edge node, the egress switch 46 can perform core node-like forwarding functions. Alternately, if the egress switch 46 is connected to another provider network using a network-network interface (NNI), the egress switch 46 performs traffic conditioning functions according to the service contract between the two providers. The architecture 30 includes Ethernet differentiated services functions implemented at both the edge and the network core 36, although other arrangements may be possible.
  • Unlike the IP DiffServ ("Differentiated Services") architecture described in RFC 2475, the architecture 30 shown in FIG. 2 does not use the IP DSCP for indicating frame per-hop behaviors. Instead, the architecture 30 uses the Ethernet p-bits 24. Architecture 30 assumes that edge and core nodes are p-bit aware, meaning, e.g., that the nodes can set, clear, and/or process frames based on the states of the p-bits. For example, all edge and core nodes are VLAN-aware Ethernet nodes that can set and/or interpret the p-bits. The network core 36 may be an Ethernet network such as is common in enterprise networks or a provider metro Ethernet network, and may use Ethernet tunneling/aggregation techniques such as stacked VLAN support, e.g., Q-in-Q (referring to the 802.1Q tag), MAC-in-MAC (media access control in media access control), or an equivalent scheme.
  • The architecture 30 separates edge and network core node functions. That is, the edge includes traffic conditioning, which may include multi-field classification, metering, and marking of the per-hop behavior (PHB) in the p-bits 24, together with class-based forwarding. The edge functions may occur at the user-network interface (UNI), for example between the customer edge (CE) node and the service provider, or at the network-network interface (NNI) between networks/domains. The core node 36 is scalable and performs simple behavior aggregate classification based on the frame per-hop behavior (PHB) (indicated in the p-bits 24), and class-based forwarding based on the PHB value.
  • Referring to FIG. 3, components 50 included in a device at the network edge nodes are shown. For example, the set of components 50 are included in an ingress switch such as switch 34 (FIG. 2). The set of components 50 includes a classifier 52, meter 54, marker 56, and shaper/dropper 58. These components 50 perform Ethernet traffic conditioning functions at the network edge nodes to classify incoming traffic based on predetermined criteria.
  • The classification identifies flows and correlates the flows to corresponding bandwidth profiles and corresponding forwarding treatments defined or provided for the flows. The classifier 52 selects frames in a traffic stream based on the content of some portion of the frame header (e.g., based on the p-bits). Two types of classifiers are behavior aggregate (BA) classifiers and multi-field (MF) classifiers. A BA classifier classifies frames based on the p-bits only. The MF classifier, on the other hand, selects frames based on the value of a combination of one or more header fields, such as source and destination address, p-bits, protocol ID, source and destination port numbers, and other information such as the incoming interface/connection. In general, classifier 52 (e.g., a behavior aggregate (BA) classifier or multi-field (MF) classifier) is used to "steer" frames matching a rule to a different element of the traffic conditioner for further processing.
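  • The distinction between the two classifier types can be sketched in a few lines of Python. This is a minimal illustration, not the classifier 52 itself; the class names, the FrameInfo record, and the rule format are assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class FrameInfo:
    p_bits: int
    vlan_id: int
    src_mac: bytes
    dst_mac: bytes
    in_port: int


class BAClassifier:
    """Behavior aggregate classifier: steers frames on the p-bits only."""

    def __init__(self, pbit_to_aggregate):
        self.pbit_to_aggregate = pbit_to_aggregate  # e.g. {7: "E-EF", 2: "E-DF"}

    def classify(self, frame: FrameInfo) -> str:
        return self.pbit_to_aggregate.get(frame.p_bits, "E-DF")


class MFClassifier:
    """Multi-field classifier: the first matching rule over several fields wins."""

    def __init__(self, rules):
        # rules: list of (predicate over FrameInfo, aggregate name)
        self.rules = rules

    def classify(self, frame: FrameInfo) -> str:
        for predicate, aggregate in self.rules:
            if predicate(frame):
                return aggregate
        return "E-DF"


# Example rules using the "port + p-bits" and "VID + p-bits" combinations
# mentioned in the text; the specific values are illustrative.
mf = MFClassifier([
    (lambda f: f.in_port == 1 and f.p_bits == 7, "E-EF"),
    (lambda f: f.vlan_id == 100 and f.p_bits in (5, 6), "E-AF2"),
])
```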
  • Frames enter classifier 52 (indicated by arrow 51) and may or may not be metered based on the service level agreement. Metered frames are passed to meter 54. Meter 54 measures the temporal properties of the stream of frames selected by a classifier and compares the properties to a traffic profile. A meter 54 passes state information to other components to trigger a particular action for each frame that is either in- or out-of-profile. Non-metered frames are passed from classifier 52 to marker 56.
  • Flows are marked (or remarked) by marker 56 to identify the Ethernet PHB applied to the incoming frame. For instance, frame marker 56 sets a particular field of a frame to a particular p-bit combination, adding the marked frame to a particular behavior aggregate. The marker 56 can be configured to mark all received frames to a single p-bit combination, or can be configured to mark a frame to one of a set of p-bit combinations used to select a particular PHB from a PHB group according to the state of the meter 54.
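  • One way the meter and marker can interact is sketched below: frames within a committed token-bucket profile keep a low-drop-precedence p-bit combination, while out-of-profile frames are re-marked to a high-drop-precedence combination. The rate, burst size, and specific p-bit values are assumptions for illustration, not values taken from the disclosure.

```python
import time


class TokenBucketMeter:
    """Classifies frames as in-profile or out-of-profile against a committed rate."""

    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate = rate_bps / 8.0        # bytes per second
        self.burst = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def check(self, frame_len: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_len:
            self.tokens -= frame_len
            return True                   # in profile
        return False                      # out of profile


def mark(p_bits_in_profile: int, p_bits_out_of_profile: int,
         meter: TokenBucketMeter, frame_len: int) -> int:
    """Return the p-bit combination to write into the frame (e.g. E-AF21 vs E-AF22)."""
    return p_bits_in_profile if meter.check(frame_len) else p_bits_out_of_profile
```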
  • A PHB group is a set of one or more PHBs that can be specified and implemented simultaneously, due to a common constraint applying to all PHBs in the set, such as a queue servicing or queue management policy. A PHB group allows a set of related forwarding behaviors to be specified together (e.g., four dropping priorities). A single PHB is a special case of a PHB group. When the marker 56 changes the p-bit combination in a frame, it is referred to as having "re-marked" the frame.
  • Remarking may also occur across Ethernet differentiated services domain boundaries, such as a user-network interface (UNI) or network-network interface (NNI). Remarking could be used for such purposes as performing PHB mapping or compression, or to effect p-bit translation.
  • If tunneling is used, the outer tunnel p-bits are usually also set to the desired PHB indication for forwarding through the aggregated core. The p-bits in the original Ethernet frame may be preserved through the network, or changed by the edge nodes.
  • Frames that exceed their assigned rates may be dropped, shaped, or remarked with a drop precedence indication. The shaper/dropper 58 shapes the traffic before sending the frames to the network as indicated by arrow 60. Shaper/dropper 58 discards some or all of the frames in a traffic stream in order to bring the stream into compliance with a traffic profile. This discarding is sometimes referred to as “policing” the stream. A dropper can be implemented as a special case of a shaper by setting the shaper buffer size to zero (or a few) frames.
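  • The shaper/dropper 58 can be thought of as a small bounded queue drained at the profile rate; setting the buffer depth to zero turns it into a pure dropper, as noted above. The following Python sketch is illustrative only, and its buffer sizes, drain interval, and class names are assumptions.

```python
from collections import deque


class Shaper:
    """Bounded FIFO drained at a configured rate; a zero-length buffer is a dropper."""

    def __init__(self, rate_bps: float, max_frames: int):
        self.rate = rate_bps / 8.0        # bytes per second
        self.buffer = deque()
        self.max_frames = max_frames

    def enqueue(self, frame: bytes) -> bool:
        if len(self.buffer) >= self.max_frames:
            return False                  # discard ("police") the excess frame
        self.buffer.append(frame)
        return True

    def drain(self, interval_s: float):
        """Release as many whole frames as the profile allows for this interval."""
        budget = self.rate * interval_s
        released = []
        while self.buffer and len(self.buffer[0]) <= budget:
            frame = self.buffer.popleft()
            budget -= len(frame)
            released.append(frame)
        return released


dropper = Shaper(rate_bps=10_000_000, max_frames=0)  # special case: pure dropper
```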
  • In general, multi-field traffic classification is based on any of the L1-L7 protocol layer fields, either individually or in combination. Common L2 Ethernet fields used are the incoming Ethernet interface (port), the destination/source MAC addresses, the virtual local area network identification (VLAN ID or VID), and the user priority (p-bits). Based on the destination/source media access control (MAC) addresses, all frames originating at a certain source and/or destined to a certain destination are assigned to the same flow. Similarly, based on the VLAN ID, all frames of a certain VLAN belong to the same flow.
  • Alternatively a Group of VLANs may be combined together for the purpose of class of service (CoS) functions. The user priority bits (p-bits 24) provide a finer granularity for flow identification.
  • The L2 Ethernet fields can be combined for traffic classification. Common combinations include "port + p-bits" and "VID(s) + p-bits." Common upper layer fields include the IP differentiated services field, IP source address, IP destination address, IP protocol type, TCP port number, and UDP port number.
  • Frame classification determines the forwarding treatment and metering of frames. Determining the forwarding treatment (e.g., congestion control, queuing, and scheduling) by the edge nodes includes assigning PHBs to the group of frames that require the same treatment (e.g., voice is assigned the E-EF PHB, and data is assigned an E-AFx PHB). Metering can be used for determining and enforcing the bandwidth profile/traffic contract, verifying the Service Level Agreements (SLAs), and allocating nodal resources to the flow.
  • The classification function may be different for the purpose of forwarding and metering. For example, voice and data typically receive different forwarding treatment, but their traffic bandwidth profile could be combined into a single traffic contract to resemble a leased line service.
  • Referring to FIG. 4, another example of an Ethernet differentiated services architecture 70 is shown. The architecture 70 includes an ingress switch 84 at an interface between an Ethernet network 82 and a non-Ethernet network core 86. The architecture 70 also includes an egress switch 88. In this example, different technologies are used for forwarding the Ethernet frames through the non-Ethernet network core 86. For example, the non-Ethernet network core 86 could use asynchronous transfer mode (ATM), multi-protocol label switching (MPLS), frame relay (FR), Internet protocol (IP), or other network protocols.
  • The ingress switch 84 includes a classifier 72, traffic meter 74, marker 76, shaper/dropper 78, and a mapping unit 80. The classifier 72, traffic meter 74, marker 76, and shaper/dropper 78 function in a similar manner to those described above in FIG. 3. The mapping unit 80 maps and encapsulates the Ethernet frames for forwarding on the core network 86.
  • The architecture 70 shown in FIG. 4 is similar to the architecture 30 shown in FIG. 2; however, architecture 70 uses Ethernet at the access and a different networking technology in the core 86. The edge conditioning functions are similar to the edge conditioning functions in architecture 30. The edge node performs the class of service (CoS) mapping from the Ethernet PHB into the core network 86. Many mapping methods are possible, such as mapping the PHB to an ATM virtual channel connection (VCC) (e.g., E-EF to a constant bit rate (CBR) VCC), a label switched path (LSP), an IP differentiated services core, etc. In all cases, the original information in the Ethernet frame is maintained during transport through the core using tunneling and/or encapsulation techniques.
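  • A minimal sketch of the kind of CoS mapping the mapping unit 80 performs at the edge follows: the Ethernet PHB selects a core-network transport (for example an ATM service category and VCC), and the original frame is carried unchanged inside the chosen tunnel. The table entries and VCC numbers are illustrative assumptions, not a mapping mandated by the disclosure.

```python
# Illustrative PHB -> core transport table (e.g. E-EF onto a CBR VCC).
PHB_TO_CORE = {
    "E-EF":  {"technology": "ATM", "service": "CBR", "vcc": 101},
    "E-AF2": {"technology": "ATM", "service": "VBR", "vcc": 102},
    "E-AF1": {"technology": "ATM", "service": "VBR", "vcc": 103},
    "E-DF":  {"technology": "ATM", "service": "UBR", "vcc": 104},
}


def map_to_core(phb: str, ethernet_frame: bytes):
    """Select the core connection for a PHB and carry the original frame inside it."""
    transport = PHB_TO_CORE.get(phb, PHB_TO_CORE["E-DF"])
    # Encapsulation is sketched as a (transport, payload) pair; the Ethernet
    # frame itself, including its p-bits, is preserved end to end.
    return transport, ethernet_frame
```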
  • In the above example, frames are placed into class queues based on the PHB. Alternately, frames could be placed on different logical or physical ports or connections with different levels of service based on the PHB.
  • In both architecture 30 (FIG. 2) and architecture 70 (FIG. 4) edge CoS functions define per-hop behaviors for a frame. However, in architecture 30, a frame is forwarded based on per-hop-behaviors indicated in the p-bits 24, whereas in architecture 70, a frame is forwarded based on the core network technology CoS transport mechanism.
  • Referring to FIG. 5, a grouping 90 of the nodal behaviors into, e.g., four categories is shown. The grouping 90 includes an Ethernet expedited forwarding category 92 (E-EF), Ethernet assured forwarding 94 (E-AF), Ethernet class selector 96 (E-CS), and an Ethernet default forwarding category 98 (E-DF). Other groupings of behaviors are possible.
  • The first category, referred to as the Ethernet expedited forwarding category 92 (E-EF), is primarily for traffic sensitive to delay and loss. This category is suitable for implementing services that require delivery of frames within tight delay and loss bounds and is characterized by a time constraint. A frame arriving at a network node and labeled as an Ethernet EF frame departs the node according to a time constraint (e.g., d_k − a_k ≤ t_max, where a_k and d_k are the arrival and departure times of the kth frame at the node and t_max is the time constraint). E-EF allows for frame loss when buffer capacity is exceeded; however, the probability of frame loss in this service is typically low (e.g., 10^−5 to 10^−7). E-EF identifies a single drop precedence, and frames that exceed a specified rate are dropped. For E-EF frames, no remarking (e.g., re-assigning the drop precedence of a frame to a different value) is allowed. The Ethernet expedited forwarding category 92 does not allow re-ordering of frames.
  • A complete end-to-end user service can include edge rules or conditioning in addition to forwarding treatment according to the assigned PHB. For example, a "premium" service level (also referred to as a virtual leased line) uses the E-EF PHB defined by a peak rate only. This "premium" service has low delay and small loss performance. A frame in the E-EF category can have a forwarding treatment where the departure rate of the aggregate frames from a diff-serv node is set to equal or exceed a configurable rate. This rate is available independent of other traffic sharing the link. In addition, edge rules describe metering and peak rate shaping. For example, the metering/policing can enforce a peak rate and discard frames in excess of the peak rate. The metering/policing may not allow demotion or promotion. Peak rate shaping can smooth traffic to the network and convert traffic to a constant rate arrival pattern. A combination of the forwarding behaviors and edge rules offers a "premium" service level. A premium service queue typically holds one frame or a few frames. An absolute priority scheduler increases the level of delay performance and could be offered initially on an over-provisioning basis.
  • A second, more complex category, referred to as Ethernet assured forwarding (E-AF) 94, divides traffic into classes of service, and when the network is congested, frames can be discarded based on a drop precedence. More specifically, E-AF defines m (m>=1) classes, with each class having n (n>1) drop precedence levels. Frames marked with a high drop precedence indication are discarded before frames with a low drop precedence upon nodal congestion. At the Ethernet traffic meter, E-AF frames that exceed their assigned rate may be marked with a high drop precedence indication (instead of being dropped). The network typically does not extend any performance assurances to E-AF frames that are marked with a high drop precedence indication. The nodal discard algorithm treats all frames within the same class and with the same drop precedence level equally. The E-AF per-hop behavior does not allow re-ordering of frames that belong to the same flow and to the same E-AF class.
  • A third category, referred to as the Ethernet class selector (E-CS) 96, provides compatibility with legacy switches. The Ethernet class selector includes up to eight p-bit combinations, for example E-CS7 to E-CS0, with E-CS7 assigned the highest priority and E-CS0 assigned the lowest priority. E-CS frames can be metered at the network edge. E-CS does not allow significant re-ordering of frames that belong to the same CS class. For example, the node will attempt to deliver CS class frames in order, but does not guarantee that reordering will not occur, particularly under transient and fault conditions. All E-CS frames belonging to the same class are carried at the same drop precedence level.
  • The fourth category, a default forwarding category 98 (E-DF), is suitable for implementing services with no performance guarantees. For example, this class can offer a "best-effort" type of service. E-DF frames can be metered at the network edge. This class of service should not allow (significant) re-ordering of E-DF frames that belong to the same flow, and all E-DF frames are carried at the same drop precedence level.
  • Frame treatment can provide "differentiated services", for example, policing, marking or re-coloring of p-bits, queuing, congestion control, scheduling, and shaping. While the proposed Ethernet per-hop behaviors (PHBs) include expedited forwarding (E-EF), assured forwarding (E-AF), default forwarding (E-DF), and class selector (E-CS), additional custom per-hop behaviors (PHBs) can be defined for a network. The three p-bits allow up to eight PHBs. If more PHBs are desired, multiple Ethernet connections (e.g., Ethernet interfaces or VLANs) can be used, each with up to eight additional PHBs. The mapping of the p-bits to PHBs may be signaled or configured for each interface/connection. Alternatively, in the network core, tunnels may be used for supporting a larger number of PHBs.
  • Referring to FIG. 6, an arrangement 100 for placing an incoming frame 101 in an appropriate class queue based on its p-bits 24 is shown. The arrangement 100 includes four queues 102, 104, 106, and 108. The queues 102, 104, 106, and 108 are assigned different priorities for forwarding the frame based on the different levels of services defined in, e.g., the Ethernet differentiated services protocol. In this configuration, frames with p-bits mapped to E-EF differentiated service behaviors are placed in the highest priority queue 102. This queue does not allow frames to be discarded, and all frames are of equal importance. In this example, queues 104 and 106 are allocated for forwarding frames with the assured service class of the differentiated services, and frames are placed in these queues according to their p-bit assignment. In order to provide the level of service desired for assured services forwarding, each queue may be assigned a guaranteed minimum link bandwidth, and frames are not re-ordered. However, if the network is congested, the queues discard frames based on the assigned drop precedence. Queue 108 corresponds to a "best effort" queue. Frames placed in this queue are typically given a lower priority than frames in queues 102, 104, and 106. Queue 108 does not re-order the frames or allow for drop precedence differentiation.
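  • The queue arrangement of FIG. 6 can be sketched as a strict-priority E-EF queue, two assured-forwarding queues that discard high-drop-precedence frames first when full, and a best-effort queue. The queue depths and the exact drop policy in the Python sketch below are simplifying assumptions, not details from the figure.

```python
from collections import deque


class ClassQueue:
    def __init__(self, depth: int):
        self.frames = deque()
        self.depth = depth

    def enqueue(self, frame, drop_precedence: int) -> bool:
        if len(self.frames) >= self.depth:
            # Under congestion, try to make room by discarding a queued frame
            # with a higher drop precedence than the arriving one.
            victim = max(self.frames, key=lambda f: f[1], default=None)
            if victim is None or victim[1] <= drop_precedence:
                return False              # the arriving frame is dropped instead
            self.frames.remove(victim)
        self.frames.append((frame, drop_precedence))
        return True


# Class queues in strict priority order: E-EF (102), E-AF2 (104), E-AF1 (106), E-DF (108).
queues = {"E-EF": ClassQueue(4), "E-AF2": ClassQueue(64),
          "E-AF1": ClassQueue(64), "E-DF": ClassQueue(128)}


def dequeue_next():
    """Serve the highest-priority non-empty class queue."""
    for phb in ("E-EF", "E-AF2", "E-AF1", "E-DF"):
        if queues[phb].frames:
            return queues[phb].frames.popleft()[0]
    return None
```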
  • While in the example above an incoming frame was placed in one of four queues based on the p-bits 24, any number of queues could be used. For example, eight queues could provide placement of frames with each combination of p-bits 24 in a different queue.
  • In addition, the p-bits 24 can include congestion information in the forward and/or backward direction. This congestion information can be similar to forward explicit congestion notification (FECN) and backward explicit congestion notification (BECN) bits of the frame relay protocol. The congestion information signals a network device, for example, edge nodes or CEs, to throttle traffic until congestion abates. Out of the eight p-bit combinations, two combinations can be used for FECN (signaling congestion and no congestion) and two for the BECN direction.
  • In addition, the canonical format indicator (CFI), a one-bit field in the Ethernet header, can be used for signaling congestion or other QoS indicators such as frame drop precedence. The use of the CFI field in addition to (or in combination with) the p-bits 24 allows for support of additional PHBs. The p-bits can be used for signaling up to eight emission classes and the CFI for drop precedence (two values), or a more flexible scheme can be used in which the combined four bits (p-bits + CFI) support 16 PHBs instead of 8.
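  • Combining the CFI bit with the p-bits yields a 4-bit codepoint, as sketched below; how the 16 values are assigned to PHBs is a network design choice, so the interpretation in the second helper (p-bits as emission class, CFI as drop precedence) is only one of the options the text mentions.

```python
def extended_codepoint(p_bits: int, cfi: int) -> int:
    """Combine the 3 p-bits and the CFI bit into a 4-bit codepoint (0-15)."""
    return ((p_bits & 0x7) << 1) | (cfi & 0x1)


def emission_class_and_drop(p_bits: int, cfi: int):
    """One possible interpretation: p-bits select the emission class, CFI the drop precedence."""
    return p_bits & 0x7, ("high drop" if cfi else "low drop")
```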
  • Referring to FIG. 7, an example of the assignment of p-bits 24 to represent nodal behaviors by mapping the p-bits 24 to combinations of the Ethernet differentiated services PHBs is shown. This assignment designates four groupings of nodal behaviors: E-EF, E-AF2, E-AF1, and E-DF. Each of the E-AF levels includes two drop precedence levels (i.e., E-AFx2 and E-AFx1) and thus is assigned to two combinations of p-bits. The E-EF nodal behavior is mapped to the '111' combination 120 of p-bits, the E-AF2 nodal behaviors are mapped to the '110' and '101' combinations 122 and 124, the E-AF1 nodal behaviors are mapped to the '100' and '011' combinations 126 and 128, and the E-DF nodal behavior is mapped to the '010' combination 130. In this mapping of p-bits to nodal behaviors, two p-bit combinations 132 and 134 are reserved for congestion indication in the forward or backward direction.
  • For example, if the p-bits are assigned according to the mapping shown in FIG. 7 and the network includes a set of queues as shown in FIG. 6, frames can be routed to the appropriate queue based on the p-bit combination. Frames with a p-bit combination of '111' are placed in queue 102, frames with a p-bit combination of '010' are placed in queue 108, frames with either a '011' or '100' p-bit combination are placed in queue 106, and frames with either a '101' or '110' p-bit combination are placed in queue 104. If the network is congested (e.g., the queue is full), frames in queue 104 or 106 are dropped according to their drop precedence based on the p-bit combination. For example, a high drop precedence (e.g., AF22) frame is discarded before a low drop precedence frame (e.g., AF21) under congestion. In queue 106, frames with the E-AF12 designation are discarded before frames with the E-AF11 designation. Based on the p-bits, dropping frames having an E-AF12 designation before dropping frames having an E-AF11 designation corresponds to frames with a p-bit combination of '100' being dropped before frames with a p-bit combination of '011'.
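  • A FIG. 7 style assignment can be captured directly as a lookup table, as in the Python sketch below. The text fixes only the E-AF1 ordering ('100' is dropped before '011'); the split of the E-AF2 combinations between E-AF21/E-AF22 and the exact FECN/BECN codepoints ('001' and '000') are assumptions for illustration.

```python
# p-bit combination -> (PHB, drop precedence note), following the FIG. 7 style assignment.
PBIT_TO_PHB = {
    0b111: ("E-EF",  None),
    0b110: ("E-AF2", "high drop (E-AF22), assumed"),
    0b101: ("E-AF2", "low drop (E-AF21), assumed"),
    0b100: ("E-AF1", "high drop (E-AF12)"),
    0b011: ("E-AF1", "low drop (E-AF11)"),
    0b010: ("E-DF",  None),
    0b001: ("FECN",  None),   # forward congestion indication (assumed codepoint)
    0b000: ("BECN",  None),   # backward congestion indication (assumed codepoint)
}


def lookup_phb(p_bits: int):
    """Return the (PHB, drop precedence) pair for a 3-bit priority value."""
    return PBIT_TO_PHB.get(p_bits & 0x7, ("E-DF", None))
```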
  • The assignment of p-bits shown in FIG. 7 is only one possible assignment. Other service configurations and p-bit assignments are possible. For example, the assignment can include three levels of assured services (E-AF), each having two different assignments to define the drop precedence of the frames, and two remaining combinations of p-bits for congestion indication. Alternately, four assured services, each with two drop precedence levels, could be mapped to the eight combinations. In another example, four combinations could be dedicated to fully define congestion in the forward and backward directions. In this example, two p-bit combinations are dedicated to forward congestion (or the lack of it), two p-bit combinations are dedicated to backward congestion (or the lack of it), and the remaining four p-bit combinations are used to define the nodal behaviors. These four p-bit combinations could include one assured service with two drop precedence levels and two CS services, or two assured services each having two different assignments to define the drop precedence of the frames.
  • The edge node (at either customer or provider side) may perform IP Differentiated services to Ethernet differentiated services mapping if the application traffic uses IP differentiated services. The mapping could be straightforward (e.g., IP-EF to E-EF, IP-AF to E-AF) if the number of IP PHBs used is limited to 8. Otherwise, some form of compression may be required to combine multiple IP PHBs into one E-PHB. Alternatively, multiple Ethernet connections (e.g., VLANs) can be used at the access and/or core, each supporting a subset of the required PHBs (e.g., VLAN-A supports E-EF/E-AF4/E-AF3, VLAN-B supports E-AF2/E-AF1/DF).
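  • A minimal sketch of the IP-to-Ethernet mapping discussed above, compressing the standard IP DSCP values into a smaller set of E-PHBs, is shown below; the specific groupings (for example folding AF3x/AF4x into E-AF2) are illustrative assumptions rather than a mapping specified by the disclosure.

```python
# DSCP -> Ethernet PHB, compressing the IP AF classes into fewer E-PHBs.
DSCP_TO_EPHB = {46: "E-EF"}                       # IP EF -> E-EF
for dscp in (10, 12, 14, 18, 20, 22):             # AF1x, AF2x -> E-AF1
    DSCP_TO_EPHB[dscp] = "E-AF1"
for dscp in (26, 28, 30, 34, 36, 38):             # AF3x, AF4x -> E-AF2 (compression)
    DSCP_TO_EPHB[dscp] = "E-AF2"


def ip_to_ethernet_phb(dscp: int) -> str:
    """Map an IP DSCP to an Ethernet PHB, defaulting to best effort (E-DF)."""
    return DSCP_TO_EPHB.get(dscp, "E-DF")
```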
  • Typically, a class-based queuing (CBQ) or weighted fair queuing (WFQ) scheduler is used for forwarding frames on the egress link, at both edge and core nodes. The scheduling can be based on the PHB (subject to the constraint that some related PHBs, such as an AFx group, follow the same queue). The use of p-bits to indicate per-hop behaviors allows for up to eight queues, or eight queue/drop precedence combinations.
  • Additional information may be available/acquired through configuration, signaling, or examining frame headers, and used for performing more advanced scheduling/resource management. Additional information can include, for example, service type, interface, or VID. One example is a 2-level hierarchical scheduler, where the first level allocates the link bandwidth among the VLANs and the second level allocates the bandwidth among the VLAN differentiated services classes according to their PHB. Another example is a 3-level hierarchical scheduler, where the first level allocates the link bandwidth among the service classes (e.g., business vs. residential), the second level allocates bandwidth among the service VLANs, and the third level allocates the bandwidth among the VLAN differentiated services classes according to their PHB.
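  • The 2-level hierarchy described above can be sketched as a proportional-share calculation: link bandwidth is first split among VLANs by weight, and each VLAN's share is then split among its PHB classes. The weights and the simple proportional arithmetic in the Python sketch below are illustrative assumptions, not a specific scheduler from the disclosure.

```python
def two_level_shares(link_bps: float, vlan_weights: dict, class_weights: dict):
    """Return bits/s per (VLAN, PHB class) for a 2-level hierarchical scheduler.

    vlan_weights: {vlan_id: weight}; class_weights: {vlan_id: {phb: weight}}.
    """
    shares = {}
    vlan_total = sum(vlan_weights.values())
    for vlan, vw in vlan_weights.items():
        vlan_bw = link_bps * vw / vlan_total                # level 1: among VLANs
        cls = class_weights[vlan]
        cls_total = sum(cls.values())
        for phb, cw in cls.items():
            shares[(vlan, phb)] = vlan_bw * cw / cls_total  # level 2: among PHB classes
    return shares


# Example: a 1 Gb/s link, two VLANs, each carrying E-EF / E-AF1 / E-DF traffic.
print(two_level_shares(
    1_000_000_000,
    {100: 3, 200: 1},
    {100: {"E-EF": 2, "E-AF1": 1, "E-DF": 1},
     200: {"E-EF": 1, "E-AF1": 1, "E-DF": 2}},
))
```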
  • The described Ethernet differentiated services architecture allows incremental deployment for supporting legacy equipment and network migration. Non-differentiated-services-capable nodes may forward all traffic as one class, which is equivalent to the E-DF class. Other 802.1Q nodes that use the p-bits simply to designate priority can interwork with Ethernet differentiated services nodes supporting the E-CS PHB. Some CoS degradation may occur under congestion in a network that uses a combination of E-differentiated services and legacy nodes.
  • Referring to FIG. 8, an Ethernet differentiated services network 150 having multiple domains 160 and 162 is shown. An Ethernet differentiated services domain has a set of common QoS policies and may be part of an enterprise or provider network. The set of QoS policies can include Ethernet PHB support, p-bits interpretation, etc. Edge nodes (e.g., nodes 152) interconnect sources external to a defined network (e.g., customer equipment). The Ethernet edge node 152 typically performs extensive conditioning functions. Interior nodes 154 connect trusted sources in the same differentiated services domain. Interior nodes 154 perform simple class-based forwarding. Boundary nodes 156 interconnect differentiated services domains and may perform E-differentiated services conditioning functions similar to edge nodes. This may include performing p-bit mapping due to different domain capabilities or policies.
  • Traffic streams may be classified, marked, and otherwise conditioned on either end of a boundary node. The service level agreement between the domains specifies which domain has responsibility for mapping traffic streams to behavior aggregates and conditioning those aggregates in conformance with the appropriate behavior. When frames are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported in the downstream E-DS domain. In this circumstance, the downstream E-DS domain may re-mark or police the incoming behavior aggregates to enforce the service level agreements. However, more sophisticated services that are path-dependent or source-dependent may require MF classification in the downstream domain's ingress nodes. If an ingress node is connected to an upstream non-Ethernet-differentiated-services-capable domain, the ingress node performs all necessary traffic conditioning functions on the incoming traffic.
  • Referring to FIG. 9, an example 170 of end-to-end service across multiple provider networks is shown. The example architecture shows the connection of two enterprise campuses, campus 172 and campus 194, through provider networks 178, 184, and 190. A user-network interface (UNI) is used between the enterprise and provider edges, and a network-network interface (NNI) is used between two providers. The end-to-end service level agreements are offered through bilateral agreements between enterprise 172 and provider 178 and between enterprise 194 and provider 190. Provider 178 has a separate SLA agreement with provider 184, and provider 190 has a separate SLA agreement with provider 184, to ensure that the enterprise end-to-end QoS can be met. Three Ethernet differentiated services domains are shown: Enterprise A, Access Provider 1, and Backbone Provider 2. Each domain has its own set of Ethernet PHBs and service policies.
  • Although the basic architecture assumes that complex classification and traffic conditioning functions are located only in a network's ingress and egress boundary nodes, deployment of these functions in the interior of the network is not precluded. For example, more restrictive access policies may be enforced on a transoceanic link, requiring MF classification and traffic conditioning functionality in the upstream node on the link.
  • A number of embodiments of the invention have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention.

Claims (29)

What is claimed is:
1. A network device, comprising:
at least one communication interface configured to receive an Ethernet frame; and
a frame forwarder communicatively coupled to the at least one communication interface, the frame forwarder being configured with a priority capability that enables mapping of the Ethernet frame to one of a plurality of traffic classes with each traffic class being associated with a respective combination of three (3) priority bits in a priority code point (PCP) field of a VLAN tag of the Ethernet frame, at least two (2) of the traffic classes being associated with a common priority but different respective drop precedence levels and no one priority bit of the 3 priority bits in an Ethernet frame being completely determinative of either the priority or the drop precedence level of the Ethernet frame without reference to the other 2 priority bits of the 3 priority bits in the Ethernet frame.
2. The network device of claim 1, wherein the 3 priority bits provide 8 different combinations of the 3 priority bits, and the frame forwarder is configured to use some of the 8 different combinations to classify the Ethernet frame into less than 8 priorities and others of the 8 different combinations for other purposes.
3. The network device of claim 2, wherein the frame forwarder is further configured to associate a plurality of respective drop precedence levels with some of the less than 8 priorities and to associate a single drop precedence level with others of the less than 8 priorities.
4. The network device of claim 2, wherein the frame forwarder is further configured to associate combinations of priority bits not associated with priorities with congestion indications.
5. The network device of claim 4, wherein the frame forwarder is further configured to associate with congestion indications 2 combinations of priority bits not associated with priorities.
6. The network device of claim 1, further comprising at least one other communication interface configured to transmit an Ethernet frame, wherein the frame forwarder comprises at least one queue for queuing Ethernet frames to be forwarded, the frame forwarder being further configured to schedule Ethernet frames in the queue for transmission via the at least one other communication interface.
7. The network device of claim 6, wherein the frame forwarder further comprises a respective queue for each of the less than 8 priorities.
8. The network device of claim 6, wherein the frame forwarder enqueues in one queue a first Ethernet frame having a respective combination of priority bits associated with a first priority and a first drop precedence level and a second Ethernet frame having a respective combination of priority bits associated with the first priority and a second drop precedence level.
9. The network device of claim 1, wherein the frame forwarder is further configured to selectively discard Ethernet frames based on respective priority bit combinations associated with the Ethernet frames.
10. The network device of claim 9, wherein the frame forwarder is further configured to selectively discard Ethernet frames based on respective drop precedence levels associated with the respective priority bit combinations associated with the Ethernet frames.
11. The network device of claim 2, wherein the frame forwarder is further configured to change the combination of the 3 priority bits in the PCP field of the VLAN tag of the Ethernet frame to remark the Ethernet frame with a drop precedence indication.
12. The network device of claim 1, wherein the frame forwarder is further configured to change the combination of the 3 priority bits in the PCP field of the VLAN tag of the Ethernet frame to remark the Ethernet frame to map per hop behavior between multiple domains.
13. The network device of claim 1, wherein the frame forwarder is further configured to change the combination of the 3 priority bits in the PCP field of the VLAN tag of the Ethernet frame to remark the Ethernet frame to perform per hop behavior compression.
14. The network device of claim 1, wherein the frame forwarder is further configured to change the combination of the 3 priority bits in the PCP field of the VLAN tag of the Ethernet frame to remark the Ethernet frame to effect priority bit translation.
15. The network device of claim 1, wherein the frame forwarder is further configured to condition the Ethernet frame.
16. The network device of claim 1, wherein the frame forwarder is further configured to determine a bandwidth profile based on the set of priority bits.
17. The network device of claim 1, wherein the frame forwarder is further configured to determine a forwarding treatment for the Ethernet frame based on the set of priority bits.
18. A network device, comprising:
at least one communication interface configured to transmit Ethernet frames; and
a frame forwarder communicatively coupled to the at least one communication interface, the frame forwarder being configured with a priority capability that enables mapping of the Ethernet frames to respective traffic classes of a plurality of traffic classes with each traffic class being associated with a respective combination of three (3) priority bits in a priority code point (PCP) field of an Ethernet VLAN tag, at least two (2) of the traffic classes being associated with a common priority but different respective drop precedence levels and no one priority bit of the 3 priority bits in an Ethernet frame being completely determinative of either the priority or the drop precedence level of the Ethernet frame without reference to the other 2 priority bits of the 3 priority bits in the Ethernet frame, the frame forwarder being configured to enqueue in one queue Ethernet frames having a common priority but different drop precedence levels, and being configured to selectively discard Ethernet frames based on respective drop precedence levels associated with the Ethernet frames.
19. The network device of claim 18, wherein the 3 priority bits provide 8 different combinations of the 3 priority bits, and the frame forwarder is further configured to use some of the 8 different combinations to classify the Ethernet frames into less than 8 priorities and others of the 8 different combinations for other purposes.
20. The network device of claim 19, wherein the frame forwarder is further configured to associate a plurality of respective drop precedence levels with some of the less than 8 priorities and to associate a single drop precedence level with others of the less than 8 priorities.
21. The network device of claim 19, wherein the frame forwarder is further configured to associate combinations of priority bits not associated with priorities with congestion indications.
22. The network device of claim 21, wherein the frame forwarder is further configured to associate with congestion indications 2 combinations of priority bits not associated with priorities.
23. The network device of claim 18, wherein the frame forwarder is further configured to change the combination of the 3 priority bits in the PCP field of the VLAN tag of the Ethernet frames to remark the Ethernet frames with a drop precedence indication.
24. The network device of claim 18, wherein the frame forwarder is further configured to change the combination of the 3 priority bits in the PCP fields of the VLAN tags of the Ethernet frames to remark the Ethernet frames to map per hop behavior between multiple domains.
25. The network device of claim 18, wherein the frame forwarder is further configured to change the combination of the 3 priority bits in the PCP fields of the VLAN tags of the Ethernet frames to remark the Ethernet frames to perform per hop behavior compression.
26. The network device of claim 18, wherein the frame forwarder is further configured to change the combination of the 3 priority bits in the PCP fields of the VLAN tags of the Ethernet frames to remark the Ethernet frames to effect priority bit translation.
27. The network device of claim 18, wherein the frame forwarder is further configured to condition the Ethernet frames.
28. The network device of claim 18, wherein the frame forwarder is further configured to determine a bandwidth profile based on the sets of priority bits.
29. The network device of claim 18, wherein the frame forwarder is configured to determine forwarding treatments for the Ethernet frames based on the sets of priority bits.
US14/302,995 2004-01-20 2014-06-12 Ethernet differentiated services conditioning Abandoned US20140293791A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US53748704P 2004-01-20 2004-01-20
US10/868,568 US8804728B2 (en) 2004-01-20 2004-06-15 Ethernet differentiated services conditioning
US14/302,995 US20140293791A1 (en) 2004-01-20 2014-06-12 Ethernet differentiated services conditioning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/302,995 US20140293791A1 (en) 2004-01-20 2014-06-12 Ethernet differentiated services conditioning

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/868,568 Continuation US8804728B2 (en) 2004-01-20 2004-06-15 Ethernet differentiated services conditioning

Publications (1)

Publication Number Publication Date
US20140293791A1 2014-10-02

Family

ID=37700887

Family Applications (6)

Application Number Title Priority Date Filing Date
US10/868,536 Expired - Fee Related US7764688B2 (en) 2004-01-20 2004-06-15 Ethernet differentiated services
US10/868,568 Expired - Fee Related US8804728B2 (en) 2004-01-20 2004-06-15 Ethernet differentiated services conditioning
US10/868,607 Expired - Fee Related US7843925B2 (en) 2004-01-20 2004-06-15 Ethernet differentiated services architecture
US12/939,304 Expired - Fee Related US8687633B2 (en) 2004-01-20 2010-11-04 Ethernet differentiated services architecture
US14/093,900 Abandoned US20140086251A1 (en) 2004-01-20 2013-12-02 Ethernet differentiated services architecture
US14/302,995 Abandoned US20140293791A1 (en) 2004-01-20 2014-06-12 Ethernet differentiated services conditioning

Family Applications Before (5)

Application Number Title Priority Date Filing Date
US10/868,536 Expired - Fee Related US7764688B2 (en) 2004-01-20 2004-06-15 Ethernet differentiated services
US10/868,568 Expired - Fee Related US8804728B2 (en) 2004-01-20 2004-06-15 Ethernet differentiated services conditioning
US10/868,607 Expired - Fee Related US7843925B2 (en) 2004-01-20 2004-06-15 Ethernet differentiated services architecture
US12/939,304 Expired - Fee Related US8687633B2 (en) 2004-01-20 2010-11-04 Ethernet differentiated services architecture
US14/093,900 Abandoned US20140086251A1 (en) 2004-01-20 2013-12-02 Ethernet differentiated services architecture

Country Status (2)

Country Link
US (6) US7764688B2 (en)
CN (1) CN1910856A (en)

Families Citing this family (112)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7764688B2 (en) 2004-01-20 2010-07-27 Nortel Networks Limited Ethernet differentiated services
US7680139B1 (en) * 2004-03-25 2010-03-16 Verizon Patent And Licensing Inc. Systems and methods for queue management in packet-switched networks
US7733770B2 (en) * 2004-11-15 2010-06-08 Intel Corporation Congestion control in a network
US7672319B1 (en) * 2004-11-17 2010-03-02 Adtran, Inc. Integrated router/switch-based mechanism for mapping COS value to QOS value for optimization of LAN-to-WAN traffic flow
US7830887B2 (en) * 2004-11-30 2010-11-09 Broadcom Corporation Method and apparatus for direct memory access based on class-of-service
WO2006085292A1 (en) * 2005-02-14 2006-08-17 Telefonaktiebolaget L M Ericsson (Publ) Method and nodes for performing bridging of data traffic over an access domain
FR2882939B1 (en) * 2005-03-11 2007-06-08 Centre Nat Rech Scient A fluid separation
US20070058532A1 (en) * 2005-09-15 2007-03-15 Manoj Wadekar System and method for managing network congestion
US7733780B2 (en) * 2005-12-07 2010-06-08 Electronics And Telecommunications Research Institute Method for managing service bandwidth by customer port and EPON system using the same
US20070230369A1 (en) * 2006-03-31 2007-10-04 Mcalpine Gary L Route selection in a network
EP1863231A1 (en) * 2006-05-29 2007-12-05 Nokia Siemens Networks Gmbh & Co. Kg Managing QoS in an unified way
US20080080382A1 (en) * 2006-09-28 2008-04-03 Dahshan Mostafa H Refined Assured Forwarding Framework for Differentiated Services Architecture
US20080112318A1 (en) * 2006-11-13 2008-05-15 Rejean Groleau Traffic shaping and scheduling in a network
CA2670766A1 (en) * 2007-01-17 2008-07-24 Nortel Networks Limited Method and apparatus for interworking ethernet and mpls networks
KR100964190B1 (en) * 2007-09-06 2010-06-17 한국전자통신연구원 QoS management method for an Ethernet based NGN
US8077709B2 (en) 2007-09-19 2011-12-13 Cisco Technology, Inc. Redundancy at a virtual provider edge node that faces a tunneling protocol core network for virtual private local area network (LAN) service (VPLS)
US8878219B2 (en) * 2008-01-11 2014-11-04 Cree, Inc. Flip-chip phosphor coating method and devices fabricated utilizing method
US8259569B2 (en) * 2008-09-09 2012-09-04 Cisco Technology, Inc. Differentiated services for unicast and multicast frames in layer 2 topologies
CN101409656B (en) 2008-10-15 2012-04-18 华为技术有限公司 Method for checking virtual circuit connectivity, network node and communication system
KR101479011B1 * 2008-12-17 2015-01-13 삼성전자주식회사 Method of scheduling multi-band and broadcasting service system using the method
US8792490B2 (en) * 2009-03-16 2014-07-29 Cisco Technology, Inc. Logically partitioned networking devices
CN101510855B (en) * 2009-04-10 2011-06-15 华为技术有限公司 Method and apparatus for processing QinQ message
US8638799B2 (en) * 2009-07-10 2014-01-28 Hewlett-Packard Development Company, L.P. Establishing network quality of service for a virtual machine
US9264341B2 (en) * 2009-07-24 2016-02-16 Broadcom Corporation Method and system for dynamic routing and/or switching in a network
CA2790247A1 (en) * 2010-02-19 2011-08-25 David Berechya Method and device for conveying oam messages across an inter-carrier network
CN102170374A (en) * 2010-02-26 2011-08-31 杭州华三通信技术有限公司 Tunnel service configuration checking method, system and equipment
US8867552B2 (en) 2010-05-03 2014-10-21 Brocade Communications Systems, Inc. Virtual cluster switching
US9001824B2 (en) 2010-05-18 2015-04-07 Brocade Communication Systems, Inc. Fabric formation for virtual cluster switching
US9716672B2 (en) 2010-05-28 2017-07-25 Brocade Communications Systems, Inc. Distributed configuration management for virtual cluster switching
US9461840B2 (en) 2010-06-02 2016-10-04 Brocade Communications Systems, Inc. Port profile management for virtual cluster switching
US9270486B2 (en) 2010-06-07 2016-02-23 Brocade Communications Systems, Inc. Name services for virtual cluster switching
US9769016B2 (en) 2010-06-07 2017-09-19 Brocade Communications Systems, Inc. Advanced link tracking for virtual cluster switching
US9608833B2 (en) 2010-06-08 2017-03-28 Brocade Communications Systems, Inc. Supporting multiple multicast trees in trill networks
US8446914B2 (en) 2010-06-08 2013-05-21 Brocade Communications Systems, Inc. Method and system for link aggregation across multiple switches
US9628293B2 (en) 2010-06-08 2017-04-18 Brocade Communications Systems, Inc. Network layer multicasting in trill networks
US9806906B2 (en) 2010-06-08 2017-10-31 Brocade Communications Systems, Inc. Flooding packets on a per-virtual-network basis
US8989186B2 (en) 2010-06-08 2015-03-24 Brocade Communication Systems, Inc. Virtual port grouping for virtual cluster switching
US9246703B2 (en) 2010-06-08 2016-01-26 Brocade Communications Systems, Inc. Remote port mirroring
US9807031B2 (en) 2010-07-16 2017-10-31 Brocade Communications Systems, Inc. System and method for network configuration
US8718071B2 (en) 2010-09-10 2014-05-06 Futurewei Technologies, Inc. Method to pass virtual local area network information in virtual station interface discovery and configuration protocol
US20120099591A1 (en) * 2010-10-26 2012-04-26 Dell Products, Lp System and Method for Scalable Flow Aware Network Architecture for Openflow Based Network Virtualization
US9667539B2 (en) * 2011-01-17 2017-05-30 Alcatel Lucent Method and apparatus for providing transport of customer QoS information via PBB networks
US8650285B1 (en) 2011-03-22 2014-02-11 Cisco Technology, Inc. Prevention of looping and duplicate frame delivery in a network environment
US8611212B2 (en) 2011-03-30 2013-12-17 Fujitsu Limited Method and system for writing to a VLAN tag
US9379938B2 (en) 2011-03-30 2016-06-28 Fujitsu Limited Method and system for SOAM flow switching
US8964537B2 (en) * 2011-03-30 2015-02-24 Fujitsu Limited Method and system for egress policy indications
US20120254397A1 (en) * 2011-03-30 2012-10-04 Fujitsu Network Communications, Inc. Method and System for Frame Discard on Switchover of Traffic Manager Resources
US8982699B2 (en) 2011-03-30 2015-03-17 Fujitsu Limited Method and system for protection group switching
US9270572B2 (en) 2011-05-02 2016-02-23 Brocade Communications Systems Inc. Layer-3 support in TRILL networks
WO2012172319A1 (en) 2011-06-15 2012-12-20 Bae Systems Plc Data transfer
EP2536070A1 (en) * 2011-06-15 2012-12-19 BAE Systems Plc Data transfer
US9401861B2 (en) 2011-06-28 2016-07-26 Brocade Communications Systems, Inc. Scalable MAC address distribution in an Ethernet fabric switch
US8948056B2 (en) 2011-06-28 2015-02-03 Brocade Communication Systems, Inc. Spanning-tree based loop detection for an ethernet fabric switch
US9407533B2 (en) 2011-06-28 2016-08-02 Brocade Communications Systems, Inc. Multicast in a trill network
US8885641B2 (en) 2011-06-30 2014-11-11 Brocade Communication Systems, Inc. Efficient trill forwarding
US9736085B2 (en) * 2011-08-29 2017-08-15 Brocade Communications Systems, Inc. End-to-end lossless Ethernet in Ethernet fabric
US9699117B2 (en) 2011-11-08 2017-07-04 Brocade Communications Systems, Inc. Integrated fibre channel support in an ethernet fabric switch
US9450870B2 (en) 2011-11-10 2016-09-20 Brocade Communications Systems, Inc. System and method for flow management in software-defined networks
US9860188B2 (en) * 2011-12-22 2018-01-02 International Business Machines Corporation Flexible and scalable enhanced transmission selection method for network fabrics
US8995272B2 (en) 2012-01-26 2015-03-31 Brocade Communication Systems, Inc. Link aggregation in software-defined networks
US9742693B2 (en) 2012-02-27 2017-08-22 Brocade Communications Systems, Inc. Dynamic service insertion in a fabric switch
US9154416B2 (en) 2012-03-22 2015-10-06 Brocade Communications Systems, Inc. Overlay tunnel in a fabric switch
US9374301B2 (en) 2012-05-18 2016-06-21 Brocade Communications Systems, Inc. Network feedback in software-defined networks
US10277464B2 (en) 2012-05-22 2019-04-30 Arris Enterprises Llc Client auto-configuration in a multi-switch link aggregation
US9602430B2 (en) 2012-08-21 2017-03-21 Brocade Communications Systems, Inc. Global VLANs for fabric switches
US9215181B2 (en) * 2012-11-06 2015-12-15 Comcast Cable Communications, Llc Systems and methods for managing a network
US9401872B2 (en) 2012-11-16 2016-07-26 Brocade Communications Systems, Inc. Virtual link aggregations across multiple fabric switches
US9350680B2 (en) 2013-01-11 2016-05-24 Brocade Communications Systems, Inc. Protection switching over a virtual link aggregation
US9548926B2 (en) 2013-01-11 2017-01-17 Brocade Communications Systems, Inc. Multicast traffic load balancing over virtual link aggregation
US9413691B2 (en) 2013-01-11 2016-08-09 Brocade Communications Systems, Inc. MAC address synchronization in a fabric switch
US9565113B2 (en) 2013-01-15 2017-02-07 Brocade Communications Systems, Inc. Adaptive link aggregation and virtual link aggregation
US9565099B2 (en) 2013-03-01 2017-02-07 Brocade Communications Systems, Inc. Spanning tree in fabric switches
US9143582B2 (en) 2013-03-08 2015-09-22 International Business Machines Corporation Interoperability for distributed overlay virtual environments
US9432287B2 (en) 2013-03-12 2016-08-30 International Business Machines Corporation Virtual gateways and implicit routing in distributed overlay virtual environments
US9374241B2 (en) 2013-03-14 2016-06-21 International Business Machines Corporation Tagging virtual overlay packets in a virtual networking system
US10142236B2 (en) 2013-03-14 2018-11-27 Comcast Cable Communications, Llc Systems and methods for managing a packet network
US9112801B2 (en) 2013-03-15 2015-08-18 International Business Machines Corporation Quantized congestion notification in a virtual networking system
US9401818B2 (en) 2013-03-15 2016-07-26 Brocade Communications Systems, Inc. Scalable gateways for a fabric switch
US9794379B2 (en) 2013-04-26 2017-10-17 Cisco Technology, Inc. High-efficiency service chaining with agentless service nodes
US9565028B2 (en) 2013-06-10 2017-02-07 Brocade Communications Systems, Inc. Ingress switch multicast distribution in a fabric switch
US9699001B2 (en) 2013-06-10 2017-07-04 Brocade Communications Systems, Inc. Scalable and segregated network virtualization
US9806949B2 (en) 2013-09-06 2017-10-31 Brocade Communications Systems, Inc. Transparent interconnection of Ethernet fabric switches
US9912612B2 (en) 2013-10-28 2018-03-06 Brocade Communications Systems LLC Extended ethernet fabric switches
US10122639B2 (en) 2013-10-30 2018-11-06 Comcast Cable Communications, Llc Systems and methods for managing a network
EP2869513A1 (en) * 2013-10-30 2015-05-06 Telefonaktiebolaget L M Ericsson (Publ) Method and network node for controlling sending rates
US9548873B2 (en) 2014-02-10 2017-01-17 Brocade Communications Systems, Inc. Virtual extensible LAN tunnel keepalives
US10063473B2 (en) 2014-04-30 2018-08-28 Brocade Communications Systems LLC Method and system for facilitating switch virtualization in a network of interconnected switches
US9800471B2 (en) 2014-05-13 2017-10-24 Brocade Communications Systems, Inc. Network extension groups of global VLANs in a fabric switch
US9544219B2 (en) 2014-07-31 2017-01-10 Brocade Communications Systems, Inc. Global VLAN services
US9807007B2 (en) 2014-08-11 2017-10-31 Brocade Communications Systems, Inc. Progressive MAC address learning
DE102014218823A1 (en) * 2014-09-18 2016-03-24 Siemens Aktiengesellschaft Network nodes, control module for a component and Ethernet ring
US9524173B2 (en) 2014-10-09 2016-12-20 Brocade Communications Systems, Inc. Fast reboot for a switch
US9699029B2 (en) 2014-10-10 2017-07-04 Brocade Communications Systems, Inc. Distributed configuration management in a switch group
CN104363170B (en) * 2014-11-25 2017-08-11 新华三技术有限公司 Data forwarding method and apparatus for software-defined network
US9660909B2 (en) 2014-12-11 2017-05-23 Cisco Technology, Inc. Network service header metadata for load balancing
US9628407B2 (en) 2014-12-31 2017-04-18 Brocade Communications Systems, Inc. Multiple software versions in a switch group
US9626255B2 (en) 2014-12-31 2017-04-18 Brocade Communications Systems, Inc. Online restoration of a switch snapshot
US10003552B2 (en) 2015-01-05 2018-06-19 Brocade Communications Systems, Llc. Distributed bidirectional forwarding detection protocol (D-BFD) for cluster of interconnected switches
US9942097B2 (en) 2015-01-05 2018-04-10 Brocade Communications Systems LLC Power management in a network of interconnected switches
US10038592B2 (en) 2015-03-17 2018-07-31 Brocade Communications Systems LLC Identifier assignment to a new switch in a switch group
US9807005B2 (en) 2015-03-17 2017-10-31 Brocade Communications Systems, Inc. Multi-fabric manager
US10171303B2 (en) 2015-09-16 2019-01-01 Avago Technologies International Sales Pte. Limited IP-based interconnection of switches with a logical chassis
US9912614B2 (en) 2015-12-07 2018-03-06 Brocade Communications Systems LLC Interconnection of switches based on hierarchical overlay tunneling
US10187306B2 (en) 2016-03-24 2019-01-22 Cisco Technology, Inc. System and method for improved service chaining
US10320664B2 (en) 2016-07-21 2019-06-11 Cisco Technology, Inc. Cloud overlay for operations administration and management
US10218616B2 (en) 2016-07-21 2019-02-26 Cisco Technology, Inc. Link selection for communication with a service function cluster
US10225270B2 (en) 2016-08-02 2019-03-05 Cisco Technology, Inc. Steering of cloned traffic in a service function chain
US10218593B2 (en) 2016-08-23 2019-02-26 Cisco Technology, Inc. Identifying sources of packet drops in a service function chain environment
US10237090B2 (en) 2016-10-28 2019-03-19 Avago Technologies International Sales Pte. Limited Rule-based network identifier mapping
US10225187B2 (en) 2017-03-22 2019-03-05 Cisco Technology, Inc. System and method for providing a bit indexed service chain
US10257033B2 (en) 2017-04-12 2019-04-09 Cisco Technology, Inc. Virtualized network functions and service chaining in serverless computing infrastructure
US10333855B2 (en) 2017-04-19 2019-06-25 Cisco Technology, Inc. Latency reduction in service function paths

Family Cites Families (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6185203B1 (en) 1997-02-18 2001-02-06 Vixel Corporation Fibre channel switching fabric
US6208649B1 (en) * 1998-03-11 2001-03-27 Cisco Technology, Inc. Derived VLAN mapping technique
AU760313B2 (en) 1998-06-19 2003-05-15 Juniper Networks, Inc. A quality of service facility in a device for performing IP forwarding and ATM switching
US6625156B2 (en) * 1998-06-29 2003-09-23 Nortel Networks Limited Method of implementing quality-of-service data communications over a short-cut path through a routed network
US6167445A (en) * 1998-10-26 2000-12-26 Cisco Technology, Inc. Method and apparatus for defining and implementing high-level quality of service policies in computer networks
US6577642B1 (en) * 1999-01-15 2003-06-10 3Com Corporation Method and system for virtual network administration with a data-over cable system
US7184413B2 (en) * 1999-02-10 2007-02-27 Nokia Inc. Adaptive communication protocol for wireless networks
JP3733784B2 (en) 1999-05-21 2006-01-11 株式会社日立製作所 Packet relay device
US6798775B1 (en) * 1999-06-10 2004-09-28 Cisco Technology, Inc. Virtual LANs over a DLSw network
JP2001053794A (en) 1999-08-09 2001-02-23 NEC Corp Real-time backup communication method for IP communication
EP1111862A1 (en) * 1999-12-23 2001-06-27 TELEFONAKTIEBOLAGET LM ERICSSON (publ) Method and devices to provide a defined quality of service in a packet switched communication network
US7106737B1 (en) * 2000-04-10 2006-09-12 Siemens Communications, Inc. System and method for reinterpreting TOS bits
US6647428B1 (en) * 2000-05-05 2003-11-11 Luminous Networks, Inc. Architecture for transport of multiple services in connectionless packet-based communication networks
DE10123821A1 (en) * 2000-06-02 2001-12-20 Ibm Switched Ethernet network has a method for assigning priorities to user groups so that a quality of service guarantee can be provided by ensuring that packets for one or more groups are given priority over other groups
US7228358B1 (en) * 2000-07-25 2007-06-05 Verizon Services Corp. Methods, apparatus and data structures for imposing a policy or policies on the selection of a line by a number of terminals in a network
GB2369526B (en) * 2000-11-24 2003-07-09 3Com Corp TCP Control packet differential service
US7050396B1 (en) * 2000-11-30 2006-05-23 Cisco Technology, Inc. Method and apparatus for automatically establishing bi-directional differentiated services treatment of flows in a network
US6839327B1 (en) * 2000-12-01 2005-01-04 Cisco Technology, Inc. Method and apparatus for maintaining consistent per-hop forwarding behavior in a network using network-wide per-hop behavior definitions
US20020176450A1 (en) 2001-01-31 2002-11-28 Sycamore Networks, Inc. System and methods for selectively transmitting ethernet traffic over SONET/SDH optical network
US20020172229A1 (en) 2001-03-16 2002-11-21 Kenetec, Inc. Method and apparatus for transporting a synchronous or plesiochronous signal over a packet network
US7295562B1 (en) * 2001-03-26 2007-11-13 Advanced Micro Devices, Inc. Systems and methods for expediting the identification of priority information for received packets
AT287169T (en) * 2001-09-12 2005-01-15 Cit Alcatel Method and apparatus for differentiating service in a data network
EP1294202A1 (en) 2001-09-18 2003-03-19 Lucent Technologies Inc. A method of sending data packets through a MPLS network, and a MPLS network
US7126952B2 (en) * 2001-09-28 2006-10-24 Intel Corporation Multiprotocol decapsulation/encapsulation control structure and packet protocol conversion method
CA2411806A1 (en) * 2001-11-16 2003-05-16 Muthucumaru Maheswaran Wide-area content-based routing architecture
EP1313274A3 (en) * 2001-11-19 2003-09-03 Matsushita Electric Industrial Co., Ltd. Packet transmission apparatus and packet transmission processing method
US7787458B2 (en) * 2001-11-30 2010-08-31 Alcatel-Lucent Canada Inc. Method and apparatus for communicating data packets according to classes of service
US7257121B2 (en) * 2001-12-21 2007-08-14 Alcatel Canada Inc. System and method for mapping quality of service levels between MPLS and ATM connections in a network element
KR100451794B1 (en) 2001-12-28 2004-10-08 엘지전자 주식회사 Method for Interfacing IEEE802.1p and DiffServ
US7277442B1 (en) 2002-04-26 2007-10-02 At&T Corp. Ethernet-to-ATM interworking that conserves VLAN assignments
US7298750B2 (en) 2002-07-31 2007-11-20 At&T Knowledge Ventures, L.P. Enhancement of resource reservation protocol enabling short-cut internet protocol connections over a switched network
JP3788803B2 (en) * 2002-10-30 2006-06-21 富士通株式会社 L2 switch
US7702357B2 (en) * 2002-11-26 2010-04-20 Sony Corporation Wireless intelligent switch engine
KR100448635B1 (en) * 2002-11-27 2004-09-13 한국전자통신연구원 Communication node system, control node system, communication system using the node systems in the ethernet passive optical network
EP1455488B1 (en) * 2003-03-07 2006-06-07 Telefonaktiebolaget LM Ericsson (publ) System and method for providing differentiated services
US7386010B2 (en) 2003-06-13 2008-06-10 Corrigent Systems Ltd Multiprotocol media conversion
US20050141509A1 (en) 2003-12-24 2005-06-30 Sameh Rabie Ethernet to ATM interworking with multiple quality of service levels
US7505466B2 (en) * 2004-01-20 2009-03-17 Nortel Networks Limited Method and system for ethernet and ATM network interworking
US7764688B2 (en) * 2004-01-20 2010-07-27 Nortel Networks Limited Ethernet differentiated services
JP4509177B2 (en) * 2004-05-05 2010-07-21 Qualcomm Incorporated Method and apparatus for adaptive delay management in a wireless communication system
US7710887B2 (en) * 2006-12-29 2010-05-04 Intel Corporation Network protection via embedded controls
US8179909B2 (en) * 2009-12-15 2012-05-15 Mitsubishi Electric Research Laboratories, Inc. Method and system for harmonizing QoS in home networks

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6885638B2 (en) * 2002-06-13 2005-04-26 Motorola, Inc. Method and apparatus for enhancing the quality of service of a wireless communication
US20050144327A1 (en) * 2003-12-24 2005-06-30 Sameh Rabie Ethernet to frame relay interworking with multiple quality of service levels

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10291517B1 (en) * 2016-12-16 2019-05-14 Juniper Networks, Inc. Generating a dummy VLAN tag for indicating quality of service classification information in a distributed routing system

Also Published As

Publication number Publication date
US8687633B2 (en) 2014-04-01
US20050157721A1 (en) 2005-07-21
US20050157737A1 (en) 2005-07-21
CN1910856A (en) 2007-02-07
US20140086251A1 (en) 2014-03-27
US8804728B2 (en) 2014-08-12
US7764688B2 (en) 2010-07-27
US20110051723A1 (en) 2011-03-03
US7843925B2 (en) 2010-11-30
US20050157645A1 (en) 2005-07-21

Similar Documents

Publication Publication Date Title
Nichols et al. Definition of the differentiated services field (DS field) in the IPv4 and IPv6 headers
Bernet et al. An informal management model for diffserv routers
EP1739914B1 Method, apparatus, edge router and system for providing a guarantee of the quality of service (QoS)
US6504819B2 (en) Classes of service in an MPOA network
US7936770B1 (en) Method and apparatus of virtual class of service and logical queue representation through network traffic distribution over multiple port interfaces
US7477599B2 (en) System and method for guaranteeing quality of service in IP networks
US7907519B2 (en) Packet forwarding
US10033650B2 (en) Preserving quality of service across trill networks
US8467342B2 (en) Flow and congestion control in switch architectures for multi-hop, memory efficient fabrics
US7756026B2 (en) Providing a quality of service for various classes of service for transfer of electronic data packets
US7185073B1 (en) Method and apparatus for defining and implementing high-level quality of service policies in computer networks
US6680933B1 (en) Telecommunications switches and methods for their operation
EP1650908A2 (en) Internal load balancing in a data switch using distributed network process
EP1551136B1 (en) Hierarchical flow-characterizing multiplexor
US20070206602A1 (en) Methods, systems and apparatus for managing differentiated service classes
US20050066053A1 (en) System, method and apparatus that isolate virtual private network (VPN) and best effort traffic to resist denial of service attacks
US7492779B2 (en) Apparatus for and method of support for committed over excess traffic in a distributed queuing system
US6748435B1 (en) Random early demotion and promotion marker
US7020143B2 (en) System for and method of differentiated queuing in a routing system
US7523188B2 (en) System and method for remote traffic management in a communication network
US9009812B2 (en) System, method and apparatus that employ virtual private networks to resist IP QoS denial of service attacks
Davies et al. An architecture for differentiated services
US7406088B2 (en) Method and system for ethernet and ATM service interworking
US7046665B1 (en) Provisional IP-aware virtual paths over networks
US7724754B2 (en) Device, system and/or method for managing packet congestion in a packet switching network

Legal Events

Date Code Title Description
AS Assignment

Owner name: RPX CLEARINGHOUSE LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROCKSTAR CONSORTIUM US LP;ROCKSTAR CONSORTIUM LLC;BOCKSTAR TECHNOLOGIES LLC;AND OTHERS;REEL/FRAME:034924/0779

Effective date: 20150128

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION