EP1258114A1 - Method and device for data traffic shaping

Method and device for data traffic shaping

Info

Publication number
EP1258114A1
Authority
EP
European Patent Office
Prior art keywords
data, data packets, packet, priority, user
Legal status
Withdrawn (an assumption, not a legal conclusion)
Application number
EP01904830A
Other languages
German (de)
French (fr)
Inventor
Otto Andreas Schmid
Manju Hegde
Jean Pierre Bordes
Xingguo Zhao
Monier Maher
Curtis Davis
Current Assignee
Jean Pierre Bordes; Curtis Davis; Manju Hegde; Monier Maher; Otto Andreas Schmid; Xingguo Zhao; Celox Networks Inc
Original Assignee
Jean Pierre Bordes; Manju Hegde; Monier Maher; Otto Andreas Schmid; Xingguo Zhao; Celox Networks Inc
Application filed by Jean Pierre Bordes, Manju Hegde, Monier Maher, Otto Andreas Schmid, Xingguo Zhao and Celox Networks Inc
Publication of EP1258114A1


Classifications

    • H04L12/5602: Bandwidth control in ATM networks, e.g. leaky bucket (packet switching systems; transfer mode dependent, e.g. ATM)
    • H04Q11/0478: Provisions for broadband connections (time-division multiplexing; integrated services digital network)
    • H04L2012/5636: Monitoring or policing, e.g. compliance with allocated rate, corrective actions (admission control; resource management and allocation)
    • H04L2012/5679: Arbitration or scheduling (traffic aspects, e.g. load balancing, smoothing, buffer management)

Definitions

  • a device of the invention may further be provided for shaping a plurality of data streams, with each of the data streams comprising a plurality of variable length data packets and the device including a plurality of line cards, a plurality of data processing cards and a switch fabric; the cards shape the data streams and the switch fabric interconnects the plurality of line cards and the plurality of data processing cards. Therefore, the present invention provides traffic shaping for individual users, as well as for groups of users. Although a data packet may consist of a plurality of cells, shaping is preferably performed at the data packet level and not at the data cell level. As an example, traffic shaping of the present invention is first performed on a user level, where data packets of individual users are shaped according to the individual user's profile. Next, shaping is performed on a "logical link" level which may carry a plurality or group of users. Both levels of shaping are performed in parallel to increase processing speed. The invention also provides shaping based on priority and non-priority traffic, wherein priority traffic is preferably given strict priority over non-priority traffic (e.g., real-time traffic versus non-real-time traffic).
  • the present invention controls the flow of data packets such that the characteristics of the flow, after being processed and shaped, are readily definable which allows users to negotiate these determined transmission parameters more easily.
  • the invention also facilitates the use of network capacity with traffic management because of the ability to better predict the traffic stream characteristics. Monitoring the traffic on the network side (policing) is much easier and more reliable because the input data flow has known characteristics. While the principal advantages and features of the present invention have been explained above, a more complete understanding of the invention may be attained by referring to the description of the preferred embodiment which follows.
  • Fig. 1 is a schematic block diagram of a system constructed according to the principles of one embodiment of the present invention for shaping data traffic;
  • Fig. 2 is a schematic block diagram of a line card in the system of Fig. 1;
  • Fig. 3 is another schematic block diagram of a line card in the system of Fig. 1;
  • Fig. 4 is a schematic block diagram of an IPE card in the system of Fig. 1;
  • Fig. 5 is another schematic block diagram of an IPE card in the system of Fig. 1;
  • Fig. 6 is a chart of the user table of the present invention;
  • Fig. 7 is a chart of the logical link table of the present invention;
  • Fig. 8 is a schematic block diagram of a portion of the memory in a PPU of the system of Fig. 1;
  • Fig. 9 is a block time line representation of processing functions of the present invention;
  • Fig. 10 is a schematic block diagram of a user circular queue process of the present invention;
  • Fig. 11 is a schematic block diagram of a logical link circular queue process of the present invention;
  • Fig. 12 is a schematic block diagram of the circular queues of Figs. 10 and 11;
  • Fig. 13 is a schematic block diagram of a portion of the memory of a PPU of the present invention;
  • Fig. 14 is a flow chart of a priority data packet transmitting procedure of the present invention;
  • Fig. 15 is a flow chart of a non-priority data packet transmitting procedure of the present invention;
  • Fig. 16 is a flow chart of a non-priority data packet receiving procedure of the present invention;
  • Fig. 17 is a flow chart of a priority data packet receiving procedure of the present invention;
  • Fig. 18 is a block diagram of the process of data packet scheduling of the present invention;
  • Fig. 19 is an illustration of "traffic management"; and
  • Fig. 20 is a flow diagram of the "traffic management" process.
  • A system in which the preferred traffic shaping of the present invention is implemented is shown in Fig. 1 and is indicated generally by reference character 100.
  • the system may be provided as a mid-network router or hub, but may be any type of high-speed switch providing transmission of data.
  • streams of data cells or packets 102 are provided at inputs to the system. The number of inputs may be varied depending upon bandwidth demands.
  • the data packets are then processed, formatted and shaped (using the traffic shaper of the present invention) before being provided at the output of the system 100 as a shaped data stream 103 for transmission to their next destination.
  • the data stream is shaped at the data packet level and not at the data cell level. Therefore, common connections or inputs provided with different data packets from different users and groups of users are shaped according to the bandwidth available to each user or group of users (i.e., the subscribed transmission rate).
  • the system 100 is preferably provided with a plurality of line cards 104, a plurality of Internet processing engine (IPE) cards 106 and a switch fabric 108 providing bi-directional communication between the line cards 104 and the IPE cards 106.
  • the line cards 104 provide the physical interface to the transmission medium and examine data traffic from various interfaces, such as ATM sources, to extract relevant characterizing information from the data packets including, for example, protocol and user identification information.
  • the relevant control information extracted from the data packets in the data streams are forwarded to appropriate IPE cards 106.
  • the IPE cards 106 use this control information to provide protocol processing, and for managing users and tunnels.
  • the line cards 104 and IPE cards 106 are provided with a plurality of general purpose processors, shown in Figs. 2-5 as protocol processing units (PPUs) 110.
  • Each of the line cards 104 and IPE cards 106 is provided with a master processor, shown in the line card 104 illustrated in Figs. 2 and 3 and the IPE card 106 illustrated in Figs. 4 and 5 as a Master PPU (MPPU) 112.
  • the MPPUs 112 are provided mainly to implement functions relating to protocol processing, as well as to supervise and control the PPUs 110 on their card.
  • the MPPU 112 also provides bandwidth management and processing within a card, as well as aggregating the bandwidth needs of all the PPUs 110 on a given card.
  • the PPUs 110 and MPPUs 112 may be any type of general purpose processors, depending upon the demand requirements of the system 100, and may be, for example, Pentium® processor chips or PowerPC processor chips.
  • the line cards 104 together terminate the link protocol and distribute the received packets based on user, tunnel or logical link (i.e., group of users) information to a particular PPU 110 on a particular IPE card 106 through the switch fabric 108. It should be recognized that if more bandwidth is needed than a single PPU can handle, the data packets will be distributed and processed over cascaded multiple PPUs 110, as shown in Figs. 2-5.
  • the line cards 104 perform both ingress functions and egress functions.
  • the PPUs 110 of the line cards 104 perform load distribution to the various PPUs 110 on the IPE cards 106 of the system 100.
  • Data packets processed through this system 100 are queued for their destined PPU or PPUs 110 based on the requirements and/or limitations of each data packet, and the data packets are forwarded when they are eligible for service based on the distribution of switch fabric bandwidth.
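  • The text does not specify how this distribution is computed; a minimal sketch of one plausible scheme in C, keyed on the user identification so that a given user's packets always reach the same PPU (the hash and the PPU count are illustrative assumptions, not taken from the patent):

        #include <stdint.h>

        #define NUM_IPE_PPUS 16   /* sixteen PPUs per IPE card, per the text */

        /* Map a packet to a PPU by its user ID.  The patent says only that
           distribution is based on user, tunnel or logical link information;
           a multiplicative hash is one plausible choice. */
        static unsigned ppu_for_user(uint32_t uid)
        {
            return (uid * 2654435761u) % NUM_IPE_PPUS;
        }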
  • the traffic shaping of the present invention is performed on the egress side of the line cards 104. This traffic shaping controls the flow of data traffic (i.e., bandwidth) transmitted from the egress interfaces of the line cards 104.
  • This traffic shaping ensures that the data packet transmission rate is within the negotiated parameters of the particular user or groups of users, so that the data packet will not be rejected by the network or user.
  • the traffic shaping operation modifies the data traffic, giving the data packets a pre- defined shape to their profile.
  • each of the line cards 104 includes a plurality of physical input interfaces (PHYs) 114 for receiving and transmitting data packets.
  • the number of these PHYs may be modified and the system constructed according to its specific data traffic demands. For example, in a system providing a 10 Gigabits per second (Gbps) transmission rate, the input and output data stream transmission rate at each PHY is equal to the total transmission rate or bandwidth of the system divided by the number of PHYs 114 (with ten PHYs, say, 1 Gbps each).
  • the preferred system is also provided with packet inspectors or identifiers 123 and PPUs 110, and packet managers 124, each of which includes a packet formatter.
  • the packet identifiers 123 and packet formatters may be provided as described in the co-pending U.S. applications disclosed herein. However, any appropriate data packet processing system may be used which provides the required information.
  • the preferred embodiment includes eight PPUs 110 in each line card 104 and sixteen PPUs in each IPE card 106. However, the number of PPUs 110 is easily increased or decreased depending upon the requirements of the particular system (i.e., switch or router). Specifically, as shown in Figs. 4 and 5, each IPE card 106 is preferably provided with one packet inspector 123 and one packet manager 124.
  • the packet inspectors 123 provide for examining the data packets and extracting the relevant control information for providing to the PPUs 110 for processing, as well as receiving back processed information from the packet managers 124.
  • the packet inspectors 123 provide characterizing information from the data packets, such as user identification information to the packet managers 124 to enable traffic shaping of the data packets based on the stored user information (i.e., bandwidth, priority and burst limits) in user tables 128 of the line cards 104.
  • the user tables 128 are preferably provided in the memory storage connected to each of the PPUs 110 on the egress side of the line cards 104.
  • Each PPU 110 includes a central processing unit (CPU) or general processing unit and memory. Additionally, as shown in Figs. 2 and 3, two packet inspectors 123 and two packet managers 124 are provided on each line card 104, one on the ingress side and one on the egress side of the line card 104. In particular, the processing is performed in the PPUs 110 with the data maintained in the packet buffer (egress) 131 until the shaping is complete and the data packets are transmitted from the PHYs 114. Data is communicated between the packet inspector 123, the packet buffers and the PPUs 110 using the buffer access controllers (BACs) 133. This provides for "splicing" or dividing the data packets provided by the packet inspector 123. As shown in Figs. 4 and 5, the IPE cards 106 are also provided with BACs 133 for communicating with the packet buffer on those cards.
  • packet buffers 130 are provided throughout the system 100 to ensure that data packets transmitted and processed through the system 100 using the switch fabric 108 are held until such time that transferring of the data packets is available. Specifically, a buffer 130 is provided on the ingress side and buffer 131 on the egress side of the line cards 104, as shown in Figs. 2 and 3, as well as between the packet inspectors 123 and packet managers 124 on the card. A buffer 130 is likewise provided between the packet inspector 123 and packet manager 124 of the IPE cards 106 as shown in Figs. 4 and 5.
  • the packet buffer (egress) 131 between the packet inspector 123 and packet manager 124 on the egress side of the line cards 104 holds the data packets while the PPUs 110 of the line cards 104 use the characterizing information extracted by the packet inspectors 123 to process the data packets based on the user information stored in the user tables 128.
  • the traffic shaping of the present invention is performed in the PPUs 110 on the egress side of the line cards 104.
  • these PPUs 110 use the characterizing information from the ingress side packet inspectors 123, as well as processed information from the packet managers 124, to identify a specific user or group of users within the user table 128 or logical link table 129, respectively.
  • As shown in Fig. 6, the user table 128 includes three parameters associated with each user to describe that user's traffic shaping profile: Total Bandwidth (bits per second) (TB), Burst Limit (L) and Priority Bandwidth (PB).
  • the TB parameter defines the user's average provisioned bandwidth, based on that user's subscription rate.
  • L defines the maximum allowed transfer of a burst of data for that user.
  • the PB parameter defines the user's average provisioned bandwidth for priority traffic, based on that user's subscription rate.
  • the PB is part of TB. Therefore, for example, if a user is assigned a TB of 10 Mbps with a PB of 4 Mbps, the user is entitled to a total of 10 Mbps of bandwidth, out of which up to 4 Mbps can be for priority traffic (e.g., real-time traffic).
  • the user defined parameters also include a user identification (UID) to determine the user associated with the particular data packet and a physical identification (PHYID) to determine which PHY 114 is associated with the relevant data packet.
  • Each individual user is a member of a group of users, defined preferably as the logical link.
  • an individual user subscribes through an Internet Service Provider (ISP) for service and access to the Internet.
  • Each ISP contracts for bandwidth and that bandwidth is defined by the parameters of the logical link.
  • the logical link defined parameters are in the logical link table 129, as shown in Fig. 7, and include three parameters associated with each logical link to describe that particular logical link's (e.g., an ISP's) traffic shaping profile. Specifically, these parameters include the following: Logical Link Total Bandwidth (bits per second) (LLTB), Logical Link Burst Limit (L) and Logical Link Priority Bandwidth (LLPB).
  • the LLTB parameter defines the logical link's average provisioned bandwidth, based on that logical link's subscription rate.
  • the L parameter defines the maximum allowed transfer of a burst of data for that particular logical link.
  • the LLPB parameter defines the logical link's average provisioned bandwidth for priority traffic, based on that logical link's subscription rate. The LLPB is part of LLTB.
  • For example, if a logical link is assigned an LLTB of 10 Mbps with an LLPB of 4 Mbps, the logical link is entitled to a total of 10 Mbps of bandwidth, out of which up to 4 Mbps can be for priority traffic (e.g., real-time traffic).
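  • As an illustration, a row of the user table 128 (Fig. 6) and a row of the logical link table 129 (Fig. 7) can be pictured as the following C records; the patent gives only the parameter names, so the types and field names here are assumptions:

        #include <stdint.h>

        /* One row of the user table 128 (Fig. 6). */
        struct user_entry {
            uint32_t uid;    /* user identification (UID) */
            uint32_t phyid;  /* physical identification (PHYID) of the PHY 114 */
            uint32_t lid;    /* logical link identification (LID) of the group */
            double   tb;     /* Total Bandwidth, bits per second */
            double   pb;     /* Priority Bandwidth; PB is part of TB */
            double   burst;  /* Burst Limit L (units not specified in the text) */
        };

        /* One row of the logical link table 129 (Fig. 7). */
        struct ll_entry {
            uint32_t lid;    /* logical link identification */
            double   lltb;   /* Logical Link Total Bandwidth, bits per second */
            double   llpb;   /* Logical Link Priority Bandwidth; part of LLTB */
            double   burst;  /* Logical Link Burst Limit L */
        };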
  • Data packets are maintained in the buffer 131 on the egress side of the line cards 104 until the user or logical link information described above is processed and the data packets are transmitted out of the PHYs 114 to their next destination. It should be noted that the data packets remain in this buffer while the PPUs 110 process the relevant information to determine the limits of the user or logical link associated with the data packets in the buffer 131.
  • the data packets are logically organized on a per user basis in the buffer 131 on the egress side of the line cards 104. The data packets are maintained in two separate queues, one for priority data packets and one for non-priority data packets. These queues are organized on a first-in-first-out basis.
  • the PPUs 110 processing the user parameters in the tables determine the Earliest Theoretical Departure Time Total (ETDTT) for any user with data packets in the non-priority queue and the Earliest Theoretical Departure Time Priority (ETDTP) for any user with packets in the priority queue.
  • Each value is calculated and stored in the user table 128 the first time a user's data packet enters the buffer's queue, which may be the first time that particular user has ever transmitted data packets through the system 100, or may be after the particular user's queue has emptied and a first data packet is again received in the queue.
  • the PPUs calculate the times at which data packets for a particular user are conformant with the user's defined parameters (i.e., TB and PB). This time, as shown in Fig. 9, is used by the PPUs 110 to determine the next time at which a particular user's data packet becomes conformant and is ready for processing using the logical link's parameters.
  • a user who has both priority and non-priority packets in the queues of the buffer 131 will have both an ETDTT and an ETDTP pointer.
  • each PPU 110 in the system 100 has a real time counter (RTC), which maintains real time for the operations of the system 100.
  • Each PPU 110 also maintains within its memory a software defined User Circular Queue (UCQ) 132 as shown in Fig. 10 and a software defined logical link Circular Queue (LLCQ) 134 as shown in Fig. 11.
  • the UCQ has n logical bins 136 (0, 1, 2, ..., n-1), with each bin representing a time interval of T units.
  • the Time unit T for each bin 136 can be defined in the software as required by the bandwidth and other transfer parameters of the system 100.
  • the UCQ 132 and LLCQ 134 are provided with a wrap-around feature such that a particular user's and/or a particular logical link's ETDTT does not have to be processed in one cycle of the UCQ 132 or LLCQ 134 (i.e., n-1 bins or m-1 bins) .
  • the UCQ 132 preferably has an associated current bin pointer (CBPTR) which points to one of the bins 136 of the UCQ 132.
  • Every T time units, the CBPTR is updated to point to the next consecutive bin 136.
  • Every nT time units, the CBPTR will point to the same bin again. Note that the time nT must be greater than the maximum increment (packet length / bandwidth) corresponding to the lowest rate user that the system supports.
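  • As a worked example of this constraint (with illustrative numbers, not taken from the patent): for a lowest supported user rate of 64 kbit/s and a maximum packet of 1500 bytes, the maximum increment is 1500 × 8 / 64,000 = 0.1875 s, so a UCQ of n = 2048 bins with T = 100 microseconds gives nT = 0.2048 s, which satisfies the requirement.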
  • Corresponding to each bin 136 of the UCQ 132 are two linked lists of users, a Non-Priority Linked List (NPLL) 140 and a Priority Linked List (PLL) 142.
  • the NPLL is a doubly linked list and the PLL is a singly linked list. Therefore, for non-priority data packets, pointers point to both the previous and the next user in the linked list; for priority data packets, pointers point only to the next user in the linked list. The doubly linked NPLL is needed because, when priority data packets are processed, both the ETDTP and the ETDTT are recalculated, and a user's entry may then have to be delinked from the middle of the NPLL and relinked at a new bin; the sketch after this item illustrates the structure.
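  • In C, the bin-and-list structure just described might be sketched as follows; all names are illustrative, since the patent describes the structure only in prose:

        /* A user's node in the circular queue linked lists. */
        struct user_node {
            struct user_node *pnxt;   /* PLL 142: singly linked, next user only */
            struct user_node *nxt;    /* NPLL 140: doubly linked, so a user     */
            struct user_node *prev;   /* whose ETDTT moves can be delinked from
                                         the middle of a list in O(1) */
            double etdtt;             /* earliest theoretical departure, total */
            double etdtp;             /* earliest theoretical departure, priority */
        };

        /* One bin 136 of the User Circular Queue (UCQ 132). */
        struct ucq_bin {
            struct user_node *pll;    /* head of the Priority Linked List */
            struct user_node *npll;   /* head of the Non-Priority Linked List */
        };

        #define N_BINS 2048           /* n, sized so that n*T exceeds the largest
                                         packet time at the lowest supported rate */
        struct ucq {
            struct ucq_bin bin[N_BINS];
            unsigned cbptr;           /* current bin pointer: advances one bin
                                         every T units and wraps modulo n */
        };

        /* A departure time maps to a bin directly, so no search is needed to
           decide when a user's packets become conformant. */
        static unsigned bin_for_time(double etdt, double T)
        {
            return (unsigned)(etdt / T) % N_BINS;
        }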
  • the ETDTT and the ETDTP are calculated whenever a packet is transmitted on the physical interface. For example, if at current time (CT) a priority packet for user U1 is transmitted, then for that user:
        ETDTP = CT + (Packet Length / Priority Bandwidth) - L
        ETDTT = ETDTT + (Packet Length / Total Bandwidth)
    Because this update can move the user's ETDTT, the user's entry may have to be delinked from the middle of a bin's list; this is why the NPLL 140 must be a doubly linked list. If instead a non-priority packet is transmitted, the ETDTT is updated to either
        ETDTT = CT + (Packet Length / Total Bandwidth) - L, or else
        ETDTT = ETDTT + (Packet Length / Total Bandwidth).
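  • In C, and continuing the sketch above, the transmit-time update might read as follows; the text preserves the two alternative ETDTT updates but not the test that selects between them, so a conventional burst-credit test is assumed here:

        /* Update a user's departure times when a packet of pkt_bits bits is
           transmitted at current time ct.  tb and pb are the user's Total and
           Priority Bandwidth in bits per second; burst is the Burst Limit L,
           treated here as a time credit (an assumption). */
        static void on_transmit(struct user_node *u, double ct, double pkt_bits,
                                double tb, double pb, double burst, int priority)
        {
            if (priority) {
                u->etdtp = ct + pkt_bits / pb - burst;
                u->etdtt = u->etdtt + pkt_bits / tb;  /* moving ETDTT may force a
                                                         mid-list delink from the
                                                         doubly linked NPLL */
            } else if (ct - u->etdtt > burst) {       /* assumed selection test */
                u->etdtt = ct + pkt_bits / tb - burst;
            } else {
                u->etdtt = u->etdtt + pkt_bits / tb;
            }
        }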
  • Updates to the entries in a logical link queue are based on the logical link corresponding to the user for which a data packet was transmitted.
  • When the CBPTR points to a bin, every user in the NPLL 140 is allowed to transmit the head-of-line or first-in-line packet in the user's non-priority buffer (indicated by the NHP pointer), and every user in the PLL 142 is allowed to transmit the head-of-line or first-in-line packet in the user's priority buffer (indicated by the PHP pointer).
  • each PPU 110 also maintains the LLCQ 134 with m logical bins 138, each representing a time interval of S time units. Therefore, as in the UCQ 132, a CBPTR will point to the same bin 138 every mS time interval. Note that the time mS must be greater than the maximum increment (packet length / bandwidth) corresponding to the lowest rate logical link that the system supports.
  • the invention calculates and maintains an Earliest Theoretical Departure Time Total (ETDTT) for any logical link with any conformant users, and calculates and maintains an Earliest Theoretical Departure Time Priority (ETDTP) for a logical link with any conformant users having data packets in the priority queue.
  • the ETDTT and ETDTP for the LLCQ 134 are calculated based on the defined parameters of that particular Logical Link (i.e., LLTB and LLPB).
  • a logical link that has both priority and non-priority conformant users will have an ETDTT and an ETDTP pointer.
  • Corresponding to each bin 138 of the LLCQ 134 are also two linked lists of logical links, a Non-Priority Linked List (NPLL) 144 and Priority Linked List (PLL) 146.
  • a CBPTR is also provided such that at a given time when the CBPTR points to a specific bin 138, all the logical links in the two linked lists are considered conformant.
  • Each individual user (e.g., an individual subscriber that contracts with an ISP for access to the Internet) is associated with a logical link (e.g., an ISP) by a logical link identification (LID). The logical link may be assigned a certain amount of bandwidth on the particular PHYs 114 with which the user is associated. The amount of bandwidth is that amount for which the ISP contracts with a bandwidth reseller. It is possible, and in fact common, for the logical link to comprise a group of individual users whose total TB may exceed the LLTB of that logical link and/or whose total PB may exceed the LLPB of that logical link (i.e., oversubscription).
  • each logical link is provided with two schedulers, a priority scheduler 148 for scheduling the transmission of priority data packets and a non-priority scheduler 150 for scheduling the transmission of non-priority data packets.
  • the data packets of users that become conformant in the UCQ 132 based upon their user parameters are linked to the logical link to which they belong in the LLCQ 134, and when that logical link becomes conformant, the users are placed in one of the two schedulers of the logical link.
  • the transmission of data packets of users in the schedulers is determined and scheduled by Deficit Round Robin as disclosed in "Efficient Fair Queueing using Deficit Round Robin" by Madhavapeddi Shreedhar and George Varghese, Proceedings of SIGCOMM, August 1995.
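  • A minimal C sketch of the cited Deficit Round Robin discipline (the flow abstraction and callbacks are illustrative; in the shaper, each "flow" stands for a conformant user handed to the logical link's scheduler):

        struct drr_flow {
            unsigned quantum;                /* bytes of credit added per round */
            unsigned deficit;                /* deficit counter */
            unsigned (*head_size)(void *q);  /* size of the head-of-line packet */
            void     (*send_head)(void *q);  /* dequeue and transmit it */
            int      (*empty)(void *q);
            void     *q;                     /* the flow's packet queue */
        };

        /* Serve one flow for one round: send head-of-line packets while the
           accumulated deficit covers them. */
        static void drr_serve(struct drr_flow *f)
        {
            f->deficit += f->quantum;
            while (!f->empty(f->q) && f->head_size(f->q) <= f->deficit) {
                f->deficit -= f->head_size(f->q);
                f->send_head(f->q);
            }
            if (f->empty(f->q))
                f->deficit = 0;              /* an emptied flow keeps no credit */
        }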
  • Fig. 14 illustrates the process used by the traffic shaper to determine if a particular priority data packet of a particular user has become conformant with the user's parameters as defined in the user table 128. The process shown also illustrates how the ETDTP pointer is updated.
  • the ETDTT and the ETDTP pointers are calculated whenever a packet is first received in the buffer 131 and are updated when a user is linked to the proper linked list and delinked from the original linked list if necessary.
  • ETDTP = ETDTP + (Packet Length / Priority Bandwidth)
  • If the user has more priority packets queued, the logical link priority scheduler (LLPS) continues to schedule the user (processing based on the DRR algorithm); otherwise, the user's information for the LLPS is updated (a deficit register update).
  • Fig. 15 illustrates the process of transmitting non-priority packets using the traffic shaping of the present invention. As shown, if at current time CT a non-priority packet for a user U1 is transmitted by the LLNPS (the logical link non-priority scheduler), then for that user the ETDTT is updated to either
        ETDTT = CT + (Packet Length / Total Bandwidth) - L, or else
        ETDTT = ETDTT + (Packet Length / Total Bandwidth).
  • Fig. 16 illustrates in flow chart form the process executed by the traffic shaper of the present invention when a priority packet is received for shaping. The pseudo code for that process is as follows:
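  • The pseudo code itself did not survive in this text; the following C sketch is reconstructed from the prose description given with Fig. 18 below. The helper routines are assumed, and initializing a newly active user's ETDTP to the current time is also an assumption, since the text says only that the value is calculated when the user's first packet arrives:

        struct packet;
        extern int  pq_empty(struct user_node *u);   /* user's priority FIFO */
        extern void pq_append(struct user_node *u, struct packet *p);
        extern void pll_link(struct ucq_bin *b, struct user_node *u);

        static void on_priority_arrival(struct ucq *w, struct user_node *u,
                                        struct packet *p, double ct, double T)
        {
            if (!pq_empty(u)) {     /* user already linked into a bin */
                pq_append(u, p);
                return;
            }
            pq_append(u, p);
            u->etdtp = ct;          /* assumed initialization */
            pll_link(&w->bin[bin_for_time(u->etdtp, T)], u);
        }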
  • Fig. 17 illustrates the process executed by the traffic shaper of the present invention when non-priority packets are received. The pseudo code for that process is as follows:
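  • Likewise for this process, reconstructed under the same assumptions; the only difference of substance is the doubly linked insertion:

        extern int  npq_empty(struct user_node *u);  /* user's non-priority FIFO */
        extern void npq_append(struct user_node *u, struct packet *p);
        extern void npll_link(struct ucq_bin *b, struct user_node *u);

        static void on_non_priority_arrival(struct ucq *w, struct user_node *u,
                                            struct packet *p, double ct, double T)
        {
            if (!npq_empty(u)) {
                npq_append(u, p);
                return;
            }
            npq_append(u, p);
            u->etdtt = ct;          /* assumed initialization */
            npll_link(&w->bin[bin_for_time(u->etdtt, T)], u);
        }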
  • the traffic shaper first calculates and then updates the ETDTT and ETDTP pointers for use in the UCQ 132 and LLCQ 134 preferably based upon the above definitions.
  • other procedures may be implemented to achieve the same or similar processing depending upon the application and requirements of the system that is shaping the data stream.
  • the UCQ 132 is provided in each bin 136 with a priority pointer and non-priority pointer.
  • the priority pointer includes the PNXTPTR pointing to the next user in the PLL 142. Only one pointer is required in this singly linked list.
  • the non-priority pointers include a NXTPTR to point to the next user in the NPLL 140 and a PREVPTR to point to the previous user in the NPLL 140. Two pointers are required because, as described herein, the NPLL 140 is doubly linked. Each of the users in the linked lists is associated with a logical link.
  • Therefore, each user has a logical link association, such that when any of the users becomes conformant to that user's transfer parameters, it is associated with either the NPLL 144 or the PLL 146 of the LLCQ 134 and added to that logical link's linked list, depending upon whether the user is a non-priority or priority user.
  • a logical link that includes both priority and non-priority conformant users will have a corresponding ETDTT and ETDTP.
  • the CBPTR pointer points to the next consecutive bin in the circular queue.
  • When the CBPTR points to a bin 136 in the UCQ 132, all of the users in that bin, both priority and non-priority, are considered conformant.
  • each user in the NPLL 140 is allowed to transmit the packet indicated by its NHP and each user in the PLL 142 is allowed to transmit the packet indicated by its PHP, assuming the logical link with which the user is associated can schedule the data packets of the users that are conformant.
  • For group shaping, when the CBPTR points to a bin 138 in the LLCQ 134, both the priority and non-priority data packets belonging to the conformant users in the bin 138 are scheduled for transmission.
  • Every time interval, the pointer is incremented in both the UCQ 132 and the LLCQ 134. This provides the parallel multi-level or multi-layer shaping of the present invention, sketched below.
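  • Putting the sketches above together, one shaping tick might look as follows in C. For brevity this assumes a common tick period for both wheels (T = S), whereas the text allows them different bin widths, and the hand-off helpers are assumed:

        struct ll_node;                     /* a logical link's node (not shown) */
        struct llcq_bin { struct ll_node *pll, *npll; };
        struct llcq { struct llcq_bin *bin; unsigned m; unsigned cbptr; };

        extern void llps_add(struct user_node *u);   /* to priority scheduler 148 */
        extern void llnps_add(struct user_node *u);  /* to non-priority scheduler 150 */
        extern void serve_conformant_links(struct llcq_bin *b); /* strict priority
                                               first, then DRR (Figs. 14 and 15) */

        static void shaping_tick(struct ucq *uw, struct llcq *lw)
        {
            /* User-level shaping: users in the newly current bin are now
               conformant and are handed to their logical link's schedulers. */
            uw->cbptr = (uw->cbptr + 1) % N_BINS;
            for (struct user_node *u = uw->bin[uw->cbptr].pll; u; u = u->pnxt)
                llps_add(u);
            for (struct user_node *u = uw->bin[uw->cbptr].npll; u; u = u->nxt)
                llnps_add(u);

            /* Group-level shaping advances in parallel on the LLCQ. */
            lw->cbptr = (lw->cbptr + 1) % lw->m;
            serve_conformant_links(&lw->bin[lw->cbptr]);
        }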
  • A specific example of the process of traffic shaping relating to receiving data packets is shown in Fig. 18.
  • When a non-priority packet for a user is received, if the user has non-priority packets already in buffer 131, the packet is just appended to the non-priority packet queue. If the user has no non-priority packet, the user's ETDTT is initialized and the user is linked into the doubly linked NPLL 140 in the UCQ 132 or into the NPLL 144 in the LLCQ 134 corresponding to the user.
  • When a priority packet for a user is received, if the user has priority packets already in buffer 131, the packet is just appended to the priority packet queue. If the user has no priority packets, the user's ETDTP is initialized and the user is linked into the PLL 142 in the UCQ 132 or into the PLL 146 in the LLCQ 134 corresponding to the user.
  • the packet is queued up in the packet buffer 131, which is organized on a per user basis. For transmission, an entry is created in the corresponding bin. For example, if the current time is X and the ETDTT for a packet for user U4 is X5, an entry is created in the X5 bin of the scheduler. The entry is preferably just a pointer to a row in the user table 128. Similarly, if the next packet comes (U5) and its ETDTT is again calculated as X5, a new entry is not created for this user in the X5 bin of the scheduler.
  • a bin preferably contains only a single pointer to an entry in the user table and if there are more users in a bin, the users are linked together in the order they are received.
  • Data packets of a single bin are preferably rearranged according to the priority of the users (i.e., packets with high priority are scheduled for transmission prior to the packets of the lower priorities in a single bin) .
  • the transmission sequence will be U4, U5, U7.
  • Fig. 19 is an illustration of the complementary functions of "traffic policing" and "traffic shaping."
  • a system such as that in Fig. 1 takes input logical link groups comprised of individual users and provides "policing" of the data packets and thereafter "shapes" the data packets to a pre-defined profile for transmission based on the individual user's transfer parameters and the logical link's transfer parameters.
  • the overall process is shown in flow form in Fig. 20.
  • the processing and shaping of the traffic shaper of the present invention occurs at high speed and at multiple levels in parallel due to the structure of the circular queues and the provision of the packet buffers to hold the data packets while the parallel processing is occurring.
  • the traffic shaping of the present invention may be configured in alternate ways, and is not limited by the number of component parts and the specific code as described in the preferred embodiment.
  • additional circular queues may be included for additional layers of parallel processing.
  • the number of line cards 104 and IPE cards 106 may be scaled up or down depending upon the requirements for the "packet shaping" to be performed.
  • the number of PPUs 110 on the line cards may be scaled depending upon the processing demands of the system.
  • Although the traffic shaper of the present invention has been described in detail only in the context of shaping data packets through routers and switches, the traffic shaper may also be readily configured to shape data in other non-networking applications and anywhere shaping of a data stream is required.
  • the various block representations as described herein represent hardware implementations of the invention, such as in chip architecture. Several of the chips or functions could be incorporated into a custom chip. Although not preferred, one or more of the functions of the traffic shaping performed in software could be implemented in hardware.

Abstract

A method and device for traffic shaping is provided for multiple level or multi-layer shaping in parallel, which can also provide priority shaping to multiple data streams comprised of data packets. Data streams transmitted over connections or groups of connections comprising a common channel or interface are shaped in order to control the data transfer rate allowed to each user or group of users to be within a subscription rate. The traffic shaper uses software configured to provide logical circular queues to confirm, without a search, when data packets are conformant with a provisioned bandwidth. The traffic shaper also schedules the provisioned bandwidth to a logical link and thereafter to an output port. The traffic shaper also allows for transmission of bursts of data packets as permitted by a subscription schedule. Shaping is also possible to control data packet transmission to be within different bandwidths (i.e., a total rate and a sub-rate) and at multiple levels or layers to enable the shaping of, for example, oversubscribed bandwidth provided to groups of users.

Description

METHOD AND DEVICE FOR DATA TRAFFIC SHAPING
FIELD OF THE INVENTION
The present invention relates to the field of high speed data packet processing for computer networking systems, and in particular to shaping streams of data packets to conform to varying data packet size and format requirements of these computer networking systems.
BACKGROUND OF THE INVENTION
The demand for increased data transfer speed using the Internet continues to grow and the different types of data that are transferred have varying transfer requirements and limitations. Therefore, "data traffic management" which addresses these issues has become ever more important to ensure the proper and speedy transfer of data on the Internet.
Numerous issues have arisen relating to managing the data traffic. For example, as data transfer speed increases, a need has arisen to ensure that the connections and interfaces of networks are able to control the rate of such data transfer to avoid data transfer problems, such as overflow and loss of data. Additionally, different users or subscribers, to accommodate the different types of data being transferred, demand different data transfer rates (i.e., different bandwidth requirements). Thus, the network interfaces must be able to determine the defined parameters and limits (e.g., the maximum bandwidth for a given user) of the transfer rate for each user or subscriber, as well as the specific transfer requirements for the particular data packets of that user or subscriber. Further, some types of data have stricter transfer requirements than "normal" data traffic. Internet Service Providers (ISPs) and other network managers are recognizing the need to manage data traffic such that different users, as well as different types of data, get different treatment by the networks (i.e., different data transfer parameters). Some users require greater bandwidth transfer rates and certain "priority" data requires more stringent transfer parameters (e.g., real-time traffic such as voice or video). As bandwidth is a scarce resource, network connections or interfaces must be managed to isolate users or groups of users that are using common connections or interfaces in order to limit the amount of bandwidth available to each of these users or groups of users based on their subscription rates. Further, these network connections or interfaces must be able to recognize "priority" data traffic and provide data transfer based on the more stringent transfer requirements of this data. The concept of data traffic policing and data traffic shaping is known. Switches and routers, for example, are known to "police" data on an incoming connection or interface to determine whether the transfer bandwidth is in compliance with a subscriber level. It is also known to shape data traffic in these switches and routers to control the outgoing connection or interface to ensure that the data being transferred is limited to the bandwidth assigned to the particular user or subscriber whose data packet or packets are in the traffic stream. Thus, the concept of "traffic management" or "data transfer management" is known. However, as best known to the inventors herein, such "traffic management" is currently limited to only single level data management (e.g., traffic shaping of individual users or traffic shaping of groups of users).
Therefore, what is needed is a method and device for efficiently managing data traffic at multiple levels, and in particular, for multi-level data shaping of traffic on the Internet. Specifically, what is needed is a device and method capable of processing data traffic in parallel to two or more rates (i.e., a total rate and a sub-rate) and of providing multi-level traffic shaping to shape, for example, oversubscribed bandwidth provided to groups of users. Additionally, traffic shaping is needed that can identify priority data and provide low latency to delay-sensitive connections.
SUMMARY OF THE INVENTION
The present invention provides a method and device for traffic shaping at multiple levels or layers in parallel and that can provide priority shaping to certain data. The invention provides for shaping connections or groups of connections over a common channel or interface in order to identify and/or isolate users or groups of users from each other in order to control the data transfer rate of each user or group of users (i.e., limit bandwidth of each user or group of users to their subscription rate). Generally, the traffic shaper of the present invention provides parallel and efficient use of circular queues to confirm, without a search, when data packets are conformant with a provisioned bandwidth. The traffic shaper also schedules the provisioned bandwidth to a logical link and thereafter to an output port. The traffic shaper also provides for transmission of bursts of data packets.
Further, the invention includes the efficient use of linked lists to provide priorities to certain data on specific connections for low latency to delay sensitive connections. Shaping is also possible at different bandwidths (i.e., a total rate and a sub-rate) and at multiple levels or layers to enable the shaping of, for example, oversubscribed bandwidth provided to groups of users.
The traffic shaping of the present invention is preferably software based, but uses hardware control for implementing overall data traffic transfer. Preferably the invention holds data packets in a buffer until user and protocol information is processed and the data packet is transmitted to its next network destination. The invention provides inputs to examine incoming traffic from various interfaces, for example asynchronous transfer mode (ATM), Gigabit Ethernet or Packet over synchronous optical network (POS). This allows for the extraction of relevant control information from data packets (e.g., characterizing information, such as user identification information), which is then forwarded to appropriate internet processing engines (IPEs) of the present invention. The IPEs provide protocol processing and management of users and tunnels. For example, data packet identification may be provided as described in co-pending U.S. application entitled "Device and Method for Packet Inspection" having serial no. 09/494,235 and a filing date of January 30, 2000, and data packet formatting may be provided as described in co-pending U.S. application entitled "Device and Method for Packet Formatting" having serial no. 09/494,236 and a filing date of January 30, 2000, the disclosures of each of which are incorporated herein by reference. In hardware implementation, general-purpose processors (protocol processing units (PPUs)) provide for the processing of data packets for shaping the traffic stream. Multiple PPUs may be implemented depending upon the amount of bandwidth managed. Thus, the invention shapes the flow of traffic on the egress side of the traffic shaper to ensure conformity to negotiated transfer parameters. The invention thereby gives a pre-defined shape to the data stream profile.
Succinctly, the invention provides both a method and device for traffic shaping. The method of shaping data packets in a data stream is provided to control the rate of transfer of the data packets having characterizing information corresponding to users and a predefined data packet transfer rate for each of the packets. The method preferably comprises the steps of processing the characterizing information in parallel to thereby determine a plurality of data transfer requirements, and forwarding the data packets to a next destination at the predefined data packet transfer rate based on the determined data transfer requirements. The method may further comprise processing a plurality of levels of user information in parallel and wherein the processing includes processing in parallel a first level of user information comprising individual user information and processing another level of user information comprising group user information. Further, processing the characterizing information in parallel may include determining a plurality of levels of transfer requirements. The method may further include forwarding the data packets at a plurality of predefined data packet transfer rates based on the plurality of levels of determined data transfer requirements and storing the data packets while the parallel processing is performed. The data packets being processed may be of variable length. The method may further provide forwarding the data packets at a higher data packet transfer rate than the predefined data packet transfer rate and storing the data packets by logically associating the data packets of each individual user as they are stored. The step of processing the characterizing information may further comprise determining whether a data packet is a priority data packet or a non-priority data packet and parallel processing further comprises processing in parallel the individual user and group user information for both the priority and non-priority data packets. The method may also include the step of separately scheduling a data transfer rate for each of the priority and non-priority data packets.
The invention further provides a method of shaping a data stream to control the transfer rate of a plurality of data packets comprising the data stream with the method comprising the steps of storing the data packets while a data transfer rate is determined for each data packet, determining individual user and group user desired data transfer rates for each of the stored data packets, processing in parallel the individual user and group user desired data transfer rates to determine an allowable data packet transmission time for each of the stored data packets, and transmitting the stored data packets on or after the allowable data packet transmission times. The method may further comprise determining a plurality of levels of desired data transfer rates for each of the individual user and group user data transfer rates and logically associating the data packets of each individual user.
The method may further include determining whether each data packet is a priority or non-priority data packet and parallel processing the desired data transfer rates for both the priority and non-priority data packets for the individual and group users. The method may provide separately scheduling the allowable departure time for each of the priority and non-priority data packets and processing the characterizing information for variable length data packets.
The method may also provide for transmitting the data packets before the allowable data packet transmission times.
The device of the present invention is preferably a data stream shaper providing multiple level shaping of a data stream, with the data stream comprising data packets having characterizing information. The data stream shaper comprises a plurality of processors for multiple level parallel processing of the characterizing information to determine a plurality of allowable user data transfer rates. A buffer may be connected between an input and the processors for storing the data packets as the processors process the characterizing information, and the processors may be configured to process the characterizing information for data packets in different processors. The data stream shaper may be provided wherein each of the plurality of processors is configured for processing each level in a separate one of the different processors. The data stream shaper may also provide logical association of related data packets in the buffer. The data stream shaper buffer may further comprise a priority storage area for storing priority data packets until the priority data packets are to be transmitted and a non-priority storage area for storing non-priority data packets until the non-priority packets are to be transmitted.
The data stream may comprise a plurality of data streams emanating from a plurality of users, wherein the multiple users are associated into groups, with the plurality of processors being configured to determine allowable data transfer rates based on characterizing information for the users and the groups.
A plurality of line cards may also be provided with the buffer comprising a plurality of buffer elements mounted on the plurality of line cards. The plurality of processors may also be mounted on the line cards, and the data stream shaper may further comprise a packet identifier for determining characterizing information connected to the line cards through a switch fabric and a packet manager for processing the data packets into a preselected format, also connected to the line cards through said switch fabric.
The device of the present invention may also be a controller for controlling the rate of transfer of data packets in a data stream, with the controller comprising at least one packet identifier for processing protocol and user information and at least one data stream shaper connected to said packet identifier. The data stream shaper comprises an input interface for receiving the data packets, an output for forwarding the data packets at a determined allowable transfer rate, and a plurality of processors for multiple level processing of the data packets in parallel to determine the allowable transfer rate for each of the data packets. Each of the packet identifiers may have a packet inspector for determining the protocol and user information for users, with the controller further comprising a packet manager connected to the processors for formatting the data packets into one of a plurality of predetermined data protocols. The controller may include a buffer connected to the packet identifier for storing the data packets as the characterizing information is processed, and the packet identifiers may determine whether each data packet is a priority or non-priority data packet. A device of the invention may also be provided for shaping a plurality of data streams, with each of the data streams comprising a plurality of data packets and the device including a plurality of processors for shaping the data streams, a plurality of packet managers for formatting the data packets comprising the data streams, and a switch fabric interconnecting the plurality of processors and the plurality of packet managers.
A device of the invention may further be provided for shaping a plurality of data streams, with each of the data streams comprising a plurality of variable length data packets and the device including a plurality of line cards and a plurality of data processing cards.
The cards shape the data streams and a switch fabric interconnects the plurality of line cards and the plurality of data processing cards.
Therefore, the present invention provides traffic shaping for individual users, as well as for groups of users. Although a data packet may consist of a plurality of cells, shaping is preferably performed at the data packet level and not at the data cell level. As an example, traffic shaping of the present invention is first performed on a user level, where data packets of individual users are shaped according to the individual user's profile. Next, shaping is performed on a "logical link" level which may carry a plurality or group of users. Both levels of shaping are performed in parallel to increase speed in processing. The invention also provides shaping based on priority and non-priority traffic, wherein priority traffic is preferably given strict priority over non-priority traffic (e.g., real-time traffic versus non-real-time traffic).
Thus, the present invention controls the flow of data packets such that the characteristics of the flow, after being processed and shaped, are readily definable, which allows users to negotiate these determined transmission parameters more easily. The invention also facilitates efficient use of network capacity through traffic management because of the ability to better predict the traffic stream characteristics. Monitoring the traffic on the network side (policing) is much easier and more reliable because the input data flow has known characteristics. While the principal advantages and features of the present invention have been explained above, a more complete understanding of the invention may be attained by referring to the description of the preferred embodiment which follows.

BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a schematic block diagram of a system constructed according to the principles of one embodiment of the present invention for shaping data traffic;
Fig. 2 is a schematic block diagram of a line card in the system of Fig. 1;
Fig. 3 is another schematic block diagram of a line card in the system of Fig. 1;
Fig. 4 is a schematic block diagram of an IPE card in the system of Fig. 1;
Fig. 5 is another schematic block diagram of an IPE card in the system of Fig. 1;
Fig. 6 is a chart of the user table of the present invention;
Fig. 7 is a chart of the logical link table of the present invention;
Fig. 8 is a schematic block diagram of a portion of the memory in a PPU of the system of Fig. 1;
Fig. 9 is a block time line representation of processing functions of the present invention;
Fig. 10 is a schematic block diagram of a user circular queue process of the present invention;
Fig. 11 is a schematic block diagram of a logical link circular queue process of the present invention;
Fig. 12 is a schematic block diagram of the circular queues of Figs. 10 and 11;
Fig. 13 is a schematic block diagram of a portion of the memory of a PPU of the present invention;
Fig. 14 is a flow chart of a priority data packet transmitting procedure of the present invention;
Fig. 15 is a flow chart of a non-priority data packet transmitting procedure of the present invention;
Fig. 16 is a flow chart of a non-priority data packet receiving procedure of the present invention;
Fig. 17 is a flow chart of a priority data packet receiving procedure of the present invention;
Fig. 18 is a block diagram of the process of data packet scheduling of the present invention;
Fig. 19 is an illustration of "traffic management"; and
Fig. 20 is a flow diagram of the "traffic management" process.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
A system in which the preferred traffic shaping of the present invention is implemented is shown in Fig. 1 and is indicated generally by reference character 100. The system may be provided as a mid-network router or hub, but may be any type of high-speed switch providing transmission of data. As shown therein, streams of data cells or packets 102 are provided at inputs to the system. The number of inputs may be varied depending upon bandwidth demands. The data packets are then processed, formatted and shaped (using the traffic shaper of the present invention) before being provided at the output of the system 100 as a shaped data stream 103 for transmission to their next destination. Preferably, the data stream is shaped at the data packet level and not at the data cell level. Therefore, common connections or inputs provided with different data packets from different users and groups of users are shaped according to the bandwidth available to each user or group of users (i.e., the subscribed transmission rate).
Referring again to Fig. 1, the system 100 is preferably provided with a plurality of line cards 104, a plurality of Internet processing engine (IPE) cards 106 and a switch fabric 108 providing bi-directional communication between the line cards 104 and the IPE cards 106.
The line cards 104 provide the physical interface to the transmission medium and examine data traffic from various interfaces, such as ATM sources, to extract relevant characterizing information from the data packets including, for example, protocol and user identification information. The relevant control information extracted from the data packets in the data streams is forwarded to appropriate IPE cards 106. The IPE cards 106 use this control information to provide protocol processing, and for managing users and tunnels.
Specifically, the line cards 104 and IPE cards 106 are provided with a plurality of general purpose processors, shown in Figs. 2-5 as protocol processing units (PPUs) 110. Each of the line cards 104 and IPE cards 106 is provided with a master processor, shown as a Master PPU (MPPU) 112 in the line card 104 illustrated in Figs. 2 and 3 and in the IPE card 106 illustrated in Figs. 4 and 5. The MPPUs 112 are provided mainly to implement functions relating to protocol processing, as well as to supervise and control the PPUs 110 on their card. The MPPU 112 also provides bandwidth management and processing within a card, as well as aggregating the bandwidth needs of all the PPUs 110 on a given card. It should be noted that the PPUs 110 and MPPUs 112 may be any type of general purpose processors, depending upon the demand requirements of the system 100, and may be, for example, Pentium® processor chips or PowerPC processor chips.
In the preferred packet shaping system 100, the line cards 104 together terminate the link protocol and distribute the received packets based on user, tunnel or logical link (i.e., group of users) information to a particular PPU 110 on a particular IPE card 106 through the switch fabric 108. It should be recognized that if more bandwidth is needed than a single PPU can handle, the data packets will be distributed and processed over cascaded multiple PPUs 110, as shown in Figs. 2-5.
The line cards 104 perform both ingress functions and egress functions. On the ingress side, the PPUs 110 of the line cards 104 perform load distribution to the various PPUs 110 on the IPE cards 106 of the system 100. Data packets processed through this system 100 are queued for their destined PPU or PPUs 110 based on the packet requirements and/or limitations of the data packet, with the data packets forwarded when they are eligible for service based on the distribution of switch fabric bandwidth. On the egress side of the line cards 104, the traffic shaping of the present invention is performed. This traffic shaping controls the flow of data traffic (i.e., bandwidth) transmitted from the egress interfaces of the line cards 104. This traffic shaping ensures that the data packet transmission rate is within the negotiated parameters of the particular user or groups of users, so that the data packet will not be rejected by the network or user. Generally, the traffic shaping operation modifies the data traffic, giving the data packets a pre-defined shape to their profile.
Referring again to Fig. 1, the line cards 104 and IPE cards 106 are provided with other component parts in their preferred embodiment to process incoming data packets and output formatted and shaped data packets. Specifically, as shown in Fig. 2, each of the line cards 104 includes a plurality of physical input interfaces (PHYs) 114 for receiving and transmitting data packets. The number of these PHYs may be modified and the system constructed according to its specific data traffic demands. For example, in a system providing a 10 Gigabits per second (Gbps) transmission rate, the input and output data stream transmission rate at each PHY is equal to the total transmission rate or bandwidth of the system divided by the number of PHYs 114.
The preferred system is also provided with packet inspectors or identifiers 123 and PPUs 110, and packet managers 124, each of which includes a packet formatter. The packet identifiers 123 and packet formatters may be provided as described in the co-pending U.S. applications disclosed herein. However, any appropriate data packet processing system may be used which provides the required information. Further, regarding the number of PPUs 110 provided, the preferred embodiment includes eight PPUs 110 in each line card 104 and sixteen PPUs in each IPE card 106. However, the number of PPUs 110 is easily increased or decreased depending upon the requirements of the particular system (i.e., switch or router). Specifically, as shown in Figs. 4 and 5, each IPE card 106 is preferably provided with one packet inspector 123 and one packet manager 124. The packet inspectors 123 provide for examining the data packets and extracting the relevant control information for providing to the PPUs 110 for processing, as well as receiving back processed information from the packet managers 124 for use in
"traffic shaping." Specifically, the packet inspectors 123 provide characterizing information from the data packets, such as user identification information to the packet managers 124 to enable traffic shaping of the data packets based on the stored user information (i.e., bandwidth, priority and burst limits) in user tables 128 of the line cards 104. The user tables 128 are preferably provided in the memory storage connected to each of the PPUs 110 on the egress side of the line cards 104.
Each PPU 110 includes a central processing unit (CPU) or general processing unit and memory. Additionally, as shown in Figs. 2 and 3, two packet inspectors 123 and two packet managers 124 are provided on each line card 104, one on the ingress side and one on the egress side of the line card 104. In particular, the processing is performed in the PPUs 110 with the data maintained in the packet buffer (egress) 131 until the shaping is complete and the data packets are transmitted from the PHYs 114. Data is communicated between the packet inspector 123, the packet buffers and the PPUs 110 using the buffer access controllers (BACs) 133. This provides for "splicing" or dividing the data packets provided by the packet inspector 123. As shown in Figs. 4 and 5, the IPE cards 106 are also provided with BACs 133 for communicating with the packet buffer on those cards.
It should be noted that packet buffers 130 are provided throughout the system 100 to ensure that data packets transmitted and processed through the system 100 using the switch fabric 108 are held until such time that transferring of the data packets is available. Specifically, a buffer 130 is provided on the ingress side and a buffer 131 on the egress side of the line cards 104, as shown in Figs. 2 and 3, as well as between the packet inspectors 123 and packet managers 124 on the card. A buffer 130 is likewise provided between the packet inspector 123 and packet manager 124 of the IPE cards 106 as shown in Figs. 4 and 5. Of specific note, and as previously identified, the packet buffer (egress) 131 between the packet inspector 123 and packet manager 124 on the egress side of the line cards 104 holds the data packets while the PPUs 110 of the line cards 104 use the characterizing information extracted by the packet inspectors 123 to process the data packets based on the user information stored in the user tables 128.
Referring again to Figs. 2 and 3, the traffic shaping of the present invention is performed in the PPUs 110 on the egress side of the line cards 104. Generally, these PPUs 110 use the characterizing information from the ingress side packet inspectors 123, as well as processed information from the packet managers 124, to identify a specific user or group of users within the user table 128 or logical link table 129, respectively. These tables are maintained within the memories of the PPUs 110 of the packet inspectors 123 on the egress side of the line cards 104.
Regarding user defined parameters in the user table as shown in Fig. 6, three parameters associated with each user are used to describe that particular user's traffic shaping profile. Specifically, these parameters include the following: Total Bandwidth (bits per second) (TB), Burst Limit (L) and Priority Bandwidth (PB). The TB parameter defines the user's average bandwidth that the user is provisioned based on that user's subscription rate. The L parameter defines the maximum allowed transfer of a burst of data for that user. The PB parameter defines the user's average bandwidth for priority traffic that the user is provisioned based on that user's subscription rate. The PB is part of TB. Therefore, for example, if a user is assigned a TB of 10 Mbps with a PB of 4 Mbps, the user is entitled to a total of 10 Mbps of bandwidth, out of which up to 4 Mbps can be for priority traffic (e.g., real-time traffic).
Additionally, the user defined parameters include a user identification (UID) to determine the user associated with the particular data packet and a physical identification (PHYID) to determine which PHY 114 is associated with the relevant data packet.
Each individual user is a member of a group of users, defined preferably as the logical link. For example, an individual user subscribes through an Internet Service Provider (ISP) for service and access to the Internet. Each ISP contracts for bandwidth and that bandwidth is defined by the parameters of the logical link. The logical link defined parameters are in the logical link table 129, as shown in Fig. 7, and include three parameters associated with each logical link to describe that particular logical link's (e.g., ISP's) traffic shaping profile. Specifically, these parameters include the following: Logical Link Total Bandwidth (bits per second) (LLTB), Logical Link Burst Limit (L) and Logical Link Priority Bandwidth (LLPB). The LLTB parameter defines the logical link's average bandwidth that the logical link is provisioned based on that logical link's subscription rate. The L parameter defines the maximum allowed transfer of a burst of data for that particular logical link. The LLPB parameter defines the logical link's average bandwidth for priority traffic that the logical link is provisioned based on that logical link's subscription rate. The LLPB is part of LLTB.
Therefore, for example, as with the user defined parameters, if a logical link is assigned an LLTB of 10 Mbps with an LLPB of 4 Mbps, the logical link is entitled to a total of 10 Mbps of bandwidth, out of which up to 4 Mbps can be for priority traffic (e.g., real-time traffic).
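By way of illustration only, the two shaping profiles described above might be represented as simple records, as in the following Python sketch (the field names and the example burst limit value are assumptions made for illustration; the patent specifies the parameters themselves, not a storage layout):

    from dataclasses import dataclass

    @dataclass
    class UserProfile:
        # One row of the user table 128 (illustrative layout).
        uid: int            # user identification (UID)
        phy_id: int         # physical interface association (PHYID)
        tb_bps: float       # Total Bandwidth (TB), bits per second
        burst_limit: float  # Burst Limit (L)
        pb_bps: float       # Priority Bandwidth (PB); PB is part of TB

    @dataclass
    class LogicalLinkProfile:
        # One row of the logical link table 129 (illustrative layout).
        lid: int            # logical link identification (LID)
        lltb_bps: float     # Logical Link Total Bandwidth (LLTB)
        burst_limit: float  # Logical Link Burst Limit (L)
        llpb_bps: float     # Logical Link Priority Bandwidth (LLPB); part of LLTB

    # The 10 Mbps / 4 Mbps example above, with a hypothetical burst limit:
    example_user = UserProfile(uid=1, phy_id=0, tb_bps=10e6, burst_limit=1e6, pb_bps=4e6)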
Data packets are maintained in the buffer 131 on the egress side of the line cards 104 until the user or logical link information described above is processed and the data packets are transmitted out of the PHYs 114 to their next destination. It should be noted that the data packets remain in this buffer while the PPUs 110 process the relevant information to determine the limits of the user or logical link associated with the data packets in the buffer 131.
Referring to Fig. 8, the data packets are logically organized on a per user basis in the buffer 131 on the egress side of the line cards 104. As shown in that figure, the data packets are maintained in two separate queues, one for priority data packets and one for non-priority data packets. These queues are organized on a first-in-first-out basis. Regarding the specific processing of the data packets for traffic shaping using user information in the user table 128, the PPUs 110 processing the user parameters in the tables determine the Earliest Theoretical Departure Time Total (ETDTT) for any user with data packets in the non-priority queue and the Earliest Theoretical Departure Time Priority (ETDTP) for any user with packets in the priority queue. These values are calculated and stored in the user table 128 the first time a user's data packet enters the buffer's queue, which may be the first time that particular user has ever transmitted data packets through the system 100, or may be after the particular user's queue is empty or clear and a first data packet is received again in the queue.
Therefore, based upon the transfer rate of the system 100, the PPUs calculate the time between which data packets for a particular user are conformant with their defined parameters (i.e., TB and PB). This time, as shown in Fig. 9, is used by the PPUs 110 to determine the next time at which a particular user's data packet becomes conformant and is ready for processing using the logical link's parameters. A user who has both priority and non-priority packets in the queues of the buffer 131 will have an ETDTT and an ETDTP pointer. In order to maintain the timing functions for the "traffic shaping," each PPU 110 in the system 100 has a real time counter (RTC), which maintains real time for the operations of the system 100. Each PPU 110 also maintains within its memory a software defined User Circular Queue (UCQ) 132 as shown in Fig. 10 and a software defined Logical Link Circular Queue (LLCQ) 134 as shown in Fig. 11. With reference specifically to Fig. 10, the UCQ has n number of logical bins 136 (0, 1, 2, ..., n-1), with each bin representing a time interval of T units. The time unit T for each bin 136 can be defined in the software as required by the bandwidth and other transfer parameters of the system 100. Further, the UCQ 132 and LLCQ 134 are provided with a wrap-around feature such that a particular user's and/or a particular logical link's ETDTT does not have to be processed in one cycle of the UCQ 132 or LLCQ 134 (i.e., n-1 bins or m-1 bins).
In the preferred embodiment of the invention, the UCQ 132 preferably has an associated current bin pointer (CBPTR) which points to one of the bins 136 of the UCQ 132. At each time interval T, the CBPTR is updated to point to the next consecutive bin 136. Thus, at every time nT the CBPTR will point to the same bin. Note that the time nT must be greater than the maximum increment (packet length / bandwidth) corresponding to the lowest rate user that the system supports.
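The bin-and-pointer mechanism just described is essentially a timing wheel, and might be sketched as follows (Python; the class name, the single list per bin, and the integer arithmetic are simplifying assumptions, since the patent keeps separate priority and non-priority linked lists per bin):

    class CircularQueue:
        # Sketch of the UCQ (or, with m bins of S units each, the LLCQ).
        def __init__(self, n_bins, t_units):
            self.n = n_bins                          # logical bins 0 .. n-1
            self.t = t_units                         # interval T covered by each bin
            self.bins = [[] for _ in range(n_bins)]
            self.cbptr = 0                           # current bin pointer (CBPTR)

        def link(self, user, etdt, current_time):
            # Wrap-around: a departure time up to (n-1)*T ahead maps into the wheel.
            offset = max(0, int((etdt - current_time) // self.t)) % self.n
            self.bins[(self.cbptr + offset) % self.n].append(user)

        def tick(self):
            # Every user in the bin under the CBPTR is considered conformant;
            # the pointer then advances to the next consecutive bin.
            conformant = self.bins[self.cbptr]
            self.bins[self.cbptr] = []
            self.cbptr = (self.cbptr + 1) % self.n
            return conformant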
As shown in Fig. 11, corresponding to every bin 136 of the UCQ 132 are two software defined linked lists of users, a Non-Priority Linked List (NPLL) 140 and a Priority Linked List (PLL) 142. The NPLL is a doubly linked list and the PLL is a singly linked list, as shown in this figure. Therefore, for non-priority data packets, pointers point to both the previous and next user in the linked list. For priority data packets, pointers only point to the next user in the linked list. This is because processing a priority data packet results in both the ETDTP and the ETDTT being recalculated, which may require the user's non-priority entry to be moved to a different bin, as described below.
The ETDTT and the ETDTP are calculated whenever a packet is transmitted on the physical interface. For example, if at current time (CT) a priority packet for user U1 is transmitted, then for that user:

IF CT <= ETDTP + L
    ETDTP = ETDTP + (Packet Length / Priority Bandwidth)
ELSE
    ETDTP = CT + (Packet Length / Priority Bandwidth) - L
If there is another priority packet for that user in the buffer then the user is linked to the PLL 142 of the corresponding bin 136.
In order to achieve the sharing of bandwidth between the priority and non-priority users, the following calculation is also performed:

ETDTT = ETDTT + (Packet Length / Total Bandwidth)
Then, if the bin 136 that the user is linked to for non-priority traffic changes, the user must be delinked from the current linked list and linked to the appropriate bin 136. Thus, the NPLL 140 must be a doubly linked list.
If at a current time CT a non-priority packet for user U1 is transmitted, then for that user:

IF CT <= ETDTT + L
    ETDTT = ETDTT + (Packet Length / Total Bandwidth)
ELSE
    ETDTT = CT + (Packet Length / Total Bandwidth) - L
If there is another non-priority packet for that user in the buffer 131, then the user is linked to the NPLL 140 of the corresponding bin 136. Therefore, the updating of the ETDTP might result in the ETDTT entry in a linked list having to be moved to a different bin 136 before it is processed. Thus, when this ETDTT pointer is removed from the linked list, the entries before and after that pointer must be relinked to point to each other. So, a doubly linked list is used for the data packets of non-priority users.
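The two update rules transcribe directly into code; the following is a minimal sketch, assuming (as the comparisons above do) that the burst limit L is expressed as a time credit:

    def update_etdt(etdt, ct, packet_length, bandwidth, burst_limit):
        # Shared rule for ETDTP (with Priority Bandwidth) and ETDTT (with
        # Total Bandwidth): packet_length in bits, bandwidth in bits per
        # second, burst_limit (L) in seconds of accumulated burst credit.
        increment = packet_length / bandwidth
        if ct <= etdt + burst_limit:
            return etdt + increment              # still within the burst window
        return ct + increment - burst_limit      # idle user: restart from CT

    # After a priority transmission, both times advance (bandwidth sharing):
    #   user.etdtp = update_etdt(user.etdtp, ct, pkt_len, user.pb_bps, user.burst_limit)
    #   user.etdtt = user.etdtt + pkt_len / user.tb_bps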
Similar updates are performed in the LLCQ 134 when a data packet from any of the users belonging to a logical link is transmitted. Updates to the entries in a logical link queue are based on the logical link corresponding to the user for which a data packet was transmitted.
Note that all the update calculations are performed taking into account the finite arithmetic available in processors, and therefore modular arithmetic is used to account for "wrap-around" calculations. Referring to Figs. 8 and 10-12, a user who has both priority and non-priority data packets in the buffer 131, and thereby is assigned an ETDTT and an ETDTP, will be included in two linked lists of the bins 136 of the UCQ 132 depending on the user's calculated ETDTT and ETDTP. All users in the two linked lists of a given bin 136 are considered conformant when the CBPTR points to that bin 136. Thus, every user in the NPLL 140 is allowed to transmit the head-of-line or first-in-line packet in the user's non-priority buffer (indicated by the NHP pointer), and every user in the PLL 142 is allowed to transmit the head-of-line or first-in-line packet in the user's priority buffer (indicated by the PHP pointer).
Referring again to Fig. 11, each PPU 110 also maintains the LLCQ 134 with m number of logical bins 138, each representing a time interval of S time units. Therefore, as in the UCQ 132, a CBPTR will point to the same bin 138 every mS time interval. Note that the time mS must be greater than the maximum increment (packet length / bandwidth) corresponding to the lowest rate logical link that the system supports. The invention calculates and maintains an Earliest Theoretical Departure Time Total (ETDTT) for any logical link with any conformant users, and calculates and maintains an Earliest Theoretical Departure Time Priority (ETDTP) for a logical link with any conformant users having data packets in the priority queue. The ETDTT and ETDTP for the LLCQ 134 are calculated based on the defined parameters of that particular logical link (i.e., LLTB and LLPB). As with the UCQ 132, a logical link that has both priority and non-priority conformant users will have an ETDTT and an ETDTP pointer. Corresponding to each bin 138 of the LLCQ 134 are also two linked lists of logical links, a Non-Priority Linked List (NPLL) 144 and a Priority Linked List (PLL) 146. Again, the NPLL 144 is a doubly linked list and the PLL 146 is a singly linked list. A CBPTR is also provided such that at a given time when the CBPTR points to a specific bin 138, all the logical links in the two linked lists are considered conformant.
Each individual user (e.g., an individual subscriber that contracts with an ISP for access to the Internet) is a member of a logical link (e.g., an ISP), which membership is indicated by the logical link identification (LID) number. For example, the logical link may be assigned a certain amount of bandwidth on the particular PHYs 114 with which the user is associated. The amount of bandwidth is that amount for which the ISP contracts with a bandwidth reseller. It is possible, and in fact common, for the logical link to comprise a group of individual users whose total TB may exceed the LLTB of that logical link and/or whose total PB may exceed the LLPB of that logical link (i.e., oversubscribed). Referring now to Fig. 13, each logical link is provided with two schedulers, a priority scheduler 148 for scheduling the transmission of priority data packets and a non-priority scheduler 150 for scheduling the transmission of non-priority data packets. The data packets of users that become conformant in the UCQ 132 based upon their user parameters are linked to the logical link to which they belong in the LLCQ 134, and when that logical link becomes conformant, the users are placed in one of the two schedulers of the logical link. The transmission of data packets of users in the schedulers is determined and scheduled by Deficit Round Robin, as disclosed in "Efficient Fair Queueing using Deficit Round Robin" by Madhavapeddi Shreedhar and George Varghese, Proceedings of SIGCOMM, August 1995.
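For reference, the cited Deficit Round Robin scheme reduces to a short loop; the following is an illustrative sketch rather than the patent's scheduler code (the queue contents, quanta and transmit callback are assumptions):

    from collections import deque

    def deficit_round_robin(queues, quanta, transmit):
        # queues: user -> deque of packet lengths (bits); quanta: user -> quantum
        # added per round. Deficit counters let variable length packets share
        # bandwidth fairly without per-packet sorting.
        deficits = {user: 0 for user in queues}
        active = deque(user for user, q in queues.items() if q)
        while active:
            user = active.popleft()
            deficits[user] += quanta[user]
            queue = queues[user]
            while queue and queue[0] <= deficits[user]:
                deficits[user] -= queue[0]
                transmit(user, queue.popleft())
            if queue:
                active.append(user)       # still backlogged: revisit next round
            else:
                deficits[user] = 0        # an emptied queue forfeits its deficit

    # e.g. deficit_round_robin({'U4': deque([1500, 300]), 'U5': deque([9000])},
    #                          {'U4': 1500, 'U5': 1500}, print)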
In operation the traffic shaping of the present invention provides fast and efficient processing and shaping of data packets in a data stream. In particular, Figures 14-17 illustrate in flow chart form the software needed to achieve a portion of the traffic shaping of the present invention. Specifically, Fig. 14 illustrates the process used by the traffic shaper to determine if a particular priority data packet of a particular user has become conformant with the user's parameters as defined in the user table 128. The process shown also illustrates how the ETDTP pointer is updated.
As previously described herein, the ETDTT and the ETDTP pointers are calculated whenever a packet is first received in the buffer 131 and is updated when a user is linked to the proper linked list and delinked from the original linked list if necessary.
Regarding the process of transmitting conformant priority packets as shown in Fig. 14, if at a current time CT a priority packet for user U1 is transmitted by the priority scheduler 148 (LLPS), then for that user the ETDTP is updated as follows:

if ETDTP + L <= CT
    ETDTP = CT + (Packet Length / Priority Bandwidth) - L
else
    ETDTP = ETDTP + (Packet Length / Priority Bandwidth)
A pseudo code of that process is as follows:
Call the Transmitting Non-Priority Packet Process (In order to achieve the sharing of bandwidth between the priority and non-priority users, the user is processed just as if the user were transmitting a non-priority packet.)

ETDTP Update

if the User priority packet buffer is empty
    (Deficit Round Robin Register update)
    the User is delinked from LLPS
else if ETDTP <= CT
    LLPS continues to schedule the User (Process based on DRR algorithm)
else
    Update User's information for LLPS (Deficit Register update)
    Put the User in the User PLL of the corresponding Bin of the User Circular Table
Figure 15 illustrates the process of transmitting non-priority packets using the traffic shaping of the present invention. As shown, if at current time CT a non-priority packet for a user U1 is transmitted by the LLNPS (the logical link non-priority scheduler 150), then for that user the ETDTT is updated as follows:
if ETDTT + L <= CT
    ETDTT = CT + (Packet Length / Total Bandwidth) - L
else
    ETDTT = ETDTT + (Packet Length / Total Bandwidth)
A pseudo code of that process is as follows:

ETDTT Update

if the User non-priority packet buffer is empty
    (Deficit Round Robin Register update)
    the User is delinked from LLNPS
else if ETDTT <= CT
    LLNPS continues to schedule the User (Process based on DRR algorithm)
else
    Put the User in the User Non-priority Linked List of the corresponding Bin of the User Circular Table
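Taken together, the two transmitting procedures follow one pattern, sketched below in Python (llps and llnps stand for the priority and non-priority schedulers 148 and 150 of the logical link; delink, schedule and link are hypothetical stand-ins for the linked-list and DRR operations described above, and update_etdt is the earlier sketch):

    def on_packet_transmitted(user, pkt_len, ct, priority, ucq, llps, llnps):
        if priority:
            # Per Fig. 14, a priority transmission also advances ETDTT, so
            # priority traffic consumes part of the shared Total Bandwidth.
            user.etdtp = update_etdt(user.etdtp, ct, pkt_len, user.pb_bps, user.burst_limit)
            user.etdtt += pkt_len / user.tb_bps
            queue, scheduler, etdt = user.pqueue, llps, user.etdtp
        else:
            user.etdtt = update_etdt(user.etdtt, ct, pkt_len, user.tb_bps, user.burst_limit)
            queue, scheduler, etdt = user.npqueue, llnps, user.etdtt
        if not queue:
            scheduler.delink(user)        # buffer empty: leave the scheduler
        elif etdt <= ct:
            scheduler.schedule(user)      # still conformant: DRR keeps serving
        else:
            ucq.link(user, etdt, ct)      # re-enter the circular queue and wait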
Figure 16 illustrates in flow chart form the process executed by the traffic shaper of the present invention when a priority packet is received for shaping. The pseudo code for that process is as follows:
Receives a Priority packet for the User
if the User priority packet buffer is not empty
    queue the new packet
    return
else
    queue the new packet
    if ETDTP < CT
        Call LLPS receiving conformant Priority User procedure
    else
        Link the User to PLL of the corresponding Bin of the User Circular Table
Figure 17 illustrates the process executed by the traffic shaper of the present invention when non-priority packets are received. The pseudo code for that process is as follows:
if the User non-priority packet buffer is not empty
    queue the new packet
    return
else
    queue the new packet
    if ETDTT < CT
        Call LLNPS receiving conformant non-Priority User procedure
    else
        Link the User to NPLL of the corresponding Bin of the User Circular Table

Thus, as illustrated, the traffic shaper first calculates and then updates the ETDTT and ETDTP pointers for use in the UCQ 132 and LLCQ 134, preferably based upon the above definitions. However, it should be noted that other procedures may be implemented to achieve the same or similar processing depending upon the application and requirements of the system that is shaping the data stream.
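The receiving procedures of Figs. 16 and 17 can be sketched the same way (pqueue and npqueue are the per-user FIFOs of buffer 131, and schedule_conformant stands for the "receiving conformant User" procedures of the logical link schedulers; all helper names are assumptions):

    def on_packet_received(user, packet, ct, priority, ucq, schedule_conformant):
        queue = user.pqueue if priority else user.npqueue
        if queue:                   # packets of this class already buffered:
            queue.append(packet)    # just join the per-user FIFO
            return
        queue.append(packet)        # first packet: ETDTP/ETDTT is (re)initialized here
        etdt = user.etdtp if priority else user.etdtt
        if etdt < ct:
            schedule_conformant(user, priority)   # conformant on arrival
        else:
            ucq.link(user, etdt, ct)              # park in the corresponding bin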
In operation, as shown in Fig. 12, the UCQ 132 is provided in each bin 136 with a priority pointer and a non-priority pointer. The priority pointer includes the PNXTPTR pointing to the next user in the PLL 142. Only one pointer is required in this singly linked list. The non-priority pointers include a NXTPTR to point to the next user in the NPLL 140 and a PREVPTR to point to the previous user in the NPLL 140. Two pointers are required because, as described herein, the NPLL 140 is doubly linked. Each of the users in the linked lists is associated with a logical link. Therefore, as shown in Fig. 12, each user has a logical link association, such that when a user becomes conformant to its transfer parameters, it is associated with either the NPLL 144 or PLL 146 of the LLCQ 134 and added to that logical link's linked list, depending upon whether the user is a non-priority or priority user. A logical link that includes both priority and non-priority conformant users will have a corresponding ETDTT and ETDTP.
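In code, the two kinds of list entries might look like the following sketch (the structures are illustrative only; the patent implements them as software linked lists in PPU memory):

    class NonPriorityNode:
        # NPLL entry: doubly linked, so a user can be delinked from the
        # middle of a bin's list when an update moves it to another bin.
        def __init__(self, user):
            self.user = user
            self.nxtptr = None     # NXTPTR: next user in the NPLL
            self.prevptr = None    # PREVPTR: previous user in the NPLL

    class PriorityNode:
        # PLL entry: singly linked; priority users are only consumed
        # from the head of the list, in order.
        def __init__(self, user):
            self.user = user
            self.pnxtptr = None    # PNXTPTR: next user in the PLL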
Therefore, as the CBPTR pointer is incremented after each time interval T, the CBPTR points to the next consecutive bin in the circular queue. For individual user shaping, when the CBPTR points to a bin 136 in the UCQ 132, all of the users in that bin, both priority and non-priority, are considered conformant. Thus, the NPLL 140 is allowed to transmit the NHP and the PLL 142 is allowed to transmit the PHP, assuming the logical link with which the user is associated can schedule the data packets of the users that are conformant. Now, referring to group shaping, when the CBPTR points to a bin 138 in the LLCQ 134, both the priority and non-priority data packets belonging to the conformant users in the bins 138 are scheduled for transmission. When the CBPTR is incremented after each time interval T, the pointer is incremented in both the UCQ 132 and LLCQ 134. This provides the parallel multi-level or multi-layer shaping of the present invention.
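One shaping interval can then be summarized as below (a sketch with assumed hooks: promote_to_logical_link places a newly conformant user into its logical link's linked lists, and serve lets that link's priority and non-priority schedulers transmit head-of-line packets):

    def shaping_interval(ucq, llcq, promote_to_logical_link, serve):
        # Both circular queues advance in lockstep each interval, which is
        # what yields the parallel user-level and logical link-level shaping.
        for user in ucq.tick():        # users conformant to their own profiles
            promote_to_logical_link(user)
        for link in llcq.tick():       # logical links now conformant
            serve(link)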
It should be noted that during the time interval T, not all users in the UCQ 132 and not all logical links in the LLCQ 134 may be processed. In such a case, the remaining users are moved to the next consecutive bin and placed ahead of all users in that bin.
A specific example of the process of traffic shaping relating to receiving data packets is shown in Fig. 18. When a non-priority packet for a user is received, if the user has non-priority packets already in buffer 131, the packet is just appended to the non-priority packet queue. If the user has no non-priority packet, the user's ETDTT is initialized and the user is linked into the doubly linked NPLL 140 in the UCQ 132 or into the NPLL 144 in the LLCQ 134 corresponding to the user.
When a priority packet for a user is received, if the user has priority packets already in buffer 131, the packet is just appended to the priority packet queue. If the user has no priority packets, the user's ETDTP is initialized and the user is linked into the PLL 142 in the UCQ 132 or into the PLL 146 in the LLCQ 134 corresponding to the user.
As shown in Fig. 18, for example, when a packet is received which is to be sent out by a particular user, the packet is queued up in the packet buffer 131, which is organized on a per user basis. For transmission, an entry is created in the corresponding bin. For example, if the current time is X and the ETDTT for a packet for user U4 is X5, an entry is created in the X5 bin of the scheduler. The entry is preferably just a pointer to a row in the user table 128. Similarly, if the next packet comes (U5) and its ETDTT is again calculated as X5, a new entry is not created for this user in the X5 bin of the scheduler. Instead the entry for the user U5 in the user table is linked to the entry of the user U4. Therefore, a bin preferably contains only a single pointer to an entry in the user table, and if there are more users in a bin, the users are linked together in the order they are received. Data packets of a single bin are preferably rearranged according to the priority of the users (i.e., packets with high priority are scheduled for transmission prior to the packets of the lower priorities in a single bin). In the example shown in Fig. 18, for a time slot X5, all the data packets with priority RT1 are scheduled ahead of RT2 (i.e., the transmission sequence will be U4, U5, U7).
Fig. 19 is an illustration of the complementary functions of "traffic policing" and "traffic shaping." A system, such as that in Fig. 1, takes input logical link groups comprised of individual users and provides "policing" of the data packets and thereafter "shapes" the data packets to a pre-defined profile for transmission based on the individual user's transfer parameters and logical link's transfer parameters. The overall process is shown in flow form in Fig. 20. The processing and shaping of the traffic shaper of the present invention occurs at high speed and at multiple levels in parallel due to the structure of the circular queues and the provision of the packet buffers to hold the data packets while the parallel processing is occurring. However, it should be understood by one skilled in the art that the traffic shaping of the present invention may be configured in alternate ways, and is not limited by the number of component parts and the specific code as described in the preferred embodiment. For example, additional circular queues may be included for additional layers of parallel processing. Additionally, the number of line cards 104 and IPE cards 106 may be scaled up or down depending upon the requirements for the "packet shaping" to be performed. Further, the number of PPUs 110 on the line cards may be scaled depending upon the processing demands of the system. These modifications would merely require minor programming changes and would not require any significant hardware changes, and these changes would be apparent to one of ordinary skill in the art given the present disclosure.
Although the traffic shaper of the present invention has been described in detail only in the context of shaping data packets through routers and switches, the traffic shaper may also be readily configured to shape data in other non-networking applications and anywhere shaping of a data stream is required.
Additionally, the various block representations as described herein represent hardware implementation of the invention, such as in chip architecture. Several of the chips or functions could be incorporated into a custom chip. Although not preferred, one or more of the functions of the traffic shaping performed in software could be implemented in hardware.
There are other various changes and modifications which may be made to the particular embodiments of the invention described herein, as recognized by those skilled in the art. However, such changes and modifications of the invention may be made without departing from the scope of the invention. Thus, the invention should be limited only by the scope of the claims appended hereto, and their equivalents.

Claims

What is claimed is:
1. A method of shaping data streams, each of said data streams being comprised of data packets having characterizing information corresponding to users and a predefined data packet transfer rate for each of said data packets, the method comprising the steps of: processing the characterizing information in parallel to thereby determine a plurality of data transfer requirements; and forwarding the data packets to a next destination at the predefined data packet transfer rate based on the determined data transfer requirements.
2. The method of claim 1 wherein the step of processing in parallel further comprises processing a plurality of levels of user information in parallel.
3. The method of claim 2 wherein the step of processing in parallel further comprises processing a first level of user information comprising individual user information and processing another level of user information comprising group user information, said levels being processed in parallel.
4. The method of claim 3 wherein the step of processing the characterizing information in parallel comprises determining a plurality of levels of transfer requirements.
5. The method of claim 4 wherein the step of forwarding the data packets to a next destination further comprises forwarding the data packets at a plurality of predefined data packet transfer rates based on the plurality of levels of determined data transfer requirements.
6. The method of claim 5 further comprising the step of storing the data packets while the parallel processing is performed.
7. The method of claim 6 wherein the data packets comprise variable lengths and the step of processing the characterizing information further comprises processing the characterizing information for the variable length data packets.
8. The method of claim 7 wherein the step of forwarding the data packets to a next destination further comprises forwarding the data packets for a specified period at a higher data packet transfer rate than the predefined data packet transfer rate.
9. The method of claim 8 wherein the step of storing the data packets further comprises logically associating the data packets of each individual user as they are stored.
10. The method of claim 9 wherein the step of processing the characterizing information further comprises determining whether a data packet is a priority data packet or a non-priority data packet.
11. The method of claim 10 wherein the parallel processing step further comprises processing in parallel the individual user and group user information for both the priority and non-priority data packets.
12. The method of claim 11 wherein the step of forwarding the data packets to a next destination further comprises separately scheduling a data transfer rate for each of the priority and non-priority data packets.
13. A method of shaping a data stream to control the transfer rate of a plurality of data packets comprising said data stream, the method comprising the steps of: storing the data packets while a data transfer rate is determined for each data packet; determining individual user and group user desired data transfer rates for each of the stored data packets; processing in parallel the individual user and group user desired data transfer rates to determine an allowable data packet transmission time for each of the stored data packets; and transmitting the stored data packets on or after the allowable data packet transmission times.
14. The method of claim 13 wherein the step of determining individual user and group user desired data transfer rates further comprises determining a plurality of levels of desired data transfer rates for each of the individual user and group user data transfer rates.
15. The method of claim 14 wherein the step of storing the data packets further comprises logically associating the data packets of each individual user.
16. The method of claim 15 further comprising the step of determining whether each data packet is a priority or non-priority data packet.
17. The method of claim 16 wherein the step of processing in parallel the individual user and group user desired data transfer rates further comprises parallel processing the desired data transfer rates for both the priority and non-priority data packets for the individual and group users.
18. The method of claim 17 wherein the step of transmitting the stored data packets further comprises separately scheduling the allowable departure time for each of the priority and non-priority data packets.
19. The method of claim 13 wherein the step of processing the characterizing information in parallel further comprises processing said characterizing information for variable length data packets.
20. The method of claim 19 wherein the step of transmitting the stored data packets further comprises transmitting said data packets before the allowable data packet transmission times.
21. A data stream shaper providing multiple level shaping of a data stream, said data stream comprising data packets having characterizing information, said data stream shaper comprising a plurality of processors for multiple level parallel processing of said characterizing information to determine a plurality of allowable user data transfer rates.
22. The data stream shaper of claim 21 further comprising a buffer connected between an input and the processors for storing said data packets as said processors process said characterizing information.
23. The data stream shaper of claim 22 wherein said plurality of processors are configured to process said characterizing information for data packets in different processors.
24. The data stream shaper of claim 23 wherein each of said plurality of processors is configured for processing each level in a separate one of the different processors.
25. The data stream shaper of claim 24 wherein said buffer is configured to logically associate related data packets.
26. The data stream shaper of claim 25 wherein the buffer further comprises a priority storage area for storing priority data packets until said priority data packets are to be transmitted and a non-priority storage area for storing non-priority data packets until said non-priority packets are to be transmitted.
27. The data stream shaper of claim 26 wherein said data stream comprises a plurality of data streams emanating from a plurality of users, and wherein said multiple users are associated into groups, said plurality of processors being configured to determine allowable data transfer rates based on characterizing information for said users and said groups.
28. The data stream shaper of claim 27 further comprising a plurality of line cards, said buffer comprising a plurality of buffer elements and being mounted on said plurality of line cards, said plurality of processors also being mounted on said line cards, and further comprising a packet identifier for determining characterizing information connected to said line cards through a switch fabric and a packet manager for processing the data packets into a preselected format connected to said line cards through said switch fabric.
29. A controller for controlling the rate of transfer of data packets in a data stream, the controller comprising at least one packet identifier for processing protocol and user information and at least one data stream shaper connected to said packet identifier, the data stream shaper comprising an input interface for receiving the data packets, an output for forwarding the data packets at a determined allowable transfer rate, and a plurality of processors for multiple level processing of the data packets in parallel to determine a determined allowable transfer rate for each of the data packets.
30. The controller of claim 29 wherein each of the packet identifiers has a packet inspector for determining the protocol and user information for users, the controller further comprising a packet manager connected to the processors for formatting the data packets into one of a plurality of predetermined data protocols.
31. The controller of claim 30 further comprising a buffer connected to the packet identifier for storing the data packets as the characterizing information is processed.
32. The controller of claim 31 wherein the packet identifiers determine whether each data packet is a priority or non-priority data packet.
33. A device for shaping a plurality of data streams, each of said data streams comprising a plurality of data packets, said device including a plurality of processors for shaping said data streams, a plurality of packet managers for formatting said data packets comprising said data streams, and a switch fabric interconnecting said plurality of processors and said plurality of packet managers.
34. A device for shaping a plurality of data streams, each of said data streams comprising a plurality of variable length data packets, said device including a plurality of line cards and a plurality of data processing cards, said cards shaping said data streams, and a switch fabric interconnecting said plurality of line cards and said plurality of data processing cards.
EP01904830A 2000-02-23 2001-01-11 Method and device for data traffic shaping Withdrawn EP1258114A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US511059 1983-07-05
US51105900A 2000-02-23 2000-02-23
PCT/US2001/000910 WO2001063860A1 (en) 2000-02-23 2001-01-11 Method and device for data traffic shaping

Publications (1)

Publication Number Publication Date
EP1258114A1 true EP1258114A1 (en) 2002-11-20

Family

ID=24033284

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01904830A Withdrawn EP1258114A1 (en) 2000-02-23 2001-01-11 Method and device for data traffic shaping

Country Status (3)

Country Link
EP (1) EP1258114A1 (en)
AU (1) AU2001232776A1 (en)
WO (1) WO2001063860A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7158788B2 (en) * 2001-10-31 2007-01-02 Telefonaktiebolaget Lm Ericsson (Publ) Method and apparatus for auto-configuration for optimum multimedia performance
AU2002340961A1 (en) * 2002-10-01 2004-04-23 Telefonaktiebolaget Lm Ericsson (Publ) Access link bandwidth management scheme
CN101964740A (en) * 2009-07-24 2011-02-02 中兴通讯股份有限公司 Method and device for distributing service traffic
CN105306384A (en) * 2014-06-24 2016-02-03 中兴通讯股份有限公司 Message processing method and device, and line card

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69737249T2 (en) * 1996-06-27 2007-08-09 Xerox Corp. Packet-switched communication system
US6041059A (en) * 1997-04-25 2000-03-21 Mmc Networks, Inc. Time-wheel ATM cell scheduling
WO1999026378A2 (en) * 1997-11-18 1999-05-27 Cabletron Systems, Inc. Hierarchical schedules for different atm traffic

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO0163860A1 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10924415B2 (en) 2016-08-24 2021-02-16 Viasat, Inc. Device shaping in a communications network
US11722414B2 (en) 2016-08-24 2023-08-08 Viasat, Inc. Device shaping in a communications network

Also Published As

Publication number Publication date
AU2001232776A1 (en) 2001-09-03
WO2001063860A1 (en) 2001-08-30


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20020916

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ZHAO, XINGGUO

Owner name: CELOX NETWORKS, INC.

Owner name: BORDES, JEAN PIERRE

Owner name: HEGDE, MANJU

Owner name: SCHMID, OTTO ANDREAS

Owner name: MAHER, MONIER

Owner name: DAVIS, CURTIS

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20050802