US20040022187A1 - Preemptive network traffic control for regional and wide area networks - Google Patents
Preemptive network traffic control for regional and wide area networks
- Publication number
- US20040022187A1 (application US10/211,174)
- Authority
- US
- United States
- Prior art keywords
- network traffic
- link
- pause
- storage medium
- sender
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/40—Bus networks
- H04L12/40006—Architecture of a communication node
- H04L12/40013—Details regarding a bus controller
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/40—Bus networks
- H04L12/40006—Architecture of a communication node
- H04L12/40032—Details regarding a bus interface enhancer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/40—Bus networks
- H04L12/407—Bus networks with decentralised control
- H04L12/413—Bus networks with decentralised control with random access, e.g. carrier-sense multiple-access with collision detection [CSMA-CD]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/18—End to end
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/26—Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
- H04L47/266—Stopping or restarting the source, e.g. X-on or X-off
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/30—Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
Description
- The present invention relates to the field of networking. More specifically, the present invention relates to network traffic control for high speed networking, such as, 10 Gigabit Ethernet spanning local, regional, and wide area networks.
- With advances in integrated circuit, microprocessor, networking and communication technologies, an increasing number of devices, in particular, digital computing devices, are being networked together. Devices are often first coupled to a local area network, such as an Ethernet based office/home network. In turn, the local area networks are interconnected together through wide area networks, such as SONET networks, ATM networks, Frame Relays, and the like. Of particular importance is the TCP/IP based global inter-network, the Internet. Historically, data communication protocols specified the requirements of local/regional area networks, whereas telecommunication protocols specified the requirements of the regional/wide area networks. The rapid growth of the Internet has fueled a convergence of data communication (datacom) and telecommunication (telecom) protocols and requirements. It is increasingly important that data traffic be carried efficiently across local, regional, as well as wide area networks.
- As a result of this trend of increased connectivity, an increasing number of applications that are network dependent are being deployed. Examples of these network dependent applications include, but are not limited to, the world wide web, email, Internet based telephony, and various types of e-commerce and enterprise applications. The success of many content/service providers as well as commerce sites depends on high-speed delivery of a large volume of data across wide areas. As a result, high-speed data trafficking devices, such as high-speed optical or optical-electrical routers, switches and so forth, are needed.
- Unfortunately, because of the high-speed delivery of large volumes of data across the network, a device on the network may not be able to timely process all received data. That is, a device on the network may not be able to process the received data at the rate at which the data is received. In order to improve the likelihood of timely processing all received data, one or more buffers are commonly utilized to temporarily hold the received data while the received data is waiting to be processed by the device. However, buffers are typically sized for certain “normal” or expected network traffic patterns, and the actual network traffic often deviates from the expectation unpredictably, resulting in the buffers becoming full or overflowing. Once a buffer becomes full or an overflow condition occurs, subsequent data may be lost, requiring the data to be resent.
- Additionally, multiple data links may be sharing a physical line, and it may be desirable to regulate and prevent one or more data links from consuming more bandwidth than they are entitled to use.
- Transmission control in the form of a pause command (transmitted from an “overflowing” receiver to a sender) may be utilized between the communicating devices. However, while reactive use of the pause command may be effective for a LAN, experience has shown that the technique may not be effective with high speed regional/wide area networks. The reason is that, by the time the pause command reactively issued by the overflowing receiver reaches the sender, a large volume of data may already be in transit from the sender to the receiver.
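- To see the scale of the problem, consider the bandwidth-delay product of a high speed regional/wide area link. The sketch below uses assumed, representative numbers (the 5 ms one-way delay and the helper name are illustrative, not figures taken from this description); it is only a back-of-the-envelope check, not part of the invention.

```python
# Illustrative only: how much data is already "in flight" by the time a
# reactively issued pause command reaches the sender. Values are assumptions.
def in_flight_bytes(line_rate_bps: float, one_way_delay_s: float) -> float:
    """Bandwidth-delay product: bits launched during one propagation delay."""
    return line_rate_bps * one_way_delay_s / 8.0

rate = 10e9     # 10 Gigabit Ethernet line rate, bits per second
delay = 5e-3    # assumed 5 ms one-way delay across a regional/wide area network
print(f"{in_flight_bytes(rate, delay):,.0f} bytes already in transit")  # 6,250,000 bytes
```

Against per-link staging storage on the order of 12,288 bytes (a figure mentioned later in this description for 10 Gb Ethernet), such a reactive pause arrives far too late, which motivates the preemptive controls described below.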
- Accordingly, a need exists for facilitating improved network traffic control for high speed network traffic, in particular, for high speed regional/wide area networks, such as 10 Gb Ethernet (10GBASE-LR or 10GBASE-LW).
- The present invention will be described by way of exemplary embodiments, but not limitations, illustrated in the accompanying drawings in which like references denote similar elements, and in which:
- FIG. 1 illustrates an overview of the present invention, in the context of a network processor, in accordance with one embodiment;
- FIG. 2 illustrates the concept of working storage capacity in further detail, in accordance with one embodiment;
- FIGS. 3A-3B illustrate the operational flow of the relevant aspects of preemptive pause control logic of FIG. 1 in further detail, in accordance with one embodiment; and
- FIG. 4 illustrates an exemplary application of the network processor of FIG. 1.
- The following abbreviations and terms are used in the description:
- 10GBASE-LR: 64/66 coded 1310 nm LAN standard for 10 Gigabit Ethernet
- 10GBASE-LW: 64/66 coded SONET encapsulated 1310 nm WAN standard for 10 Gigabit Ethernet
- ASIC: Application Specific Integrated Circuit
- DDRAM: Dynamic Direct Random Access Memory
- Egress: Outgoing data path from the system to the network
- EEPROM: Electrically Erasable Programmable Read-Only Memory
- Ingress: Incoming data path from the network to the system
- LAN: Local Area Network
- LVDS: Low voltage differential signal
- MAC: Media Access Control layer, defined for Ethernet systems
- OIF: Optical Internetworking Forum
- SONET: Synchronous Optical NETwork, a PHY telecommunication protocol
- SDRAM: Static Direct Random Access Memory
- SPI-4: System Packet Interface Level 4 (also POS-PHY 4)
- WAN: Wide Area Network
- In the following description, various aspects of the present invention will be described. However, it will be apparent to those skilled in the art that the present invention may be practiced with only some or all aspects of the present invention. For purposes of explanation, specific numbers, materials and configurations are set forth in order to provide a thorough understanding of the present invention. However, it will also be apparent to one skilled in the art that the present invention may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the present invention.
- References herein to “one embodiment”, “an embodiment”, or similar formulations mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of such phrases or formulations herein are not necessarily all referring to the same embodiment. Furthermore, various particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
- Embodiments of the present invention provide efficient methods and apparatus for controlling data flow between networked devices, in particular, networked devices in high speed regional/wide area networks.
- FIG. 1 illustrates an overview of the present invention, in the context of an exemplary network processor, in accordance with one embodiment.
Exemplary network processor 100 may e.g. be a component of a high speed networking module or device coupled to a high speed network, receiving and processing network traffic from one or more sender network nodes remotely disposed across a regional/wide area network over corresponding data links.
- As illustrated in FIG. 1, for the embodiment, exemplary network processor 100 includes a first interface block 102 for interfacing with a network (not shown) to receive network traffic data of the various data links, and a number of function blocks 104 for processing the received network traffic data of the various data links. Exemplary network processor 100 further includes a second interface block 106 for interfacing with a system (not shown) to forward to the system the processed network traffic data of the various data links.
- For the illustrated embodiment, the first interface block 102, function blocks 104, and the second interface block 106 each include a certain amount of storage medium 108 to be allocated, in portions or in whole, to temporarily stage the ingress network traffic data as they are received and moved through network processor 100 onto the coupled system.
- Additionally, as shown in FIG. 1, in accordance with the present invention, network processor 100 is advantageously provided with preemptive pause control logic 110. Preemptive pause control logic 110 is equipped to preemptively send pause controls to the senders of the ingress network traffic of the one or more links, to regulate the rates at which they can transmit network traffic for the respective data links. That is, under the present invention, preemptive pause control logic 110 sends pause controls to the senders of the data links to regulate them before some or all of the senders of the links fully use up or exceed their bandwidths. The preemptive regulation advantageously overcomes the inherent latency in delivering these pause controls to the senders of the various data links that limits the prior art reactive approach to regulating network traffic, especially in a regional/wide area networking application.
- Before proceeding to further describe the present invention, it should be noted that while the remaining descriptions are presented primarily in the context of preemptively regulating the ingress network traffic, the present invention may also be practiced to regulate the egress network traffic. Application of the present invention to regulate egress network traffic based on the ingress-centric description is well within the ability of those skilled in the art; accordingly, the present invention will not be redundantly re-described for egress network traffic.
- Continuing to refer to FIG. 1, in various embodiments, preemptive pause control logic 110 issues or causes the pause controls to be issued periodically for the various data links. Moreover, preemptive pause control logic 110 issues or causes each of the pause controls to be issued with a pause duration for which the sender of network traffic of a data link is to pause and refrain from sending network traffic for the data link.
- In various embodiments, preemptive pause control logic 110 is equipped to determine the pause durations included in the pause controls, as well as the periodicity for issuing the pause controls.
- In various embodiments, preemptive pause control logic 110 is equipped to determine at least one of the pause duration and the periodicity for issuing pause controls for a data link, based at least in part on at least one of the working storage capacities allocated to service the data link, a network traffic drain rate of the data link, and a fill rate of an input line over which the network traffic of the data link is received, to be described more fully below.
- The
network processor 100 shown in FIG. 1 may be any one of a wide range of high speed network processors known in the art, including but not limited to the multi-protocol processor of U.S. patent application Ser. No. 09/860,207, filed on May 18, 2001, - entitled “A Multi-Protocol Networking Processor With Data Traffic Support Spanning Local, Regional, and Wide Area Networks”, and having at least partial common inventorship with the present application, which specification is hereby fully incorporated by reference.
- Function blocks104 may be any one of a number of function blocks of a network processor or networking device, e.g., a MAC block of a network processor. For ease of understanding, only a couple of function blocks are shown in FIG. 1. However, in practice,
network processor 100 or other context on which the present invention is incorporated, may include one or more function blocks. - Each of
interfaces - Storage medium108 may be any storage medium known in the art, including but not limited to SDRAM, DDRAM, EEPROM, Flash memory and so forth. For the embodiment, one storage medium is shown to be disposed in each of function blocks 104 and
interfaces 102, however, in practice, each of the selected ones of the components of the contextual networking device on which the present invention is practiced, may have one or more storage medium, with all or a portion to be allocated to service a data link. Together, the total capacity of the allocated portions ofstorage mediums 108 are collectively referred to as the working capacity allocated for staging the data of a particular link. - FIG. 2 illustrates the notion of working capacity in further detail. Illustrated in FIG. 2 is a logical view of the
total storage medium 200 of the various components of the contextual networking device on which the present invention is practiced, allocated to service a data link. - As illustrated, network traffic of the data link is received, buffered, processed and forwarded onto a coupled system or another networking device, using the
allocation storage medium 200. - For the purpose of the present application, the total amount of the allocated storage medium is referred to as the actual working capacity204 of the allocated storage medium, whereas the portion of the actual working capacity between a low and a high “watermark” 208 and 210 is referred to as the
effective working capacity 206. - As will be described in more detail below, for the embodiments where the pause durations and the periodicity for issuing the pause controls are determined based at least in part on the working capacity of the allocated storage medium, the present invention may be practiced using either actual working capacity204 or
effective working capacity 206. - Employment of working
capacity 206 provides for an even more aggressive approach to preemptively regulate the network traffic of the various data links, further reducing the likelihood of the allocated storage medium being overflowed, and necessitating the retransmission of the lost data. However, the reduction in the likelihood of overflow may be gained at the expense of reduced efficiency in fully utilizing the allocated storage medium. - For the embodiments where the present invention is practiced employing the effective working capacity, preferably, the low and
high watermarks - FIGS.3A-3B illustrate the operational flow of the relevant aspects of the preemptive pause control logic 110 of FIG. 1, in accordance with one embodiment. As will be appreciated by those skilled in the art, based on the description to follow, in practice, preemptive pause control logic 110 may be realized in hardware, e.g. through ASIC, or in software, executed e.g. by an embedded controller.
- As illustrated in FIG. 3A, on initialization, e.g. at power on or reset, preemptive pause control logic110 first determines the fill rate of an input line over which network traffic of various data links will be received, block 302. In various embodiments, the fill rate is corresponding to the rate the input line is clocked. In various embodiments, the clocking rate is networking protocol based. Accordingly, for at least some of these embodiments, preemptive pause control logic determines the fill rate of the input line by determining the networking
protocol network processor 100 is configured to operate. - Thereafter, preemptive pause control logic110 waits for the establishment of the data links, block 304. As those skilled in the art would appreciate, establishment of the data links may be triggered by the senders of the network traffic or by the recipients requesting data from the senders.
- For the illustrated embodiment, upon detecting the establishment of a data link, preemptive pause control logic110 determines the working capacity of the total storage medium of the various components allocated to service the data link, i.e. to buffer, process and forward the received network traffic.
- In various embodiments, allocation of the storage medium by the various components to service a data link, involves the establishment of address ranges, and pointers pointing to the start and/or end of the allocated portions of the storage medium. For these embodiments, preemptive pause control logic110 determines the working capacity based on these address ranges and/or pointers. In alternate embodiments, other approaches may be practiced instead.
- In various embodiments, the amount of storage medium allocated to service a data link is protocol dependent, e.g. in the case of 10 Gb Ethernet applications, the amount of storage medium allocated to service a data link in one embodiment is about 12,288 bytes. For these embodiments, preemptive pause control logic110 may similarly determine the working storage capacity by accessing the configuration storage (not shown) to determine the networking
protocol network processor 100 is configured to operate. - For embodiments where the effective working capacity (as opposed to the actual working capacity) is used, preemptive pause control logic110 further determines the low and high “watermarks” to determine the “safety margin” to be applied to the actual working capacity. As described earlier, in various embodiments, the “watermarks” are preferably configurable, and accordingly are retrieved from the configuration storage (not shown). In alternate embodiments where the “watermarks” apply to all data links, the determination may be made at
block 302 instead. - Upon determining the working capacity of the allocated storage medium (which may be actual or effective, but hereon forward, simply working capacity without qualification unless the context requires), preemptive pause control logic110 determines the pause durations to be included in the pause controls, and the periodicity for issuing the pause controls, block 308.
- In various embodiments where the fill rate of the input line is a very fast rate, such as the case of 10 Gb Ethernet, the pause duration included each pause control for a data link is the same, and the periodicity of issuing the pause controls, i.e. the size of the period is constant. One embodiment for determining the pause duration and the periodicity will be described in more detail below referencing FIG. 3B.
- Of course, in alternate embodiments, particularly in embodiments where the fill rate of the input line is not as fast or it is economically practical to employ sufficiently fast components to match the very fast line fill rate, the present invention may be practiced with different pause durations being included in the different pause controls, and/or variable period sizes.
- Upon determining the pause duration and the periodicity, preemptive
pause control logic 100 proceeds to preemptively regulate the network traffic of the data link by preemptively and successively issuing the pause controls (with the determined pause duration) in accordance with the determined periodicity, block 310. - Back at
block 304, if establishment of a new data link is not detected, preemptive pause control logic 110 proceeds/continues to preemptively regulate the established data links, i.e.block 310. - The preemptive regulation terminates coincident with the tear down of a data link.
- Referring now to FIG. 3B, wherein a flow chart illustrating the operational flow of the relevant aspects of preemptive pause control logic110 for determining the pause duration and periodicity for regulating a data link, in accordance with one embodiment, is shown. As alluded to earlier, the embodiment determines a single duration for inclusion in each of the pause controls to be issued to the sender of the network traffic of a data link, and a single periodicity for the preemptive and successive issuance of the pause controls.
- As illustrated, preemptive pause control logic110 first determines a network traffic drain rate of the data link, block 322. In various embodiments, the network traffic drain rate is the maximum drain rate allowable for the data link. In various embodiments, the maximum drain rate for a data link is a configurable parameter (typically by sender protocol type or by the service level agreement between the sender and receiver). For some embodiments, as with the fill rate of the input line, preemptive pause control logic 110 determines the network traffic drain rate of a link by retrieving the rate from configurable storage (not shown). In other embodiments, the drain rate is controlled by a network management application, e.g., by a quality-of-service routine of an application that controls network processor 100).
- For the embodiment, upon determining the network traffic drain rate of the data link, preemptive pause control logic 110 determines the difference between the earlier described fill rate of the input line and the determined network traffic drain rate of the data link, block 324.
- Next, for the embodiment, preemptive pause control logic 110 determines the periodicity based on the ratio between the working capacity of the allocated storage medium and the determined difference in the fill rate of the input line and the drain rate of the data link, block 326.
- Then, preemptive pause control logic 110 determines the pause duration by first determining the ratio between the working capacity of the allocated storage medium and the determined drain rate of the data link (referred to as the initial or nominal pause duration), block 328, and then applying an estimated latency to the initial/nominal pause duration, block 330.
- The estimated latency is applied to account for potential latency or delay between the time the sender receives the pause control and the time the sender begins pausing the traffic it is sending. The exact amount is application dependent, e.g. dependent on the hardware and/or software interrupt latency in the sender.
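- Putting illustrative numbers to blocks 322 through 330 (a sketch only: the 12,288-byte capacity echoes the figure mentioned earlier for 10 Gb Ethernet, while the drain rate and latency allowance are assumed values, not parameters prescribed by this description):

```python
# Illustrative calculation of the FIG. 3B quantities; drain rate and latency are assumed.
capacity_bits = 12_288 * 8   # working capacity allocated to the link, in bits
fill_rate = 10e9             # input line fill rate, bits/s (10 Gigabit Ethernet)
drain_rate = 2.5e9           # assumed drain rate the link is entitled to, bits/s
sender_latency = 2e-6        # assumed allowance for the sender's reaction delay, s

period = capacity_bits / (fill_rate - drain_rate)   # block 326, ~13.11 us
nominal_pause = capacity_bits / drain_rate          # block 328, ~39.32 us
pause = nominal_pause + sender_latency              # block 330, ~41.32 us

# One way to read the scheme: the sender transmits at the line rate for roughly
# one period (staged data grows at fill_rate - drain_rate, just reaching the
# working capacity), then honors the pause while the staged data drains.
# Averaged over such a cycle, the link is held to its drain rate:
cycle = period + nominal_pause
print(f"period {period * 1e6:.2f} us, pause {pause * 1e6:.2f} us, "
      f"average rate {fill_rate * period / cycle / 1e9:.2f} Gb/s")   # ~2.50 Gb/s
```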
- Similarly, an estimated latency may also be applied to periodicity that is based on the dynamically determined network traffic drain rate of the data link, or upon determining a significant change in the network traffic drain rate of the data link.
- Accordingly, network traffic of data links is advantageously regulated in a straightforward and effective manner, overcoming the disadvantage of the prior art responsive approach.
- In alternate embodiments, the network traffic drain rate of the data link may instead be dynamically determined, i.e. the actual drain rate of the data link may be measured. For these embodiments, preemptive pause control logic 110 may systematically recompute the pause duration and/or periodicity based on the dynamically determined network traffic drain rate of the data link, or upon determining a significant change in the network traffic drain rate of the data link. Similarly, “significance” may be application dependent and is preferably configurable using any one of a number of known configuration techniques.
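- A minimal sketch of that alternate embodiment is shown below. The 10% threshold, the names, and the callback are all illustrative assumptions; what counts as a “significant” change is, as noted, application dependent and configurable.

```python
SIGNIFICANCE = 0.10  # assumed configurable threshold for a "significant" change

def track_drain_rate(current_bps, measured_bps, recompute):
    """Invoke recompute(new_rate) only when the measured drain rate has moved
    by more than the significance threshold; return the rate now in effect."""
    if abs(measured_bps - current_bps) > SIGNIFICANCE * current_bps:
        recompute(measured_bps)
        return measured_bps
    return current_bps

rate = 2.5e9
rate = track_drain_rate(rate, 2.55e9, lambda r: print("recompute"))  # ~2% change: ignored
rate = track_drain_rate(rate, 3.2e9,                                 # ~28% change: recompute
                        lambda r: print(f"recompute with {r / 1e9:.2f} Gb/s drain rate"))
```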
- In various embodiments, the pause control operation is performed in conformance with the Institute of Electrical and Electronics Engineers, Inc., (IEEE) standard Draft 802.3ae/D3.0, Annex 31B. Accordingly, the various time parameter values are specified in units of pause quanta (PQ), where one PQ is equal to 512 bit times. The amount of PQ may be any integer value between 0-65535. In other words, the largest amount of PQ assignable is 65535×512=33553920 bit times (or 3.355 ms for 10 Gigabit Ethernet).
- Further, in various embodiments that control 10 Gigabit Ethernet links, the pause control provided to the sender/senders by preemptive pause control logic 110 is in the form of an Ethernet “PAUSE frame”, which carries the value of the pause duration in its “pause_time” field, where the pause duration is specified in units of PQ.
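- The conversion from a pause duration in seconds to the 16-bit pause_time value is mechanical; a hedged sketch follows (the function name is an assumption, while the constants come directly from the 512-bit-time pause quantum and the 0-65535 range noted above):

```python
def pause_time_quanta(pause_duration_s: float, line_rate_bps: float = 10e9) -> int:
    """Express a pause duration as a PAUSE-frame pause_time value: units of
    pause quanta (one PQ = 512 bit times), clamped to the 16-bit 0-65535 range."""
    pq_seconds = 512.0 / line_rate_bps          # one pause quantum at this line rate
    return max(0, min(round(pause_duration_s / pq_seconds), 0xFFFF))

# The ~41.3 us pause duration worked out above is about 807 PQ at 10 Gb/s,
# comfortably below the 65535 PQ (~3.355 ms) ceiling mentioned in the text.
print(pause_time_quanta(41.32e-6))
```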
- FIG. 4 illustrates an exemplary application of network processor of FIG. 1 incorporated with teachings of the present invention. Illustrated in FIG. 4 is integrated optical networking module400 incorporated with
network processor 100 of FIG. 1, which as described earlier is incorporated with the preemptive network traffic control teachings of the present invention for data links. Optical networking module 400 includesoptical components 402, optical-electrical components 404,support control electronics 405, andnetwork processor 100 of FIG. 1, coupled to each other as shown. As alluded earlier,network processor 100 may be a multi-protocol processor having in particular, a number of interfaces and processing units, collectively referenced asreference number 410,control function unit 408,processor interface 407 andutility interface 409 coupled to each other and components 402-404 as shown. -
Optical components 402 are employed to facilitate the sending and receiving of optical signals encoded with data transmitted in accordance with a selected one of a plurality of protocols known in the art. Optical-electrical components 404 are employed to encode the egress data onto the optical signals, and decode the encoded ingress data. In a presently preferred embodiment, the supported datacom and telecom protocols include but are not limited to SONET/SDH, 10GBASE-LR, 10GBASE-LW, Ethernet-Over-SONET, Packet Over SONET, and so forth.Support control electronics 405 are employed to facilitate management of the various aspects ofoptical components 402 and optical-electrical components 404.Network processor 100 may be employed to perform data link and physical sub-layer processing on the egress and ingress data in accordance with a selected one of a plurality of supported datacom/telecom protocols, and to facilitate management of thenetwork processor 100 itself and optical, optical-electrical components 402 and 404 (through support control electronics 405). - In a presently preferred embodiment,
optical components 402, optical-electrical components 404,support control electronics 405 andnetwork processor ASIC 100 are encased in a body (not shown) forming a singular optical networking module, with provided software forming a singular control interface for all functionality. That is, in addition to being equipped to provide optical to electrical and electrical to optical conversions, clock and data recovery, and so forth, integrated optical networking module 400 is also equipped to provide data link and physical sub-layer processing on egress and ingress data selectively for a number of protocols. - Further, in the preferred embodiment,
control function unit 408 also includes control features, i.e. control registers and the like (not shown), in conjunction withsupport control electronics 405 to support a number of control functions for managingoptical components 402, optical-electrical components 404 as well asnetwork processor ASIC 100.Processor interface 407 is employed to facilitate provision of control specifications to controlfunction unit 408, whereas utility interface 409 (a digital interface) is employed to facilitate management ofcomponents - Optical networking module400 is the subject matter of co-pending application Ser. No. 09/861,002, entitled “An Optical Networking Module Including Protocol Processing And Unified Software Control”, having at least partial common inventorship and filed May 18, 2001. The co-pending application is hereby fully incorporated by reference.
- While the present invention has been described in terms of the foregoing embodiments and applications, those skilled in the art will recognize that the invention is not limited to these embodiments nor applications. The present invention may be practiced with modification and alteration within the spirit and scope of the appended claims. Thus, the description is to be regarded as illustrative instead of restrictive on the present invention.
Claims (36)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/211,174 US7433303B2 (en) | 2002-08-02 | 2002-08-02 | Preemptive network traffic control for regional and wide area networks |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/211,174 US7433303B2 (en) | 2002-08-02 | 2002-08-02 | Preemptive network traffic control for regional and wide area networks |
Publications (2)
Publication Number | Publication Date |
---|---|
US20040022187A1 true US20040022187A1 (en) | 2004-02-05 |
US7433303B2 US7433303B2 (en) | 2008-10-07 |
Family
ID=31187523
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/211,174 Active 2025-08-10 US7433303B2 (en) | 2002-08-02 | 2002-08-02 | Preemptive network traffic control for regional and wide area networks |
Country Status (1)
Country | Link |
---|---|
US (1) | US7433303B2 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7743196B2 (en) * | 2007-08-15 | 2010-06-22 | Agere Systems Inc. | Interface with multiple packet preemption based on start indicators of different types |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5243596A (en) * | 1992-03-18 | 1993-09-07 | Fischer & Porter Company | Network architecture suitable for multicasting and resource locking |
US6222825B1 (en) * | 1997-01-23 | 2001-04-24 | Advanced Micro Devices, Inc. | Arrangement for determining link latency for maintaining flow control in full-duplex networks |
US6170022B1 (en) * | 1998-04-03 | 2001-01-02 | International Business Machines Corporation | Method and system for monitoring and controlling data flow in a network congestion state by changing each calculated pause time by a random amount |
US7046624B1 (en) * | 1999-09-16 | 2006-05-16 | Hitachi, Ltd. | Network apparatus and a method for communicating by using the network apparatus |
US20030218977A1 (en) * | 2002-05-24 | 2003-11-27 | Jie Pan | Systems and methods for controlling network-bound traffic |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2421634A (en) * | 2003-05-20 | 2006-06-28 | Lg Philips Lcd Co Ltd | Polycrystalline silicon align key |
US20130208595A1 (en) * | 2012-02-15 | 2013-08-15 | Ciena Corporation | Adaptive ethernet flow control systems and methods |
US9148382B2 (en) * | 2012-02-15 | 2015-09-29 | Ciena Corporation | Adaptive Ethernet flow control systems and methods |
US20150229575A1 (en) * | 2012-08-21 | 2015-08-13 | Paul Allen Bottorff | Flow control in a network |
US9614777B2 (en) * | 2012-08-21 | 2017-04-04 | Hewlett Packard Enterprise Development Lp | Flow control in a network |
Also Published As
Publication number | Publication date |
---|---|
US7433303B2 (en) | 2008-10-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10205683B2 (en) | Optimizing buffer allocation for network flow control | |
US7391728B2 (en) | Apparatus and method for improved Fibre Channel oversubscription over transport | |
US6078564A (en) | System for improving data throughput of a TCP/IP network connection with slow return channel | |
US9699095B2 (en) | Adaptive allocation of headroom in network devices | |
US7573821B2 (en) | Data packet rate control | |
US7379422B2 (en) | Flow control enhancement | |
US8819265B2 (en) | Managing flow control buffer | |
US7746778B2 (en) | Resource based data rate control | |
US20210243668A1 (en) | Radio Link Aggregation | |
US20070097864A1 (en) | Data communication flow control | |
US7409474B2 (en) | Method and system for rate adaptation | |
US7912078B2 (en) | Credit based flow control in an asymmetric channel environment | |
US7583594B2 (en) | Adaptive transmit window control mechanism for packet transport in a universal port or multi-channel environment | |
WO2006111787A1 (en) | Power reduction in switch architectures | |
Siemon | Queueing in the Linux network stack | |
EP1941640B1 (en) | Method, circuitry and system for transmitting data at different rates | |
JP4652314B2 (en) | Ether OAM switch device | |
US20080205430A1 (en) | Bandwidth control apparatus, bandwidth control system, and bandwidth control method | |
US7433303B2 (en) | Preemptive network traffic control for regional and wide area networks | |
EP2278757A1 (en) | Flow control mechanism for data transmission links | |
US20080137666A1 (en) | Cut-through information scheduler | |
KR20130048091A (en) | Apparatus and method for operating multi lane in high-rate ethernet optical link interface | |
US6373818B1 (en) | Method and apparatus for adapting window based data link to rate base link for high speed flow control | |
JP2007335986A (en) | Communication device and communication method | |
Pahlevanzadeh et al. | New approach for flow control using PAUSE frame management |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NETWORK ELEMENTS, INC., OREGON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHE, ALFRED C.;PETERS II, SAMUEL J.;DENTON, I. CLAUDE;REEL/FRAME:013164/0800 Effective date: 20020802 |
|
AS | Assignment |
Owner name: TRIQUINT SEMICONDUCTOR, INC., OREGON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NETWORK ELEMENTS, INC.;REEL/FRAME:016182/0609 Effective date: 20041217 |
|
AS | Assignment |
Owner name: NULL NETWORKS LLC, NEVADA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TRIQUINT SEMICONDUCTOR, INC.;REEL/FRAME:017136/0951 Effective date: 20050908 |
|
AS | Assignment |
Owner name: NULL NETWORKS LLC, NEVADA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TRIQUINT SEMICONDUCTOR, INC.;REEL/FRAME:017706/0550 Effective date: 20050908 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction | ||
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: XYLON LLC, NEVADA Free format text: MERGER;ASSIGNOR:NULL NETWORKS LLC;REEL/FRAME:037057/0156 Effective date: 20150813 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |