MXPA00009741A - Apparatus and method for monitoring data flow at a node on a network - Google Patents

Apparatus and method for monitoring data flow at a node on a network

Info

Publication number
MXPA00009741A
MXPA00009741A MXPA/A/2000/009741A
Authority
MX
Mexico
Prior art keywords
value
data
data packet
packet
adjusted
Prior art date
Application number
MXPA/A/2000/009741A
Other languages
Spanish (es)
Inventor
James D Carlson
Original Assignee
Pluris Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pluris Inc filed Critical Pluris Inc
Publication of MXPA00009741A publication Critical patent/MXPA00009741A/en

Abstract

An apparatus and method for monitoring data flow at a node on a network are disclosed. A memory location or "bucket" is allocated to each of a plurality of links and classes of service at the node. A free-running counter is incremented at a rate determined by the maximum allowable data rates on the various links and classes of service. When a data packet is received at a particular link and class of service, the corresponding memory location or bucket is adjusted or "leaked" by subtracting the present counter value from the present bucket contents. That difference is then added to the number of units of data, i.e., bytes or groups of bytes of data, contained in the incoming packet. That sum is then compared with a predetermined threshold determined by the allowable data rate associated with the link and class of service. If the threshold is exceeded, then the incoming data packet is marked accordingly. The system can include multiple stages of monitoring, such that multiple thresholds can be used to assign one of multiple discard eligibility values to the incoming packet.

Description

APPARATUS AND METHOD FOR MONITORING DATA FLOW AT A NODE ON A NETWORK

Field of the Invention

This invention relates generally to the field of digital communications, and more particularly to systems and methods for switching data packets in a switching node used in a digital data network, and for monitoring data flows in the switching node.
BACKGROUND OF THE INVENTION

Digital networks have been developed to facilitate the transfer of information, including data and programs, between digital computer systems and many other types of devices. A variety of network types have been developed and implemented, using different information transfer methodologies. In modern networks, information is transferred through a mesh of switching nodes interconnected by communication links in a variety of patterns. The mesh interconnection pattern can make many paths available across the network from each computer system or other device to any other computer system or other device.
The information transferred from a source device to a destination device is generally transferred in the form of data packets of fixed or variable length, each of which is typically received by a switching node over one communication link and transmitted over another communication link to facilitate the transfer of the packet to the destination device, or to another switching node along a path to the destination device. Each packet typically includes address information, including a source address that identifies the device that generated the packet and a destination address that identifies the particular device or devices that are to receive the packet. Typically, a switching node includes one or more input ports, each of which is connected to a communication link in the network to receive data packets, and one or more output ports, each of which is also connected to a communication link in the network to transmit packets. Each node typically further includes a switching fabric that couples the data packets from the input ports to the output ports for transmission.

Typically, a network service provider maintains and operates one or more switching nodes, which can transfer data packets from incoming communication links, through the switching fabric, to outgoing communication links. These providers charge fees to customers who use the links to transfer data through the nodes in the network. Typically, the fees are related to the maximum data transfer rate at which a customer can expect data to be sent through the node. Each link at a node is typically assigned at least one "class of service" that is related to a maximum allowable data transfer rate provided to a customer using the link, which in turn is based on the fees paid by the customer to the provider. In many cases, each link can be assigned multiple classes of service associated with a single user or multiple users. It is in the interest of service providers to monitor or "police" the data traffic on each link, to determine whether the customers' use of their assigned links is within the contractual limits. Where it is determined that use of the link, that is, the data transmission rate, exceeds the contractual limit, the data packets can be identified and marked as such, that is, as being "out of contract". In many cases, it is important to monitor the data traffic carefully on each link and in each class of service. It is also often desirable to mark data packets with respect to the degree to which a particular packet is out of contract. For example, if a particular packet is only slightly out of contract, it may be desirable to mark the packet as such. Likewise, in cases of extreme overuse of the link, it may be desirable for the data packets to be marked accordingly. In some systems, the degree to which a packet exceeds the contracted data rate of the link is used to establish a priority for discarding the packet. Packets that only slightly exceed the contracted rate are assigned relatively low "discard eligibility" values, while packets that greatly exceed the maximum rate are assigned high discard eligibility values. In the event that it becomes necessary to discard a particular packet, those with higher discard eligibility values are more likely to be discarded than those with lower discard eligibility values. Many approaches have been used to monitor data flow rates on multiple links having multiple classes of service.
A common approach is referred to as the "leaky bucket" approach. Under this approach, a memory or register storage location, commonly referred to as a "bucket", is assigned to each link and class of service. Each storage location or bucket keeps a count of the number of data units received for its assigned link and class of service. A data unit may be a data byte or a group of data bytes, for example, where each data packet transfers multiple bytes of data. For each bucket, a predetermined threshold number of data units, related to the maximum allowable data transfer rate for the associated link and class of service, is generated and stored. As a data packet is received, its number of data units (bytes) is added to the value or count present in the bucket, and the updated value is compared with the threshold. If the updated value exceeds the threshold, the incoming data packet is marked as exceeding the threshold. Because it is the data rate that is being monitored, rather than the total amount of data received, the value or count stored in each bucket is periodically decreased by a predetermined number of data units, related to the maximum allowable data transfer rate and the period at which the decrease occurs. This decrease is commonly referred to as the "leaking" of the bucket. Leaking the bucket at the correct predetermined rate ensures that when the number of data units in the bucket exceeds the predetermined threshold, the maximum allowable data transfer rate has been exceeded. In order to identify short bursts of large amounts of data that exceed the maximum allowable data transfer rate, it is desirable to leak each bucket and perform the threshold comparison as frequently as possible. Short bursts can be missed if the buckets are not leaked and checked frequently enough. In relatively small systems, which have a relatively small number of buckets, the system can cycle through all the buckets relatively quickly, so that short bursts of large amounts of data can be identified as being out of contract. In such systems, the buckets take the form of memory locations, and the leaking and checking are performed in system software. However, as systems become larger, with larger numbers of links and classes of service and, consequently, larger numbers of buckets, the leak and check period for each bucket becomes longer. The buckets are thus maintained less frequently, and the probability of failing to identify short bursts of data increases. The monitoring of the data transfer rate in such systems therefore becomes less accurate.
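For illustration, a minimal software sketch of this conventional leaky-bucket policer follows. It is not taken from the patent; all names and numbers (LeakyBucket, the 4096-unit threshold, and so on) are illustrative only.

    # Minimal sketch of the conventional software leaky-bucket policer
    # described above. Counts are in data units (bytes or byte groups).

    class LeakyBucket:
        def __init__(self, threshold, leak_per_period):
            self.threshold = threshold               # units before "out of contract"
            self.leak_per_period = leak_per_period   # units drained per leak cycle
            self.count = 0

        def on_packet(self, length):
            """Add the packet's data units; return True if out of contract."""
            self.count += length
            return self.count > self.threshold

        def leak(self):
            """Periodic decrement, called once per cycle for every bucket."""
            self.count = max(0, self.count - self.leak_per_period)

    buckets = {("link0", "gold"): LeakyBucket(threshold=4096, leak_per_period=1024)}

    def leak_all():
        # The software must cycle through every bucket; with many links and
        # classes of service this loop grows long and the leak period
        # stretches, which is the accuracy problem described above.
        for b in buckets.values():
            b.leak()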
SUMMARY OF THE INVENTION

The present invention is directed to an apparatus and method for monitoring or policing data traffic at a network node, which facilitates the transfer of data on at least one link having at least one class of service. The data is transferred in data packets, and each data packet includes at least one data unit. For each selected link and class of service, an updatable value is stored in a storage device. A counter value in a counter is incremented at a rate determined by a maximum allowable data transfer rate associated with the selected link and class of service. A data packet is received, and the number of data units in the data packet is counted. An adjusted updatable value is calculated by adjusting the updatable value in accordance with the counter value at the time the data packet is received and the number of data units in the data packet. The adjusted updatable value is compared with a predetermined threshold associated with the selected link and class of service. The data packet is marked with respect to the allowable data transfer rate, based on whether the adjusted updatable value exceeds the predetermined threshold.

In one embodiment, the adjusted updatable value is calculated by computing a difference between the updatable value and the counter value when the data packet is received. That calculated difference can then be added to the number of data units in the received data packet to produce the adjusted updatable value. In one embodiment, the adjusted updatable value is used to update the updatable value, such as by adding the adjusted updatable value to the number of data units in the data packet and storing the resulting sum back into the storage device as the updated value. In one embodiment, each link may include multiple classes of service, and each class of service may have a unique allowable data transfer rate. Accordingly, the storage device can include multiple individual storage areas. For example, the storage device may be a semiconductor memory, such as a static random access memory (SRAM), which has multiple addressable locations. The storage device therefore stores a plurality of updatable values for each of a plurality of links and/or a plurality of classes of service.

In one embodiment, each data packet is associated with a discard eligibility value, which identifies a priority for discarding the data packet. In general, packets assigned higher discard eligibility values are more likely to be discarded if it is determined that discarding is necessary for reasons such as excess data traffic or congestion. Therefore, in the present invention, data packets can be marked according to whether they cause a threshold to be exceeded, by altering the discard eligibility value of the packet. That is, if a packet causes a threshold to be exceeded, its discard eligibility can be raised, such that the priority for discarding the packet is increased. In one embodiment of the invention, more than one predetermined threshold can be assigned to a link and class of service, such that the discard eligibility value can be set to one of multiple corresponding levels, depending on which of the thresholds is exceeded. In one embodiment, additional storage devices are provided to allow multi-stage policing, so that multiple levels of discard eligibility can be assigned.
Where a second storage device is provided, a second updatable value associated with the selected link and class of service is stored in the second storage device. A second adjusted updatable value is calculated in accordance with the counter value at the time the data packet is received and the number of data units in the data packet. The second adjusted updatable value is compared with a second predetermined threshold associated with the selected link and class of service, where the second predetermined threshold is chosen to identify a second level of discard eligibility for the packets. The incoming data packet is marked with respect to the allowable data transfer rate, based on whether the second adjusted updatable value exceeds the second predetermined threshold. In one embodiment, data packets are analyzed by the second and further additional stages only if the packet is found to cause the first predetermined threshold to be exceeded.

Accordingly, in the present invention, a single counter is effectively used to decrement the values stored in all the "buckets" simultaneously. This single counter can be applied to counter divider circuitry to derive counter values for any number of buckets, for any number of links and classes of service. In the present invention, the processing stages, the buckets, which can be implemented as semiconductor SRAM memories, the counter, and the counter divider circuitry can all be implemented in hardware. As a result, the cyclic periodic decrement, or "leak" loop, found in prior leaky-bucket systems is eliminated. The bucket values can be decremented and checked within extremely small time intervals, such that large bursts can be identified over short periods of time. The result is a much more precise and accurate policing approach than that found in prior systems. Because of the high accuracy of the inventive policing approach, it can be applied in very large systems having large numbers of links and classes of service. The invention is applicable in various networks in which it is desirable to monitor the data traffic on the links. For example, the invention may be implemented in a switching node such as that described in co-pending United States Patent Application Serial Number 09/108,771, filed July 2, 1998, entitled "System and Method for Switching Packets in a Network", by Schwartz, et al., and assigned to the same assignee as the present application. The content of that application is incorporated herein by reference in its entirety.
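For illustration, the single-counter idea can be sketched as follows. The divide-down model and all names are assumptions made for the sketch, not details taken from the patent, which implements this in hardware.

    # Sketch: one free-running counter, with "divider" outputs derived per
    # link and class of service, so every bucket shares the same time base
    # and is leaked lazily, only when a packet arrives for it.

    class FreeRunningCounter:
        def __init__(self):
            self.ticks = 0

        def tick(self):
            self.ticks += 1  # incremented at a fixed periodic rate in hardware

        def derived(self, divisor):
            """Divider output: a counter value that advances once every
            `divisor` base ticks. A smaller divisor models a link/class
            with a higher allowable data rate (faster leak)."""
            return self.ticks // divisor

    clock = FreeRunningCounter()
    for _ in range(1000):
        clock.tick()
    gold_counter = clock.derived(2)     # faster class of service (assumed)
    bronze_counter = clock.derived(8)   # slower class of service (assumed)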
BRIEF DESCRIPTION OF THE DRAWINGS

The above and other objects and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings, in which the same reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.

Figure 1 contains a schematic diagram of a computer network that includes a plurality of switching nodes, in accordance with the present invention.
Figure 2 contains a schematic block diagram of a switching node, in accordance with the present invention.
Figure 3 contains a schematic block diagram of one embodiment of the circuitry for monitoring data flows, according to the invention.
Figure 4 contains a schematic diagram of a data packet that can be processed according to the invention.
Figure 5 contains a schematic flow diagram illustrating the logical flow of one embodiment of the data flow monitoring approach of the invention.
Figure 6 contains a schematic functional block diagram illustrating one embodiment of the data transfer rate monitoring apparatus and method of the invention.
Figure 7A contains a schematic block diagram of one embodiment of a first calculation sub-step, according to the invention.
Figure 7B contains a schematic block diagram of one embodiment of a second calculation sub-step, according to the invention.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION

Figure 1 schematically illustrates a computer network 10 that includes a plurality of switching nodes 11(1) through 11(N), generally identified by reference number 11, for transferring signals representing data among a number of devices, which in Figure 1 are represented by packet source/destination devices 12(1) through 12(M), generally identified by reference number 12, in a wide area network ("WAN"). The packet source/destination devices 12, as is conventional, include a particular device, such as a computer system or other device that stores, generates, processes or otherwise uses digital data, a local area network of such devices, or the like (not separately shown), connected to the wide area network 10. Each packet source/destination 12 is connected, over a communication link generally identified by reference number 13, to a switching node 11 to facilitate the transmission of data thereto or the reception of data therefrom. The switching nodes 11 are interconnected by communication links, also generally identified by reference number 13, to facilitate the transfer of information among the switching nodes 11(n). The communication links 13 can use any convenient information transmission medium, including, for example, wires for carrying electrical signals, fiber optic links for carrying optical signals, and so on. Each communication link 13 is preferably bidirectional, allowing the switching nodes 11 to transmit and receive signals with each other and with the customer premises equipment 12 connected thereto over the same link; depending on the particular type of medium selected for the respective communication links 13, multiple media may be provided for transferring signals in opposite directions, thereby providing the bidirectional link.

Data is transferred in the network 10 in the form of packets. Generally, a packet includes a header portion and a data portion. The header portion includes information that assists in routing the packet through the network, with the specific information depending on the particular packet routing protocol used in routing packets through the network. In connection with the network 10, any of a number of well-known packet routing protocols can be used; in one embodiment the well-known Internet Protocol ("IP") is used. In any case, the header typically includes address information, including a source address that identifies the particular source device 12(mS) that generated the packet and a destination address that identifies the particular destination device 12(mD) that is to receive the packet. In the IP protocol, a packet may be of variable length, and the header will typically also include length information to identify the length of the packet. Typically, the length of the packet is identified as a number of bytes or a number of byte groups, where a byte group contains a predetermined number of bytes. The header also typically includes other information, including, for example, protocol identifier information that identifies the particular protocol that defines the structure of the packet. The data portion contains the packet's data payload. The packet may also include, as part of the data portion or otherwise, error detection information that can be used to determine whether an error occurred in transferring the packet.
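For illustration only, the packet fields that the monitoring circuitry relies on later in this description (discard eligibility and packet length) can be modeled as below. This is a reduced, hypothetical structure for the sketches that follow, not an actual IP header layout.

    # Hypothetical, reduced packet model: only the fields read by the
    # monitoring circuitry plus the routing addresses.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        source_addr: int          # identifies the source device 12(mS)
        dest_addr: int            # identifies the destination device 12(mD)
        discard_eligibility: int  # DE: priority for discarding the packet
        packet_length: int        # PL: data units (bytes or byte groups)
        payload: bytes            # data portion

    pkt = Packet(source_addr=1, dest_addr=7, discard_eligibility=0,
                 packet_length=32, payload=b"\x00" * 1024)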
A source device 12(mS), after generating a packet for transfer to a destination device 12(mD), provides the packet to the switching node 11(n) to which it is connected. The switching node 11(n) uses the destination address in the packet to attempt to identify a route, which associates a destination address with one of the communication links 13 connected to the switching node, over which the packet is to be transferred in order to send it either to the destination device 12(mD), if the switching node 11(n) is connected to the destination device 12(mD), or to another switching node 11(n') (n' ≠ n) along a path to the destination device 12(mD). If the switching node can identify a route for the received packet, it sends the packet over the communication link identified by the route. Each switching node 11(n'), 11(n''), ... that receives the packet performs a similar operation. If all the switching nodes have respective routes for the destination address, the packet will eventually reach the destination device 12(mD).

Figure 2 is a schematic block diagram of one embodiment of a switching node 11, in accordance with the present invention. The node 11 generally includes multiple input port modules 20(1), 20(2), ..., 20(N), generally identified by reference number 20, and corresponding output port modules 21(1), 21(2), ..., 21(N), generally identified by reference number 21. The input port modules 20 and output port modules 21 are connected to processing circuitry and switching fabric 24, which controls the forwarding of data from the input modules 20 to the output modules 21. In general, each input port module 20(n) can include one or more input ports 22(n)(1) through 22(n)(M), which may be connected to multiple communication links 13(n)(1) through 13(n)(M), respectively. Likewise, each output port module 21(n) can include one or more output ports 23(n)(1) through 23(n)(M), which in general can be connected to multiple communication links 13(n)(1) through 13(n)(M), respectively. The data received on each of the links 13(n)(m) is forwarded from the corresponding input module 20(n), through the processing circuitry and switching fabric 24, to the appropriate output module 21(n), and out onto the network on the appropriate link 13(n)(m). Each of the links 13 can be assigned one or more classes of service, and one or more customers using the link. The data traffic originating on each link can be monitored by the data monitoring or policing circuitry 26 of the invention. In general, each input module 20(1), 20(2), ..., 20(N) includes corresponding monitoring circuitry 26(1), 26(2), ..., 26(N), in accordance with the invention. It should be noted that the monitoring circuitry 26 need not be located in the input modules 20. Alternatively, or additionally, the monitoring circuitry may be located in the processing circuitry and switching fabric 24 and/or in the output modules 21. It should also be noted that the above network node configuration is intended as an illustration only; the invention can be applied in a node structure that includes input/output modules supporting multiple connections, interconnected by separate processing and switching fabrics.
It will be understood that the present invention can be used with other node structures, including but not limited to nodes without input and/or output modules, and/or without separate processing and switching fabrics, as well as nodes that support individual input and/or output links.

Figure 3 contains a schematic block diagram of one embodiment of the monitoring circuitry 26, in accordance with the invention. As shown in Figure 3, data packets are received in the monitoring circuitry 26 on lines 27. The packets are processed by the packet processing circuitry 28 and output on lines 31. Figure 4 contains a schematic diagram illustrating some of the fields contained in a typical data packet 36. The packet 36 includes a header portion 38 and a data portion 40. A typical header includes a discard eligibility (DE) field and a packet length (PL) field. The DE field contains a DE value that sets the priority for discarding the packet. The PL value is the number of data units in the data portion 40 of the packet 36. Typically, a data unit is a byte, such that the packet length value PL is the number of bytes in the data portion 40. In other systems, the packet length is a number of byte groups. For example, in one particular system, bytes are packed in groups of 32, and the packet length value is the number of 32-byte groups in the data. In that system, a packet length PL equal to 32 corresponds to 32 groups of 32 bytes of data, that is, 1024 bytes (1k) of data.

The packet processing circuitry 28 receives a packet 36 and reads the DE and PL values from the packet. These initial values are passed to the comparison circuitry 35, together with the identity of the link on which the packet was received and the class of service associated with the packet. The comparison circuitry 35 performs the comparison processing, described in detail below, to determine whether the packet causes an allowable data transfer rate to be exceeded on the identified link, in the identified class of service. In one embodiment, the comparison circuitry 35 adjusts the DE value of the packet according to whether one or more predetermined thresholds assigned to the link and class of service are exceeded. The processor 30 then reassembles the packet with the new DE value and sends the packet out of the monitoring circuitry 26 on lines 31.

The comparison circuitry 35 contains the circuitry used to determine whether a particular packet causes an allowable data transfer rate to be exceeded on a link. In the illustrated embodiment, it can also determine the level by which the rate is exceeded, and assign a DE value based on that level. In the embodiment shown in Figure 3, three processing stages are used to compare a number of data units, e.g., bytes or byte groups, that have been received over a particular link in a particular class of service, with three predetermined threshold values. The three processing stages allow four levels of classification of the degree to which the threshold is exceeded. As a result, the system allows four possible settings of the DE value associated with the packet being examined. In this embodiment, the circuitry 35 includes a first-stage processor 50, a second-stage processor 52, and a third-stage processor 54. Each of the three processors 50, 52, 54 is interfaced with a memory 56, 58, 60, respectively, each of which, in one embodiment, is an SRAM memory.
Each memory has a location or group of memory locations assigned to each link and class of service being monitored. These locations or groups of locations, i.e., "buckets", maintain an updatable value that can be updated upon receipt of a data packet. Each memory 56, 58, 60 also stores the predetermined threshold for each link and class of service, with which the corresponding updated value is compared upon receipt of a new data packet. In one embodiment of the invention, the first-stage processor performs a first comparison step using the bucket value stored in the first memory 56 and its corresponding threshold. If the threshold is exceeded, the DE value for the packet is incremented, and processing proceeds to the second stage. The second-stage processor 52 can then perform a second comparison, using the corresponding bucket value stored in the memory 58 and its associated threshold. If the threshold is exceeded, the DE value can again be incremented, and processing can proceed to the third-stage processor 54. If the second-stage threshold is not exceeded, the DE value present at stage two can be stored back with the data packet by the processor 30 in the packet processing circuitry, and the packet with the updated DE value can be passed out of the monitoring circuitry 26. The third-stage processor 54 can be used to perform the comparison and to increment the DE value for the packet again, if necessary. At any stage, if a threshold is not exceeded, the present DE value is stored back with the packet by the processor 30, and the packet is transmitted out of the monitoring circuitry 26 with the updated DE value.

As shown in Figure 3, the packet processing circuitry 28 also includes a counter 32 and counter-value divider circuitry 34. In one embodiment, the counter is a free-running counter that increments at a predetermined periodic rate. The divider circuitry 34 can be used to derive counter values that increment at any of a number of predetermined rates. The various counter values thus generated can be used by the comparison circuitry 35, as described in detail below, to determine whether the data packets received on the various links and in the various classes of service have caused the allowable data flow rates to be exceeded. It will be noted that throughout this description, where a counter value is referenced, it need not be the value stored in the actual counter 32; the value can be one of the values generated by the divider circuitry 34. It should also be noted that instead of a single counter with divider circuitry, multiple counters can be used, with or without divider circuitry.

Figure 5 is a schematic flow diagram illustrating the logical flow of one embodiment of the data flow monitoring approach of the invention. The data flow monitoring approach of the invention will now be described in detail in connection with Figures 3 and 5. As illustrated by step 100, the packet processing circuitry 28 waits for a packet to be received. When a packet is received, in step 102, the link index L, which identifies the link on which the packet was received and the class of service for the link, is read. The packet is also parsed to read the packet length PL and the discard eligibility DE of the incoming packet. This information is then transferred to the first-stage processor 50.
First, it is determined whether the present DE value is equal to the discard eligibility associated with the stage, S. If not, the present stage performs no processing on the packet, and in step 106 the flow proceeds to the next subsequent stage. If the DE value for the received packet is at the appropriate level for the present stage, then the link index L and the class of service are used to access the appropriate memory location(s), that is, the "bucket", in the associated SRAM memory, which in the first stage is the memory 56. In step 108, the maximum allowable bucket value (threshold) M and the current bucket content value B are read from the SRAM 56. In step 110, the difference D between the bucket contents B and the present counter value C is calculated, that is, D = B - C. This difference calculation effectively decrements, or "leaks", the bucket being examined. If the difference D is less than zero, then D is set to 0. In step 112, the packet length PL, which may be the number of bytes or the number of byte groups in the data packet, is added to the difference D. This effectively computes a new adjusted bucket content value E. In step 114, this adjusted value E is compared with the bucket threshold M. If E exceeds the threshold M, then, in step 116, the DE value for the packet is incremented. Then, in one embodiment, the bucket content for the present stage is updated, in step 118, by adding the counter value to the difference D and storing that sum back into the SRAM 56 as the new bucket value B associated with the link index L and the class of service of the received packet, that is, B = D + C. In an alternative embodiment, the bucket content is updated by adding as many data units as can be added to the bucket contents without exceeding the threshold, i.e., B = M. Then, in step 120, the process advances to the next subsequent stage, in this case stage 2, where the comparison process is repeated to determine whether the incoming packet causes the stage-2 threshold M to be exceeded. Referring again to step 114, if the value E does not exceed the value M, then, in step 122, the bucket content is updated by adding the counter value C to the sum E and storing that value back into the bucket location as the new value B for the bucket contents, i.e., B = E + C. Then, in step 124, the packet is regenerated by the packet processing circuitry 28, using the present value of DE. The packet is then sent out of the packet processing circuitry 28 on lines 31, and the flow returns to the beginning, where another packet can be received. This process is carried out at each stage, with the DE value being incremented at each stage at which a threshold M is exceeded. Finally, the packet is reassembled with the new DE value set by the monitoring circuitry of the invention. Where none of the thresholds is exceeded, the DE value of the packet remains unchanged.
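For illustration, a minimal software sketch of one such stage, following steps 106 through 124 above, is given below. The function and variable names are illustrative, DE levels are assumed to run 0, 1, 2 in step with the stage indices, and a real implementation would be in hardware, as described.

    def police_stage(stage, de, link_index, pl, counter, sram):
        """One policing stage of Figure 5. Returns the (possibly
        incremented) DE value and whether the packet advances to the
        next stage.

        stage      -- S, the DE level this stage handles (assumed 0-based)
        de         -- present discard eligibility of the packet
        link_index -- L, selects the bucket for this link/class of service
        pl         -- PL, packet length in data units
        counter    -- C, the (derived) free-running counter value
        sram       -- dict: link_index -> [M, B] (threshold, bucket value)
        """
        if de != stage:                 # not this stage's packet:
            return de, True             # pass through unchanged (step 106)
        m, b = sram[link_index]          # step 108: read threshold and bucket
        d = max(0, b - counter)          # step 110: D = B - C, the "leak"
        e = d + pl                       # step 112: adjusted bucket contents
        if e > m:                        # step 114: threshold comparison
            sram[link_index][1] = d + counter  # step 118: B = D + C
            return de + 1, True          # steps 116/120: raise DE, next stage
        sram[link_index][1] = e + counter      # step 122: B = E + C
        return de, False                 # step 124: packet leaves as-is

    def police(de, link_index, pl, counter, srams):
        # Chain of three stages, as in Figure 3 (stage memories 56, 58, 60).
        for stage, sram in enumerate(srams):
            de, advance = police_stage(stage, de, link_index, pl, counter, sram)
            if not advance:
                break
        return de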
Because the counter, the counter divider circuitry, and the bucket locations are finite in bit width, the difference between a bucket value and the counter value can become ambiguous if the time since the bucket was last updated is long, due to counter rollover or underflow. In the present invention this problem is solved by the use of a "scrubbing" approach, in which zero-length packets are periodically introduced into the system to bring buckets that have underflowed back into a valid range. These zero-length packets are used to reset individual buckets and are therefore encoded with specific link indices L and classes of service, in a manner similar to the encoding of actual packets. Following the approach described above in connection with Figure 5, when a zero-length packet is received, its link index L and class of service are read. These values are used to retrieve the threshold value M and the current bucket content B. The difference D = B - C is calculated, and D is set to zero since, in the underflow situation, the counter value will exceed the bucket value B. The sum E = D + PL = 0 is then calculated. Upon comparison with the threshold M, it is determined that the threshold M is not exceeded. A new bucket value B = E + C = 0 + C = C is calculated, which effectively sets the new bucket value B to the counter value C. Consequently, using a zero-length packet, the bucket content is set to the present counter value, which effectively zeroes the bucket. In the case of a scrubbing operation in which no bucket underflow has occurred, the current bucket content B is left unchanged.

Moreover, in accordance with the above description of the invention, the counter 32 and the divider circuitry 34 are accessible to all the processing stages and all the buckets within a very small time frame. In fact, on the time scale defined by the reception and forwarding of data packets, the buckets can be updated and checked virtually instantaneously. Consequently, within that time frame, many buckets can effectively be checked and updated almost simultaneously. This is a great improvement over the prior cyclic-loop approaches found in conventional leaky-bucket processes. As a result, the "leaking" granularity in the present invention is much finer than that in conventional systems. The threshold value M can therefore be set at a very low level, to monitor the data flow precisely. In fact, using the processes described above, the threshold M can be set so low that a single data packet containing enough data to exceed the allowable limit can be marked as out of contract. In practical applications, such a threshold would not be used, since it would prevent any burst of data from being sent. Rather, because of the flexibility provided by the present approach, the threshold value M can be set to allow a predetermined level of tolerable "bursts" of data traffic on the link. In this way, a very precise and accurate policing and monitoring approach is provided, with great flexibility to control the traffic on the link. A minimal sketch of the scrubbing operation appears below.
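The sketch reuses the assumptions of the earlier stage sketch; a zero-length packet is modeled simply as a packet length of zero, and the names are illustrative.

    def scrub(link_index, counter, sram):
        """Periodic scrub: a zero-length packet (PL = 0) for one bucket.
        If the bucket has underflowed (the counter has passed B), this
        resets the bucket to the counter value, i.e., zeroes it;
        otherwise the bucket content is left unchanged."""
        m, b = sram[link_index]
        d = max(0, b - counter)   # underflow: B - C < 0, so D clamps to 0
        e = d + 0                 # E = D + PL with PL = 0
        # Per the description, E does not exceed M here, so the
        # non-exceed update path is always taken:
        sram[link_index][1] = e + counter  # B = E + C; equals C if underflowed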
In today's network environments, data packets can carry very large amounts of data. For example, it is not uncommon for more than one kilobyte of data to be transferred in a packet. In some common systems and protocols, to accommodate large amounts of data and to reduce the complexity involved in processing large numbers, data quantities are quantized such that they are specified in terms of byte groups. For example, in one particular approach, data is quantized and counted in groups of 32 bytes. As a result, the packet length (PL) value for the data packets, which is typically given in bytes, is processed to identify a number of 32-byte units. Typically, this involves eliminating the five least significant bits (LSBs) of the PL word. In conventional systems, this is done by a rounding function. Where the five least significant bits of the PL word represent a number from 0 to 15 (decimal), the new processed value, representing the number of 32-byte units, is rounded down. Where the five least significant bits represent a number from 16 to 31 (decimal), the processed value is rounded up. In actual systems, some packet lengths are more common than others. As a result, the five least significant bits of the PL word of the received packets are not distributed evenly from 0 to 31. This uneven distribution can result in a significant error when the conventional rounding function is used. In the present invention, a randomized rounding function is used instead. The value of the five least significant bits is compared with a randomly generated number between 0 and 31. Where the value of the five least significant bits is greater than or equal to the randomly generated threshold, the number of 32-byte units is rounded up. Where the value of the five least significant bits of the PL value is below the randomly generated threshold, the number of 32-byte units is rounded down. This results in a more evenly distributed rounding function, which serves to eliminate the systematic error.
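For illustration, a minimal sketch of this randomized rounding follows; the function name is illustrative.

    import random

    def quantize_randomized(pl_bytes):
        """Round a packet length in bytes to a number of 32-byte units,
        using the randomized rounding described above. Across many packets
        the rounding is roughly unbiased even when the low five bits of PL
        are unevenly distributed."""
        units = pl_bytes >> 5             # drop the five least significant bits
        remainder = pl_bytes & 0x1F       # the five LSBs, 0..31
        threshold = random.randrange(32)  # random number 0..31
        if remainder >= threshold:        # LSBs >= random threshold: round up
            units += 1
        return units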
Figure 6 contains a schematic functional block diagram illustrating one embodiment of the data transfer rate monitoring apparatus and method of the invention. As shown in Figure 6, in block 200, the difference D between the current bucket content B and the counter value C is calculated, i.e., D = B - C. The difference is passed to a comparison block 202, which determines whether the difference D is less than zero. If D is less than zero, then the MUX (multiplexer) 204 is controlled by the selection line (labeled "x") out of the comparison block 202 to pass the value 0 as its output. Otherwise, the difference D is passed out of the MUX 204. The packet length PL of the incoming packet is added to the difference D in a summing block 206 to generate the sum E = D + PL. The difference D is also passed to one input of a second MUX 208. The sum is also applied to a second comparison block 210, which determines whether the sum is less than or equal to the predetermined threshold M. The output of block 210 is a logic signal that is active (high) when the sum calculated in block 206 is less than or equal to the threshold M, and inactive (low) when the sum exceeds the threshold. This logic value is applied to a first input of a logical AND block 212, and is also applied to an inversion circuit 214. The inverted value is applied to one input of a second AND block 216. The second input to the AND block 216 is generated by a comparison block 218, which determines whether the current S value, which identifies the current discard eligibility (DE) of the packet, is equal to that of the current processing stage. If so, and it is determined in block 210 that the threshold M is exceeded, then the logical AND block 216 outputs an active signal to the selection line of MUX 220, such that the value N of the next stage is output as the new value of S; that is, processing moves to the next stage.

The value N is a hard-coded value holding the index of the next stage in the pipeline. The result of the comparison in block 218 is also input to the logical AND block 212. If the current stage is correct and the threshold is not exceeded, then the logical AND 212 applies an active signal to the selection line of MUX 208, such that the sum E = D + PL calculated in block 206 is passed to a summing unit 222. The adder 222 adds the output of MUX 208 to the counter value C and stores the result back as the new updated bucket content B, that is, B = E + C. If either the stage is not correct, that is, the comparison block 218 has an inactive output, or it is determined in block 210 that the threshold M is exceeded, then the selection line of the MUX 208 (labeled "y") is driven low by the AND block 212, such that the difference D = B - C is passed to the summing block 222. As a result, the counter value is added back to the difference D, such that the bucket content B remains unchanged. As mentioned above, the scrubbing operation, which avoids the rollover distortion problem, is implemented by periodic dummy packets with an arbitrary, unused PL length value, with S set to binary 11, and with L set to the value of a counter that is incremented on each scrubbing operation. The S value then causes each pipelined stage to reset its bucket to the current counter value if the bucket has underflowed. In one embodiment of the invention, substantial processing time can be saved by pre-calculating certain values before the process reaches the addition step 222, in which the bucket content B is modified. Table 1 lists the values that can be pre-calculated by the data path logic illustrated in Figure 6.
It is noted that only the B value is required to be read and written in a single step, and that all other values (including M) can be pre-calculated before reaching the modification step of B in block 222 of Figure 6. As a result, the calculations are performed in sub-steps to reduce the calculation time. In one embodiment, the calculation can be performed in two sub-steps. Figure 7A is a schematic block diagram of a first sub-step, referred to herein as "sub-step A", and Figure 7B is a schematic block diagram of a second sub-step, referred to herein as "sub-step B". As shown in Figure 7A, the sum C + PL is calculated by addition block 250. The threshold M is inverted by inversion block 252, and the sum of the inverted M and the PL value is calculated in addition block 254. Comparison block 256 determines whether PL - M - 1 is less than 0. The value C is negated in inversion block 258, and the sum calculated in addition block 254 is added to -C in block 260. The value of S is set to the stage number in block 262. These values from sub-step A, which are illustrated in Figure 7A, are applied to sub-step B, which is illustrated in Figure 7B. As shown in Figure 7B, the values B, B + PL, C and C + PL are applied to the inputs of a MUX 264, which outputs the selected input as the updated bucket content B. The selection lines x and y of the MUX 264 are generated in accordance with the following description. The sum of B and -C is calculated in adder 266, and that sum is compared with 0 in comparison circuit 268. The logic output of the comparison circuitry 268 is applied to the MUX 264 as the selection line x. When the sum calculated in block 266 is less than zero, the selection line x is active. If the sum calculated in block 266 is greater than or equal to zero, the selection line x is inactive. The value of the selection line x is also applied to the selection line of a MUX 270. If the selection line is active, then the logic value of the relation PL - M - 1 < 0 is passed to the output of the MUX 270. If the selection line x is inactive, then the logic value of the relation B - C + PL - M - 1 < 0, which is generated by addition block 274 and comparison block 276, is passed to the output of the MUX 270. The output of the MUX 270 is applied to a logical AND block 272, together with the value of S. The output of AND block 272 is the selection line y for MUX 264. The output of MUX 270 is also negated by inversion circuit 278. The inverted value out of block 278 is applied to logical AND circuit 280, together with the value of S. The output of the logical AND block 280 is used as the selection line for another MUX 282. The MUX 282 selects either S or N to be output as the updated value of the variable S.

Although this invention has been particularly shown and described with references to preferred embodiments thereof, those skilled in the art will understand that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (36)

1. A method for monitoring data traffic at a node in a network, the node facilitating the transfer of data on at least one link having at least one class of service, the data being transferred in data packets, and each data packet including at least one data unit, the method comprising: for each selected one of the at least one link and class of service, storing an updatable value in a storage device; incrementing a counter value in a counter at a rate determined by an allowable data transfer rate associated with the selected link and class of service; receiving a data packet; counting a number of data units in the data packet; calculating an adjusted updatable value by adjusting the updatable value in accordance with the counter value when the data packet is received and the number of data units in the data packet; comparing the adjusted updatable value with a predetermined threshold associated with the selected link and class of service; and marking the data packet with respect to the allowable data transfer rate, based on whether the adjusted updatable value exceeds the predetermined threshold.

2. The method of claim 1, wherein calculating an adjusted updatable value comprises calculating a difference between the updatable value and the counter value when the data packet is received.

3. The method of claim 2, wherein calculating an adjusted updatable value further comprises calculating a sum of the number of data units in the data packet and the difference between the updatable value and the counter value when the data packet is received.

4. The method of claim 3, further comprising updating the updatable value using the adjusted updatable value.

5. The method of claim 4, wherein updating the updatable value comprises calculating a sum of the adjusted updatable value and the number of data units in the data packet.

6. The method of claim 3, further comprising updating the updatable value using the adjusted updatable value.

7. The method of claim 1, wherein each link can include multiple classes of service.

8. The method of claim 7, wherein each class of service can have a unique allowable data transfer rate.

9. The method of claim 1, wherein the storage device stores a plurality of updatable values for each of a plurality of links.

10. The method of claim 1, wherein the storage device stores a plurality of updatable values for each of a plurality of classes of service.

11. The method of claim 1, wherein marking the data packet comprises setting a priority for discarding the data packet.

12. The method of claim 1, wherein marking the data packet comprises altering a discard eligibility value for the data packet.

13. The method of claim 1, further comprising associating the predetermined threshold with a priority for discarding a data packet.

14. The method of claim 1, further comprising associating the predetermined threshold with a discard eligibility value for the packet.

15. The method of claim 1, wherein the data unit is a data byte.

16. The method of claim 1, wherein the data unit is a plurality of data bytes.

17. The method of claim 1, wherein a link and class of service may be associated with multiple predetermined thresholds related to the allowable data transfer rate of the link and class of service, such that the data packet can be classified according to which of the predetermined thresholds are exceeded.
18. The method of claim 17, further comprising associating each predetermined threshold with a priority for discarding a data packet.

19. The method of claim 17, further comprising associating each predetermined threshold with a discard eligibility value for the packet.

20. The method of claim 17, further comprising associating each predetermined threshold with an updatable value.

21. The method of claim 1, further comprising: storing a second updatable value associated with the selected link and class of service in a second storage device; calculating a second adjusted updatable value in accordance with the counter value when the data packet is received and the number of data units in the data packet; comparing the second adjusted updatable value with a second predetermined threshold associated with the selected link and class of service; and marking the data packet with respect to the allowable data transfer rate, based on whether the second adjusted updatable value exceeds the second predetermined threshold.

22. The method of claim 21, further comprising, if the first adjusted updatable value exceeds the first predetermined threshold, updating the first updatable value by adding to the first updatable value the number of data units in the received data packet that do not cause the first predetermined threshold to be exceeded.

23. The method of claim 1, further comprising: if the first adjusted updatable value exceeds the first predetermined threshold, storing a second updatable value associated with the selected link and class of service in a second storage device; calculating a second adjusted updatable value in accordance with the counter value when the data packet is received and the number of data units in the data packet; comparing the second adjusted updatable value with a second predetermined threshold associated with the selected link and class of service; and marking the data packet with respect to the allowable data transfer rate, based on whether the second adjusted updatable value exceeds the second predetermined threshold.

24. The method of claim 1, further comprising receiving a data packet with zero data units, for updating the updatable value in the storage device.
25. An apparatus for monitoring data traffic at a node in a network, the node facilitating the transfer of data on at least one link having at least one class of service, the data being transferred in data packets, and each data packet including at least one data unit, said apparatus comprising: a storage device for storing an updatable value for each selected one of the at least one link and class of service; a counter for holding a counter value, the counter value being incremented at a rate determined by a maximum allowable data transfer rate associated with the selected link and class of service; an input unit for receiving a data packet; and a processor for (i) counting a number of data units in the data packet, (ii) calculating an adjusted updatable value by adjusting the updatable value in accordance with the counter value when the data packet is received and the number of data units in the data packet, (iii) comparing the adjusted updatable value with a predetermined threshold associated with the selected link and class of service, and (iv) marking the data packet with respect to the allowable data transfer rate, based on whether the adjusted updatable value exceeds the predetermined threshold.

26. The apparatus of claim 25, wherein the storage device stores a plurality of updatable values for each of a plurality of links.

27. The apparatus of claim 25, wherein the storage device stores a plurality of updatable values for each of a plurality of classes of service.

28. The apparatus of claim 25, wherein the storage device comprises an SRAM.

29. The apparatus of claim 25, wherein the processor marks the incoming data packet by setting a priority for discarding the data packet.

30. The apparatus of claim 25, wherein the predetermined threshold is associated with a priority for discarding a data packet.

31. The apparatus of claim 25, wherein the data unit is a data byte.

32. The apparatus of claim 25, wherein the data unit is a plurality of data bytes.

33. The apparatus of claim 25, wherein the processor updates the updatable value based on the number of data units in the data packet.

34. The apparatus of claim 25, wherein a link and class of service may be associated with multiple predetermined thresholds related to the maximum allowable data transfer rate of the link and class of service, such that the incoming data packet can be classified according to which of the predetermined thresholds are exceeded.

35. The apparatus of claim 25, further comprising: a second storage device for storing a second updatable value associated with the selected link and class of service; and a second processor for (i) calculating a second adjusted updatable value by adjusting the second updatable value in accordance with the counter value when the data packet is received and the number of data units in the data packet, (ii) comparing the second adjusted updatable value with a second predetermined threshold associated with the selected link and class of service, and (iii) marking the data packet with respect to the allowable data transfer rate, based on whether the second adjusted updatable value exceeds the second predetermined threshold.
36. The apparatus of claim 25, further comprising: a second storage device for storing a second updatable value associated with the selected link and class of service, if the first adjusted updatable value exceeds the first predetermined threshold; and a second processor for (i) calculating a second adjusted updatable value by adjusting the second updatable value in accordance with the counter value when the data packet is received and the number of data units in the data packet, (ii) comparing the second adjusted updatable value with a second predetermined threshold associated with the selected link and class of service, and (iii) marking the data packet with respect to the allowable data transfer rate, based on whether the second adjusted updatable value exceeds the second predetermined threshold.
MXPA/A/2000/009741A 1999-02-05 2000-10-04 Apparatus and method for monitoring data flow at a node on a network MXPA00009741A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09234082 1999-02-05

Publications (1)

Publication Number Publication Date
MXPA00009741A true MXPA00009741A (en) 2002-07-25


Similar Documents

Publication Publication Date Title
US6381649B1 (en) Data flow monitoring at a network node using periodically incremented counters for comparison to predetermined data flow thresholds
US6578083B2 (en) Method for monitoring data flow at a node on a network facilitating data transfer on at least one link having at least one class of service
EP1982481B1 (en) Method and system for improving traffic distribution across a communication network
CA2160820C (en) Method and apparatus for storing and retrieving routing information in a network node
US5949786A (en) Stochastic circuit identification in a multi-protocol network switch
US6363077B1 (en) Load balancing in link aggregation and trunking
JP3965283B2 (en) Packet transfer device with multiple types of packet control functions
US5898689A (en) Packet network interface
US5455825A (en) Tag-based scheduling system for digital communication switch
US20040008625A1 (en) Method for monitoring traffic in packet switched network
US7330481B2 (en) Highly channelized port polling in a telecommunications switch
CA2260255C (en) Addressable, high speed counter array
EP1515499A1 (en) System and method for routing network traffic
US7496109B1 (en) Method of maximizing bandwidth efficiency in a protocol processor
US6301260B1 (en) Device and method for multiplexing cells of asynchronous transmission mode
MXPA00009741A (en) Apparatus and method for monitoring data flow at a node on a network
US20020181463A1 (en) System and method for handling asynchronous transfer mode cells
JP4258996B2 (en) Scheduling device and cell communication device
EP3866418A1 (en) Method for an improved traffic shaping and/or management of ip traffic in a packet processing system, telecommunications network, network node or network element, program and computer program product
Bergstrom et al. An analysis tool for predicting performance in an all-optical time division multiplexing data switch
US6061353A (en) Communication system formed by ATM network and multiplexing device suitable for such a system
JPH10303917A (en) Traffic shaping means, atm exchange and atm-nic
JPH0267044A (en) Traffic detection circuit in packet communication
Hildebrandt et al. Performance issues for network-based image information systems
JPH1098473A (en) Method and device for controlling traffic