CN100486229C - Network apparatus and method for data processing in data network - Google Patents

Network apparatus and method for data processing in data network

Info

Publication number
CN100486229C
CN100486229C · CNB2006100549086A · CN200610054908A
Authority
CN
China
Prior art keywords
data
packet
network equipment
memory
port
Prior art date
Legal status
Expired - Fee Related
Application number
CNB2006100549086A
Other languages
Chinese (zh)
Other versions
CN1822572A (en)
Inventor
Brandon Carl Smith
Jun Cao
Current Assignee
Broadcom Corp
Zyray Wireless Inc
Original Assignee
Zyray Wireless Inc
Priority date
Filing date
Publication date
Application filed by Zyray Wireless Inc
Publication of CN1822572A
Application granted
Publication of CN100486229C
Status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A network device for processing data on a data network includes a plurality of ports configured to receive data from the data network and to send processed data to the data network through an egress port, a controller interface configured to communicate with an external controller, a memory management unit configured to store data in and retrieve data from a memory, and a metering unit configured to police a flow of the processed data to be sent to the egress port. The metering unit further includes programmable registers, in communication with the controller interface, that are configured to be programmed through controller signals sent from the external controller through the controller interface, such that all aspects of the flow of the processed data may be controlled by the external controller.

Description

Network device and method for processing data in a data network
Technical field
The present invention relates to a network device for processing data in a network and, more specifically, to a method of controlling the flow of data through the network device that improves processing speed and scalability.
Background art
A network may include one or more network devices, such as Ethernet switches, each of which includes several modules for processing information transmitted through the device. Specifically, the device may include a port interface module for sending and receiving data over the network; a memory management unit (MMU) for storing the data until it is forwarded or further processed; and a resolution module that allows the data to be checked and processed according to instructions. The resolution module includes switching functionality for determining to which destination port the data should be directed. One of the ports of the network device may be a CPU port, which enables the device to send information to, and receive information from, an external switching/routing control entity or CPU.
Most network devices operate as Ethernet switches, in which packets enter the device through multiple ports and switching and other processing are performed on the packets. Thereafter, the packets are transmitted through the MMU to one or more destination ports. The MMU shares the packet buffer among the different ports while providing guaranteed resources for each ingress port, each egress port, and each class-of-service queue.
According to current switching system architectures, eight class-of-service queues are associated with each egress port. To guarantee bandwidth for each port and queue, the device includes a scheduler that arbitrates among the class-of-service queues to provide minimum and maximum bandwidth guarantees. One implementation for guaranteeing the bandwidth of the queues associated with each port is to assign a fixed portion of the port's total bandwidth to each queue. In this way, a queue associated with a high-priority class of service is assigned more bandwidth than a queue associated with a low-priority class of service. The scheduler then processes the packets in each queue, for example in a round-robin fashion.
However, this implementation is inflexible. For example, when a queue is idle, the bandwidth assigned to that queue goes unused, even if another queue needs more bandwidth than it has been assigned. As a result, packets may be dropped from a queue that has exceeded its allocated bandwidth while the bandwidth of an idle queue remains unused. Improved metering and scheduling methods are therefore needed that process data at the required rates and provide the flexibility to utilize all resources of the network device.
Summary of the invention
According to one aspect of the present invention, a network device for processing data in a data network is provided, the network device comprising:
a plurality of ports, configured to receive data from the data network and to send processed data to the data network through an egress port;
a controller interface, configured to communicate with an external controller;
a memory management unit, configured to communicate with and control a memory external to the network device, storing data in the memory and retrieving data from the memory;
a metering unit, in communication with the plurality of ports, the controller interface, and the memory management unit, configured to police the flow of processed data to be sent to the egress port;
wherein the metering unit further comprises programmable registers, in communication with the controller interface, which can be programmed through controller signals sent from the external controller through the controller interface, such that all aspects of the flow of processed data can be controlled by the external controller.
Preferably, the programmable registers comprise eight programmable registers.
Preferably, the metering unit marks the packets of the processed data with a color, so that the flow of processed data is policed based on the signals from the controller.
Preferably, the metering unit determines the color of an incoming packet and sets the color of the outgoing packet based on values in the incoming packet.
Preferably, the metering unit polices the data flow through a series of leaky buckets and decrements a leaky bucket based on the color with which a packet is marked.
Preferably, the decrement amount is determined by the size of the packet.
Preferably, the series of leaky buckets comprises at least 512K leaky buckets.
According to another aspect of the present invention, a method of processing data in a network device is provided, the method comprising the steps of:
receiving controller signals sent from an external controller through a controller interface;
programming programmable registers in a metering unit based on the received controller signals;
receiving data at one port of a plurality of ports;
storing the received data, through a memory management unit, in a memory external to the network device;
determining attributes of the received data and determining an egress port for the received data;
retrieving the received data from the memory and, if needed, modifying the received data based on the determined attributes to generate processed data;
sending the processed data through the egress port as directed by the metering unit;
wherein the programmable registers of the metering unit determine all aspects of the flow of processed data sent to the egress port.
Preferably, the programming step comprises programming eight programmable registers based on the received controller signals.
Preferably, the method further comprises marking the packets of the processed data with a color, so that the flow of processed data is policed based on the controller signals.
Preferably, the method further comprises determining the color of an incoming packet and setting the color of the outgoing packet based on values in the incoming packet.
Preferably, the method further comprises policing the data flow through a series of leaky buckets and decrementing a leaky bucket based on the color with which a packet is marked.
Preferably, the decrement amount is determined by the size of the packet.
Preferably, policing the data flow comprises policing the data flow through a series of at least 512K leaky buckets.
According to a further aspect of the present invention, a network device for processing data is provided, the network device comprising:
receiving means for receiving controller signals sent from an external controller through a controller interface;
programming means for programming programmable registers in a metering unit based on the received controller signals;
port means for receiving data and for sending processed data through an egress port;
memory means for storing the data received from parsing means, through a memory management unit, in a memory external to the network device, and for retrieving the data therefrom;
modifying means for modifying the retrieved, stored data based on determined attributes to generate processed data;
wherein the programmable registers determine all aspects of the flow of processed data sent to the egress port.
Preferably, the programming means comprises means for programming eight programmable registers based on the received controller signals.
Preferably, the device further comprises marking means for marking the packets of the processed data with a color, so that the flow of processed data can be policed based on the controller signals.
Preferably, the device further comprises determining means for determining the color of an incoming packet and setting the color of the outgoing packet based on values in the incoming packet.
Preferably, the device further comprises means for policing the data flow through a series of leaky buckets and means for decrementing a leaky bucket based on the color with which a packet is marked.
Description of drawings
The invention is further described below in conjunction with the drawings and embodiments, in which:
Fig. 1 is a schematic diagram of a network device according to an embodiment of the invention;
Fig. 2 is a schematic diagram of the ports through which the network device communicates, according to an embodiment of the invention;
Fig. 3a is a schematic diagram of the structure of the shared memory, external to the network device, used by the network device;
Fig. 3b is a schematic diagram of the cell buffer pool of the shared memory architecture of Fig. 3a;
Fig. 4 is a schematic diagram of the buffer management mechanism used by the memory management unit to impose resource allocation limits and thereby ensure fair access to resources;
Fig. 5 is a schematic diagram of a two-stage parser according to certain embodiments of the invention;
Fig. 6 is a schematic diagram of another parser, used with the interconnect port, according to certain embodiments of the invention;
Fig. 7 is a schematic diagram of a result matcher according to certain embodiments of the invention;
Fig. 8 is a schematic diagram of the configuration of an egress-port arbiter used in the present invention;
Fig. 9 is a schematic diagram of an implementation of the minimum and maximum bandwidth metering mechanisms according to certain embodiments of the invention;
Fig. 10a is a schematic diagram of the mapping of flow IDs to buckets;
Fig. 10b is a schematic diagram of a generic bucket;
Fig. 10c is a schematic diagram of a bucket using the timestamp method;
Fig. 11 is a diagram comparing the current-timestamp calculation result with the token count field according to an embodiment of the invention.
Detailed description of the embodiments
Fig. 1 is a schematic diagram of a network device, such as a switching chip, in which an embodiment of the invention is implemented. Device 100 includes ingress/egress port modules 112 and 113, a memory management unit (MMU) 115, a parser 130, and a search engine 120. The ingress/egress modules buffer data and forward the data to the parser. The parser 130 parses the received data and performs lookups based on the parsed data using the search engine 120. The primary function of the memory management unit 115 is to manage the cell buffering and packet-pointer resources efficiently and in a predictable manner, even under severe congestion. Through these modules, packets may be modified and sent to the appropriate egress port.
According to several embodiments of the invention, the device 100 may also include one internal fabric high-speed port, for example a HiGig™ port, or high-speed port 108, one or more external Ethernet ports 109a-109x, and a CPU port 110. The high-speed port 108 is used to interconnect various network devices in a system to form an internal switching fabric for transporting packets between an external source port and one or more external destination ports. As such, the high-speed port 108 is not visible outside a system that includes multiple interconnected network devices. The CPU port 110 is used to send information to, and receive information from, an external switching/routing control entity or CPU. According to an embodiment of the invention, the CPU port 110 may be regarded as one of the external Ethernet ports 109a-109x. The device 100 is connected to an external/off-chip CPU through a CPU processing module 111, such as a CMIC, which interfaces with a PCI data bus that connects the device 100 to the external CPU.
In addition, the search engine module 120 may be composed of additional search engine modules 122, 124, and 126 that perform the particular lookups used in the characterization and modification of the data processed by the network device 100. Likewise, the parser 130 also includes additional modules for parsing the data received from the internal fabric high-speed port 134 and from the other ports 138, as well as other modules 132 and 136 for forwarding data back to the ports of the network device. The high-speed parser 134 and the two-stage parser 138 are described in more detail below.
Network traffic enters and exits the device 100 through the external Ethernet ports 109a-109x. Specifically, traffic in the device 100 is routed from an external Ethernet source port to one or more unique destination Ethernet ports. In one embodiment of the invention, the device 100 supports twelve physical Ethernet ports 109, each of which can operate at 10/100/1000 Mbps, and one high-speed port 108, which can operate at 10 Gbps or 12 Gbps.
The structure of the physical ports 109 is further illustrated in Fig. 2. A series of serializer/deserializer modules 103 send and receive data, with the data received at each port being managed by one of the port managers 102A-L. The port managers are supported by a timing generator 104 and a bus agent 105 that enable their operation. Data received and transmitted are recorded in a port information base so that the traffic can be monitored. It should be noted that the high-speed port 108 has similar functionality but requires fewer components, since only one port needs to be managed.
In one embodiment of the invention, the device 100 uses a shared-memory architecture, shown in Figs. 3a-3b, in which the MMU 115 shares the packet buffer among the different ports and provides guaranteed resources for each ingress port, each egress port, and each class-of-service queue associated with each egress port. Fig. 3a is a schematic diagram of the shared-memory architecture of the present invention. Specifically, the memory resources of the device 100 include a cell buffer pool (CBP) memory 302 and a transaction queue (XQ) memory 304. The CBP memory 302 is an off-chip resource that, in some embodiments, is made up of four DRAM chips 306a-306d. According to an embodiment of the invention, each DRAM chip has a capacity of 288 Mbits, so the total capacity of the CBP memory 302 is 144 Mbytes of raw storage. As shown in Fig. 3b, the CBP memory 302 is divided into 256K cells 308a-308x of 576 bytes each, where each cell includes a 32-byte header buffer 310, up to 512 bytes of packet data 312, and 32 bytes of reserved space 314. As such, each incoming packet occupies at least one full 576-byte cell. Therefore, when an incoming packet consists of a 64-byte frame, 576 bytes of space are reserved for it, even though only 64 of those 576 bytes are used by the frame.
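As an illustration of the cell layout just described, the following minimal C sketch models one 576-byte CBP cell with a 32-byte header buffer, up to 512 bytes of packet data, and 32 bytes of reserved space. The type and field names are illustrative assumptions, not taken from the patent.

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical layout of one 576-byte CBP cell (names are illustrative). */
typedef struct {
    uint8_t header_buffer[32];   /* 32-byte header buffer 310          */
    uint8_t packet_data[512];    /* up to 512 bytes of packet data 312 */
    uint8_t reserved[32];        /* 32-byte reserved space 314         */
} cbp_cell_t;

int main(void) {
    /* Each incoming packet occupies at least one full 576-byte cell,  */
    /* so even a 64-byte frame reserves 576 bytes of CBP memory.       */
    assert(sizeof(cbp_cell_t) == 576);
    return 0;
}
```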
Referring to Fig. 3a, the XQ memory 304 includes a list of packet pointers 316a-316x that point into the CBP memory 302, where distinct XQ pointers 316 are associated with each port. A cell count for the CBP memory 302 and a packet count for the XQ memory 304 are tracked on a per-ingress-port, per-egress-port, and per-class-of-service basis. As such, the device 100 can provide resource guarantees on a per-cell and/or per-packet basis.
Once a packet enters the device 100 through a source port 109, the packet is transferred to the parser for processing. During processing, the packet shares the system resources 302 and 304 with packets on every ingress and egress port. In specific embodiments, two separate 64-byte bursts of a packet are forwarded to the MMU, from the local ports and from the high-speed port. Fig. 4 is a schematic diagram of the buffer management mechanism used by the MMU 115 to impose resource allocation limits and thereby ensure fair access to resources. The MMU 115 includes an ingress backpressure mechanism 404, a head-of-line (HOL) mechanism 406, and a weighted random early detection (WRED) mechanism 408. The ingress backpressure mechanism 404 supports lossless behavior and manages the buffer resources fairly across all ingress ports. The head-of-line mechanism 406 supports access to the buffer resources while optimizing the throughput of the system. The weighted random early detection mechanism 408 improves overall network throughput.
The ingress backpressure mechanism 404 uses packet or cell counters to track the number of packets or cells used by each ingress port. The ingress backpressure mechanism 404 includes registers for a set of eight individually configurable thresholds and registers used to specify which of the eight thresholds applies to a given ingress port in the system. The set of thresholds includes a limit threshold 412, a discard-limit threshold 414, and a reset-limit threshold 416. If the counter associated with an ingress port's packet/cell usage rises above the discard-limit threshold 414, packets arriving at that ingress port are dropped. Based on the registers used to track the number of cells/packets, a pause flow-control mechanism can be used to stop traffic from arriving at an ingress port whose usage has exceeded its fair share of the buffer resources, thereby stopping the traffic from the offending ingress port and relieving the congestion caused by that port.
Specifically, each ingress port is continually evaluated against the ingress backpressure counters associated with its set of thresholds to determine whether it is in an ingress backpressure state. When the ingress port is in the ingress backpressure state, pause flow-control frames with a timer value of (0xFFFF) are periodically sent out of that ingress port. When the ingress port is no longer in the ingress backpressure state, a pause flow-control frame with a timer value of (0x00) is sent out of that ingress port, and traffic is allowed to flow again. If an ingress port is not currently in the ingress backpressure state and its packet counter rises above the limit threshold 412, the port transitions into the ingress backpressure state. If the ingress port is in the ingress backpressure state and its packet counter falls below the reset-limit threshold 416, the port exits the backpressure state.
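A minimal sketch of the ingress backpressure behavior described above, assuming a simple per-port cell counter; the threshold values, the pause-frame helper, and the structure and function names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative per-ingress-port backpressure tracking (names assumed). */
typedef struct {
    uint32_t cell_count;     /* cells currently used by this ingress port */
    uint32_t limit;          /* limit threshold 412                       */
    uint32_t discard_limit;  /* discard-limit threshold 414               */
    uint32_t reset_limit;    /* reset-limit threshold 416                 */
    bool     backpressure;   /* true while the port is in backpressure    */
} ingress_port_t;

static void send_pause_frame(int port, uint16_t timer) {
    printf("port %d: pause frame, timer=0x%04X\n", port, timer);
}

/* Called whenever the port's cell counter changes; returns true when an
 * arriving packet should be dropped (usage above the discard limit). */
static bool update_backpressure(int port_id, ingress_port_t *p) {
    if (!p->backpressure && p->cell_count > p->limit) {
        p->backpressure = true;              /* enter backpressure state  */
        send_pause_frame(port_id, 0xFFFF);   /* ask the link peer to stop */
    } else if (p->backpressure && p->cell_count < p->reset_limit) {
        p->backpressure = false;             /* leave backpressure state  */
        send_pause_frame(port_id, 0x0000);   /* allow traffic again       */
    }
    return p->cell_count > p->discard_limit;
}

int main(void) {
    ingress_port_t p = { .limit = 100, .discard_limit = 150, .reset_limit = 80 };
    p.cell_count = 120;
    printf("drop=%d backpressure=%d\n",
           (int)update_backpressure(0, &p), (int)p.backpressure);
    return 0;
}
```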
The head-of-line mechanism 406 supports fair access to the buffer resources while optimizing the throughput of the system. The head-of-line mechanism 406 relies on dropping packets to manage the buffer resources and improve the overall throughput of the system. According to an embodiment of the invention, the head-of-line mechanism 406 uses egress counters and predefined thresholds to track buffer usage per egress port and per class of service, and it then decides whether to drop any newly arriving packets at the ingress ports that are destined for a particular oversubscribed egress port/class-of-service queue. The head-of-line mechanism 406 supports different thresholds depending on the color of the newly arriving packet. Packets may be marked with a color based on the metering and marking operations performed in an ingress module, and the MMU takes different actions depending on the color of the packet.
According to an embodiment of the invention, the head-of-line mechanism 406 can be configured and operated independently on every class-of-service queue and on every port, including the CPU port. The head-of-line mechanism 406 uses counters to track the usage of the XQ memory 304 and the CBP memory 302, and uses thresholds to support a static allocation of the CBP memory 302 and a dynamic allocation of the XQ memory 304. A discard threshold 422 is defined for all cells in the CBP memory 302, regardless of the color marking. When the cell counter associated with a port reaches the discard threshold 422, the port transitions into a head-of-line state. Thereafter, the port exits the head-of-line state when its cell counter falls below the reset-limit threshold 424.
For the XQ memory 304, a fixed allocation of XQ buffers for each class-of-service queue is defined by XQ entry values 430a-430h. Each XQ entry value 430a-430h defines how many buffer entries should be reserved for the associated queue. For example, if 100 bytes of XQ memory are assigned to a port, the first four class-of-service queues, associated with XQ entries 430a-430d, may each be assigned a value of 10 bytes, and the last four queues, associated with XQ entries 430e-430h, may each be assigned a value of 5 bytes.
According to an embodiment of the invention, even if a queue has not used all of the buffer entries reserved for it according to its associated XQ entry value, the head-of-line mechanism 406 does not assign the unused buffers to another queue. Nevertheless, the remaining unassigned 40 bytes of XQ buffers for the port in the example above may be shared among all of the class-of-service queues associated with the port. The limit on how much of the shared XQ buffer pool a particular class-of-service queue may consume is set by an XQ set-limit threshold 432. As such, the set-limit threshold 432 may be used to define the maximum number of buffers that one queue may use and to prevent a single queue from consuming all of the available XQ buffers. To ensure that the sum of the XQ entry values 430a-430h does not exceed the total number of XQ buffers available to the port, and to ensure that each class-of-service queue has access to the quota of XQ buffers allocated by its entry value 430, the available pool of XQ buffers for each port is tracked using a port dynamic count register 434, which tracks the number of shared XQ buffers still available to the port. The initial value of the dynamic count register 434 is the total number of XQ buffers associated with the port minus the sum of the XQ entry values 430a-430h. The dynamic count register 434 is decremented when a class-of-service queue consumes an available XQ buffer after exceeding the quota defined by its XQ entry value 430. Conversely, the dynamic count register 434 is incremented when a class-of-service queue releases an XQ buffer after having exceeded the quota defined by its XQ entry value 430.
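The per-port accounting for the shared XQ pool can be sketched as below. This is an assumed simplification in which each queue first consumes its reserved entries and only then draws from the shared dynamic pool tracked by the port register; the structure and function names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_QUEUES 8

/* Illustrative model of XQ buffer accounting for one egress port. */
typedef struct {
    uint32_t entry_value[NUM_QUEUES]; /* reserved entries per queue (430a-h) */
    uint32_t in_use[NUM_QUEUES];      /* entries currently used per queue    */
    uint32_t set_limit[NUM_QUEUES];   /* per-queue set-limit threshold 432   */
    uint32_t dynamic_count;           /* shared-pool register 434            */
} xq_port_t;

/* A queue asks for one more XQ entry; returns true if granted. */
static bool xq_request(xq_port_t *p, int q) {
    if (p->in_use[q] < p->entry_value[q]) {  /* within reserved quota        */
        p->in_use[q]++;
        return true;
    }
    if (p->in_use[q] >= p->set_limit[q])     /* queue hit its set limit      */
        return false;
    if (p->dynamic_count == 0)               /* shared pool exhausted        */
        return false;
    p->dynamic_count--;                      /* draw from the shared pool    */
    p->in_use[q]++;
    return true;
}

/* A queue releases one XQ entry. */
static void xq_release(xq_port_t *p, int q) {
    if (p->in_use[q] > p->entry_value[q])    /* entry came from shared pool  */
        p->dynamic_count++;
    if (p->in_use[q] > 0)
        p->in_use[q]--;
}

int main(void) {
    xq_port_t port = { .entry_value = {10,10,10,10,5,5,5,5},
                       .set_limit   = {40,40,40,40,40,40,40,40},
                       .dynamic_count = 40 };
    (void)xq_request(&port, 0);
    xq_release(&port, 0);
    return 0;
}
```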
When a queue requests an XQ buffer, the head-of-line mechanism 406 determines whether the total number of entries used by the queue is less than the queue's XQ entry value 430 and grants the request if the entries in use are below that XQ entry value 430. If, however, the entries in use are greater than the queue's XQ entry value 430, the head-of-line mechanism 406 determines whether the amount requested is less than the total available buffers or less than the maximum allowed for the queue by the associated set-limit threshold 432. The set-limit threshold 432 is, in essence, a discard threshold for the queue, regardless of the color marking of the packet. As such, when the packet count of the queue reaches the set-limit threshold 432, the queue/port enters the head-of-line state. When the head-of-line mechanism 406 detects the head-of-line state, it sends a status update, and packets destined for the blocked port are dropped.
However, due to latency, there may still be packets in transit between the MMU 115 and the port when the head-of-line mechanism 406 sends the status update. In that case, packet drops may occur at the MMU 115 because of the head-of-line state. In one embodiment of the invention, the dynamic pool of XQ pointers is reduced by a predefined amount because of the pipelining of packets. As such, when the number of available XQ pointers becomes equal to or less than the predefined amount, the port transitions into the head-of-line state, and a status update is sent by the MMU 115 to the port, thereby reducing the number of packets that might be dropped by the MMU 115. To exit the head-of-line state, the XQ packet count of the queue must fall below the reset-limit threshold 436.
For the XQ counters of a particular class-of-service queue, it is possible for packets to be dropped even though the set-limit threshold 432 has not been reached, if the XQ resources of the port are oversubscribed by the other class-of-service queues. In one embodiment of the invention, intermediate discard thresholds 438 and 439 may also be defined for packets carrying specific color markings, where each intermediate discard threshold defines when packets of a particular color should be dropped. For example, intermediate discard threshold 438 may define when packets marked yellow should be dropped, and intermediate discard threshold 439 may define when packets marked red should be dropped. According to an embodiment of the invention, packets may be marked green, yellow, or red depending on their assigned priorities. To ensure that the packets of each color are processed in each queue in proportion to their color assignment, one embodiment of the invention includes a virtual maximum threshold 440. The virtual maximum threshold 440 is equal to the number of unassigned and available buffers divided by the sum of the number of queues and the number of buffers currently in use. The virtual maximum threshold 440 ensures that the packets of each color are processed in a relative proportion. Therefore, if the number of available unassigned buffers is less than the set-limit threshold 432 of a particular queue and the queue requests access to all of the available unassigned buffers, the head-of-line mechanism 406 calculates the virtual maximum threshold 440 for the queue and processes a proportional amount of packets of each color relative to the ratio defined for each color.
To conserve register space, the XQ thresholds may be expressed in a compressed form, in which each unit represents a group of XQ entries. The size of the group depends on the number of XQ buffers associated with a particular egress port/class-of-service queue.
The weighted random early detection mechanism 408 is a queue management mechanism that preemptively drops packets, based on a probabilistic algorithm, before the XQ buffers 304 are exhausted. The weighted random early detection mechanism 408 can therefore be used to optimize the throughput of the overall network. The weighted random early detection mechanism 408 includes an averaging statistic that is used to track the length of each queue and to drop packets according to a drop profile defined for the queue. The drop profile defines a drop probability for a given specific average queue size. According to an embodiment of the invention, the weighted random early detection mechanism 408 may define separate profiles per class-of-service queue and per packet.
As shown in Fig. 1, the MMU 115 receives packets for storage from the parser 130. As discussed above, the parser 130 includes a two-stage parser, which is illustrated in Fig. 5. As noted above, data are received at the ports 501 of the network device. Data may also be received through the CMIC 502, in which case the data pass through an ingress CMIC interface 503. The interface converts the CMIC data from the P-bus format to the ingress data format. In one embodiment, the data are converted from a 45-bit format to a 168-bit format, where the latter format includes 128 bits of data, 16 bits of control, and possibly a 24-bit high-speed header. The data are thereafter sent to the ingress arbiter 504 in 64-bit bursts.
The ingress arbiter 504 receives data from the ports 501 and from the ingress CMIC interface 503 and multiplexes those inputs based on a time-division multiplexing arbitration scheme. Thereafter, the data are sent to the MMU 510, where any high-speed header is removed and the format is set to the MMU interface format. Packet attributes, such as end-to-end, Interrupted Bernoulli Process (IBP), or head-of-line (HOL) packets, are then checked. In addition, the first 128 bytes of data are snooped, and the high-speed header is passed to the parser ASM 525. If the burst of data received contains an end marker, the CRC result is sent to the result matcher 515. Also, the packet length is estimated from the burst lengths, and a 16-bit packet ID is generated for debugging purposes.
The parser ASM 525 converts the 64-byte data bursts, at 4 cycles per burst, into 128-byte bursts, at 8 cycles per burst. The 128-byte burst data are forwarded to the tunnel parser 530 and the parser FIFO 528 simultaneously to maintain the same packet order. The tunnel parser 530 determines whether any type of tunnel encapsulation, including MPLS and IP tunneling, is being employed. In addition, the tunnel parser checks the outer and inner tags. Through the parsing process, the source IP address (SIP) is provided for subnet-based VLAN classification, where SIP parsing occurs if the packet is an Address Resolution Protocol (ARP), Reverse ARP (RARP), or IP packet. A trunk port grid ID is also constructed based on the source trunk map table, unless there is no trunking or unless the trunk ID can be obtained from the high-speed header.
The tunnel parser 530 works with the tunnel checker 531. The tunnel checker checks the checksum of the IP header and the characteristics of UDP tunneling and IPv6-over-IPv4 packets. The tunnel parser 530 uses the search engine 520 to determine the tunnel type through predefined tables.
The parser FIFO 528 stores 128 bytes of packet headers and 12 bytes of high-speed headers, which are parsed again by the deep parser 540. The header bytes are stored while the search engine completes a lookup and prepares for the deeper lookup. Other attributes, such as the packet length, the high-speed header status, and the packet ID, are also maintained. The deep parser 540 provides three different types of data, including search results from the search engine 520 that are "flow-through", inner parsing results, and high-speed module headers. The packet type is determined and sent to the search engine. The deep parser 540 reads the data from the parser FIFO and parses the predefined fields. The search engine provides lookup results based on the values passed to it, and the packet ID is checked to maintain packet order.
The deep parser 540 also uses the protocol checker 541 to check the inner IP header checksum, to check for denial-of-service attack attributes and errors in the high-speed module header, and to perform a martian check. The deep parser also works with the field-processor parser 542 to parse predefined fields and user-defined fields. The predefined fields are received from the deep parser. These fields include the MAC destination address, MAC source address, inner and outer tags, Ethertype, IP destination and source addresses, type of service, IPP, IP flags, TDS, TSS, TTL, TCP flags, and flow labels. User-defined fields are also parseable, up to a maximum length of 128 bits.
As mentioned above, the data that receive on the data that receive on the high-speed port and other local port are to separate individual processing.As shown in Figure 1, high-speed port 108 has own buffer, and data flow in its own analyzer 134 from this port.The more details of high speed analysis device as shown in Figure 6, its structure is similar to the device of secondary analysis shown in Fig. 5, but has some difference.The data that high-speed port 601 receives are transferred to high-speed port assembler 604.This assembler receives these data and high speed header with the form of 64 byte bursts, and is similar to the form that is used for local port.Described data are sent to MMU610 and do not have described high speed header with the MMU interface format.
The first 128 bytes of the data are snooped and sent, along with the high-speed header, to the deep parser 640. Similarly to the two-stage parser, end-to-end information is checked, and the parsed results are sent in a sideband. Also similarly, the CRC and the packet length are checked by the result matcher 615. In addition, a 16-bit packet ID is generated for use in debugging and tracking the flow of the packet.
The high-speed version of the deep parser 640 is a subset of the two-stage deep parser 540 and performs similar functions. However, no information passes through the search engine 620, it cannot skip the MPLS header and parses only the payload, and it does not send deep data to the search engine. Functionally, the high-speed version of the FP parser 642 is the same as the FP parser 542 discussed above.
The result matcher is shown in more detail in Fig. 7. It should be noted that the result matcher may be used in common, shared by several parsers, or each parser may use its own result matcher. In the embodiment shown in the figure, both types of ports 710 and 720 receive data and forward certain values to the result checker through the actions of the ingress assembler 715 and the ingress arbiter 725. These values include the port number, the presence of EOF, the CRC, and the packet length. The result matcher operates as a series of FIFOs that match the search results obtained through the search engine 705. The tag and MIB events are matched with the packet length and the CRC status on a per-port basis. Search results are provided every 4 cycles for the network ports and the high-speed port. The structure allows the search results to be stored in the result matcher of each port and, when the search delay is shorter than the incoming packet time, allows the matcher to wait for the end of the packet before presenting the result.
After the received data are parsed and evaluated, a forwarding decision is made based on the received information. The forwarding decision is generally the determination of which destination port the packet should be sent to, although the decision may also be to drop the packet or to forward the packet to the CPU or another controller through the CMIC 111. At the egress port, the packet is modified based on the parsing and evaluation performed by the network device. If the egress port is a high-speed port, this modification includes tag modification, header information modification, or the addition of a module header. The modification is performed on a per-cell basis to avoid delays in forwarding the packet data.
Fig. 8 is a schematic diagram of the configuration of the egress-port arbitration used in the present invention. As shown in Fig. 8, the MMU 115 also includes a scheduler 802 that provides arbitration across the eight class-of-service queues 804a-804h associated with each egress port, thereby providing minimum and maximum bandwidth guarantees. It should be noted that while eight classes of service are discussed here, other class-of-service models are also supported. The scheduler 802 is integrated with a set of minimum and maximum metering mechanisms 806a-806h, each of which monitors the traffic of each class of service and the traffic of each egress port. The metering mechanisms 806a-806h support traffic-shaping functions and guarantee minimum bandwidth requirements on a per-class-of-service-queue and/or per-egress-port basis, where the scheduling decisions of the scheduler 802 are configured, together with the traffic-shaping mechanisms 806a-806h, through a set of control masks that modify how the scheduler 802 uses the traffic-shaping mechanisms 806a-806h.
As shown in Fig. 8, the minimum and maximum metering mechanisms 806a-806h monitor traffic flows on a per-class-of-service-queue basis and on a per-egress-port basis. The minimum and maximum bandwidth meters 806a-806h feed state information to the scheduler 802, which responds by modifying its service order across the class-of-service queues 804. The network device 100 therefore enables system vendors to implement a number of service models by configuring the class-of-service queues 804 to support explicit minimum and maximum bandwidth guarantees. In one embodiment of the invention, the metering mechanisms 806a-806h monitor traffic flow on a class-of-service-queue basis, provide state information on whether the flow of a class-of-service queue is above or below its minimum and maximum bandwidth requirements, and transmit that information to the scheduler 802, which uses the metering information to modify its scheduling decisions. As such, the metering mechanisms 806a-806h help divide the class-of-service queues 804 into a set of queues that have not met their minimum bandwidth requirements, a set of queues that have met their minimum bandwidth but not their maximum bandwidth requirements, and a set of queues that have exceeded their maximum bandwidth requirements. If a queue belongs to the set of queues that have not met their minimum bandwidth requirement and the queue has packets, the scheduler 802 services the queue according to the configured scheduling discipline. If a queue belongs to the set of queues that have met their minimum bandwidth requirement but have not exceeded their maximum bandwidth requirement and the queue has packets, the scheduler 802 services the queue according to the configured scheduling discipline. If a queue belongs to the set of queues that have exceeded their maximum bandwidth requirement, or if the queue is empty, the scheduler 802 does not service the queue. A simple sketch of this grouping follows.
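The sketch below illustrates one way the scheduler could use the meter state to divide the queues into the three groups described above and to prefer queues that have not yet met their minimum bandwidth. The structure, flag names, and the strict two-pass selection are assumptions for illustration, not the patent's scheduling discipline.

```c
#include <stdbool.h>

#define NUM_COS 8

/* Illustrative per-queue meter state reported to the scheduler. */
typedef struct {
    bool min_met;   /* flag from the minimum bandwidth meter */
    bool max_hit;   /* flag from the maximum bandwidth meter */
    bool has_pkts;  /* queue is non-empty                    */
} cos_state_t;

typedef enum { BELOW_MIN, BETWEEN_MIN_MAX, ABOVE_MAX } cos_group_t;

static cos_group_t classify(const cos_state_t *q) {
    if (q->max_hit)  return ABOVE_MAX;        /* not serviced                 */
    if (!q->min_met) return BELOW_MIN;        /* serviced with priority       */
    return BETWEEN_MIN_MAX;                   /* serviced per scheduling rule */
}

/* Pick the next queue to service: below-minimum queues first, then queues
 * between minimum and maximum; queues above their maximum (or empty
 * queues) are skipped. Returns -1 if nothing is eligible. */
static int next_queue(const cos_state_t q[NUM_COS]) {
    for (int pass = 0; pass < 2; pass++) {
        cos_group_t want = (pass == 0) ? BELOW_MIN : BETWEEN_MIN_MAX;
        for (int i = 0; i < NUM_COS; i++)
            if (q[i].has_pkts && classify(&q[i]) == want)
                return i;
    }
    return -1;
}

int main(void) {
    cos_state_t q[NUM_COS] = { [0] = { .min_met = true,  .has_pkts = true },
                               [3] = { .min_met = false, .has_pkts = true } };
    return next_queue(q); /* queue 3 is below its minimum, so it is served first */
}
```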
As shown in Fig. 9, the minimum and maximum bandwidth metering mechanisms 806a-806h may be implemented using a simple leaky-bucket mechanism that tracks whether a class-of-service queue 804 has consumed its minimum or maximum bandwidth. The range of the minimum and maximum bandwidth settings for each class of service 804 is between 64 Kbps and 16 Gbps, in 64 Kbps increments. The leaky-bucket mechanism has a configurable number of tokens that "leak" out of buckets 902a-902h, each of which is associated, at a configurable rate, with one of the queues 804a-804h. In metering the minimum bandwidth of a class-of-service queue 804, as packets enter the class-of-service queue 804, a number of tokens proportional to the size of the packet is added to the corresponding bucket 902, which is capped by a bucket high threshold 904. The leaky-bucket mechanism includes a refresh-update interface and a minimum bandwidth 906 that defines how many tokens are removed per refresh time unit. A minimum threshold 908 indicates whether the flow has satisfied at least its minimum rate, and a fill threshold 910 indicates how many tokens are in the leaky bucket 902. When the fill threshold 910 rises above the minimum threshold 908, a flag indicating that the flow has satisfied its minimum bandwidth requirement is set to true. When the fill threshold 910 falls below the minimum threshold, the flag is set to false.
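A minimal sketch of the minimum-bandwidth bucket behavior just described, assuming token units equal to bytes; the refresh amount, field names, and flag handling are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative minimum-bandwidth leaky bucket (one of 902a-902h). */
typedef struct {
    uint64_t fill;           /* fill threshold 910: tokens in the bucket  */
    uint64_t high_thresh;    /* bucket high threshold 904 (ceiling)       */
    uint64_t min_thresh;     /* minimum threshold 908                     */
    uint64_t leak_per_tick;  /* tokens removed per refresh interval (906) */
    bool     min_met;        /* flag: minimum bandwidth satisfied         */
} min_bucket_t;

/* Packet enters the class-of-service queue: add tokens in proportion
 * to the packet size, capped at the bucket high threshold. */
static void on_packet(min_bucket_t *b, uint32_t pkt_bytes) {
    b->fill += pkt_bytes;
    if (b->fill > b->high_thresh)
        b->fill = b->high_thresh;
    if (b->fill > b->min_thresh)
        b->min_met = true;
}

/* Periodic refresh: remove tokens at the configured minimum rate. */
static void on_refresh(min_bucket_t *b) {
    b->fill = (b->fill > b->leak_per_tick) ? b->fill - b->leak_per_tick : 0;
    if (b->fill < b->min_thresh)
        b->min_met = false;
}

int main(void) {
    min_bucket_t b = { .high_thresh = 4096, .min_thresh = 512, .leak_per_tick = 64 };
    on_packet(&b, 1500);
    on_refresh(&b);
    return b.min_met ? 0 : 1;
}
```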
The minimum threshold 908 affects the timescale over which the minimum bandwidth metering mechanism 806 operates. If the minimum threshold 908 is set at a very low level, the class-of-service queue 804 will quickly flag that its minimum bandwidth has been met. This reduces the time during which the queue 804 is classified in the set of queues that have not met their minimum bandwidth requirement, and reduces the period during which the queue receives priority treatment from the scheduler 802. The high threshold 904 affects how much credit can be built up after a class-of-service queue has met its minimum bandwidth 906. A large high threshold 904 reduces the time during which the queue is classified in the set of queues that have not met their minimum bandwidth requirement, and reduces the period during which the queue receives priority treatment from the scheduler 802.
After the metering mechanisms 806a-806h indicate that the specified maximum bandwidth has been exceeded, that is, the high threshold 904 has been exceeded, the scheduler 802 stops servicing the queue, and the queue is classified in the set of queues that have exceeded their maximum bandwidth requirement. A flag is then set to indicate that the queue has exceeded its maximum bandwidth. Thereafter, the queue receives service from the scheduler 802 only when its fill threshold falls below the high threshold 904 and the flag indicating that it has exceeded its maximum bandwidth is reset. The metering mechanism 806i is used to indicate that the maximum bandwidth specified for a port has been exceeded, and it operates in the same manner as the mechanisms 806a-806h when the maximum bandwidth has been exceeded. According to an embodiment of the invention, the maximum metering of a queue and of a port generally affects whether the queue 804 or the port is included in the scheduling arbitration. As such, the maximum metering mechanism only has a traffic-limiting effect on the scheduler 802.
On the other hand, the minimum metering of the class-of-service queues 804 has a more complex interaction with the scheduler 802. In one embodiment of the invention, the scheduler 802 supports a variety of scheduling disciplines that mimic the bandwidth-sharing behavior of a weighted fair queuing scheme. The weighted fair queuing scheme is a weighted version of the packet-based fair queuing scheme, which is defined as a method of providing "bit-based round-robin" scheduling of packets. As such, packets may be scheduled for access to an egress port based on their delivery time, computed as if the scheduler were capable of providing bit-based round-robin service. A relative weight field affects the specifics of how the scheduler uses the minimum metering mechanism as the scheduler attempts to provide the minimum bandwidth guarantee.
As noted above, the present invention uses 512K buckets in an external memory. The traditional approach of background-filling the buckets does not scale to this number of buckets with the memory bandwidth available. The fastest rate at which the system can cycle through the external memory and background-fill every bucket is once every 25 ms. Because the longer the period between background fills of a bucket, the less accurate the packet marking becomes, background filling of the buckets is not feasible here. To support 512K buckets, a timestamp method can be used to increase the accuracy of the metering process. However, the timestamp method also increases the computational requirements of the design.
Packet classification assigns a meter group identifier to every received packet; this classification is performed in the FP module 1001. The metering module uses the meter group identifier to know which bucket(s) it should use to determine the color of the packet. The external FP CAM engine classifies packets into 256K flows. The external memory contains 512K buckets, organized as 256K "dual" leaky buckets. The memory is divided into two banks, so a dual bucket is always an [even, odd] bucket pair. Each "dual" leaky bucket is assigned to a meter group. Fig. 10a illustrates how the meter groups map into the external memory address space 1002.
A generic bucket may contain the following fields: rate, burst size, and token count, as illustrated in Fig. 10b. The rate and burst-size fields are programmed by software; the hardware does not modify these fields. The token-count field is the "bucket" that the hardware modifies. The rate field indicates the rate at which the bucket (token count) is required to be filled. The burst-size field indicates the maximum amount to which the bucket (token count) is allowed to be filled. Two events can modify the token-count field. The first event is the classification of a received packet to the flow of this bucket. When a packet is classified to a particular bucket, the token-count field of that bucket is decremented by an amount equal to the packet length.
The second event is the background filling process. The background filling process increments the token count by an amount equal to the rate field of the bucket. The rate field is usually defined in such a way that its value can be added directly to the token count. For example, if the unit of the token-count field is 1/2 bit and the background filling process cycles once every 8 us, the unit of the rate field may be defined as 1/2 bit per 8 us. In that case, the background filling process adds the value of the rate field directly to the token-count field.
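The two token-count events described above can be sketched as follows; the capping at the burst size follows the field definitions in Fig. 10b, while the structure and function names are assumptions. The marking decision described below determines whether the packet decrement actually applies for out-of-profile packets.

```c
#include <stdint.h>

/* Illustrative generic bucket fields from Fig. 10b (names assumed). */
typedef struct {
    uint32_t rate;        /* programmed by software: fill per 8 us cycle */
    uint32_t burst_size;  /* programmed by software: maximum token count */
    int64_t  token_count; /* the "bucket", modified by hardware          */
} meter_bucket_t;

/* Event 1: a packet is classified to this bucket -> subtract only. */
static void on_packet(meter_bucket_t *b, uint32_t pkt_len_tokens) {
    b->token_count -= pkt_len_tokens;
}

/* Event 2: background fill every 8 us -> add only, capped at burst size. */
static void on_background_fill(meter_bucket_t *b) {
    b->token_count += b->rate;   /* rate unit chosen so it adds directly */
    if (b->token_count > (int64_t)b->burst_size)
        b->token_count = b->burst_size;
}

int main(void) {
    meter_bucket_t b = { .rate = 256, .burst_size = 1 << 16, .token_count = 0 };
    on_background_fill(&b);
    on_packet(&b, 128);
    return 0;
}
```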
The first event (packet reception) only subtracts from the token count, and the second event (background fill) only adds to the token count. This keeps the arithmetic required for maintaining the token count quite simple: during any given cycle, only a single addition or a single subtraction takes place. The actual marking decision is made based on the token-count value, as described below.
The background filling can never be ideal. An ideal bucket fill differs from a background fill quantized to 8 us. The following describes how this quantization causes packets to be marked differently than they would be ideally. For equal-sized packets arriving at twice the bucket rate, the marking of the packets differs between the ideal case and the quantized (8 us) case. If a simple single bucket with a two-color marking scheme is used, then when the token-count field is greater than or equal to the packet size, the packet is marked green and the token-count field is decremented by the packet size; if the token-count field is less than the packet size, the packet is marked red and the token-count field is not modified. Because the packets arrive at twice the bucket rate, half of the packets are marked red and the other half are marked green. In some cases, however, a run of 10 green packets is immediately followed by a run of 10 red packets. Over the long term the marking of the packets is correct, but over a small window of time the marking differs from the ideal.
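The single-bucket, two-color marking rule just described can be written down directly; this sketch assumes token units equal to bytes.

```c
#include <stdint.h>

typedef enum { GREEN, RED } color_t;

/* Two-color decision for a simple single bucket: mark green and charge
 * the bucket when enough tokens are present, otherwise mark red and
 * leave the token count unchanged. */
static color_t mark_two_color(int64_t *token_count, uint32_t pkt_len) {
    if (*token_count >= (int64_t)pkt_len) {
        *token_count -= pkt_len;
        return GREEN;
    }
    return RED;
}

int main(void) {
    int64_t tokens = 1000;
    color_t c1 = mark_two_color(&tokens, 600);  /* GREEN, tokens -> 400   */
    color_t c2 = mark_two_color(&tokens, 600);  /* RED, tokens stay 400   */
    return (c1 == GREEN && c2 == RED) ? 0 : 1;
}
```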
A 25 ms background fill produces color markings that are fundamentally different from the ideal. Assume a maximum packet rate of 24 Gbits/s (twelve 1G ports and one 12G port) and 64-byte packets at the 24 Gbits/s rate; approximately 900,000 packets are then received in 25 ms. Assume the bucket rate is 12 Gbits/s, half of the 24 Gbits/s arrival rate, and that the 64-byte packets are classified to this bucket. In this case, roughly 450,000 packets in a row will be marked red, and the next roughly 450,000 packets will be marked green. Unfortunately, such marking is simply incorrect.
To improve the accuracy of the metering, a timestamp method is employed instead of the background filling method. The timestamp method improves the accuracy of the metering significantly, but the timestamp field consumes memory space and additional computation logic is required for a given background-bandwidth allocation. In fact, for the same memory-bandwidth allocation for which background filling of 512K buckets yields a 25 ms quantization, the timestamp method can achieve an accuracy equivalent to a 1 us quantization.
The timestamp method requires a timestamp field to be added to the definition of the bucket, as illustrated in Fig. 10c. With the timestamp method, the main event that triggers an update of the token count is the reception of a packet. The timestamp background process updates the token count only secondarily, solely to prevent the counter from rolling over, as explained in detail below.
An internal counter (current_time), having the same width as the timestamp field, is incremented periodically, for example once every 1 us. The reception of a packet triggers the addition of a multiple of the rate to the token count. This differs from the background filling process, which periodically adds the rate itself to the token count. The multiple of the rate is directly related to how much time has passed since the bucket last received a packet. The time that has passed between packets is equal to the difference between current_time and the timestamp field of the bucket.
The equation for current_tc is as follows:
current_tc = ((current_time - timestamp) * rate) + token_count    (1)
The color of the packet is determined based on current_tc. Marking the packet requires subtracting the packet length from current_tc. The resulting value (new_tc) is calculated as follows:
new_tc = current_tc - packet_length    (2)
The final step is to write new_tc and current_time back into the token-count and timestamp fields, respectively. Without the timestamp field, the token-count field would be meaningless, because it is accurate only at the time indicated by the timestamp field. This is why the timestamp field is always updated to current_time, whether or not the token-count field is updated. This differs from the background filling process, in which the token count is always accurate to within the quantization time of the background fill, for example within 8 us.
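Putting equations (1) and (2) together, a packet-arrival update under the timestamp method might look like the following sketch. The two-color in-profile test, the burst-size cap, and the field widths are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bucket with the added timestamp field (Fig. 10c). */
typedef struct {
    uint32_t rate;        /* tokens added per current_time tick       */
    uint32_t burst_size;  /* maximum token count                      */
    int64_t  token_count; /* accurate only at the time in 'timestamp' */
    uint32_t timestamp;   /* current_time of the last update          */
} ts_bucket_t;

/* Called on packet arrival; returns true if the packet is in profile. */
static bool ts_meter_packet(ts_bucket_t *b, uint32_t current_time,
                            uint32_t pkt_len) {
    /* Equation (1): catch the bucket up to 'now' with a multiple of rate. */
    int64_t current_tc = (int64_t)(current_time - b->timestamp) * b->rate
                         + b->token_count;
    if (current_tc > (int64_t)b->burst_size)
        current_tc = b->burst_size;

    bool in_profile = (current_tc >= (int64_t)pkt_len);

    /* Equation (2): charge the packet only if it is in profile (assumed). */
    int64_t new_tc = in_profile ? current_tc - pkt_len : current_tc;

    /* Write back: the timestamp is always updated to current_time,
     * whether or not the token count changed. */
    b->token_count = new_tc;
    b->timestamp   = current_time;
    return in_profile;
}

int main(void) {
    ts_bucket_t b = { .rate = 8, .burst_size = 4096, .token_count = 0, .timestamp = 0 };
    return ts_meter_packet(&b, 100, 512) ? 0 : 1; /* 100 ticks * 8 = 800 tokens */
}
```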
Fig. 11 shows the calculation of current_tc at each packet arrival, indicated by the vertical dashed lines. The solid line in the figure represents the value of the token-count field. The dotted line represents the ideal filling of the bucket. As each packet arrives, current_tc is calculated (as indicated by the vertical dashed lines). Because the packet is in profile, new_tc is written back to the token-count field. Between packets, the token count remains constant. This process repeats for each packet classified to this bucket.
If the current_time counter is incremented once every 1 us, as shown in Fig. 11, the calculation is accurate to within 1 us of the ideal fill. The quantization error in the timestamp method is dictated by the frequency at which current_time is incremented, rather than by the rate of the background filling process. No matter how much time passes between packets, the calculation is always accurate to within 1 us. Increasing the frequency at which current_time is incremented reduces the quantization error of the marker. A current_time that increments once every clock cycle would produce an ideal marker. Unfortunately, as noted above, increasing the frequency at which current_time is incremented also increases the width of the timestamp field for a given background-bandwidth allocation. The cost of accuracy is memory space.
A complicating factor is that the internal counter (current_time) eventually rolls over. For example, a 17-bit counter incremented every 1 us rolls over every 131 ms. Once the current_time counter rolls over, the expression (current_time - timestamp) is no longer accurate. Without additional logic, if a packet does not arrive at a bucket within one rollover period of current_time (131 ms), the elapsed time can no longer be tracked accurately. A timestamp background process is therefore still needed to keep the timestamp fields of all the buckets from expiring. For the current_time counter described above, the token-count and timestamp fields of every bucket must be updated at least once every 131 ms. The timestamp background process updates the token-count field based on equation (1) and sets the timestamp field to current_time. As long as the timestamp field has not expired, equation (1) remains accurate.
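The rollover-prevention background process can be sketched as below. In hardware the buckets would be visited incrementally rather than all at once, and the field names repeat the assumptions of the previous sketch; both are illustrative.

```c
#include <stdint.h>

#define NUM_BUCKETS 4   /* 512K in the device; kept small here for illustration */

typedef struct {
    uint32_t rate;
    uint32_t burst_size;
    int64_t  token_count;
    uint32_t timestamp;
} ts_bucket_t;

/* Background refresh: apply equation (1) and pin the timestamp to
 * current_time so that (current_time - timestamp) never spans more than
 * one rollover period of the counter. Every bucket must be visited at
 * least once per rollover period (e.g. 131 ms for a 17-bit, 1 us counter). */
static void ts_background_refresh(ts_bucket_t *buckets, int n,
                                  uint32_t current_time) {
    for (int i = 0; i < n; i++) {
        int64_t tc = (int64_t)(current_time - buckets[i].timestamp)
                     * buckets[i].rate + buckets[i].token_count;
        if (tc > (int64_t)buckets[i].burst_size)
            tc = buckets[i].burst_size;
        buckets[i].token_count = tc;
        buckets[i].timestamp   = current_time;
    }
}

int main(void) {
    ts_bucket_t buckets[NUM_BUCKETS] = { { .rate = 8, .burst_size = 4096 } };
    ts_background_refresh(buckets, NUM_BUCKETS, 1000);
    return 0;
}
```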
In the example above, the timestamp method, with a timestamp background process that updates each bucket only once within 131 ms, produces a current_tc that is accurate to within 1 us of the ideal fill. For the background filling method to achieve the same accuracy, the background filling process would have to update every bucket every 1 us. The trade-off between the background filling method and the timestamp method is readily apparent: memory usage (the timestamp field) and computation logic (the rate must be multiplied) are increased in order to save memory bandwidth.
Both srTCM (single-rate three-color marker) and trTCM (two-rate three-color marker) use two buckets (a dual bucket) to mark packets with three colors (red, yellow, green). The srTCM scheme uses a committed bucket and an excess bucket, while trTCM uses a committed bucket and a peak bucket. The two schemes differ in how the buckets are filled and decremented and in how packets are marked.
In the srTCM scheme, the two buckets of the dual bucket are filled at the same committed information rate (CIR). The committed bucket is filled first, up to the committed burst size (CBS). Only when the committed bucket is "full" is the excess bucket filled, up to the excess burst size (EBS). In the trTCM scheme, the two buckets of the dual bucket are filled at different rates: the committed bucket is filled at the committed information rate (CIR) up to the CBS, and the peak bucket is filled at the peak information rate (PIR) up to the peak burst size (PBS).
The two schemes also differ in how the buckets are decremented and in how packets are marked based on the state of the dual bucket, as shown in the following four tables. In each table, the first two rows are the inputs to the decision, the next row is the output color decision, and the last two rows indicate whether the corresponding bucket is decremented by the packet length. A sketch of this marking logic follows the tables.
Table 1 (shown in the patent drawings as Figure C200610054908D00221)
Table 2 (Figure C200610054908D00222)
Table 3 (Figure C200610054908D00231)
Table 4 (Figure C200610054908D00232)
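As an illustration of the kind of behavior these tables encode, the standard color-blind srTCM decision (per RFC 2697) can be sketched as follows; this is not a reproduction of the programmable tables themselves, and the function and variable names are assumptions:

typedef enum { GREEN, YELLOW, RED } pkt_color;

/* Color-blind srTCM marking: c_tokens is the committed bucket (filled at CIR up to CBS),
   e_tokens is the excess bucket (filled only once the committed bucket is full, up to EBS). */
pkt_color srtcm_mark(uint32_t *c_tokens, uint32_t *e_tokens, uint32_t packet_length)
{
    if (*c_tokens >= packet_length) {       /* within the committed burst */
        *c_tokens -= packet_length;
        return GREEN;
    }
    if (*e_tokens >= packet_length) {       /* within the excess burst */
        *e_tokens -= packet_length;
        return YELLOW;
    }
    return RED;                             /* red packets decrement neither bucket */
}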
The present invention likewise allows the color marking and the incrementing and decrementing of the buckets to be fully programmable. In certain embodiments of the invention, eight programmable registers are provided to specify the metering operation of the network device completely. These registers can be used to implement the srTCM and trTCM methods described above. A user of the network device can also define and implement its own metering scheme by programming the tables in the provided registers. The selection of which registers to use is made on a per-flow basis.
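The text here does not enumerate the eight registers, so the following layout is purely hypothetical, intended only to illustrate how a fully programmable meter configuration of that size might be organized:

/* Hypothetical per-flow meter configuration spread over eight registers;
   every field below is an assumption made for the sake of the example. */
typedef struct {
    uint32_t committed_rate;        /* CIR                                       */
    uint32_t committed_burst;       /* CBS                                       */
    uint32_t second_rate;           /* PIR for trTCM, or reuse of CIR for srTCM  */
    uint32_t second_burst;          /* PBS or EBS                                */
    uint32_t color_decision_table;  /* programmable output-color table           */
    uint32_t decrement_control;     /* which bucket(s) each color decrements     */
    uint32_t mode_flags;            /* e.g. srTCM/trTCM, color-blind/color-aware */
    uint32_t refresh_control;       /* timestamp/background refresh parameters   */
} meter_regs;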
As mentioned above, two variables are involved in computing the required width of the timestamp field (ts_width): the period within which the timestamp background process serves every bucket (loop_period), and the frequency at which current_time increments, or equivalently the current_time unit (counter_unit). Basically, the timestamp background process must serve every bucket at least as fast as the current_time counter wraps, or faster. The solution described below, however, requires the timestamp background process to serve every bucket at twice the rate at which the current_time counter wraps. The equations relating these variables are as follows:
loop_period = (1/2) × (2^ts_width × counter_unit)        (3)
Solving for ts_width gives:
ts_width = log2((2 × loop_period) / counter_unit)        (4)
If loop_period is 25 ms (25,000 us) and counter_unit is 1 us, the required width of the timestamp field is log2(50,000) ≈ 15.6, which rounds up to 16 bits.
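A small helper, under the assumption that loop_period and counter_unit are expressed in the same unit (microseconds in the example above), reproduces this calculation:

#include <math.h>

/* Equation (4): required timestamp width in bits, rounded up to a whole bit. */
unsigned ts_width_bits(double loop_period, double counter_unit)
{
    return (unsigned)ceil(log2((2.0 * loop_period) / counter_unit));
}

/* ts_width_bits(25000.0, 1.0) == 16, matching the example in the text. */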
The additional computation is significant. The background fill method requires no more than one addition or subtraction on the token count field. The timestamp method requires equations (1) and (2) to be evaluated for every received packet, and also requires equation (1) to be evaluated by the timestamp background process.
The time term in the multiplication of equation (1) has a wide range. One possible way to simplify this multiplication is to reduce it to a shift operation on the rate. For example, if (current_time - timestamp) is 240, or binary 'b11110000', the rate can instead be multiplied by 128, or binary 'b10000000', which is a simple 7-bit shift of the rate. The time that is lost by simplifying the multiplication in this way can be recovered by adjusting the timestamp field. In the case above, 112 time units are lost, so the timestamp field is set to current_time - 112 rather than to current_time. This increases the value of (current_time - timestamp) seen by the next packet by 112 time units, thereby accounting for the lost time.
The solution above therefore simplifies the multiplication in equation (1) to a shift operation on the rate, and adds one subtraction that adjusts the timestamp field to account for the lost time. A pseudo-code version of equation (1) using the simplified multiplication follows (assuming a 4-bit timestamp):
delta_t[3:0] = (current_time - TIMESTAMP)
if (delta_t[3])                              // multiply the rate by 8
    current_tc  = (RATE << 3) + TOKEN_COUNT
    missed_time = {1'b0, delta_t[2:0]}
else if (delta_t[2])                         // multiply the rate by 4
    current_tc  = (RATE << 2) + TOKEN_COUNT
    missed_time = {2'b0, delta_t[1:0]}
else if (delta_t[1])                         // multiply the rate by 2
    current_tc  = (RATE << 1) + TOKEN_COUNT
    missed_time = {3'b0, delta_t[0]}
else if (delta_t[0])                         // multiply the rate by 1
    current_tc  = RATE + TOKEN_COUNT
    missed_time = 4'b0
else                                         // no time has elapsed
    current_tc  = TOKEN_COUNT
    missed_time = 4'b0
new_tc = current_tc - packet_length
Finally, missed_time is used to calculate the new value of the TIMESTAMP field:
new_ts=current_time-missed_time (5)
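A C rendering of the same shift-based approximation, together with the timestamp adjustment of equation (5), might look as follows; it reuses the meter_bucket structure from the earlier sketch and assumes a 4-bit delta, as in the pseudo-code:

void meter_packet_shift(meter_bucket *b, uint32_t current_time,
                        uint32_t rate, uint32_t packet_length)
{
    uint32_t delta_t = (current_time - b->timestamp) & 0xF;   /* 4-bit elapsed time */
    uint32_t current_tc, missed_time;

    if (delta_t) {
        /* Keep only the highest set bit of delta_t as the shift amount ... */
        unsigned shift = 3;
        while (!(delta_t & (1u << shift)))
            shift--;
        current_tc  = (rate << shift) + b->token_count;
        /* ... and remember the time that the approximation dropped. */
        missed_time = delta_t & ((1u << shift) - 1);
    } else {
        current_tc  = b->token_count;
        missed_time = 0;
    }

    b->token_count = current_tc - packet_length;      /* new_tc, equation (2) */
    b->timestamp   = current_time - missed_time;      /* new_ts, equation (5) */
}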
The foregoing is a description of specific embodiments of the invention. It will be apparent that other variations and modifications may be made to the embodiments described above while attaining some or all of their advantages. Therefore, it is the object of the appended claims to cover all such variations and modifications as fall within the spirit and scope of the invention.
This application references and claims priority to U.S. Provisional Patent Application No. 60/653,942, filed February 18, 2005.

Claims (10)

1. A network device for processing data in a data network, the network device comprising:
a plurality of ports, configured to receive data from the data network and to send processed data to the data network through an egress port;
a controller interface, configured to communicate with an external controller;
a memory management unit, in communication with and controlling a memory external to the network device, configured to store data in the memory and to retrieve data from the memory;
a metering unit employing buckets with a timestamp method, in communication with the plurality of ports, the controller interface and the memory management unit, and configured to police a flow of processed data to be sent to the egress port;
wherein the metering unit further comprises programmable registers, in communication with the controller interface, the programmable registers being programmed by control signals sent from the external controller through the controller interface, such that all aspects of the flow of processed data can be controlled by the external controller.
2. The network device according to claim 1, wherein the programmable registers comprise eight programmable registers.
3. The network device according to claim 1, wherein the metering unit marks packets of the processed data with a color, in order to control the flow of processed data based on signals from the controller interface.
4. The network device according to claim 3, wherein the metering unit determines the color of an incoming packet and sets the color of an outgoing packet based on values in the incoming packet.
5. A method of processing data in a network device, the method comprising the steps of:
receiving controller signals sent from an external controller through a controller interface;
programming, based on the received controller signals, programmable registers in a metering unit whose buckets employ a timestamp method;
receiving data at one port of a plurality of ports;
storing the received data, by a memory management unit, in a memory external to the network device;
determining attributes of the received data and determining an egress port for the received data;
retrieving the received data from the memory and modifying it based on the determined attributes to produce a processed data flow;
sending the processed data flow through the egress port as directed by the metering unit;
wherein the programmable registers of the metering unit determine all characteristics of the processed data flow to be sent to the egress port.
6. The method according to claim 5, wherein the programming step comprises programming eight programmable registers based on the received controller signals.
7. The method according to claim 5, further comprising marking packets of the processed data flow with a color, in order to control the processed data flow based on the controller signals.
8. A network device for processing data, the network device comprising:
receiving means for receiving controller signals sent from an external controller through a controller interface;
programming means for programming, based on the received controller signals, programmable registers in a metering unit whose buckets employ a timestamp method;
port means for receiving data and for sending a processed data flow through an egress port;
memory means for storing data received from parsing means, by a memory management unit, in a memory external to the network device, and for retrieving the data therefrom;
modifying means for modifying the stored data retrieved from the memory, based on determined attributes, to produce the processed data flow;
wherein the programmable registers determine all characteristics of the processed data flow to be sent to the egress port.
9. The network device according to claim 8, wherein the programming means comprises means for programming eight programmable registers based on the received controller signals.
10. The network device according to claim 8, further comprising marking means for marking packets of the processed data flow with a color, so that the processed data flow is controlled based on the controller signals.
CNB2006100549086A 2005-02-18 2006-02-16 Network apparatus and method for data processing in data network Expired - Fee Related CN100486229C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US65394205P 2005-02-18 2005-02-18
US60/653,942 2005-02-18
US11/081,057 2005-03-16

Publications (2)

Publication Number Publication Date
CN1822572A CN1822572A (en) 2006-08-23
CN100486229C true CN100486229C (en) 2009-05-06

Family

ID=36923642

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100549086A Expired - Fee Related CN100486229C (en) 2005-02-18 2006-02-16 Network apparatus and method for data processing in data network

Country Status (1)

Country Link
CN (1) CN100486229C (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101127705B (en) * 2007-09-20 2011-09-21 中兴通讯股份有限公司 Method for realizing network transmission service quality
US8315168B2 (en) * 2009-10-28 2012-11-20 Broadcom Corporation Priority-based hierarchical bandwidth sharing
CN102594659A (en) * 2012-01-13 2012-07-18 潘薇 Method for carrying out flow quantity and flow direction bandwidth management in local area network exchange network
CN103685028B (en) * 2013-11-30 2018-06-12 许继电气股份有限公司 The method and device that polymorphic type port is in communication with each other

Also Published As

Publication number Publication date
CN1822572A (en) 2006-08-23

Similar Documents

Publication Publication Date Title
EP1694004B1 (en) Traffic policing with programmable registers
US8320240B2 (en) Rate limiting and minimum and maximum shaping in a network device
US7577096B2 (en) Timestamp metering and rollover protection in a network device
US7349403B2 (en) Differentiated services for a network processor
JP5778321B2 (en) Traffic management with ingress control
CN1781287B (en) Methods and devices for flexible bandwidth allocation
CN105794161B (en) Data communication network for aircraft
US10419965B1 (en) Distributed meters and statistical meters
US7894347B1 (en) Method and apparatus for packet scheduling
US7664028B1 (en) Apparatus and method for metering and marking data in a communication system
AU2002339349B2 (en) Distributed transmission of traffic flows in communication networks
CN100486229C (en) Network apparatus and method for data processing in data network
AU2004244306B2 (en) System and method for time-based scheduling
CN100544320C (en) The method of the network equipment and deal with data
US20060187965A1 (en) Creating an IP checksum in a pipeline architecture with packet modification
CN100493036C (en) Network apparatus and method for data processing
CN100486226C (en) Network device and method for processing data in same
CN100499588C (en) Network device and method of data processing in data network
US20060187828A1 (en) Packet identifier for use in a network device
Tyan A rate-based message scheduling paradigm
Albuquerque et al. Implementations of traffic control mechanisms for high-speed networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090506

Termination date: 20160216