CN100499588C - Network device and method of data processing in data network - Google Patents

Network device and method of data processing in data network

Info

Publication number
CN100499588C
CN100499588C · CNB2006100041863A · CN200610004186A
Authority
CN
China
Prior art keywords
data
module
parsing
network equipment
port
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2006100041863A
Other languages
Chinese (zh)
Other versions
CN1832456A (en)
Inventor
丹尼斯·苏吉克·李
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Broadcom Corp
Zyray Wireless Inc
Original Assignee
Zyray Wireless Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zyray Wireless Inc filed Critical Zyray Wireless Inc
Publication of CN1832456A publication Critical patent/CN1832456A/en
Application granted granted Critical
Publication of CN100499588C publication Critical patent/CN100499588C/en

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A network device for processing data on a data network includes a plurality of ports configured to receive data from the data network and to send processed data to the data network, a memory management unit configured to store data in and retrieve data from a memory, and a multi-part parser configured to communicate with the plurality of ports and the memory management unit and to parse the received data. The multi-part parser comprises several parsing blocks configured to serially parse the data, with each block designated to parse particular values, and where each block but a first block receives parsed data from a prior block with the particular values at a zero offset position.

Description

Network device and method for processing data in a data network
Technical field
The present invention relates to a network device for processing data in a network, and more specifically, to the parsing of data received by a network device that provides higher processing speed and data-handling capability.
Background
A network may include one or more network devices, such as Ethernet switches, each of which includes several modules for processing information transmitted through the device. Specifically, such a device may include a port interface module for sending and receiving data over the network; a memory management unit (MMU) for storing the data until it is forwarded or further processed; and a resolution module, which allows the data to be examined and processed according to instructions. The resolution module includes switching functionality for determining to which destination port the data should be directed. One of the ports on the network device may be a CPU port, enabling the device to send information to, and receive information from, an external switching/routing control entity or CPU.
Most network devices operate as Ethernet switches, where packets enter the device from multiple ports and switching and other processing are performed on those packets. Thereafter, the packets are transmitted to one or more destination ports through the MMU. The MMU enables the sharing of packet buffers among the different ports while providing resource guarantees for every ingress port, egress port, and class-of-service queue.
However, most prior-art network devices lack sufficient processing capability; they must often be linked together to provide greater flexibility and to handle larger throughput. Linking such existing devices together introduces further problems, such as the need to address the individual devices, which do not arise in a single network device. Problems also occur when data of multiple types must be parsed and resolved, so stronger parsing and resolution capabilities are needed to cope with such problems.
Summary of the invention
According to one aspect of the present invention, a network device for processing data in a data network is provided, the network device comprising:
a plurality of ports, configured to receive data from the data network and to send processed data to the data network;
a memory management unit, configured to control and communicate with a memory located outside the network device, to store data in the memory and to retrieve data from the memory;
a multi-part parser, configured to communicate with the plurality of ports and the memory management unit, and to parse the data received from the data network;
wherein the multi-part parser comprises several parsing blocks configured to serially parse the data, with each block designated to parse particular values, and where each block other than a first block receives parsed data from a prior block with the particular values at a zero offset position.
Preferably, the several parsing blocks comprise four parsing blocks.
Preferably, the four parsing blocks comprise a parsing block dedicated to parsing layer-2 fields, a parsing block dedicated to parsing layer-3 fields, and a parsing block dedicated to parsing layer-4 fields.
Preferably, the several parsing blocks are linked together by an inter-communication line, wherein each parsing block other than the first block may receive, over the inter-communication line, a signal from a prior parsing block directing it to skip the parsing it would otherwise perform.
Preferably, one of the plurality of ports comprises a higher-speed port, and higher-speed data received by the higher-speed port is parsed by a separate parser.
Preferably, each of the several parsing blocks has access to a search engine.
Preferably, one of the several parsing blocks is dedicated to de-encapsulating received packets.
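The serial, zero-offset hand-off between parsing blocks can be illustrated with a small software sketch. This is an illustration only, not the patented hardware: the block names, field widths, and skip-flag interface are assumptions chosen for the example.

```python
# Each block consumes the fields it is designated to parse, so the next
# block always sees its own fields starting at offset zero. A block may
# be told to skip (the inter-communication-line signal in the claims).

class ParsingBlock:
    def __init__(self, name, header_len):
        self.name = name
        self.header_len = header_len  # bytes this block is designated to parse

    def parse(self, data, skip=False):
        """Return (fields, remainder); the remainder begins at offset 0
        of the next block's fields."""
        if skip:  # skip signal received from the prior block
            return None, data
        return data[:self.header_len], data[self.header_len:]

def run_pipeline(blocks, data, skip_flags):
    results = {}
    for block, skip in zip(blocks, skip_flags):
        fields, data = block.parse(data, skip)
        if fields is not None:
            results[block.name] = fields
    return results

blocks = [
    ParsingBlock("decap", 4),    # de-encapsulation block (assumed 4-byte shim)
    ParsingBlock("layer2", 14),  # MAC DA/SA + EtherType
    ParsingBlock("layer3", 20),  # IPv4 header without options
    ParsingBlock("layer4", 20),  # TCP header
]
packet = bytes(range(64))
# With the decap block skipped, the layer-2 block still sees its fields
# at a zero offset, because nothing was consumed ahead of it.
out = run_pipeline(blocks, packet, skip_flags=[True, False, False, False])
```

Keeping each block's fields at a zero offset means no block needs to carry a running offset computed by its predecessors, which is presumably what lets the blocks be designed and chained independently.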
According to another aspect of the present invention, a method of processing data in a network device is provided, the method comprising the steps of:
receiving data at a plurality of ports of the network device;
parsing the data by a multi-part parser;
storing the data, by a memory management unit, in a memory located outside the network device;
retrieving the received data from the memory and, if needed, modifying the received data based on determined attributes, to produce processed data;
sending the processed data to an egress port of the plurality of ports;
wherein the step of parsing the data comprises serially parsing the data by several parsing blocks, with each block designated to parse particular values, and where each block other than a first block receives parsed data from a prior block with the particular values at a zero offset position.
Preferably, the step of serially parsing the data comprises serially parsing the data by four parsing blocks.
Preferably, the four parsing blocks comprise a parsing block dedicated to parsing layer-2 fields, a parsing block dedicated to parsing layer-3 fields, and a parsing block dedicated to parsing layer-4 fields.
Preferably, the method further comprises receiving, by a parsing block over an inter-communication line, a signal sent from a prior parsing block, directing the parsing block to skip the parsing it would otherwise perform.
Preferably, each of the several parsing blocks has access to a search engine.
Preferably, one of the several parsing blocks is dedicated to de-encapsulating received packets.
According to another aspect of the present invention, a network device for processing data is provided, the network device comprising:
port means for receiving data and for sending processed data through an egress port;
parsing means for parsing the received data by a multi-part parser;
memory means for storing the received data, by a memory management unit, in a memory located outside the network device, and for retrieving the stored data therefrom;
modifying means for modifying the stored and retrieved data, based on determined attributes, to produce processed data;
wherein the parsing means comprises means for serially parsing the received data by several parsing blocks, with each block designated to parse particular values, and where each block other than a first block receives parsed data from a prior block with the particular values at a zero offset position.
Preferably, the means for serially parsing the received data comprises means for serially parsing the data by four parsing blocks.
Preferably, the network device further comprises lookup means for performing lookups of lookup tables to determine attributes of the received data.
Brief description of the drawings
The invention is further described below in conjunction with the drawings and embodiments, in which:
Fig. 1 is a schematic diagram of a network device according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of communication using the ports of the network device, according to an embodiment of the present invention;
Fig. 3a is a schematic diagram of a shared memory located outside the network device and used by the network device, according to an embodiment of the present invention;
Fig. 3b is a schematic diagram of the cell buffer pool of the shared memory architecture shown in Fig. 3a;
Fig. 4 is a schematic diagram of buffer management mechanisms used by the memory management unit to impose resource allocation limits and thereby ensure fair access to resources;
Fig. 5 is a schematic diagram of a two-stage parser, according to certain embodiments of the present invention;
Fig. 6 is a schematic diagram of another parser, for use with interconnect ports, according to certain embodiments of the present invention;
Fig. 7 is a schematic diagram of a result matcher, according to certain embodiments of the present invention;
Fig. 8 is a schematic diagram of the configuration of an egress port arbiter, according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of the stages of a multi-part parser, according to an embodiment of the present invention.
Detailed description of preferred embodiments
The present invention is described below in conjunction with preferred embodiments, examples of which are illustrated in the accompanying drawings.
Fig. 1 illustrates a network device, such as a switching chip, in which an embodiment of the present invention may be implemented. Device 100 includes port interface modules 112 and 113, a memory management unit (MMU) 115, an ingress/egress module 130, and a search engine 120. The ingress/egress module 130 parses the data received and performs lookups, based on the parsed data, using the search engine 120. The primary function of MMU 115 is to manage cell buffering and packet pointer resources efficiently and in a predictable manner, even under severe congestion. Through these modules, packet modification can occur and the packet can be transmitted to an appropriate destination port.
According to several embodiments of the present invention, device 100 may also include one internal fabric high-speed port, such as a HiGig™ port, or high-speed port 108, one or more external Ethernet ports 109a-109x, and a CPU port 110. High-speed port 108 is used to interconnect various network devices in a system to form an internal switching fabric for transporting packets between external source ports and one or more external destination ports. As such, high-speed port 108 is not visible outside of a system that includes multiple interconnected network devices. CPU port 110 is used to send information to and receive information from an external switching/routing control entity or CPU. According to an embodiment of the present invention, CPU port 110 may be considered one of the external Ethernet ports 109a-109x. Device 100 interfaces with external/off-chip CPUs through a CPU processing module 111, such as a CMIC, which interfaces with a PCI data bus that connects device 100 to an external CPU.
In addition, the search engine module 120 may be composed of additional search engine modules 122, 124 and 126 that perform the particular lookups used in the characterization and modification of the data processed by the network device 100. Likewise, the ingress/egress module 130 includes additional modules for parsing the data received from the internal fabric high-speed port (134) and from the other ports (138), as well as additional modules 132 and 136 for forwarding data back out to the ports of the network device. The two parsers are described in greater detail below.
Network traffic enters and exits device 100 through external Ethernet ports 109a-109x. Specifically, traffic in device 100 is routed from an external Ethernet source port to one or more unique destination Ethernet ports. In one embodiment of the present invention, device 100 supports twelve physical Ethernet ports 109, each of which can operate at 10/100/1000 Mbps, and one high-speed port 108, which operates at either 10 Gbps or 12 Gbps.
The structure of the physical ports 112 is further illustrated in Fig. 2. A series of serializing/deserializing modules 103 send and receive data, where the data received at each port is managed by one of the port managers 102A-L. The multiple port managers have a timing generator 104 and a bus agent 105 that enable their operation. Data received and transmitted are provided to a port information base so that traffic can be monitored. It is noted that high-speed port 108 has similar functionality but does not require as many components, since only one port, albeit one operating at a higher speed, needs to be managed.
In an embodiment of the invention, device 100 is built around a shared-memory architecture, as shown in Figs. 3a-3b, in which MMU 115 enables the sharing of a packet buffer among the different ports while providing resource guarantees for every ingress port, egress port, and class-of-service queue associated with each egress port. Fig. 3a illustrates the shared-memory architecture of the present invention. Specifically, the memory resources of device 100 include a cell buffer pool (CBP) memory 302 and a transaction queue (XQ) memory 304. CBP memory 302 is an off-chip resource that, in some embodiments, is made up of four DRAM chips 306a-306d. According to an embodiment of the invention, each DRAM chip has a capacity of 288 Mbits, so that the total capacity of CBP memory 302 is 144 Mbytes of raw storage. As shown in Fig. 3b, CBP memory 302 is divided into 256K cells 308a-308x of 576 bytes each, where each cell includes a 32-byte header buffer 310, up to 512 bytes of packet data 312, and 32 bytes of reserved space 314. As such, each incoming packet consumes at least one full 576-byte cell 308. Therefore, when an incoming packet contains a 64-byte frame, a full 576 bytes of space is reserved for the packet, even though only 64 of those 576 bytes are used by the frame.
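The fixed-cell accounting described above can be sketched in a few lines; this is only an arithmetic illustration of the 576-byte cell layout, not the device's buffer allocator.

```python
# CBP cells are 576 bytes: a 32-byte header buffer, up to 512 bytes of
# packet data, and 32 bytes of reserved space. Every packet consumes at
# least one whole cell, so a 64-byte frame still reserves 576 bytes.

CELL_SIZE = 576
CELL_DATA = 512  # packet bytes that fit in one cell

def cells_for_packet(packet_len):
    # at least one full cell, then one more per additional 512 bytes
    return max(1, -(-packet_len // CELL_DATA))  # ceiling division

assert cells_for_packet(64) == 1                  # minimum-size frame
assert cells_for_packet(64) * CELL_SIZE == 576    # 512 bytes go unused
assert cells_for_packet(1518) == 3                # full-size Ethernet frame
```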
Referring to Fig. 3 a, XQ memory 304 comprises a row packet pointer 316a-316x, points to CBP memory 302, and wherein different XQ pointers 316 is associated with each port.The packet counting of the element count of CBP memory 302 and XQ memory 304 can be followed the trail of based on the progressive row of input port, delivery outlet and service.Like this, equipment 100 can provide resource to guarantee based on a unit and/or service.
In case packet is by source port 109 access arrangements 100, this packet will be transferred to resolver to handle.In processing procedure, the packet shared system resource 302 and 304 on each input port and the delivery outlet.In a particular embodiment, two independently 64 byte burst packet transfer to MMU by local port and high-speed port.Figure 4 shows that by MMU 115 and be used for resource allocation restriction to guarantee schematic diagram to the buffer management mechanism of the fair access of resource.MMU 115 comprises input port back pressure mechanism 404, first (the head of line) mechanism 406 of row and Weighted random earlier detection mechanism 408.Input port back pressure mechanism 404 supports lossless state, and all input ports are managed the buffer resource liberally.The visit that the first mechanism 406 of row supports cache resources, the throughput of optimization system simultaneously.Weighted random earlier detection mechanism 408 improves whole network throughput.
Input port back pressure mechanism 404 uses packets or location counter with the packet following the trail of each input port and use or the quantity of unit.Input port back pressure mechanism 404 includes the register that is used for one group 8 threshold values that are provided with respectively and is used to specify in 8 threshold values which and is used to the register of an input port in the system.This group threshold values comprises limit threshold values 412, abandon the limit (discard limit) threshold values 414 and replacement limit threshold values 416.If the counter that is associated with the use of input port packets/cells increases and surpasses when abandoning limit threshold values 414, the packet at place, input port will be dropped.Based on the register that is used for tracing unit/data packet number, can use the information flow (cache resources that this moment, used this input port has exceeded its fair cache resources of sharing) that suspends flow control and stop to arrive the input port, thereby stop amount of information, and alleviate the obstruction that causes by this input port from the input port that breaks the rules.
Particularly, follow the trail of to determine whether it is in input port back pressure state based on the input port back pressure counter that is associated with this group threshold values always each input port.When this input port is in input port back pressure state, periodically be that the time-out flow control frame of (OxFFFF) sends out this input port with timer value.When this input port no longer is in input port back pressure state, timer value is sent by this input port for the time-out flow control frame of (0x00), and allowed information flow to flow once more.If the current input port that is not in, input port is by pressure condition, and the value of packet counter goes beyond the limit of threshold values 412, and the state of this input port will be converted to input port back pressure state.Reduce to the limited threshold values of replacement below 416 if the input port is in the value of input port back pressure state and packet counter, then the state of this port will no longer be in the back pressure state.
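The hysteresis between the limit and reset limit thresholds can be modelled as a small state machine. The 0xFFFF and 0x0000 pause timer values come from the text; the class structure and concrete threshold numbers are assumptions made for illustration.

```python
# Ingress backpressure with hysteresis: entering backpressure above the
# limit threshold (412), leaving it below the reset limit threshold (416).

class IngressPort:
    def __init__(self, limit, reset_limit):
        self.limit = limit              # limit threshold 412
        self.reset_limit = reset_limit  # reset limit threshold 416
        self.backpressure = False

    def update(self, counter):
        """Return a pause timer value to emit, or None if no state change."""
        if not self.backpressure and counter > self.limit:
            self.backpressure = True
            return 0xFFFF   # pause frame: halt incoming traffic
        if self.backpressure and counter < self.reset_limit:
            self.backpressure = False
            return 0x0000   # pause frame: resume traffic
        return None

port = IngressPort(limit=100, reset_limit=60)
assert port.update(120) == 0xFFFF   # crosses limit -> enter backpressure
assert port.update(80) is None      # between thresholds -> no change
assert port.update(50) == 0x0000    # falls below reset limit -> resume
```

The gap between the two thresholds prevents the port from oscillating between sending pause and resume frames when the counter hovers near a single threshold.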
The head-of-line mechanism 406 supports fair access to buffering resources while optimizing throughput in the system. The head-of-line mechanism 406 relies on packet dropping to manage buffering resources and improve the overall throughput of the system. According to an embodiment of the invention, the head-of-line mechanism 406 uses egress counters and predefined thresholds to track buffer usage on a per-egress-port and per-class-of-service basis, and thereafter makes decisions to drop newly arriving packets at the ingress ports that are destined for an oversubscribed egress port/class-of-service queue. The head-of-line mechanism 406 supports different thresholds depending on the color of the newly arriving packet. Packets may be colored based on metering and marking operations that take place in the ingress module, and the MMU acts on packets differently depending on their color.
According to an embodiment of the invention, the head-of-line mechanism 406 can be configured and operated independently on every class-of-service queue and across all ports, including the CPU port. The head-of-line mechanism 406 uses counters that track XQ memory 304 and CBP memory 302 usage, and thresholds that support a static allocation of CBP memory buffers 302 and a dynamic allocation of XQ memory buffers 304. A discard threshold 422 is defined for all cells in CBP memory 302, regardless of color marking. When the value of the cell counter associated with a port reaches the discard threshold 422, the port is transitioned to a head-of-line status. Thereafter, the port transitions out of the head-of-line status if the value of its cell counter falls below the reset limit threshold 424.
For the XQ memory 304, a fixed allocation of XQ buffers for each class-of-service queue is defined by XQ entry values 430a-430h. Each XQ entry value 430a-430h defines how many buffer entries should be reserved for the associated queue. For example, if 100 bytes of XQ memory are assigned to a port, the first four class-of-service queues, associated with XQ entries 430a-430d respectively, may each be assigned a value of 10 bytes, and the last four queues, associated with XQ entries 430e-430h respectively, may each be assigned a value of 5 bytes.
According to an embodiment of the invention, even if a queue has not used all of the buffer entries reserved for it according to its associated XQ entry value, the head-of-line mechanism 406 does not assign the unused buffers to another queue. Nevertheless, the remaining, unassigned 40 bytes of XQ buffers for the port may be shared among all of the class-of-service queues associated with the port. Limits on how much of the shared pool of XQ buffers a particular class-of-service queue may consume are set by an XQ set limit threshold 432. As such, the set limit threshold 432 may be used to define the maximum number of buffers that one queue may use, and to prevent one queue from using all of the available XQ buffers. To ensure that the sum of the XQ entry values 430a-430h does not exceed the total number of XQ buffers available to the port, and to ensure that each class-of-service queue has access to its quota of XQ buffers as assigned by its entry value 430, the available pool of XQ buffers for each port is tracked using a port dynamic count register 434, which keeps track of the number of shared XQ buffers available to the port. The initial value of the dynamic count register 434 is the total number of XQ buffers associated with the port minus the sum of the XQ entry values 430a-430h. The dynamic count register 434 is decremented when a class-of-service queue uses an available XQ buffer after exceeding its quota as assigned by its XQ entry value 430. Conversely, the dynamic count register 434 is incremented when a class-of-service queue releases an XQ buffer after having exceeded its quota as assigned by its XQ entry value 430.
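The XQ accounting just described (guaranteed per-queue entries plus a shared pool tracked by a dynamic count register) can be sketched as follows. The 10/10/10/10/5/5/5/5 split of 100 entries mirrors the example in the text; the class interface is an assumption.

```python
# Per-queue guaranteed XQ entries (430a-430h) plus a shared pool tracked
# by a dynamic count register (434), initialized to total minus the sum
# of the guaranteed entry values.

class XQAccounting:
    def __init__(self, total, entry_values):
        self.entry_values = entry_values
        self.used = [0] * len(entry_values)
        self.dynamic = total - sum(entry_values)  # dynamic count register 434

    def allocate(self, q):
        if self.used[q] < self.entry_values[q]:
            self.used[q] += 1        # within the guaranteed quota
            return True
        if self.dynamic > 0:
            self.used[q] += 1        # draws from the shared pool
            self.dynamic -= 1
            return True
        return False                 # no guaranteed or shared buffer left

    def release(self, q):
        self.used[q] -= 1
        if self.used[q] >= self.entry_values[q]:
            self.dynamic += 1        # a shared buffer was returned

xq = XQAccounting(total=100, entry_values=[10, 10, 10, 10, 5, 5, 5, 5])
assert xq.dynamic == 40  # 100 total minus 60 guaranteed entries
```

Note that, per the text, a queue's unused guaranteed entries are never lent to another queue: only the 40-entry remainder is shared.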
When a queue requests an XQ buffer 304, the head-of-line mechanism 406 determines whether the entries used by that queue are fewer than its XQ entry value 430, and grants the buffer request if so. If, however, the entries used are greater than the queue's XQ entry value 430, the head-of-line mechanism 406 determines whether the amount requested is less than the total available buffers or less than the maximum amount set for the queue by the associated set limit threshold 432. The set limit threshold 432 is, in essence, a discard threshold for the queue, regardless of the color marking of the packet. As such, when the packet count for the queue reaches the set limit threshold 432, the queue/port enters the head-of-line status. When the head-of-line mechanism 406 detects a head-of-line condition, it sends a status update, so that packets at the blocked port can be dropped.
However, due to latency, there may be packets in transit between MMU 115 and the port when the head-of-line mechanism 406 sends the status update. In that case, packet drops occur at MMU 115 due to the head-of-line status. In an embodiment of the invention, because of the pipelining of packets, the dynamic pool of XQ pointers is reduced by a predetermined amount. As such, when the number of available XQ pointers is equal to or less than the predetermined amount, the port transitions to the head-of-line status, and a status update is sent by MMU 115 to the port, thereby reducing the number of packets that might be dropped by MMU 115. To exit the head-of-line status, the XQ packet count for the queue must fall below the reset limit threshold 436.
For the XQ counters of a particular class-of-service queue, it is possible for packets to be dropped even though the set limit threshold 432 has not been reached, if the XQ resources of the port are oversubscribed by the other class-of-service queues. In an embodiment of the invention, intermediate discard thresholds 438 and 439 may also be defined for packets carrying specific color markings, where each intermediate discard threshold defines when packets of a particular color should be dropped. For example, intermediate discard threshold 438 may define when packets marked yellow should be dropped, and intermediate discard threshold 439 may define when packets marked red should be dropped. According to an embodiment of the invention, packets may be colored green, yellow, or red depending on their assigned priorities. To ensure that the packets of each color are processed in proportion to the color assignment in each queue, one embodiment of the present invention includes a virtual maximum threshold 440. The virtual maximum threshold 440 is equal to the number of unassigned and available buffers divided by the sum of the number of queues and the number of buffers currently in use. The virtual maximum threshold 440 ensures that the packets of each color are processed in a relative proportion. Therefore, if the number of available, unassigned buffers is less than the set limit threshold 432 of a particular queue and the queue requests access to all of the available, unassigned buffers, the head-of-line mechanism 406 calculates the virtual maximum threshold 440 for the queue and processes a proportional amount of packets of each color, relative to the ratios defined for each color.
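Taking the definition above at face value, the virtual maximum threshold is a simple quotient; the concrete numbers below are invented purely to illustrate the arithmetic.

```python
# Virtual maximum threshold 440, as defined in the text: unassigned and
# available buffers divided by (number of queues + buffers currently in
# use). Inputs here are illustrative, not from the patent.

def virtual_max(unassigned_available, num_queues, buffers_in_use):
    return unassigned_available / (num_queues + buffers_in_use)

assert virtual_max(80, 8, 32) == 2.0   # 80 / (8 + 32)
```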
To conserve register space, the XQ thresholds may be expressed in a compressed form, where each unit represents a group of XQ entries. The size of the group depends on the number of XQ buffers associated with a particular egress port/class-of-service queue.
The weighted random early detection mechanism 408 is a queue management mechanism that preemptively drops packets, based on a probabilistic algorithm, before the XQ buffers 304 are exhausted. The weighted random early detection mechanism 408 may therefore be used to optimize the throughput of the overall network. The weighted random early detection mechanism 408 includes an averaging statistic that is used to track the length of each queue, and drops packets based on a drop profile defined for the queue. The drop profile defines the drop probability for a given average queue size. According to an embodiment of the invention, the weighted random early detection mechanism 408 may define separate profiles on a per-class-of-service-queue and per-packet basis.
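A WRED-style drop decision can be sketched as below. The patent only states that an averaging statistic tracks queue length and that a drop profile maps average queue size to a drop probability; the exponentially weighted average, the linear profile shape, and the weight value are conventional WRED assumptions, not details from the patent.

```python
import random

class Wred:
    """Averaged queue length plus a linear drop profile between two thresholds."""

    def __init__(self, min_th, max_th, max_p, weight=0.125):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.weight = weight   # smoothing factor for the averaging statistic
        self.avg = 0.0

    def drop_probability(self, queue_len):
        # update the averaging statistic (exponentially weighted moving average)
        self.avg += self.weight * (queue_len - self.avg)
        if self.avg < self.min_th:
            return 0.0                      # no early drops yet
        if self.avg >= self.max_th:
            return 1.0                      # drop everything
        return self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)

    def should_drop(self, queue_len, rng=random.random):
        return rng() < self.drop_probability(queue_len)

w = Wred(min_th=20, max_th=80, max_p=0.1)
assert w.drop_probability(10) == 0.0   # average still below min threshold
```

Separate `Wred` instances with different profiles could then be kept per class-of-service queue and per packet color, matching the per-queue, per-packet profiles the text mentions.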
As illustrated in Fig. 1, MMU 115 receives packets for storage from the ingress/egress module 130. As discussed above, the ingress/egress module 130 includes a two-stage parser, which is illustrated in Fig. 5. As noted, data is received at the ports 501 of the network device. Data may also be received through the CMIC 502, in which case it passes through an ingress CMIC interface 503. The interface converts the CMIC data from a CMIC-bus format to an ingress-data format. In one embodiment, the data is converted from a 45-bit format to a 172-bit format, where the latter includes 128 bits of data, 20 bits of control, and possibly a 24-bit high-speed header. The data is thereafter sent to the ingress arbiter 504 in 64-bit bursts.
The ingress arbiter 504 receives data from the ports 501 and from the ingress CMIC interface 503, and multiplexes those inputs based on a time-division-multiplexed arbitration scheme. Thereafter, the data is sent to the MMU 510, where any high-speed header is removed and the data is set to the MMU interface format. Packet attributes are then checked, such as end-to-end, Interrupted Bernoulli Process (IBP), or head-of-line (HOL) packets. In addition, 128 bytes of data are snooped, and the high-speed header is passed to the parser ASM 525. If the burst of data received contains an end marker, the CRC result and the packet length are sent to the result matcher 515. The packet length is also estimated from the burst lengths, and a 16-bit packet ID is generated for debugging purposes.
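Time-division-multiplexed arbitration of the kind the ingress arbiter performs can be sketched as a fixed slot calendar visited in order. The calendar contents and generator interface are assumptions; the patent does not specify slot ordering.

```python
# Sources are visited in a fixed calendar order, giving each source a
# deterministic share of arbitration cycles; empty sources are skipped.

def tdm_arbiter(calendar, queues):
    """Yield (source, burst) pairs by visiting sources per the calendar."""
    while any(queues.values()):
        for src in calendar:
            if queues.get(src):
                yield src, queues[src].pop(0)

queues = {"port0": ["a0", "a1"], "port1": ["b0"], "cmic": ["c0"]}
order = list(tdm_arbiter(["port0", "port1", "cmic"], queues))
# bursts interleave across sources rather than draining one source first
```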
Resolver ASM 525 is converted to 64 bursts of data of 4 circulations of each bursts 128 byte bursts of 8 circulations of each bursts.The bursty data of 128 bytes is transferred to tunnel resolver 530 and resolver FIFO 528 simultaneously to keep identical packet sequence.Tunnel resolver 530 has determined whether to adopt the tunnel encapsulation of any kind, comprises MPLS and IP tunnel effect.In addition, this tunnel resolver is also checked outside and inner label.By dissection process, session initializtion protocol (SIP) is offered VLAN based on subnet, wherein, if when packet is ARP(Address Resolution Protocol), inverse arp agreement (RARP) or IP packet, SIP will takes place resolve.Based on source trunk line mapping table, can also create the ID (trunk port grid ID) of trunk ports grid, unless do not have trunk line (trunk) if or this trunk line ID can from the high speed header, obtain.
Tunnel resolver 530 is worked with tunnel detector 531.The tunnel verifier is checked the verification of IP header and the characteristic of IPv6 packet (IPv6over IPv4packets) (checksum) and on UDP tunnel effect and the IPv4.Tunnel resolver 530 utilizes search engine 520 to determine tunnel type by predefined table.
Parser FIFO 528 stores 128 bytes of packet header and 12 bytes of high-speed header, the latter of which is parsed again by deep parser 540. The header bytes are stored while the search engine completes a lookup and becomes ready for the deeper lookup. Other attributes are also maintained, such as the packet length, the high-speed header status, and the packet ID. Deep parser 540 provides three different types of data, including the search results from the "flow-through" search engine 520, the internal parse results, and the high-speed module header. Specific packet types are determined and passed to the search engine. Deep parser 540 reads the data from the parser FIFO, and the predefined fields are parsed. The search engine provides lookup results based on the values passed to it, where the packet ID is checked to maintain packet order.
Deep parser 540 also uses protocol checker 541 to check the inner IP header checksum, to check for denial-of-service attack attributes and errors in the high-speed module header, and to perform a martian check. The deep parser also works with field processor parser 542 to parse predefined fields and user-defined fields. The predefined fields are received from the deep parser. Those fields include a MAC destination address, a MAC source address, inner and outer tags, an EtherType, IP destination and source addresses, type of service, IPP, IP flags, TDS, TSS, TTL, TCP flags, and flow labels. User-defined fields are also parsable, up to a maximum length of 128 bits.
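The inner IP header checksum verification performed by the protocol checker can be illustrated with the standard IPv4 ones-complement sum from RFC 791. This is a generic software sketch, not the device's hardware implementation; the function name is invented for illustration.

```python
# Illustrative IPv4 header checksum verification (RFC 791 style):
# sum the header as 16-bit words with end-around carry; a header
# whose stored checksum is correct sums to 0xFFFF.
def ipv4_header_checksum_ok(header: bytes) -> bool:
    if len(header) % 2:
        header += b"\x00"          # pad odd-length input
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total == 0xFFFF
```

For example, the well-known sample header `4500 0073 0000 4000 4011 b861 c0a8 0001 c0a8 00c7` passes this check, while any corrupted byte makes it fail.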
As mentioned above, the data that receive on the data that receive on the high-speed port and other local port are to separate individual processing.As shown in Figure 1, high-speed port 108 has own buffer, and data flow in its own resolver 134 from this port.The more details of high speed resolver as shown in Figure 6, its structure is similar to the resolver of secondary shown in Fig. 5, but has some difference.The data that high-speed port 601 receives are transferred to high-speed port assembler 604.This assembler receives these data and high speed header with the form of 64 byte bursts, and is similar to the form that is used for local port.Described data are sent to MMU610 and do not have described high speed header with the MMU interface format.
128 bytes of the data are snooped and sent, together with the high-speed header, to deep parser 640. As with the two-stage parser, end-to-end information is checked, and the parse results are sent in a sideband. Also similarly, the CRC and the packet length are checked, with the results sent to result matcher 615. In addition, a 16-bit packet ID is generated for debugging and for tracking the flow of the packet.
The high-speed version of the deep parser 640 is a subset of the two-stage deep parser 540 and performs similar functions. However, there is no pass-through of information to search engine 620, it cannot skip the MPLS header and parse only the payload, and it does not send deep data to the search engine. Functionally, the high-speed version of FP parser 642 is the same as FP parser 542 discussed above.
The result matcher is illustrated in more detail in Fig. 7. It should be noted that the result matcher may be shared among multiple parsers, or each parser may employ its own result matcher. In the illustrated embodiment, both types of ports 710 and 720 receive data and forward certain values to the result checker through the actions of ingress assembler 715 and ingress arbiter 725. The values forwarded include the port number, the presence of an EOF, the CRC, and the packet length. The result matcher operates as a series of FIFOs, matching search results through the use of search engine 705. On a per-port basis, the tag and MIB event are matched with the packet length and the CRC status. Search results are provided every 8 cycles, both for the network ports and for the high-speed port. If the search latency is less than the incoming packet time, this structure allows the search results to be stored in each port's result matcher; when the search latency is shorter than the packet time, the structure allows waiting for the arrival of the end-of-packet result.
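The per-port FIFO pairing performed by the result matcher can be modeled roughly as follows. This is a toy software model under assumed semantics — search results and end-of-packet statuses pair up purely in arrival order per port — and all names and interfaces here are hypothetical.

```python
# Toy model of a per-port result matcher: search results and
# end-of-packet (CRC, length) statuses arrive independently and
# are paired in FIFO order, per port.
from collections import deque

class ResultMatcher:
    def __init__(self):
        self.search = {}   # port -> deque of search results
        self.status = {}   # port -> deque of (crc_ok, pkt_len)

    def push_search(self, port, result):
        self.search.setdefault(port, deque()).append(result)
        return self._try_match(port)

    def push_status(self, port, crc_ok, pkt_len):
        self.status.setdefault(port, deque()).append((crc_ok, pkt_len))
        return self._try_match(port)

    def _try_match(self, port):
        # Emit a matched triple only when both FIFOs have an entry.
        s, t = self.search.get(port), self.status.get(port)
        if s and t:
            return (s.popleft(),) + t.popleft()
        return None
```

Whichever side arrives first simply waits in its FIFO, which mirrors the text's point that results can be stored when the search completes before the packet ends.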
After the data that receive are resolved and assessed, make according to the information that receives and to pass on decision.This passes on to determine what destination interface is generally packet should send to, although this decision also can be the forgo data bag or gives CPU or other controller by CMIC 111 transfer of data bags.At delivery outlet, the parsing of equipment Network Based and assessment, this packet is modified.If delivery outlet is a high-speed port, this modification comprises the modification of mark, header information or adds module header.This modification is that carry out on the basis with the unit, to avoid producing time-delay when the transfer of data bag.
Fig. 8 is a schematic diagram of a configuration of an egress port arbiter used in the present invention. As shown in Fig. 8, MMU 115 also includes a scheduler 802 that provides arbitration across the eight class-of-service queues 804a-804h associated with each egress port, thereby providing minimum and maximum bandwidth guarantees. It should be noted that while eight classes of service are discussed herein, other class-of-service models are also supported. Scheduler 802 is integrated with a set of minimum and maximum metering mechanisms 806a-806h, each of which monitors traffic flows on a per-class-of-service basis and on a per-egress-port basis. Metering mechanisms 806a-806h support traffic shaping functions and guarantee minimum bandwidth requirements on a per-class-of-service-queue and/or per-egress-port basis, where the scheduling decisions made by scheduler 802 are configured through the traffic shaping mechanisms 806a-806h together with a set of control masks that modify how scheduler 802 uses the traffic shaping mechanisms 806a-806h.
As shown in Figure 8, the minimum and maximum 806a-806h of metrological service monitoring is based on each service queue level and based on each delivery outlet monitoring flow.Minimum and maximum bandwidth metering 806a-806h gives scheduler 802 in order to feedback states information, and scheduler 802 these state informations of response are revised the service order in its whole service queue 804.Therefore, the network equipment 100 can make the system sales merchant carry out a plurality of service models by configuration service queue level 804, thereby supports clear and definite minimum and maximum bandwidth to guarantee.In one embodiment of the invention, the 806a-806h of metrological service detects information flow-rate based on the service queue level, provide the flow of a service queue level whether to be higher or lower than state information minimum and that maximum bandwidth requires, and transmitting this information to scheduler 802, scheduler 802 uses this metrical information to revise its scheduling decision.Like this, the 806a-806h of metrological service assist with service queue level 804 be divided into one group of formation that does not meet the minimum bandwidth requirement, one group meet its minimum bandwidth but do not meet formation that maximum bandwidth requires and one group exceeded the formation that its maximum bandwidth requires.If a formation belongs to the formation that this group does not meet its minimum bandwidth requirement, and this formation has packet, and scheduler 802 will be served this formation according to the scheduling rule of configuration.If formation belongs to this group and do not meet its minimum bandwidth requirement but do not have and surpass the formation that its maximum bandwidth requires, and in this formation packet is arranged, scheduler 802 will be served this formation according to the scheduling rule of configuration.If exceed formation that its maximum bandwidth requires or this formation 
for empty if formation belongs to this group, scheduler 802 will not served this formation.
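The three-way partition of queues just described can be sketched in a few lines. The flag names (`met_min`, `over_max`) and the dictionary layout are assumptions made for illustration; only the classification and service rules follow the text.

```python
# Sketch of the queue partitioning used by the scheduler: a queue is
# serviced only if it is non-empty and has not exceeded its maximum
# bandwidth; min-bandwidth status determines which group it falls in.
def eligible_for_service(queue):
    return bool(queue["packets"]) and not queue["over_max"]

def classify(queues):
    below_min, in_range, over_max = [], [], []
    for name, q in queues.items():
        if q["over_max"]:
            over_max.append(name)       # exceeded maximum: not serviced
        elif not q["met_min"]:
            below_min.append(name)      # below minimum: serviced
        else:
            in_range.append(name)       # between min and max: serviced
    return below_min, in_range, over_max
```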
The minimum and maximum bandwidth metering mechanisms 806a-806h may be implemented using a simple leaky-bucket mechanism that tracks whether a class-of-service queue 804 has consumed its minimum or maximum bandwidth. The minimum and maximum bandwidth for each class of service 804 can be set in the range from 64 Kbps to 16 Gbps, in 64 Kbps increments. The leaky-bucket mechanism has a configurable number of token "leaky" buckets, each bucket being associated, at a configurable rate, with one of the queues 804a-804h. In metering the minimum bandwidth of a class-of-service queue 804, as packets enter the class-of-service queue 804, a number of tokens proportional to the size of the packet is added to the corresponding bucket, which is capped at a high threshold. The leaky-bucket mechanism includes a refresh update interface and a minimum bandwidth setting that defines how many tokens are removed per refresh time unit. A minimum threshold is used to indicate whether a flow has satisfied at least its minimum rate, and a fill threshold is used to indicate how many tokens are in the leaky bucket. When the fill threshold rises above the minimum threshold, a flag indicating that the flow has satisfied its minimum bandwidth requirement is set to true. When the fill threshold falls below the minimum threshold, the flag is set to false.
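A minimal software sketch of the leaky-bucket minimum-bandwidth meter described above follows. The threshold and drain values used in the test are arbitrary illustrative numbers; only the add-on-packet, cap-at-high-threshold, drain-on-refresh, and minimum-threshold-flag behaviors come from the text.

```python
# Leaky-bucket minimum-bandwidth meter: tokens are added in proportion
# to packet size (capped at a high threshold), drained at a configured
# rate on each refresh, and a flag tracks whether the fill level is
# above the minimum threshold.
class MinBandwidthMeter:
    def __init__(self, min_threshold, high_threshold, drain_per_refresh):
        self.min_threshold = min_threshold
        self.high_threshold = high_threshold
        self.drain = drain_per_refresh
        self.fill = 0
        self.min_satisfied = False

    def on_packet(self, size):
        # Add tokens proportional to packet size, up to the ceiling.
        self.fill = min(self.fill + size, self.high_threshold)
        self._update_flag()

    def on_refresh(self):
        # Remove the configured number of tokens per refresh unit.
        self.fill = max(self.fill - self.drain, 0)
        self._update_flag()

    def _update_flag(self):
        self.min_satisfied = self.fill > self.min_threshold
```

A queue whose arrival rate keeps the fill above the minimum threshold is flagged as having met its minimum bandwidth; a lull lets the drain pull the fill back below the threshold and clears the flag.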
After metering mechanisms 806a-806h indicate, via the high threshold, that the specified maximum bandwidth has been exceeded, scheduler 802 stops servicing the queue, and the queue is classified into the group of queues that have exceeded their maximum bandwidth requirement. A flag is then set to indicate that the queue has exceeded its maximum bandwidth. Thereafter, the queue only receives service from scheduler 802 once its fill threshold falls below the high threshold and the flag indicating that it has exceeded its maximum bandwidth is reset.
Maximum rate metering mechanism 808i is used to indicate that the specified maximum bandwidth for a port has been exceeded, and it functions in the same manner as mechanisms 806a-806h when the maximum bandwidth is exceeded. According to one embodiment of the invention, the queue-based and port-based maximum metering mechanisms generally affect whether a queue 804 or a port is included in the scheduling arbitration. As such, the maximum metering mechanisms only have a traffic-limiting effect on scheduler 802.
The minimum metering of the class-of-service queues 804, on the other hand, has a more complex interaction with scheduler 802. In one embodiment of the invention, scheduler 802 supports a variety of scheduling disciplines that emulate the bandwidth-sharing capabilities of a weighted fair queuing scheme. The weighted fair queuing scheme is a weighted version of the packet-based fair queuing scheme, which is defined as a method of providing "bit-based round robin" scheduling of packets. As such, packets are scheduled for access to an egress port based on their delivery time, which is computed as if the scheduler were capable of providing bit-based round-robin service. A related weighting field influences the specifics of how the scheduler makes use of the minimum metering mechanisms, with the scheduler attempting to provide minimum bandwidth guarantees.
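The "bit-based round robin" idea behind weighted fair queuing can be approximated by stamping each packet with a virtual finish time — its length scaled by the queue's weight — and always serving the packet with the smallest finish time. The sketch below is a generic WFQ approximation, not the scheduler actually used in the device; all names are illustrative.

```python
# Generic weighted-fair-queuing approximation: each queue's packets are
# stamped with cumulative virtual finish times (length / weight), and
# the packet with the smallest finish time is dequeued first.
import heapq

class WeightedFairScheduler:
    def __init__(self, weights):
        self.weights = weights                    # queue id -> weight
        self.vfinish = {q: 0.0 for q in weights}  # last finish per queue
        self.heap = []                            # (finish, seq, queue, pkt)
        self.seq = 0                              # tie-breaker

    def enqueue(self, queue, pkt_len, pkt):
        # Next finish time: previous finish plus length scaled by weight.
        self.vfinish[queue] += pkt_len / self.weights[queue]
        heapq.heappush(self.heap, (self.vfinish[queue], self.seq, queue, pkt))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        _, _, queue, pkt = heapq.heappop(self.heap)
        return (queue, pkt)
```

With equal packet sizes, a queue of weight 2 accumulates finish times half as fast as a queue of weight 1 and is therefore served roughly twice as often — the bandwidth-sharing behavior the text attributes to the scheme.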
In one embodiment of the invention, the minimum bandwidth guarantee is a relative bandwidth guarantee, where a related field determines whether the scheduler treats the minimum bandwidth metering settings as specifying a relative or an absolute bandwidth guarantee. If the related field is set, the scheduler treats the minimum bandwidth settings 806 as a relative bandwidth specification, and scheduler 802 then attempts to provide relative bandwidth sharing across the backlogged queues 804.
Fig. 9 is a functional diagram of the multi-part parser. In certain embodiments of the present invention, the multi-part parser is incorporated into deep parser 540. The multi-part parser is made up of multiple parsing blocks 910, 920, etc., with each block dedicated to parsing a specific portion of the incoming packet. Each block begins its parsing at a zero offset, so that when the parsed portion output by one block is provided to the next block, the fields that the next block is to parse start at the zero position. Thus, if one block is dedicated to parsing tunnel values and the next block is dedicated to parsing layer 3 fields, the former block parses its own fields and then provides the parsed portion, without the tunnel encapsulation, to the next block. Because the parsing by the blocks is cumulative, subsequent blocks need not search through the packet header for the relevant fields. In addition, this allows the functions of the individual blocks to be pipelined.
In certain embodiments, data is provided, in step 901, to the first stage 910, which parses the layer 2 values; then, in step 915, the parsed portion is provided to the second stage 920, which parses the tunnel values, including IP or MPLS values. Thereafter, the third stage 930 parses the layer 3 values, and the fourth stage 940 parses the layer 4 values. After the last stage, in step 975, the relevant and/or requested field values are output to be searched, modified, or replaced. Other parsing blocks, including a decapsulation engine, may also be used. The number of blocks is not limited to four or five; rather, the number of blocks employed depends on the overall parsing requirements.
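The zero-offset, cumulative parsing described above can be sketched as a chain of stages, each of which consumes its own fields from the front of the buffer and hands the remainder to the next stage. The field widths below are simplified placeholders (a 14-byte layer 2 header, a 4-byte tunnel label, and so on), not the actual header layouts handled by the device.

```python
# Schematic model of the multi-part parser: every stage extracts its
# fields from offset zero and passes the remaining bytes onward, so
# the next stage also sees its fields at offset zero.
class ParserStage:
    def __init__(self, name, field_len):
        self.name = name
        self.field_len = field_len        # bytes this stage consumes

    def parse(self, data):
        fields = data[:self.field_len]    # always at offset zero
        remainder = data[self.field_len:] # next stage starts at zero
        return {self.name: fields}, remainder

def run_pipeline(stages, packet):
    results, rest = {}, packet
    for stage in stages:
        parsed, rest = stage.parse(rest)
        results.update(parsed)
    return results, rest

stages = [
    ParserStage("layer2", 14),  # e.g. an Ethernet header
    ParserStage("tunnel", 4),   # e.g. an MPLS label
    ParserStage("layer3", 20),  # e.g. an IPv4 header
    ParserStage("layer4", 8),   # e.g. a UDP header
]
```

Because each stage strips what it consumed, no stage ever has to scan past earlier headers to find its fields, which is also what makes the stages pipelineable.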
Fig. 9 also shows communication 918 between the blocks. These inter-block interactions allow the multi-part parser to respond to particular circumstances. One specific communication that may be used is to skip the parsing operation of the next block when it is not needed. A typical example is when it is initially determined that the portion should be decapsulated, but another processing step finds that the packet is not an IP tunnel but rather IP over IP, so that the portion need only be parsed for IPv4 values.
The foregoing is a description of specific embodiments of the invention. Obviously, other variations and modifications may be made to the embodiments described above while still attaining some or all of the advantages of the invention. Accordingly, the claims of the invention are intended to cover all such variations and modifications of the above embodiments as fall within the spirit and scope of the invention.
This application claims priority to U.S. Provisional Patent Application No. 60/653,953, filed February 18, 2005.

Claims (10)

1. A network device for processing data in a data network, the network device comprising:
a plurality of ports, configured to receive data from the data network and to send processed data to the data network;
a memory management unit, configured to control and communicate with a memory external to the network device, to store data in the memory, and to retrieve data from the memory; and
a multi-part parser, configured to communicate with the plurality of ports and the memory management unit, and to parse the data received from the data network;
wherein the multi-part parser comprises several parsing blocks configured to serially parse the data, each block parsing particular values, and each block other than a first block receiving, from a prior block, parsed data having the particular values at a zero offset position.
2. The network device according to claim 1, wherein the several parsing blocks comprise four parsing blocks.
3. The network device according to claim 2, wherein the four parsing blocks comprise a parsing block dedicated to parsing layer 2 fields, a parsing block dedicated to parsing layer 3 fields, and a parsing block dedicated to parsing layer 4 fields.
4. The network device according to claim 1, wherein the several parsing blocks are connected by intercommunication lines, and wherein each parsing block other than the first block may receive, over the intercommunication lines, a signal sent from a prior parsing block directing it to skip the parsing it would otherwise perform.
5. A method of processing data in a network device, the method comprising the steps of:
receiving data at a plurality of ports of the network device;
parsing the data by a multi-part parser;
storing, by a memory management unit, the data in a memory external to the network device;
retrieving the received data from the memory and, if needed, modifying the received data based on determined attributes to produce processed data;
sending the processed data to an egress port of the plurality of ports;
wherein the step of parsing the data comprises serially parsing the data by several parsing blocks, wherein each block is used to parse particular values, and each block other than a first block receives, from a prior block, parsed data having the particular values at a zero offset position.
6. The method according to claim 5, wherein the step of serially parsing the data comprises serially parsing the data by four parsing blocks.
7. The method according to claim 6, wherein the four parsing blocks comprise a parsing block dedicated to parsing layer 2 fields, a parsing block dedicated to parsing layer 3 fields, and a parsing block dedicated to parsing layer 4 fields.
8. A network device for processing data, the network device comprising:
port means for receiving data and for sending processed data through an egress port;
parsing means for parsing the received data by a multi-part parser;
memory means for storing, by a memory management unit, the received data in a memory external to the network device, and for retrieving the stored data therefrom;
modification means for modifying the retrieved data, based on determined attributes, to produce the processed data;
wherein the parsing means comprises means for serially parsing the received data by several parsing blocks, wherein each block is used to parse particular values, and each block other than a first block receives, from a prior block, parsed data having the particular values at a zero offset position.
9. The network device according to claim 8, wherein the means for serially parsing the received data comprises means for serially parsing the received data by four parsing blocks.
10. The network device according to claim 8, wherein the network device further comprises search means for performing a search on lookup tables to determine attributes of the received data.
CNB2006100041863A 2005-02-18 2006-02-20 Network device and method of data processing in data network Expired - Fee Related CN100499588C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US65395305P 2005-02-18 2005-02-18
US60/653,953 2005-02-18
US11/154,827 2005-06-17

Publications (2)

Publication Number Publication Date
CN1832456A CN1832456A (en) 2006-09-13
CN100499588C true CN100499588C (en) 2009-06-10

Family

ID=36994461

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100041863A Expired - Fee Related CN100499588C (en) 2005-02-18 2006-02-20 Network device and method of data processing in data network

Country Status (1)

Country Link
CN (1) CN100499588C (en)

Also Published As

Publication number Publication date
CN1832456A (en) 2006-09-13

Similar Documents

Publication Publication Date Title
EP1694004B1 (en) Traffic policing with programmable registers
US8085668B2 (en) Timestamp metering and rollover protection in a network device
US20060114912A1 (en) Rate limiting and minimum and maximum shaping in a network device
US7463630B2 (en) Multi-part parsing in a network device
US20060187832A1 (en) Filter based range check in a network device
US8457131B2 (en) Dynamic table sharing of memory space within a network device
TWI323108B (en) Powerful and expandable pipeline architecture for a network device
CN100544320C (en) The method of the network equipment and deal with data
US20060187965A1 (en) Creating an IP checksum in a pipeline architecture with packet modification
CN100486229C (en) Network apparatus and method for data processing in data network
EP1694005B1 (en) Flexible packet modification engine for a network device
US20060187948A1 (en) Layer two and layer three virtual private network support in a network device
CN100499588C (en) Network device and method of data processing in data network
CN100493036C (en) Network apparatus and method for data processing
US20060187919A1 (en) Two stage parser for a network
US20060187923A1 (en) Dynamic filter processor key generation based on packet type
CN100486226C (en) Network device and method for processing data in same
US8331380B2 (en) Bookkeeping memory use in a search engine of a network device
US20060187828A1 (en) Packet identifier for use in a network device
US20060203824A1 (en) Passing values through a memory management unit of a network device
US8228932B2 (en) Layout architecture for expandable network device
US20060187924A1 (en) Ingress handling of data in a network device
US20060187920A1 (en) Flexible packet modification engine
US20060187936A1 (en) Table searching techniques in a network device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CI02 Correction of invention patent application

Correction item: Priority

Correct: 2005.06.17 US 11/154,827

False: Lack of priority second

Number: 37

Page: The title page

Volume: 22

COR Change of bibliographic data

Free format text: CORRECT: PRIORITY; FROM: MISSING THE SECOND ARTICLE OF PRIORITY TO: 2005.6.17 US 11/154,827

C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090610

Termination date: 20160220