CN101645852A - Equipment and method for classifying network packet - Google Patents

Equipment and method for classifying network packet

Info

Publication number
CN101645852A
CN101645852A (application CN200910203662A)
Authority
CN
China
Prior art keywords
module
node
linear
label
keyword
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200910203662A
Other languages
Chinese (zh)
Other versions
CN101645852B (en)
Inventor
王永纲
章涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN2009102036628A
Publication of CN101645852A
Application granted
Publication of CN101645852B
Expired - Fee Related
Anticipated expiration


Abstract

The invention provides equipment and a method for classifying network packets. The equipment comprises a pipeline tree search module, a classification core, a node memory, and a rule memory. The pipeline tree search module receives the keyword of a network packet, searches the first K levels of a tree structure (K a positive integer), and sends the next-node address and linear comparison information obtained by the search, together with the keyword, to the classification core, and/or outputs a network packet identifier and a flow identifier. The classification core receives the keyword, next-node address, and linear comparison information output by the pipeline tree search module, accesses the node memory and the rule memory, searches the part of the tree below level K, performs the linear rule comparison, and outputs the network packet identifier and flow identifier. The node memory stores the node information of the tree below level K, and the rule memory stores the linear comparison rules. The equipment and method improve the utilization of each memory and the speed of packet classification.

Description

Equipment and method for classifying network packets
Technical field
The present invention relates generally to the field of computer network communication, and more specifically to techniques for classifying network packets.
Background technology
Packet classification is a fundamental technology in computer network communication and network security. With the rapid growth of the Internet, network load has increased faster than router capacity, causing network congestion and packet loss. The conventional router model of treating every packet with "equal, best-effort" service no longer satisfies all network users; some users are willing to pay more for better network service, which requires routers to provide functions such as differentiated services, traffic shaping, and traffic-based billing. These requirements have directly driven the development and application of packet classification techniques. Moreover, with the growing popularity of multimedia services and P2P applications, and the proliferation of malicious attacks, scans, viruses, and worms on the Internet, firewall devices must process large numbers of network connections at very high speed and track their connection states. All of these applications require equipment that can classify network packets: matching fields such as source address, destination address, source port, and destination port against a configured rule set, identifying different kinds of packets, and processing each accordingly.
Much current research can achieve wire-speed packet classification at OC-48 (2.5 Gbps), but at higher speeds such as OC-192 (10 Gbps) and OC-384 (20 Gbps) and above, with rule sets exceeding roughly the 10k magnitude, achieving wire-speed packet classification remains difficult. Current implementations are mainly based on TCAM (Ternary Content Addressable Memory). TCAM has several drawbacks: it is expensive, power-hungry, and low in density. These problems motivate the exploration of new packet classification implementations.
The HiCuts algorithm (Packet Classification using Hierarchical Intelligent Cuttings), proposed by Pankaj Gupta and Nick McKeown, is an effective tree-based packet classification algorithm. It treats a rule over d fields as a rectangle in a d-dimensional space and repeatedly cuts the space according to certain heuristics until the number of rules in each subspace falls below a threshold. The cutting process can be represented as a tree: each tree node stores the information of one cut, such as which dimension is cut and into how many parts, and the rules are distributed among the leaf nodes of the tree. To look up a packet header, the search starts from the root of the tree and follows a path down to a leaf node; a linear comparison of the IP packet against the small number of rules under that leaf then completes the classification.
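The cut-and-scan lookup described above can be sketched in a few lines of Python. This toy model (dictionary-based nodes, equal-width cuts on one dimension per node, rules as inclusive ranges) is a conceptual illustration only, not the patent's hardware design; all names are invented.

```python
# Toy sketch of the HiCuts idea: each internal node cuts one dimension of
# the d-dimensional rule space into equal parts; rules live at the leaves,
# where a short linear scan finishes the lookup.

def hicuts_lookup(node, point):
    """node: {'dim','lo','hi','parts','children'} or {'rules': [...]}."""
    while "rules" not in node:                 # walk down to a leaf
        width = (node["hi"] - node["lo"]) // node["parts"]
        idx = min((point[node["dim"]] - node["lo"]) // width,
                  node["parts"] - 1)
        node = node["children"][idx]
    for rule in node["rules"]:                 # linear scan at the leaf
        if all(lo <= p <= hi for p, (lo, hi) in zip(point, rule["box"])):
            return rule["flow_id"]
    return None                                # no matching rule
```

A two-dimensional rule set with one cut already shows the shape of the search: one node traversal, then a scan of at most a handful of rules.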
Some HiCuts-based packet classification methods already exist, for example searching the HiCuts tree in software on the CPU of a computer system or on a network processor; however, limited by device efficiency and the structure of the software implementation, their classification speed is low.
Summary of the invention
To address the above problems, the present invention proposes a network packet classification device comprising a pipeline tree search module, a classification core, a node memory, and a rule memory. The pipeline tree search module receives the keyword of a network packet, searches the first K levels of the tree structure (K a positive integer), and sends the next-node address and linear comparison information obtained by the search, together with the keyword, to the classification core, and/or outputs a network packet identifier and a flow identifier. The classification core receives the keyword, next-node address, and linear comparison information output by the pipeline tree search module, accesses the node memory and the rule memory, searches the part of the tree below level K, performs the linear rule comparison, and outputs the network packet identifier and flow identifier. The node memory stores the node information of the tree below level K; the rule memory stores the linear comparison rules.
According to an embodiment of the invention, the pipeline tree search module comprises a root node register, K-1 single-level node memory modules, a root node search module, K-1 node search modules, and a control module. The root node register stores the root node information of the tree. The K-1 single-level node memory modules (for levels 2 through K) each store the node information of one of the first K levels of the tree, excluding the root. The root node search module receives the keyword, accesses the root node register, searches the root node, and sends the keyword, together with the next-node address, linear comparison information, and/or flow identifier obtained by the search, to the level-2 node search module. The K-1 cascaded node search modules (levels 2 through K) each receive the keyword and the next-node address, linear comparison information, and/or flow identifier from the root node search module or the previous node search module, fetch the corresponding node from the corresponding single-level node memory module, search it, and send the result to the next node search module. The level-K node search module sends the keyword and the next-node address, linear comparison information, and/or flow identifier obtained by its search to the control module. The control module receives these and either outputs the network packet identifier and flow identifier, or sends the keyword, next-node address, and linear comparison information to the classification core.
According to an embodiment of the invention, the classification core comprises a label allocation module, a parallel tree search module, a parallel linear rule comparison module, and a core classification result collection module. The label allocation module receives the keyword, next-node address, and linear comparison information output by the pipeline tree search module, assigns a label to each keyword, sends the label, keyword, and next-node address to the parallel tree search module, sends the label, keyword, and linear comparison information to the parallel linear rule comparison module, and receives labels returned by the core classification result collection module for reuse. The parallel tree search module receives the label, keyword, and next-node address from the label allocation module, accesses the node memory at the next-node address, and searches the tree until it finds a leaf node; if the leaf node is empty, it outputs the label and the default flow identifier, and if the leaf node is non-empty, it sends the label and linear comparison information to the parallel linear rule comparison module. The parallel linear rule comparison module receives the labels, keywords, and linear comparison information sent by the label allocation module and the parallel tree search module, accesses the rule memory, generates the flow identifier that matches the keyword, and sends the label, network packet identifier, and flow identifier to the core classification result collection module. The core classification result collection module receives the labels, network packet identifiers, and flow identifiers sent by the parallel tree search module and the parallel linear rule comparison module, returns the labels to the label allocation module, and outputs the network packet identifiers and flow identifiers.
According to an embodiment of the invention, the parallel tree search module comprises a tree search keyword buffer, a label and next-node-address queue module, a tree search engine scheduler, and a plurality of tree search engines. The tree search keyword buffer stores the keywords sent by the label allocation module. The label and next-node-address queue module stores the labels and next-node addresses sent by the label allocation module. The tree search engine scheduler controls the tree search engines: when an idle tree search engine exists and the label and next-node-address queue module is not empty, it reads a label and next-node address from the queue module, sends them to the idle engine, and starts it. Each tree search engine, once started under the scheduler's control, accesses the tree search keyword buffer and the node memory according to its label and next-node address, reads the required fields of the keyword from the buffer, and searches the tree down to a leaf node; if the leaf node is empty, it outputs the label, network packet identifier, and default flow identifier, and if the leaf node is non-empty, it outputs the label and linear comparison information.
According to an embodiment of the invention, the tree search keyword buffer is a dual-port memory.
According to an embodiment of the invention, the parallel linear rule comparison module comprises a linear comparison keyword buffer, a first label and linear-comparison-information queue module, a second label and linear-comparison-information queue module, a rule comparator, a linear comparison engine scheduler, and a plurality of linear comparison engines. The linear comparison keyword buffer receives the keywords sent by the parallel tree search module. The first queue module stores the labels and linear comparison information sent by the label allocation module; the second queue module stores those sent by the parallel tree search module. The linear comparison engine scheduler controls the linear comparison engines: when an idle linear comparison engine exists and at least one of the two queue modules is non-empty, it sends a label and linear comparison information to the idle engine and starts it. Each linear comparison engine, under the scheduler's control, accesses the linear comparison keyword buffer by label and the rule memory according to the linear comparison information, and receives comparison results from the rule comparator until it either finds a rule matching the keyword or has compared all rules under the leaf node; it thereby obtains the matching flow identifier or the default flow identifier and outputs the label, network packet identifier, and flow identifier. The rule comparator receives the keyword sent by the linear comparison keyword buffer, compares it with the rule output by the rule memory, and sends the comparison result to the linear comparison engine.
According to an embodiment of the invention, the linear comparison keyword buffer is a dual-port memory.
According to an embodiment of the invention, at least one of the node memory and the rule memory is a dual-port memory.
According to an embodiment of the invention, the device further comprises a classification result collection module connected to the pipeline tree search module and the classification core; it receives the network packet identifiers and flow identifiers generated by the pipeline tree search module and the classification core and outputs the classification results.
According to an embodiment of the invention, there are at least two classification cores, and the pipeline tree search module sends the keyword, next-node address, and linear comparison information to an idle classification core.
According to an embodiment of the invention, the network packet classification device is integrated on a single chip.
The invention also proposes a method for classifying network packets, comprising the following steps. A pipeline tree search module receives the keyword of a network packet, searches the first K levels of a tree structure (K a positive integer), and sends the next-node address and linear comparison information obtained by the search, together with the keyword, to a classification core, and/or outputs a network packet identifier and a flow identifier. The classification core receives the keyword, next-node address, and linear comparison information output by the pipeline tree search module, accesses the node memory and the rule memory, searches the part of the tree below level K, performs the linear rule comparison, and outputs the network packet identifier and flow identifier, where the node memory stores the node information of the tree below level K and the rule memory stores the linear comparison rules.
The proposed equipment and method for classifying network packets avoid the use of TCAM and improve memory utilization and packet classification speed.
Description of drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is the data structure schematic diagram of tree node according to an embodiment of the invention;
Fig. 2 is the structural representation of classifying network packet equipment according to an embodiment of the invention;
Fig. 3 is the structural representation of streamline tree search module according to an embodiment of the invention;
Fig. 4 is the structural representation of classification core according to an embodiment of the invention;
Fig. 5 is the structural representation of parallel tree search module according to an embodiment of the invention;
Fig. 6 is the structural representation of parallel linear programming comparison module according to an embodiment of the invention;
Fig. 7 is a schematic curve showing how storage space varies with the number of rules according to an embodiment of the invention;
Fig. 8 is a schematic curve showing how classification speed varies with the number of rules according to an embodiment of the invention.
Embodiment
Embodiments of the invention are described in detail below; examples of the embodiments are shown in the accompanying drawings. The embodiments described below with reference to the drawings are exemplary, intended only to explain the invention, and are not to be construed as limiting it.
Before packet classification, the present invention first partitions the original rules to build the HiCuts tree. After the tree is built, all original rules are expressed in range format, consisting of a rule upper bound and a rule lower bound; for simplicity, the upper and lower bounds are collectively referred to as the rule.
As an embodiment of the invention, the keyword consists of source address, destination address, source port, destination port, IP protocol, TOS (Type of Service), and network packet identifier. As an embodiment, a rule consists of source address, destination address, source port, destination port, IP protocol, TOS, and flow identifier. The network packet identifier is used after classification to locate the corresponding network packet so that it can be processed according to the flow identifier output by the device. As an embodiment, if no flow identifier matches the keyword, 0xFFFF is output; in this specification this is called the default flow identifier.
As an embodiment of the invention, the node structure used by the HiCuts tree comprises intermediate nodes and leaf nodes, as shown in Fig. 1. The node width is 32 bits. Intermediate nodes and leaf nodes are distinguished by a 1-bit flag F: F = 0 indicates an intermediate node and F = 1 a leaf node. Besides the flag F, an intermediate node has four fields: cut field C, start bit S, mask M, and child node address A. The cut field takes 3 bits and is the index of the keyword field chosen when this node was cut; the start bit takes 5 bits and is the lowest bit position involved in cutting that field; the mask takes 8 bits and masks off the bits not involved in the cut; the child node address takes 15 bits and is the base address at which the child nodes produced by the cut are stored. Besides the flag, a leaf node has two fields: rule count RC and rule start address RA. The rule count RC takes 3 bits and gives the number of rules under this leaf node; the rule start address takes 24 bits and is the position at which those rules are stored in the rule memory. When the rule count RC of a leaf node equals 0, the leaf node is called an empty node.
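The bit budget above (1 + 3 + 5 + 8 + 15 = 32 bits for an intermediate node) can be made concrete with a small pack/unpack sketch. The exact bit ordering is not specified in the text, so the ordering below — flag F in the most significant bit, then the remaining fields from high to low — is an assumption for illustration.

```python
# Illustrative sketch of the 32-bit HiCuts node word. Assumed layout, from
# most- to least-significant bit: F(1) | C(3) | S(5) | M(8) | A(15) for
# intermediate nodes and F(1) | RC(3) | pad(4) | RA(24) for leaf nodes.

def pack_intermediate(c, s, m, a):
    """F=0, cut field C (3b), start bit S (5b), mask M (8b), child addr A (15b)."""
    assert c < 8 and s < 32 and m < 256 and a < (1 << 15)
    return (c << 28) | (s << 23) | (m << 15) | a

def pack_leaf(rc, ra):
    """F=1, rule count RC (3b), rule start address RA (24b)."""
    assert rc < 8 and ra < (1 << 24)
    return (1 << 31) | (rc << 28) | ra

def unpack(word):
    if word >> 31:                       # F = 1: leaf node
        return ("leaf", (word >> 28) & 0x7, word & 0xFFFFFF)
    return ("intermediate", (word >> 28) & 0x7, (word >> 23) & 0x1F,
            (word >> 15) & 0xFF, word & 0x7FFF)
```

Packing and unpacking are inverse operations, which is all a hardware decoder of this word needs.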
As an embodiment of the invention, the search of a tree node comprises the following steps:
(1) Fetch the node from the corresponding node memory according to the next-node address. For the root node, no next-node address is needed; it is fetched directly from the root node register.
(2) If the node flag F is 0, i.e. the node is an intermediate node, go to step (3); if the node flag F is 1, i.e. the node is a leaf node, go to step (4).
(3) Select the keyword field indicated by the node's cut field C, zero-extend it in the high bits to 32 bits, shift it right by S bits, AND the low 8 bits of the result with the mask M, add the result to the child node address A to produce the next-node address, then go to step (1).
(4) If the rule count RC of the leaf node is not 0, i.e. the node is a non-empty leaf, output the rule count RC and rule start address RA; below, RC and RA together are referred to as the linear comparison information. If RC equals 0, i.e. the node is an empty node, output the network packet identifier and the default flow identifier.
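Steps (1)-(4) can be sketched as a loop, with nodes represented as plain tuples — ("leaf", RC, RA) or ("intermediate", C, S, M, A) — carrying the fields defined earlier. The memory map and all names are illustrative assumptions, not the patent's hardware.

```python
# Sketch of the node-search loop of steps (1)-(4). `node_mem` maps
# addresses to nodes, and `key` is a list of keyword fields, each already
# zero-extended to 32 bits.

DEFAULT_FLOW_ID = 0xFFFF

def search_tree(root, node_mem, key, pkt_id):
    node = root                                   # step (1): root from register
    while True:
        if node[0] == "leaf":                     # step (4)
            _, rc, ra = node
            if rc == 0:                           # empty leaf: default flow id
                return ("result", pkt_id, DEFAULT_FLOW_ID)
            return ("linear", rc, ra)             # linear comparison info
        _, c, s, m, a = node                      # step (3): intermediate node
        child_index = (key[c] >> s) & 0xFF & m    # shift by S, mask low byte with M
        node = node_mem[a + child_index]          # step (1): fetch next node
```

The child index is at most 8 bits wide, consistent with the 8-bit mask M and a maximum of 256 cuts per node.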
Fig. 2 shows the structure of the network packet classification device according to an embodiment of the invention. The device comprises a pipeline tree search module, a node memory, a rule memory, two classification cores, and a classification result collection module.
The pipeline tree search module receives the keyword of a network packet and searches the first K levels of the tree structure. As an embodiment of the invention, K can be chosen according to the characteristics of the tree: if MAX_CUTS is small and the tree is deep, a somewhat larger K may be chosen; if MAX_CUTS is large and the tree is shallow, a smaller K may be chosen, where MAX_CUTS is the maximum number of cuts per tree node.
Fig. 3 shows the structure of the pipeline tree search module for K = 2 according to an embodiment of the invention. This embodiment comprises a first-level node register, a second-level node memory, a first-level node search module, a second-level node search module, and a control module.
The first-level node register stores the first-level node. Since the first-level node is the root node, there is only one node and no address is needed, so it is kept in a register.
The first-level node search module receives the keyword of the network packet, accesses the first-level node register, searches the root node, and outputs the keyword, next-node address, linear comparison information, and flow identifier to the second-level node search module.
The second-level node search module receives the output of the first-level node search module, accesses the second-level node memory, searches the second-level node, and outputs the keyword, next-node address, and linear comparison information and/or flow identifier to the control module.
The control module receives the output of the second-level node search module and either outputs the network packet identifier and flow identifier of the packet, or sends the keyword, next-node address, and linear comparison information to an idle classification core.
As an embodiment of the invention, the node memory is a dual-port memory storing the node information of the HiCuts tree below level K. As an embodiment, both ports of the node memory are readable and at least one port is writable; each port connects to one classification core, and the readable-and-writable port is multiplexed: during packet classification it connects to a classification core and outputs tree node information, and during tree updates it connects to an external bus and receives the new tree structure information from outside.
As an embodiment of the invention, the rule memory is a dual-port memory storing the linear rules. As an embodiment, the rule memory consists of two memories: a rule upper-bound memory storing the upper bounds of the linear rules and a rule lower-bound memory storing the lower bounds. The flow identifier of each rule's upper bound is identical to that of its lower bound; for simplicity, the upper and lower bounds are collectively called the linear rule. As an embodiment, both ports of the upper-bound and lower-bound memories are readable and at least one port is writable; each port connects to one classification core, and the readable-and-writable port is multiplexed to connect to a classification core and output rules during packet classification. As an embodiment, all rules under the same leaf node are stored in one contiguous region of the rule memory. The rule memory may also include a rule update module that, when rules are added or deleted, connects to an external bus and receives the new linear rules from outside.
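The linear comparison that the upper- and lower-bound memories support can be sketched as follows: a keyword matches a range rule when every field lies between the rule's lower and upper bounds, and the rules under one leaf occupy a contiguous region starting at the rule start address RA. The dictionary-based memory layout and all names are assumptions for illustration.

```python
# Sketch of the linear rule comparison over range rules stored as
# parallel upper- and lower-bound memories.

DEFAULT_FLOW_ID = 0xFFFF

def linear_compare(key, lower_mem, upper_mem, rc, ra):
    """Scan the rc rules stored at base address ra; return the first match."""
    for i in range(rc):
        lo, up = lower_mem[ra + i], upper_mem[ra + i]
        # lo/up carry one bound per keyword field plus a flow identifier
        if all(l <= k <= u for k, l, u in zip(key, lo["bounds"], up["bounds"])):
            return lo["flow_id"]          # upper and lower bounds share the flow id
    return DEFAULT_FLOW_ID
```

If no rule under the leaf matches, the default flow identifier 0xFFFF is returned, matching the behavior described earlier.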
Fig. 4 shows the structure of the classification core according to an embodiment of the invention. The classification core comprises a label allocation module, a parallel tree search module, a parallel linear rule comparison module, and a core classification result collection module.
The label allocation module receives the keyword, next-node address, and linear comparison information output by the pipeline tree search module and assigns a label to each incoming keyword. When a keyword enters the parallel tree search module or the parallel linear rule comparison module, it is stored in a buffer with the label as the address; when the keyword needs to be read back, the label serves as the buffer address. The total number of labels is fixed, so this module also performs buffer flow control: when all labels are allocated, the buffer is full and no further keywords can be accepted. Depending on the input, next-node addresses with their corresponding keywords and labels are sent to the parallel tree search module, while linear comparison information with its corresponding keywords and labels is sent to the parallel linear rule comparison module. As an example, the total number of labels may be chosen as 16, i.e. the label width is 4 bits.
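The fixed pool of labels that doubles as buffer addressing and flow control can be sketched as follows; the class and method names are invented for illustration and are not from the patent.

```python
# Minimal sketch of the fixed label pool: 16 labels that double as buffer
# addresses; allocation fails when the pool is empty, which is how the
# module signals that the keyword buffer is full.

class LabelPool:
    def __init__(self, total=16):
        self.free = list(range(total))    # label width 4 bits -> 16 labels

    def allocate(self):
        """Return a free label, or None when the buffer is full."""
        return self.free.pop(0) if self.free else None

    def release(self, label):
        """Called when the result collector returns a label for reuse."""
        self.free.append(label)
```

Releasing a label makes its buffer slot reusable, mirroring the label's round trip through the core classification result collection module.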
The parallel tree search module receives the label, keyword, and next-node address output by the label allocation module, accesses the node memory at the next-node address, and searches the tree until it finds a leaf node. If the leaf node is empty, it outputs the label and default flow identifier to the core classification result collection module; if the leaf node is non-empty, it outputs the label and linear comparison information to the parallel linear rule comparison module.
The parallel linear rule comparison module receives the labels, keywords, and linear comparison information output by the label allocation module and the parallel tree search module, accesses the rule memory, generates the flow identifier matching the keyword, and outputs the label, network packet identifier, and flow identifier to the core classification result collection module.
The core classification result collection module receives the labels, network packet identifiers, and flow identifiers output by the parallel tree search module and outputs them immediately, and also receives those output by the parallel linear rule comparison module. As an embodiment of the invention, it buffers the latter in a FIFO (first-in first-out memory), for example of depth 16 and width 36 bits, and outputs the FIFO contents when the parallel tree search module has no output. Labels are returned to the label allocation module; network packet identifiers and flow identifiers are output from the classification core.
In an embodiment of the invention, both the parallel tree search module and the parallel linear rule comparison module access the memories in read-only mode, and in a programmable logic device or ASIC a dual-port memory consumes roughly the same resources as a single-port one. Embodiments of the invention therefore design the node memory and rule memory as dual-port and adopt two identical classification cores, doubling packet processing capacity without increasing memory resource consumption. In other embodiments, the number of classification cores can of course be chosen according to circumstances.
Fig. 5 shows the structure of the parallel tree search module according to an embodiment of the invention. The parallel tree search module continues the tree search from the input next-node address until a leaf node is found, then outputs a flow identifier or linear comparison information according to the content of the leaf node. The module comprises a tree search keyword buffer, a label and next-node-address queue module, a tree search engine scheduler, and a plurality of tree search engines.
The tree search keyword buffer receives and stores keywords. In one embodiment of the invention, it is a dual-port memory whose write-port width is the maximum keyword field width multiplied by the number of keyword fields, whose read-port width is the maximum field width, and whose depth, seen from the write port, equals the total number of labels. It receives an input keyword, zero-extends the high bits of each field to 32 bits, and stores the result at the address given by the label. In one embodiment, the buffer has a write-port width of 256 bits with a depth of 16 (equal to the total number of labels) and a read-port width of 32 bits with a depth of 128. Each field of the input keyword is zero-extended to 32 bits, the 6 fields yield 192 bits, and a further 64 zero bits are appended at the high end to obtain 256 bits, which are written into the tree search keyword buffer at the address given by the label.
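The field-packing scheme just described can be modeled in software. The sketch below is illustrative only and not part of the patent: the field widths, the dict-based buffer and the helper names `pack_keyword` and `read_field` are assumptions chosen to mirror the 256-bit write port / 32-bit read port organization.

```python
FIELD_WIDTHS = [32, 32, 16, 16, 8, 8]  # hypothetical widths of the 6 keyword fields

def pack_keyword(fields):
    """Zero-extend each field to a 32-bit slot, concatenate to 192 bits;
    the high 64 bits of the 256-bit write word remain zero."""
    assert len(fields) == len(FIELD_WIDTHS)
    word = 0
    for value, width in zip(fields, FIELD_WIDTHS):
        assert 0 <= value < (1 << width)
        word = (word << 32) | value      # each field occupies one 32-bit slot
    return word

def read_field(buffer, label, field_index, num_fields=6):
    """Model the 32-bit read port: select one field of the keyword
    stored at the address given by the label."""
    word = buffer[label]
    shift = 32 * (num_fields - 1 - field_index)
    return (word >> shift) & 0xFFFFFFFF
```

With this layout, a 16-entry buffer holds 16 labeled keywords on the write side while the read side sees 128 addressable 32-bit fields, matching the depths given above.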
The label and next-node-address queue module stores labels and next-node addresses. In one embodiment of the invention it stores the input labels and next-node addresses in first-in first-out order; for example, it is a FIFO of depth 16 holding the label, next-node address and network packet identifier.
The tree search engine scheduler controls the plurality of tree search engines. Whenever an idle tree search engine exists and the label and next-node-address queue module is not empty, it passes a label and next-node address to the idle engine and starts it. In one embodiment of the invention, the scheduler cyclically polls the state of the engines; when an engine is idle, the scheduler reads one label, next-node address and network packet identifier from the queue module, sends them to that engine and starts it.
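The scheduler's polling behavior can be sketched as a simple round-robin dispatcher. This is a simplified software model under assumed interfaces (`submit`, `tick`, a per-engine `idle` flag); the real scheduler is hardware polling once per clock, not Python.

```python
from collections import deque

class TreeSearchScheduler:
    """Round-robin dispatch sketch: poll engines in a cycle; when one is
    idle and the job queue is non-empty, hand it one job and mark it busy."""
    def __init__(self, num_engines=6):
        self.idle = [True] * num_engines
        self.queue = deque()           # (label, next_node_addr, pkt_id) entries
        self.ptr = 0                   # polling position

    def submit(self, label, next_node_addr, pkt_id):
        self.queue.append((label, next_node_addr, pkt_id))

    def tick(self):
        """One scheduling step: start at most one idle engine per call,
        which naturally staggers the engines' memory accesses."""
        if not self.queue:
            return None
        n = len(self.idle)
        for _ in range(n):
            i = self.ptr
            self.ptr = (self.ptr + 1) % n
            if self.idle[i]:
                self.idle[i] = False
                return i, self.queue.popleft()
        return None                    # all engines busy
```

Because at most one engine is started per tick, no two engines issue their first node-memory request in the same cycle, which is the staggering property the text relies on.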
The tree search engine starts under the control of the tree search engine scheduler, accesses the tree search keyword buffer and the node memory according to the input label and next-node address, reads keyword fields from the buffer, and searches the tree structure down to a leaf node. According to the type of the leaf node it outputs the label, the network packet identifier, and either a default flow identifier or linear comparison information. Under the scheduler's control, the engines access the tree search keyword buffer and the node memory at staggered times.
In the embodiment of Fig. 5, 6 tree search engines are shown, but this is only an example of the invention; in practice the number of tree search engines may be increased or decreased as required.
Taking 6 tree search engines as an example, tree search engines 1 through 6 are identical in structure. Each contains a 6-bit circular shift register whose value is 0x1 while the engine is stopped; once the engine is started, the register shifts cyclically toward the high bits, and the engine performs each step of the tree search in turn under the register's control. The engine operates as follows:
(1) When the register is 0x2, the engine outputs the next-node address to the node memory.
(2) When the register is 0x4, the engine outputs the label; at this moment the node memory outputs the node. The label forms the high 4 bits, and the node's split-field selection the low 3 bits, of the read address sent to the keyword buffer. The node is also delivered to the engine: if it is a non-empty leaf node, the engine outputs the label and linear comparison information; if it is an empty leaf node, the engine outputs the label and the default flow identifier, the tree search ends, and the engine returns to the idle state to await the next start from the scheduler. If it is an intermediate node, the engine proceeds to step (3).
(3) When the register is 0x10, the keyword field output by the keyword buffer is fed into the engine, the next-node address is computed from it, and the engine jumps back to step (1).
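Steps (1) to (3) amount to a fetch/examine/compute loop that descends the tree one node per iteration. The software model below is a hypothetical reconstruction: the node encoding (`kind`, `field_index`, `shift`, `bits`, `children`) is invented for illustration and is not the patent's actual node format.

```python
def tree_search(node_mem, key_fields, root_addr):
    """Simplified model of one engine's search loop:
    fetch a node (steps 1-2), then either finish at a leaf or compute
    the next-node address from a keyword field (step 3)."""
    addr = root_addr
    while True:
        node = node_mem[addr]                       # node memory access
        if node['kind'] == 'empty_leaf':
            # empty leaf: search ends with the default flow identifier
            return ('default_flow', node.get('flow_id', 0))
        if node['kind'] == 'leaf':
            # non-empty leaf: hand off linear comparison information
            return ('linear_info', node['linear_info'])
        # intermediate node: extract the branching bits from one field
        field = key_fields[node['field_index']]
        branch = (field >> node['shift']) & ((1 << node['bits']) - 1)
        addr = node['children'] + branch            # next-node address
```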
Searching one level of the tree requires one access to the node memory followed by several clock cycles to compute the next-node address; with a single tree search engine the memory would sit idle during that computation, lowering its utilization. Embodiments of the invention therefore employ multiple tree search engines, each running under its own shift register. Because the scheduler starts them at different moments, their accesses to the node memory are staggered, so other engines can access the memory while one engine is waiting on its next-node-address computation. This design improves the efficiency of the node memory.
Fig. 6 is a structural diagram of an embodiment of the parallel linear rule comparison module of the invention. According to the input linear comparison information, the module locates the storage position in the rule memory of the rules under the leaf node, reads the rules, and compares them one by one with the keyword. If a matching rule is found, that rule's flow identifier is output as the classification result; otherwise the default flow identifier is output. The parallel linear rule comparison module comprises a linear comparison keyword buffer, a first label and linear comparison information queue module, a second label and linear comparison information queue module, a linear compare engine scheduler, a plurality of linear compare engines and a rule comparator.
The linear comparison keyword buffer receives the input keyword and stores it at the address given by the label. In one embodiment of the invention it is a dual-port memory whose write-port and read-port widths both equal the keyword width, e.g. 128 bits, and whose depth equals the total number of labels, e.g. 16.
The first label and linear comparison information queue module stores, in first-in first-out order, the labels and linear comparison information input by the label distribution module.
The second label and linear comparison information queue module stores, in first-in first-out order, the labels and linear comparison information input by the tree search module.
In one embodiment of the invention, the first and second label and comparison information queue modules are both FIFOs of depth 16.
The linear compare engine scheduler controls the plurality of linear compare engines. Whenever an idle linear compare engine exists and at least one of the first and second queue modules is non-empty, it outputs a label and linear comparison information to the idle engine and starts it.
In one embodiment of the invention, the scheduler cyclically polls the state of the engines. If an engine is idle and at least one queue module is non-empty, it reads one label and linear comparison information entry from a queue module, sends it to that engine and starts it. The first queue module is read first; the second is read only when the first is empty.
The linear compare engine operates under the control of the linear compare engine scheduler. It accesses the linear comparison keyword buffer according to the label and the rule memory according to the linear comparison information, and receives comparison results from the rule comparator until either a rule matching the keyword is found or all rules under the leaf node have been compared; it then obtains the flow identifier matching the keyword and outputs the label, network packet identifier and flow identifier. In one embodiment of the invention, the engine outputs the received label as the read address of the linear comparison keyword buffer and derives the read address of the linear rule memory from the linear comparison information. On receiving a comparison result indicating that the rule does not match the keyword, it increments the rule memory read address by 1 and outputs it again, until a matching rule is found or all rules under the leaf node have been compared, yielding the classification result; it then outputs the label, network packet identifier and flow identifier. Under the scheduler's control, the engines output their read addresses to the linear comparison keyword buffer and the linear rule memory at staggered times.
The rule comparator receives the keyword output by the linear comparison keyword buffer and compares it with the linear rule output by the rule memory. If every field of the keyword is less than or equal to the corresponding field of the rule's upper bound and greater than or equal to the corresponding field of the rule's lower bound, the keyword matches the rule. The comparison result is output to the linear compare engines.
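The matching condition is a field-wise range test, which can be stated compactly. A minimal sketch, assuming the keyword and the rule bounds are given as aligned lists of integer field values:

```python
def rule_matches(key_fields, rule_lower, rule_upper):
    """Field-wise range match: a keyword matches a rule iff every field
    lies within [lower, upper] for the corresponding field."""
    return all(lo <= k <= hi
               for k, lo, hi in zip(key_fields, rule_lower, rule_upper))
```

Storing the lower bounds and upper bounds as separate words, as the rule memory organization above suggests, lets hardware evaluate all field comparisons of one rule in parallel in a single cycle.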
The following description takes 6 linear compare engines as an example. Linear compare engines 1 through 6 are identical in structure. Each contains a 6-bit circular shift register whose value is 0x1 while the engine is stopped; once started, the register shifts cyclically toward the high bits, and the engine performs each linear comparison step in turn under its control. Each engine also contains a rule counter recording the number of rules compared so far, which is 0 while the engine is stopped. The engine operates as follows:
(1) When the register is 0x2, the engine outputs the rule address to the linear rule memory and the label as the read-port address of the keyword memory, and the rule counter is incremented by 1.
(2) When the register is 0x8, the rule arrives from the rule memory and the keyword from the keyword buffer; the network packet identifier and flow identifier are latched in the engine, and the rule and keyword are sent to the rule comparator for comparison.
(3) When the register is 0x10, the rule comparator's result is fed back to the engine. If the rule matches, the engine proceeds to step (4); otherwise the rule address is incremented by 1 and steps (1) to (3) are repeated.
(4) When the register is 0x20, if a rule matched, the engine outputs the network packet identifier and that rule's flow identifier; if instead the rule counter equals the number of rules under the leaf node, it outputs the network packet identifier and the default flow identifier. The label is output at the same time.
On the same principle as the use of multiple tree search engines in the parallel tree search module, multiple linear compare engines are used to improve the efficiency of the rule upper-bound memory and the rule lower-bound memory.
An embodiment of the invention proposes a method of classifying network packets, comprising the following steps:
A pipeline tree search module receives the keyword of a network packet, searches the first K levels of a tree structure, and sends the next-node address and linear comparison information obtained by the search, together with the keyword, to a classification core, and/or outputs a network packet identifier and flow identifier, where K is a positive integer.
A classification core receives the keyword, next-node address and linear comparison information output by the pipeline tree search module, accesses a node memory and a rule memory, searches the part of the tree structure below the K-th level and performs linear rule comparison, and outputs the network packet identifier and flow identifier, wherein the node memory stores the node information of the tree structure beyond the K-th level and the rule memory stores the linear comparison rules.
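The two steps above form a front-end/back-end hand-off that can be sketched as follows. The result-dict keys (`done`, `flow_id`, `next_node_addr`, `linear_info`) are assumptions used only to make the control flow concrete; they are not the patent's interface.

```python
def classify(packet_key, pipeline_search, classification_core):
    """Two-stage classification sketch: the pipeline module searches the
    first K tree levels; if it already reaches a leaf it yields the flow
    identifier directly, otherwise the classification core finishes the
    tree search and the linear rule comparison."""
    result = pipeline_search(packet_key)           # first K levels
    if result['done']:
        return result['flow_id']                   # resolved in the pipeline
    return classification_core(packet_key,
                               result['next_node_addr'],
                               result['linear_info'])
```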
The invention performs well in both classification time and memory footprint. Fig. 7 shows how the memory footprint of an embodiment of the invention changes as the number of rules increases: the curve grows linearly, and the footprint of the described equipment grows only slowly as rules are added. Fig. 8 shows how the classification speed of equipment according to an embodiment of the invention varies as the number of rules increases. Even with a large number of rules the equipment maintains a very high packet classification speed: with 25K rules it reaches a classification speed of 90M packet headers per second, corresponding to a packet processing capability of 237 Gbps at an average packet header length of 330 bytes, and 28.8 Gbps at the minimum packet header length of 40 bytes.
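The quoted throughput figures follow directly from the packet rate; they are reproduced here as a quick check (bit rate = packets/s × bytes/packet × 8, with 1 Gbps = 1e9 bits/s):

```python
# Throughput arithmetic for the figures quoted above.
pps = 90e6            # 90M packet headers per second with 25K rules
avg_hdr_bytes = 330   # average packet header length
min_hdr_bytes = 40    # minimum packet header length

avg_gbps = pps * avg_hdr_bytes * 8 / 1e9   # ~237.6 Gbps (text rounds to 237)
min_gbps = pps * min_hdr_bytes * 8 / 1e9   # 28.8 Gbps
```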
Although embodiments of the invention have been shown and described, those of ordinary skill in the art will appreciate that various changes, modifications, substitutions and variations may be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (12)

1. A network packet classification device, characterized in that it comprises a pipeline tree search module, a classification core, a node memory and a rule memory, wherein
the pipeline tree search module is configured to receive the keyword of a network packet, search the first K levels of a tree structure, and send the next-node address and linear comparison information obtained by the search, together with the keyword, to the classification core, and/or output a network packet identifier and flow identifier, where K is a positive integer;
the classification core is configured to receive the keyword, next-node address and linear comparison information output by the pipeline tree search module, access the node memory and the rule memory, search the part of the tree structure below the K-th level and perform linear rule comparison, and output the network packet identifier and flow identifier;
the node memory is configured to store the node information of the tree structure beyond the K-th level;
the rule memory is configured to store the linear comparison rules.
2. The network packet classification device according to claim 1, characterized in that the pipeline tree search module comprises a root node register, K-1 single-level node memory modules, a root node search module, K-1 node search modules and a control module, wherein
the root node register is configured to store the root node information of the tree structure;
the K-1 single-level node memory modules comprise a second-level single-level node memory module through a K-th-level single-level node memory module, each configured to store the node information of one of the first K levels of the tree structure other than the root node;
the root node search module is configured to receive the keyword, access the root node register, search the root node, and send the keyword together with the next-node address, linear comparison information and/or flow identifier of the keyword obtained by the search to the second-level node search module;
the K-1 node search modules comprise cascaded second-level through K-th-level node search modules, each configured to receive the keyword and the next-node address, linear comparison information and/or flow identifier of the keyword sent by the root node search module or by the node search module of the preceding level, fetch the corresponding node from the corresponding single-level node memory module, search it, and send the search result to the node search module of the next level, wherein the K-th-level node search module is configured to send the keyword together with the next-node address, linear comparison information and/or flow identifier obtained by the search to the control module;
the control module is configured to receive the keyword and the next-node address, linear comparison information and/or flow identifier sent by the K-th-level node search module, and to output the network packet identifier and flow identifier and/or send the keyword, next-node address and linear comparison information to the classification core.
3. The network packet classification device according to claim 1, characterized in that the classification core comprises a label distribution module, a parallel tree search module, a parallel linear rule comparison module and a core classification result pool, wherein
the label distribution module is configured to receive the keyword, next-node address and linear comparison information output by the pipeline tree search module, assign a label to each keyword, send the label, keyword and next-node address to the parallel tree search module and the label, keyword and linear comparison information to the parallel linear rule comparison module, and receive back labels from the core classification result pool for reuse;
the parallel tree search module is configured to receive the label, keyword and next-node address sent by the label distribution module, access the node memory according to the next-node address, perform the tree search and find a leaf node; if the leaf node is an empty node, it outputs the label and a default flow identifier; if the leaf node is a non-empty node, it sends the label and linear comparison information to the parallel linear rule comparison module;
the parallel linear rule comparison module is configured to receive the label, keyword and linear comparison information sent by the label distribution module and the parallel tree search module, access the rule memory, generate the flow identifier matching the keyword, and send the label, network packet identifier and flow identifier to the core classification result pool;
the core classification result pool is configured to receive the label, network packet identifier and flow identifier sent by the parallel tree search module and the parallel linear rule comparison module, send the label back to the label distribution module, and output the network packet identifier and flow identifier.
4. The network packet classification device according to claim 3, characterized in that the parallel tree search module comprises a tree search keyword buffer, a label and next-node-address queue module, a tree search engine scheduler and a plurality of tree search engines, wherein
the tree search keyword buffer is configured to store the keyword sent by the label distribution module;
the label and next-node-address queue module is configured to store the label and next-node address sent by the label distribution module;
the tree search engine scheduler is configured to control the plurality of tree search engines, and, when an idle tree search engine exists and the label and next-node-address queue module is not empty, to read a label and next-node address from the queue module, send them to the idle tree search engine and start it;
the tree search engine is configured to start under the control of the tree search engine scheduler, access the tree search keyword buffer and the node memory according to the label and next-node address, read the fields of the keyword from the tree search keyword buffer, and search the tree structure down to a leaf node; if the leaf node is an empty node, it outputs the label, network packet identifier and a default flow identifier; if the leaf node is a non-empty node, it outputs the label and linear comparison information.
5. The network packet classification device according to claim 4, characterized in that the tree search keyword buffer is a dual-port memory.
6. The network packet classification device according to claim 3, characterized in that the parallel linear rule comparison module comprises a linear comparison keyword buffer, a first label and linear comparison information queue module, a second label and linear comparison information queue module, a rule comparator, a linear compare engine scheduler and a plurality of linear compare engines, wherein
the linear comparison keyword buffer is configured to receive the keyword sent by the parallel tree search module;
the first label and linear comparison information queue module is configured to store the label and linear comparison information sent by the label distribution module;
the second label and linear comparison information queue module is configured to store the label and linear comparison information sent by the tree search module;
the linear compare engine scheduler is configured to control the plurality of linear compare engines, and, when an idle linear compare engine exists and at least one of the first and second label and linear comparison information queue modules is non-empty, to send the label and linear comparison information to the idle linear compare engine and start it;
the linear compare engine is configured, under the control of the linear compare engine scheduler, to access the linear comparison keyword buffer according to the label and the linear rule memory according to the linear comparison information, and to receive the comparison results of the rule comparator until either a rule matching the keyword is found or all rules under the leaf node have been compared, obtaining the flow identifier matching the keyword or a default flow identifier, and to output the label, network packet identifier and flow identifier;
the rule comparator is configured to receive the keyword sent by the linear comparison keyword buffer, compare it with the rule output by the linear rule memory, and send the comparison result to the linear compare engine.
7. The network packet classification device according to claim 6, characterized in that the linear comparison keyword buffer is a dual-port memory.
8. The network packet classification device according to claim 1, characterized in that at least one of the node memory and the rule memory is a dual-port memory.
9. The network packet classification device according to claim 1, characterized in that it further comprises:
a classification result summarizing module, connected to the pipeline tree search module and the classification core, configured to receive the network packet identifier and flow identifier generated by the pipeline tree search module and the classification core, and to output the classification result.
10. The network packet classification device according to claim 1, characterized in that the classification core comprises at least two classification cores, and
the pipeline tree search module is further configured to send the keyword, next-node address and linear comparison information to an idle classification core.
11. The network packet classification device according to any one of claims 1-10, characterized in that the network packet classification device is integrated in a single chip.
12. A method of classifying network packets, characterized in that it comprises the following steps:
a pipeline tree search module receives the keyword of a network packet, searches the first K levels of a tree structure, and sends the next-node address and linear comparison information obtained by the search, together with the keyword, to a classification core, and/or outputs a network packet identifier and flow identifier, where K is a positive integer;
the classification core receives the keyword, next-node address and linear comparison information output by the pipeline tree search module, accesses a node memory and a rule memory, searches the part of the tree structure below the K-th level and performs linear rule comparison, and outputs the network packet identifier and flow identifier, wherein the node memory stores the node information of the tree structure beyond the K-th level and the rule memory stores the linear comparison rules.
CN2009102036628A 2009-06-09 2009-06-09 Equipment and method for classifying network packet Expired - Fee Related CN101645852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009102036628A CN101645852B (en) 2009-06-09 2009-06-09 Equipment and method for classifying network packet

Publications (2)

Publication Number Publication Date
CN101645852A true CN101645852A (en) 2010-02-10
CN101645852B CN101645852B (en) 2011-06-15

Family

ID=41657574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009102036628A Expired - Fee Related CN101645852B (en) 2009-06-09 2009-06-09 Equipment and method for classifying network packet

Country Status (1)

Country Link
CN (1) CN101645852B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100387028C (en) * 2005-04-01 2008-05-07 清华大学 Parallel IP packet sorter matched with settling range based on TCAM and method thereof
CN101141389B (en) * 2007-09-29 2010-06-16 华为技术有限公司 Reinforcement multidigit Trie tree searching method and apparatus

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108491535A (en) * 2018-03-29 2018-09-04 北京小米移动软件有限公司 The classification storage method and device of information
CN110830376A (en) * 2019-11-05 2020-02-21 苏州盛科科技有限公司 INT message processing method and device
CN110830376B (en) * 2019-11-05 2021-11-09 苏州盛科科技有限公司 INT message processing method and device
CN116050956A (en) * 2022-06-17 2023-05-02 南京云次方信息技术有限公司 Cross-border E-commerce logistics freight calculation system
CN116050956B (en) * 2022-06-17 2023-09-26 南京云次方信息技术有限公司 Cross-border E-commerce logistics freight calculation system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110615

Termination date: 20150609

EXPY Termination of patent right or utility model