WO2012081148A1 - Packet classifier, packet classification method and packet classification program


Info

Publication number: WO2012081148A1
Authority: WO (WIPO, PCT)
Prior art keywords: rule, entry, processing, decision tree, block
Application number: PCT/JP2011/005131
Other languages: French (fr)
Inventor: Norio Yamagaki
Original Assignee: NEC Corporation
Application filed by NEC Corporation
Priority to JP2013514491A (JP5807676B2)
Publication of WO2012081148A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/74: Address processing for routing
    • H04L45/745: Address table lookup; Address filtering
    • H04L45/748: Address table lookup; Address filtering using longest matching prefix

Description

  • the present invention relates to packet classification.
  • Packet classification is an important technique by which a router or a switch on a network classifies packets into packet sequences, called flows, each having a set of attributes.
  • Packet classification plays an important role in achieving value-added network applications, such as provision of QoS (Quality of Service) for individual flows and security functions such as a firewall.
  • a "rule (sometimes called a filter)" is defined by using one or more fields included in a packet header.
  • the rule is defined by a plurality of header fields such as a source IP address, a destination IP address and a protocol number specified in an IP (Internet Protocol) header of a packet as well as a source port number and a destination port number specified in a TCP (Transmission Control Protocol) / UDP (User Datagram Protocol) header.
  • a plurality of rules are defined.
  • field values used in the definition of the rule are extracted from the packet header.
  • a combination of the extracted field values is compared with a plurality of rules to determine which rule matches, and thereby the packet is classified into a flow.
  • the combination of the field values extracted from the received packet header is referred to as a "search key”.
  • priority and action are defined for each rule. If the search key matches two or more rules among the plurality of rules, a rule with a higher priority is selected.
  • the action defines how to treat a received packet that matches the rule.
  • Each field in the rule is defined by such methods as Exact Match wherein the field is defined as a specific value, Prefix Match wherein some upper bits are specified while some lower bits are defined as indefinite by the use of wildcard '*', Range Match wherein the field is defined as a range between two specific values, and Wildcard Match wherein the field is defined by using wildcard in units of an individual bit. For example, let us consider a field of 8 bits. In the case of Exact Match, the field is specified by a specific value such as "00110101”. In the case of Prefix Match, the field is specified by a value such as "0011****" that starts from 4 bits of "0011".
  • In the case of Range Match, the field is specified by a range such as [3:64], which allows the 8-bit field to take any value from 3 to 64 in decimal notation.
  • In the case of Wildcard Match, the wildcard is usable in units of a bit, as in "0**10*01".
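  • To make the four match types concrete, the following is a small illustrative Python sketch (not part of the patent text); the function names and the fixed 8-bit width are assumptions chosen for the example.

```python
def exact_match(value: int, pattern: int) -> bool:
    # Exact Match: the field must equal a specific value, e.g. 0b00110101.
    return value == pattern

def prefix_match(value: int, prefix: int, prefix_len: int, width: int = 8) -> bool:
    # Prefix Match: only the upper prefix_len bits are compared,
    # e.g. "0011****" -> prefix=0b0011, prefix_len=4.
    return (value >> (width - prefix_len)) == prefix

def range_match(value: int, low: int, high: int) -> bool:
    # Range Match: the field must lie within [low, high], e.g. [3:64].
    return low <= value <= high

def wildcard_match(value: int, bits: int, care_mask: int) -> bool:
    # Wildcard Match: '*' bits are don't-care, e.g. "0**10*01"
    # -> bits=0b00010001, care_mask=0b10011011 ('1' where the bit is specified).
    return (value & care_mask) == (bits & care_mask)

# The value 0b00110101 (decimal 53) satisfies all four example patterns above.
assert exact_match(0b00110101, 0b00110101)
assert prefix_match(0b00110101, 0b0011, 4)
assert range_match(0b00110101, 3, 64)
assert wildcard_match(0b00110101, 0b00010001, 0b10011011)
```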
  • A TCAM (Ternary Content Addressable Memory) is widely used to perform such matching at high speed.
  • However, the TCAM has disadvantages such as high cost, high power consumption and a large circuit size.
  • Moreover, a rule defined with Range Match needs to be divided into a plurality of rules using Prefix Match, which causes a problem that the number of rules is increased.
  • Non Patent Literature 1 proposes a method using a "decision tree".
  • In this method, matching is not performed against all rules; instead, it is performed against only the small number of rules that could possibly match the search key. As a result, the time required for search processing is reduced.
  • the method using a decision tree will be briefly described with reference to Figs. 1 to 3.
  • Fig. 1 shows an example of a rule set consisting of 16 rules, R0 to R15, defined by using two fields X and Y, each 4 bits long.
  • the fields X and Y each correspond to an actual packet header field such as a source IP address or a source port number.
  • the field X is expressed by a binary number, and '*' denotes the wildcard.
  • the field Y is expressed by Range Match, and a and b in "[a : b]" respectively indicate a lower limit value and an upper limit value (decimal notation). It should be noted that the priority and the action added to each rule are omitted here.
  • Fig. 2 illustrates the rule set shown in Fig. 1 on a 2-dimensional space consisting of two axes of the fields X and Y. Respective digits on the X and Y axes are expressed by decimal notation.
  • In the decision tree method, the multi-dimensional space as shown in Fig. 2 is divided repeatedly, focusing on a plurality of the dimensions.
  • a decision tree is constructed by repeating the region division until the number of rules existing in a post-division region becomes equal to or less than a certain threshold value.
  • a group of rules managed by the post-division region is referred to as a "rule list".
  • the rule lists managed by the respective regions are as follows: [R7, R8, R9, R11] (region 0), [R0, R6, R9, R10, R11, R12] (region 1), [R1, R2, R3, R4, R5, R13, R14] (region 2) and [R10, R14, R15] (region 3). Since each region still manages more rules than the threshold value of 2, the region division is further performed on each region until the number of rules becomes equal to or less than the threshold value. In the example shown in Fig. 3, the whole space is eventually divided into 24 regions. It should be noted that the algorithm for constructing a decision tree is described in Non Patent Literature 1 and Non Patent Literature 2, and its description is omitted here.
  • When performing the packet classification, the decision tree is followed with reference to the search key, and all rules managed by the leaf node reached, whose number is equal to or less than the threshold value, are used for the matching.
  • In the decision tree shown in Fig. 3, the whole space is divided into the above-mentioned four regions at the root node, and the packet is found to belong to region 1 ([0:7], [8:15]) among the four regions. Subsequently, at the node of region 1, the space is further divided evenly in the X and Y directions.
  • Region 1 is thereby divided into four regions: region 10 ([0:3], [8:11]), region 11 ([0:3], [12:15]), region 12 ([4:7], [8:11]) and region 13 ([4:7], [12:15]).
  • the packet is found to belong to the region 12 among the four regions.
  • the region 12 is divided into four regions, and the packet is found to belong to region 122 ([6:7], [8:9]).
  • the matching is performed with respect to two rules R9 and R10 managed in the region 122, and a matching rule is selected.
  • In this example, the packet matches both of the rules R9 and R10, and therefore a rule is selected according to the priority added to each rule (omitted in Fig. 3).
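  • The following is a minimal, illustrative Python sketch of the region-division and lookup idea described above; it is not the exact algorithm of the patent or of Non Patent Literature 1. Regions are simply cut in half along both axes until at most a threshold number of rules remain in a leaf, a rule crossing a cut is copied into several children, and a lookup follows the tree to a leaf and matches only that leaf's rule list. The rules in the usage example are hypothetical, not the R0 to R15 of Fig. 1.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Range = Tuple[int, int]                        # inclusive [low, high]

@dataclass
class Rule:
    rule_id: str
    priority: int
    x: Range                                   # rule restated as a box in the X-Y space
    y: Range

@dataclass
class Node:
    x: Range                                   # region covered by this node
    y: Range
    children: List["Node"] = field(default_factory=list)
    rule_list: List[Rule] = field(default_factory=list)   # leaf nodes only

def _overlaps(a: Range, b: Range) -> bool:
    return a[0] <= b[1] and b[0] <= a[1]

def build(node: Node, rules: List[Rule], threshold: int = 2) -> None:
    inside = [r for r in rules if _overlaps(r.x, node.x) and _overlaps(r.y, node.y)]
    if len(inside) <= threshold or (node.x[0] == node.x[1] and node.y[0] == node.y[1]):
        node.rule_list = inside                # leaf node: manage the small rule list
        return
    xm, ym = (node.x[0] + node.x[1]) // 2, (node.y[0] + node.y[1]) // 2
    for xr in ((node.x[0], xm), (xm + 1, node.x[1])):     # cut both axes in half
        for yr in ((node.y[0], ym), (ym + 1, node.y[1])):
            child = Node(xr, yr)
            node.children.append(child)
            build(child, inside, threshold)    # a rule crossing a cut line lands in
                                               # several children ("duplicate of rule")

def classify(node: Node, x: int, y: int) -> Optional[Rule]:
    while node.children:                       # follow the decision tree to a leaf
        node = next(c for c in node.children
                    if c.x[0] <= x <= c.x[1] and c.y[0] <= y <= c.y[1])
    matches = [r for r in node.rule_list       # match only the leaf's rule list
               if r.x[0] <= x <= r.x[1] and r.y[0] <= y <= r.y[1]]
    return max(matches, key=lambda r: r.priority, default=None)

# Hypothetical rules (not the R0-R15 of Fig. 1) over a 4-bit x 4-bit space.
root = Node((0, 15), (0, 15))
build(root, [Rule("A", 2, (6, 7), (0, 15)), Rule("B", 1, (4, 7), (8, 11))], threshold=1)
print(classify(root, 6, 9).rule_id)            # "A": both match, A has higher priority
```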
  • the region division may result in an identical rule being managed in a plurality of post-division regions; this is hereinafter referred to as "duplicate of rule".
  • In the decision tree of Fig. 3, for example, rules such as R7 and R9 are duplicated. If the duplicate of rule occurs, memory usage is increased in order to manage the duplicated rule itself or address values for the duplicated rules, and apparently more rules than the actual rule set must be treated. That is to say, the duplicate of rule causes an increase in the amount of data of the decision tree.
  • the following techniques are known for suppressing the duplicate of rule.
  • In one known method, a node other than a leaf node also has a rule list. If an identical rule would be duplicated into all child nodes (post-division regions) as a result of the region division, the rule is instead managed in the rule list of that node. However, this method cannot prevent the duplication of a rule that would be copied into several, but not all, of the child nodes.
  • Non Patent Literature 2 proposes a hardware architecture in which a packet classification method using a decision tree is executed at high speed by hardware using pipeline processing. In this method as well, a node other than a leaf node is provided with a rule list. If an identical rule would be duplicated into a plurality of child nodes as a result of the region division, the rule is managed in the rule list of that node. However, if the number of such rules exceeds the number of rules that can be managed in the rule list of a single node, the duplicate of rule still occurs.
  • Non Patent Literature 3 also proposes a method in which the packet classification is executed by hardware using pipeline processing, as in the case of Non Patent Literature 2.
  • In this method, only leaf nodes of the decision tree manage rules, which is different from the cases of Non Patent Literature 1 and Non Patent Literature 2.
  • Furthermore, a plurality of decision trees are prepared, and each rule is managed by a decision tree in which its duplication does not occur. As a result, the duplicate of rule is prevented from occurring.
  • NPL1 Sumeet Singh, Florin Baboescu, George Varghese, Jia Wang, “Packet Classification Using Multidimensional Cutting", Proceedings of the ACM SIGCOMM 2003 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, 2003, pp. 213 - 224.
  • NPL2 Weirong Jiang, Viktor K. Prasanna, “Large-Scale Wire-Speed Packet Classification on FPGAs", Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, 2009, pp. 219 - 228.
  • NPL3 Weirong Jiang, Viktor K. Prasanna, Norio Yamagaki, “Decision Forest: A Scalable Architecture for Flexible Flow Matching on FPGA", Proceedings of 2010 International Conference on Field Programmable Logic and Application, 2010, pp. 394 - 399.
  • In Non Patent Literature 3, however, a concrete procedure for performing the dynamic update of entries is not described at all. It is therefore considered necessary to determine in advance which rule in which node of which decision tree is to be updated. That is, "preprocessing" is separately required for performing the dynamic update of an entry, which is a problem. In particular, since Non Patent Literature 3 presupposes constructing an optimum decision tree for a given rule set, it is considered necessary to change the whole configuration of the decision trees. In this case, both the time required for the preprocessing of the dynamic entry update and the time required for the entry update itself are increased, which is a problem.
  • An object of the present invention is to provide a technique that can achieve the dynamic update of entry without performing preprocessing, in packet classification that uses a decision tree.
  • a packet classifier has: a plurality of decision tree processing blocks respectively configuring a plurality of decision trees for use in packet classification; a command input block configured to input a command simultaneously to the plurality of decision tree processing blocks; and an entry addition target determination block connected to the plurality of decision tree processing blocks.
  • a single rule is managed by a single leaf node in any one of the plurality of decision trees. If the command is a lookup command, each of the plurality of decision tree processing blocks uses its own decision tree to determine whether or not a search key matches any rule.
  • If the command is an insertion command for adding a new rule, each of the plurality of decision tree processing blocks determines whether or not the new rule can be managed by a single leaf node in its own decision tree. If the new rule can be managed by a single leaf node in a decision tree processing block, this decision tree processing block is an entry addition target candidate and this single leaf node is an addition-target leaf node.
  • the entry addition target determination block selects, from the entry addition target candidate, an entry addition target being a target to which the new rule is added. The selected entry addition target adds the new rule to a rule list managed by the addition-target leaf node in its own decision tree.
  • a packet classification method that uses a plurality of decision trees.
  • a single rule is managed by a single leaf node in any one of the plurality of decision trees.
  • the packet classification method includes: (A) inputting a command simultaneously to the plurality of decision trees; (B) if the command is a lookup command, determining in each of the plurality of decision trees whether or not a search key matches any rule; (C) if the command is an insertion command for adding a new rule to a decision tree, determining in each of the plurality of decision trees whether or not the new rule can be managed by a single leaf node, wherein if the new rule can be managed by a single leaf node in a decision tree, this decision tree is an entry addition target candidate and this single leaf node is an addition-target leaf node; (D) selecting, from the entry addition target candidate, an entry addition target being a target to which the new rule is added; and (E) adding the new rule to a rule list managed by the addition-target leaf node in the entry addition target.
  • a packet classification program which causes a computer to perform packet classification processing that uses a plurality of decision trees.
  • a single rule is managed by a single leaf node in any one of the plurality of decision trees.
  • the packet classification processing includes: (A) inputting a command simultaneously to the plurality of decision trees; (B) if the command is a lookup command, determining in each of the plurality of decision trees whether or not a search key matches any rule; (C) if the command is an insertion command for adding a new rule to a decision tree, determining in each of the plurality of decision trees whether or not the new rule can be managed by a single leaf node, wherein if the new rule can be managed by a single leaf node in a decision tree, this decision tree is an entry addition target candidate and this single leaf node is an addition-target leaf node; (D) selecting, from the entry addition target candidate, an entry addition target being a target to which the new rule is added; and (E) adding the new rule to a rule list managed by the addition-target leaf node in the entry addition target.
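  • As a software-level illustration of steps (A) to (E), the sketch below reduces each decision tree to the two queries the procedure needs; the class and method names (DecisionTree, addition_target_leaf) and the stand-in logic inside them are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Leaf:
    rule_list: List[object] = field(default_factory=list)
    capacity: int = 8                       # maximum rule-list size B (assumed)

@dataclass
class DecisionTree:
    leaves: List[Leaf] = field(default_factory=list)

    def addition_target_leaf(self, rule) -> Optional[Leaf]:
        # Hypothetical stand-in: return the single leaf that could manage
        # `rule` without the duplicate of rule occurring, or None if the rule
        # would have to be duplicated over several leaves of this tree.
        return self.leaves[0] if self.leaves else None

def insert(trees: List[DecisionTree], new_rule) -> bool:
    # (A)+(C): every tree checks (in parallel, in the hardware) whether the
    # new rule fits a single leaf node; such trees are the entry addition
    # target candidates and the returned leaf is the addition-target leaf.
    candidates = [(t, lf) for t in trees
                  if (lf := t.addition_target_leaf(new_rule)) is not None]
    # (D): discard candidates whose addition-target leaf is already full,
    # then select the candidate with the fewest valid entries.
    candidates = [(t, lf) for t, lf in candidates if len(lf.rule_list) < lf.capacity]
    if not candidates:
        return False                        # no entry addition target
    _, leaf = min(candidates, key=lambda c: len(c[1].rule_list))
    # (E): add the new rule to the rule list of the addition-target leaf.
    leaf.rule_list.append(new_rule)
    return True

t1 = DecisionTree(leaves=[Leaf(rule_list=["r1", "r2"])])
t2 = DecisionTree(leaves=[Leaf(rule_list=["r3"])])
print(insert([t1, t2], "r_new"))            # True: added to t2 (fewer entries)
```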
  • Fig. 1 is a conceptual diagram showing an example of a rule set.
  • Fig. 2 is a conceptual diagram showing a case where the rule set shown in Fig. 1 is arranged on a 2-dimensional space consisting of two axes of fields X and Y.
  • Fig. 3 is a conceptual diagram showing an example of a decision tree with respect to the rule set shown in Fig. 1.
  • Fig. 4 is a block diagram showing a configuration of a packet classifier according to a first exemplary embodiment of the present invention.
  • Fig. 5 is a block diagram showing a configuration of each decision tree processing block of the packet classifier according to the first exemplary embodiment.
  • FIG. 6 is a conceptual diagram showing a correspondence relationship between decision tree node processing blocks and respective nodes of the decision tree in the first exemplary embodiment.
  • Fig. 7 is a block diagram showing a configuration of a decision tree node processing block of the decision tree processing block according to the first exemplary embodiment.
  • Fig. 8 is a block diagram showing a configuration of a number-of-entry counting block of the decision tree processing block according to the first exemplary embodiment.
  • Fig. 9 is a conceptual diagram showing a correspondence relationship between rule processing blocks and rules included in a rule list of each leaf node of the decision tree in the first exemplary embodiment.
  • Fig. 10 is a block diagram showing a configuration of the rule processing block of the decision tree processing block according to the first exemplary embodiment.
  • FIG. 11 is a flow chart showing LOOKUP processing in the first exemplary embodiment.
  • Fig. 12 is a conceptual diagram showing a configuration example of a command in the first exemplary embodiment.
  • Fig. 13 is a flow chart showing processing of Step A2.
  • Fig. 14 is a conceptual diagram showing a configuration example of node information in the first exemplary embodiment.
  • Fig. 15 is a diagram for explaining an address calculation method in the first exemplary embodiment.
  • Fig. 16 is a flow chart showing processing of Step A4.
  • Fig. 17 is a conceptual diagram showing a configuration example of entry information in the first exemplary embodiment.
  • Fig. 18 is a flow chart showing INSERT processing in the first exemplary embodiment.
  • Fig. 19 is a flow chart showing processing of Step A6.
  • Fig. 20 is a flow chart showing processing of Step A9.
  • Fig. 21 is a flow chart showing processing of Step A12.
  • Fig. 22 is a flow chart showing DELETE processing in the first exemplary embodiment.
  • Fig. 23 is a flow chart showing processing of Step A14.
  • Fig. 24 is a block diagram showing a configuration of a packet classifier according to a second exemplary embodiment of the present invention.
  • Fig. 25 is a block diagram showing a configuration of a decision tree processing block of the packet classifier according to the second exemplary embodiment.
  • Fig. 26 is a flow chart showing INSERT processing in the second exemplary embodiment.
  • Fig. 27 is a flow chart showing processing of Step A18.
  • Fig. 28 is a flow chart showing DELETE processing in the second exemplary embodiment.
  • FIG. 29 is a conceptual diagram showing a configuration example of a command in a third exemplary embodiment of the present invention.
  • Fig. 30 is a flow chart showing INSERT processing in the third exemplary embodiment.
  • Fig. 31 is a flow chart showing processing of Step A19.
  • Fig. 32 is a flow chart showing processing of Step A20.
  • Fig. 33 is a block diagram showing a configuration of a packet classifier according to a fourth exemplary embodiment of the present invention.
  • (Summary) Fig. 4 is a block diagram showing a configuration of a packet classifier 1 according to a first exemplary embodiment of the present invention.
  • the packet classifier 1 performs packet classification by using a decision tree. That is, the packet classifier 1 uses a decision tree to determine which rule matches a search key extracted from a packet header.
  • the packet classifier 1 according to the present exemplary embodiment automatically performs dynamic update of entry (addition of a new rule, deletion of an existing rule) in response to an input command.
  • the packet classifier 1 is achieved by a hardware circuit. More specifically, as shown in Fig. 4, the packet classifier 1 has one or a plurality of decision tree processing blocks 2 (2-1 to 2-N: N is an integer equal to or larger than 2), an entry addition target determination block 3, a command input block 4 and a result output block 5. An input data 6 is input to the packet classifier 1, and an output data 7 is output from the packet classifier 1.
  • the input data 6 depends on processing that the packet classifier 1 should execute.
  • the processing executed by the packet classifier 1 includes: (1) processing of searching for a rule that matches a search key (hereinafter referred to as "LOOKUP processing"), (2) processing of adding a new rule (new entry) to a decision tree (hereinafter referred to as "INSERT processing"), (3) processing of invalidating an existing rule (existing entry) (hereinafter referred to as "DELETE processing”), and (4) Configuration processing that previously writes setting values to a memory and a configuration register included in the packet classifier 1.
  • the Configuration processing, which is executed at the time of initialization, sets configuration parameters in the configuration register and the memory installed in each block included in the packet classifier 1.
  • the input data 6 includes a type data indicating a processing type and a data used in the processing.
  • the input data 6 includes the type data indicating the LOOKUP processing and a search key.
  • the input data 6 includes the type data indicating the INSERT processing and a new rule to be added.
  • the input data 6 includes the type data indicating the DELETE processing and an existing rule to be invalidated.
  • the command input block 4 receives the input data 6 and generates a command depending on the input data 6.
  • the command includes a type data indicating a processing type and a data used in the processing, as in the case of the input data 6.
  • a lookup command that instructs execution of the LOOKUP processing includes the type data indicating the LOOKUP processing and a search key.
  • An insertion command that instructs execution of the INSERT processing includes the type data indicating the INSERT processing and a new rule to be added.
  • a deletion command that instructs execution of the DELETE processing includes the type data indicating the DELETE processing and an existing rule to be invalidated.
  • the command input block 4 inputs the generated command simultaneously to the plurality of decision tree processing blocks 2-1 to 2-N.
  • the plurality of decision tree processing blocks 2-1 to 2-N respectively configure a plurality of decision trees for use in the packet classification.
  • the plurality of decision trees are so configured as to prevent the duplicate of rule from occurring, as in the case of the Non Patent Literature 3. That is, each rule is managed in a decision tree where the duplicate of the rule does not occur. In other words, a single rule is managed by only a single leaf node in any one of the plurality of decision trees.
  • Each decision tree processing block 2 executes the LOOKUP processing, the INSERT processing or the DELETE processing depending on the command input from the command input block 4. It should be noted that the plurality of decision tree processing blocks 2-1 to 2-N can execute respective processing concurrently and in parallel.
  • In response to the lookup command, each decision tree processing block 2 uses its own decision tree to determine whether or not the search key matches any rule managed by any leaf node. If there is a rule that matches the search key, the decision tree processing block 2 notifies the result output block 5 of the matching rule. If the search key matches a plurality of rules included in the rule list of the leaf node, the decision tree processing block 2 notifies the result output block 5 of the rule having the highest priority among the plurality of matching rules. On the other hand, if there is no rule that matches the search key, the decision tree processing block 2 notifies the result output block 5 of the absence of a matching rule.
  • the result output block 5 receives the search results respectively from the plurality of decision tree processing blocks 2-1 to 2-N and selects a matching rule having the highest priority. Then, the result output block 5 outputs an output data 7 indicating the selection result as a final result of the LOOKUP processing.
  • In response to the insertion command, the plurality of decision tree processing blocks 2-1 to 2-N execute the INSERT processing concurrently and in parallel. More specifically, each decision tree processing block 2 first determines whether or not the duplicate of rule would occur if the new rule were added to its own decision tree. In other words, each decision tree processing block 2 determines whether or not the new rule can be managed by only a single leaf node in its own decision tree. If the duplicate of rule would occur, this decision tree processing block 2 is excluded from the targets of addition of the new rule.
  • On the other hand, if the new rule can be managed by only a single leaf node, this decision tree processing block 2 is an "entry addition target candidate" and this single leaf node is an "addition-target leaf node".
  • the decision tree processing block 2 being a target to which the new rule is added is an "entry addition target".
  • the entry addition target is selected from the above-mentioned entry addition target candidate. It is the entry addition target determination block 3 that selects the entry addition target. As shown in Fig. 4, the entry addition target determination block 3 is connected to each of the plurality of decision tree processing blocks 2-1 to 2-N. Based on information notified from the plurality of decision tree processing blocks 2-1 to 2-N, the entry addition target determination block 3 recognizes the entry addition target candidates and selects the entry addition target from the entry addition target candidates.
  • each entry addition target candidate notifies the entry addition target determination block 3 of a number-of-entry of valid rules (number of valid entries) managed by the addition-target leaf node. Then, the entry addition target determination block 3 refers to the number of valid entries received from each entry addition target candidate to select the entry addition target in accordance with a certain policy. For example, the entry addition target determination block 3 selects, as the entry addition target, the entry addition target candidate in which the number of valid entries is smallest. In this case, unevenness of the number of rules between the plurality of decision trees can be suppressed, which is preferable.
  • the entry addition target determination block 3 notifies each decision tree processing block 2 of the result of the selection of the entry addition target. For example, the entry addition target determination block 3 instructs the entry addition target to add the new rule and instructs the other decision tree processing blocks 2 not to execute the rule addition processing. Then, the decision tree processing block 2 being the entry addition target adds the new rule to the rule list managed by the addition-target leaf node in its own decision tree.
  • the result output block 5 receives the processing results from the plurality of decision tree processing blocks 2-1 to 2-N and outputs the output data 7 indicating the final result of the INSERT processing.
  • In response to the deletion command, each decision tree processing block 2 determines whether or not the existing rule specified as a deletion target is being managed by a single leaf node in its own decision tree. If the existing rule is being managed by a single leaf node, this decision tree processing block 2 is an "entry deletion target" and this single leaf node is a "deletion-target leaf node". The decision tree processing block 2 being the entry deletion target invalidates the existing rule managed by the deletion-target leaf node.
  • the result output block 5 receives the processing results from the plurality of decision tree processing blocks 2-1 to 2-N and outputs the output data 7 indicating the final result of the DELETE processing.
  • the duplicate of rule is prevented from occurring.
  • the INSERT processing is so executed as to prevent the duplicate of rule from occurring.
  • For the dynamic update of an entry, it is necessary to rewrite the data of the update-target entry in a data storage region of a memory. If the duplicate of rule exists, this data rewriting is required as many times as the number of duplicates. Therefore, regarding the INSERT processing or the DELETE processing, the processing time is not always constant and, in some cases, is increased greatly. According to the present exemplary embodiment, such a problem can be solved. That is, the processing time required for the dynamic update of an entry, such as the INSERT processing and the DELETE processing, becomes constant.
  • each of the plurality of decision tree processing blocks 2-1 to 2-N receives the insertion command and, in response to the insertion command, automatically determines whether or not it is an entry addition target candidate.
  • the entry addition target determination block 3 automatically determines an appropriate entry addition target based on information notified from at least the entry addition target candidate. Then, the entry addition target automatically adds the new rule to the addition-target leaf node.
  • each of the plurality of decision tree processing blocks 2-1 to 2-N receives the deletion command and, in response to the deletion command, automatically determines whether or not it is an entry deletion target.
  • the entry deletion target automatically invalidates a specified rule in the deletion-target leaf node.
  • the dynamic update of entry is automatically performed in response to the command.
  • There is no need for preprocessing that determines in advance which rule in which node of which decision tree is to be updated. That is, according to the present exemplary embodiment, it is possible to achieve the dynamic update of entries without performing preprocessing in packet classification that uses decision trees.
  • the packet classifier 1 is provided, for example, in a network device such as a switch or a router, or in a NIC (Network Interface Card) installed in a server as an extension card or an on-board card.
  • the packet classifier 1 is connected to a control block.
  • the control block has functions of analyzing a packet header of a received packet, extracting a search key from the packet header and making the packet classifier 1 execute the LOOKUP processing using the search key.
  • the control block has functions of making the packet classifier 1 execute the INSERT processing that adds an externally-specified new rule and making the packet classifier 1 execute the DELETE processing that deletes an externally-specified existing rule.
  • the packet classifier 1 receives the input data 6 depending on the processing from the control block and outputs the output data 7 to the control block.
  • Fig. 5 is a block diagram showing a configuration of each decision tree processing block 2 of the packet classifier 1 according to the present exemplary embodiment.
  • the decision tree processing block 2 has a decision tree pipeline block 20, a number-of-entry counting block 21 and a rule pipeline block 22.
  • An input data input from the command input block 4 to the decision tree processing block 2 is an input data 23.
  • a data output from the decision tree processing block 2 to the result output block 5 is an output data 24.
  • a data communicated between the decision tree processing block 2 and the entry addition target determination block 3 is an input-output data 25.
  • Fig. 7 is a block diagram showing a configuration of the decision tree node processing block 200 according to the present exemplary embodiment. As shown in Fig. 7, the decision tree node processing block 200 has a child node determination block 2000 and a node information memory block 2001.
  • the node information memory block 2001 is configured by a storage medium such as a memory and a register and stores node information. With regard to each of all nodes located at the same depth as seen from the root node, the node information indicates region division information in the node, namely, information of child nodes (post-division regions) managed by the node.
  • the decision tree node processing block 200 receives an input data 2002 from the former-stage decision tree node processing block 200.
  • the input data 2002 includes a command depending on the processing and an address value used for accessing the node information memory block 2001.
  • the input data 2002 from the former-stage decision tree node processing block 200 further includes information of an "effective bit length" described later.
  • the child node determination block 2000 receives the input data 2002 and refers to the address value specified in the input data 2002 to read out the node information from the node information memory block 2001. Then, based on the node information, the effective bit length and the command, the child node determination block 2000 calculates an address value at which the node information of child nodes of the node is stored. Moreover, the child node determination block 2000 updates the effective bit length. The calculation of the address value and the update of the effective bit length will be described later in detail.
  • An output data 2003 includes the command included in the input data 2002, the calculated new address value and the post-update effective bit length. The child node determination block 2000 outputs the output data 2003 to the next-stage decision tree node processing block 200.
  • The operation of the last-stage decision tree node processing block 200-H is as follows.
  • In the case of the LOOKUP processing, the address value calculated by the child node determination block 2000 is an address value used for accessing an entry memory block 2201 (discussed below) of a rule processing block 220 of the rule pipeline block 22.
  • In this case, the decision tree node processing block 200-H outputs the output data 2003 to the rule processing block 220-1.
  • In the case of the INSERT processing, the address value calculated by the child node determination block 2000 is an address value used for accessing a number-of-entry memory block 211 (discussed below) of the number-of-entry counting block 21.
  • In this case, an output data 2004 includes the command included in the input data 2002 and the calculated address value.
  • the child node determination block 2000 outputs the output data 2004 to the number-of-entry counting block 21.
  • Number-of-entry counting block 21: The number-of-entry counting block 21 manages the number of valid rule entries with respect to each of the leaf nodes in the decision tree. It should be noted that the processing by the number-of-entry counting block 21 is executed in a processing cycle corresponding to the depth H of the decision tree.
  • Fig. 8 is a block diagram showing a configuration of the number-of-entry counting block 21 according to the present exemplary embodiment. As shown in Fig. 8, the number-of-entry counting block 21 has a count processing block 210 and a number-of-entry memory block 211.
  • the number-of-entry memory block 211 is configured by a storage medium such as a memory and a register and stores number-of-entry information.
  • the number-of-entry information indicates the number-of-entry of valid rules managed by each leaf node in the decision tree.
  • the input data 212 includes the insertion command and the address value used for accessing the number-of-entry memory block 211.
  • the count processing block 210 refers to the address value to read out the number-of-entry information (number of valid rule entries) regarding the addition-target leaf node from the number-of-entry memory block 211. Then, the count processing block 210 outputs an output data 216 indicating the read-out number of valid rule entries to the above-mentioned entry addition target determination block 3.
  • the count processing block 210 receives an input data 216 from the entry addition target determination block 3. If the input data 216 indicates "addition of the new rule to the decision tree", the count processing block 210 adds "1" to the number of valid rule entries regarding the above-mentioned addition-target leaf node and then writes it in the number-of-entry memory block 211. As a result, the number-of-entry memory block 211 is updated to the latest condition. Furthermore, the count processing block 210 outputs an output data 214 to a rule processing block 220-1 of the rule pipeline block 22. The output data 214 includes the insertion command and an address value used for accessing an entry memory block 2201 (discussed below) of the rule processing block 220-1.
  • In the DELETE processing, the count processing block 210 receives an input data 213 from the rule processing block 220-B of the rule pipeline block 22.
  • the count processing block 210 refers to an address value specified in the input data 213 to read out the number-of-entry information (number of valid rule entries) regarding the deletion-target leaf node from the number-of-entry memory block 211. Then, the count processing block 210 subtracts "1" from the number of valid rule entries and writes it in the number-of-entry memory block 211. As a result, the number-of-entry memory block 211 is updated to the latest condition. Moreover, the count processing block 210 outputs an output data 215 indicating completion of the DELETE processing as the output data 24 to the result output block 5.
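  • A software-level sketch of this bookkeeping is shown below; the hardware keeps the table in the number-of-entry memory block 211, and the class and method names here are hypothetical.

```python
class EntryCounter:
    # Software stand-in for the table kept in the number-of-entry memory
    # block 211, indexed by the address of a leaf node.
    def __init__(self, num_leaves: int):
        self.counts = [0] * num_leaves

    def report(self, leaf_addr: int) -> int:
        # INSERT: read the count for the addition-target leaf and report it
        # to the entry addition target determination block 3.
        return self.counts[leaf_addr]

    def confirm_insert(self, leaf_addr: int) -> None:
        # INSERT, this block was selected as the entry addition target:
        # add 1 to the count and write it back.
        self.counts[leaf_addr] += 1

    def confirm_delete(self, leaf_addr: int) -> None:
        # DELETE: subtract 1 after the rule pipeline has invalidated the entry.
        self.counts[leaf_addr] -= 1
```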
  • Fig. 10 is a block diagram showing a configuration of the rule processing block 220 according to the present exemplary embodiment. As shown in Fig. 10, the rule processing block 220 has a comparison and update processing block 2200 and an entry memory block 2201.
  • the entry memory block 2201 is configured by a storage medium such as a memory and a register and stores entry information.
  • the entry information includes a validation flag indicating whether the rule is validated or invalidated, a rule ID, the rule and the like.
  • the input data 2202 includes the command depending on the processing and the address value used for accessing the entry memory block 2201.
  • the input data 2203 includes the insertion command and the address value used for accessing the entry memory block 2201.
  • the input data 2203 is hereinafter treated in common with the above-mentioned input data 2202.
  • the comparison and update processing block 2200 receives the input data 2202 and refers to the address value specified in the input data 2202 to read out the entry information from the entry memory block 2201. Then, based on the read-out entry information, the comparison and update processing block 2200 executes the rule processing depending on the command. Details of the rule processing will be described later. Moreover, the comparison and update processing block 2200 outputs the received input data 2202 as an output data 2204 to the next-stage rule processing block 220.
  • Fig. 11 is a flow chart showing the LOOKUP processing in the present exemplary embodiment.
  • Step A1 An input data 6 is input to the packet classifier 1.
  • the input data 6 includes the type data indicating the LOOKUP processing and a search key extracted from a search-target packet.
  • the command input block 4 receives the input data 6.
  • the command input block 4 generates an internal command (here, a lookup command) based on the input data 6 and outputs the internal command simultaneously to the decision tree processing blocks 2-1 to 2-N.
  • Fig. 12 is a conceptual diagram showing a configuration example of the internal command.
  • a command section includes a processing type, an end flag and a result flag.
  • the processing type being a 2-bit data indicates the LOOKUP processing if it is "00", the INSERT processing if it is "01" and the DELETE processing if it is "10".
  • the end flag indicates "not yet end” if it is '0' and "end” if it is '1'.
  • the result flag is set to '1' if addition or deletion of an entry is completed. It should be noted that the end flag and the result flag are referred to only in the cases of the INSERT processing and the DELETE processing, and a method of utilization thereof will be described later.
  • Stored in the latter part of the internal command is a search key (LOOKUP processing), a new entry to be added (INSERT processing) or an existing entry to be deleted (DELETE processing).
  • various expression methods can be used for the entry (rule), in consideration of Prefix Match, Range Match and Wildcard Match.
  • To express an entry, two data sequences A and B, each having the same length as the search key, are prepared.
  • For Prefix Match or Wildcard Match, the target data is assigned to A and the mask bit data is assigned to B.
  • For Range Match, the lower limit value is assigned to A and the upper limit value is assigned to B.
  • Accordingly, the entry length of a rule is twice the length of the search key. In consideration of this, the length of the latter part of the internal command is permitted to differ between the search key at the time of the LOOKUP processing and the addition/deletion entry at the time of the INSERT/DELETE processing. Note that in the case of an addition entry, its priority is also specified.
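  • The sketch below illustrates one possible software representation of the internal command of Fig. 12; only the 2-bit processing type, the end flag, the result flag and the (A, B) entry encoding are taken from the text above, while the concrete field layout, bit widths and example values are assumptions of the sketch.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

LOOKUP, INSERT, DELETE = 0b00, 0b01, 0b10       # 2-bit processing type

@dataclass
class Command:
    proc_type: int                               # 00 = LOOKUP, 01 = INSERT, 10 = DELETE
    end_flag: bool = False                       # '1' once a block has finished or given up
    result_flag: bool = False                    # '1' once an entry was added or deleted
    search_key: Optional[int] = None             # latter part for the LOOKUP processing
    entry: Optional[List[Tuple[int, int]]] = None  # latter part for INSERT/DELETE:
                                                   # per-field (A, B) pairs
    priority: Optional[int] = None               # specified only for an addition entry

# For Prefix/Wildcard Match, A holds the target bits and B the mask bits;
# for Range Match, A holds the lower limit and B the upper limit.
insert_cmd = Command(INSERT,
                     entry=[(0b0011_0000, 0b1111_0000),   # "0011****" on 8 bits
                            (3, 64)],                     # range [3:64]
                     priority=5)
lookup_cmd = Command(LOOKUP, search_key=0x3540)
```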
  • Step A2 When receiving the lookup command from the command input block 4, each decision tree processing block 2 uses the specified search key to execute processing of following its own decision tree.
  • Fig. 13 is a flow chart showing the processing of this Step A2.
  • Step B1 The processing of following the decision tree starts from the root node of the decision tree.
  • a current handling node is first set to the root node of the decision tree. More specifically, the lookup command is input to the first-stage decision tree node processing block 200-1 of the decision tree pipeline block 20.
  • Step B2 The child node determination block 2000 confirms that the processing type is LOOKUP and then reads out the node information from the node information memory block 2001.
  • Fig. 14 is a conceptual diagram showing a configuration example of the node information.
  • the node information is information of the region division regarding the corresponding node.
  • the node information includes C pairs of a field identifier (field ID) and a division number (k) as well as a base address.
  • the C is a maximum number of fields that can be used for the region division regarding a single node of the decision tree.
  • the field ID is predetermined with respect to each of fields constituting the search key or the rule.
  • a range of each field is divided into 2^k sections (k is a natural number), and the division number k represents the exponent.
  • the base address represents a minimum address value of a memory region in which the node information of child nodes of the node is stored. More specifically, the base address is a minimum address value in the node information memory block 2001 of the next-stage decision tree node processing block 200 in which the node information of child nodes of the node is stored. The node information of all child nodes of the node is stored in a memory region following the base address.
  • Step B3 Based on the read-out node information, the bit sequence of each field of the search key and the effective bit length, the child node determination block 2000 calculates the address value of the memory region in which the node information of the next-stage child node is stored. More specifically, the next-stage address value is calculated in the following manner (refer also to Fig. 15).
  • Suppose, for example, that the respective effective bit lengths of the fields are 3 and 4.
  • Here, the effective bit length means the bit length of the lower bits of each field's bit sequence that are used for the region division. That is, a bit sequence of lower bits whose length is specified by the effective bit length is referred to in the field value.
  • Step B4 Subsequently, the child node determination block 2000 updates the effective bit length by subtracting the division number k from the input effective bit length.
  • Note that no effective bit length is input to the first-stage decision tree node processing block 200-1; in this case, the full field length of each field is set in advance as the effective bit length.
  • In the subsequent stages, the effective bit length is input from the former-stage decision tree node processing block 200.
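  • The following sketch shows one plausible way to realize the address calculation of Step B3 and the effective bit length update of Step B4; since Fig. 15 is not reproduced here, the exact packing of the per-field indices into the offset added to the base address is an assumption of this sketch.

```python
from typing import Dict, List, Tuple

def child_address(base_addr: int,
                  cuts: List[Tuple[str, int]],           # (field ID, division number k)
                  key_fields: Dict[str, int],            # field ID -> field value
                  eff_bits: Dict[str, int]) -> Tuple[int, Dict[str, int]]:
    offset = 0
    new_eff = dict(eff_bits)
    for field_id, k in cuts:
        eff = eff_bits[field_id]                         # current effective bit length
        low = key_fields[field_id] & ((1 << eff) - 1)    # only the lower `eff` bits
        index = low >> (eff - k)                         # top k of the effective bits
        offset = (offset << k) | index                   # concatenate per-field indices
        new_eff[field_id] = eff - k                      # Step B4: subtract k
    return base_addr + offset, new_eff

# Example: X = 0b0110 with effective bit length 3 and k = 1 contributes '1';
# Y = 0b1001 with effective bit length 4 and k = 2 contributes "10";
# the child node information is read from base address + 0b110.
addr, eff = child_address(0x40, [("X", 1), ("Y", 2)],
                          {"X": 0b0110, "Y": 0b1001}, {"X": 3, "Y": 4})
print(hex(addr), eff)                                    # 0x46 {'X': 2, 'Y': 2}
```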
  • Steps B5, B6 Next, it is determined whether or not the child node is a leaf node. More specifically, each decision tree node processing block 200 can determine this from its own position in the pipeline, because the child nodes of the nodes handled by the last-stage decision tree node processing block 200-H are the leaf nodes (refer to Fig. 6).
  • If the next-stage node is not a leaf node (Step B5; No), the processing is being performed by a decision tree node processing block 200-i (i = 1, 2, ..., H-1).
  • In this case, the decision tree node processing block 200-i outputs the output data 2003 to the next-stage decision tree node processing block 200-(i+1) (Step B6).
  • the output data 2003 includes the command included in the input data 2002, the calculated new address value and the post-update effective bit length. Then, the procedure returns back to the Step B2, and the next-stage node becomes the handling node.
  • Step B5 If the next-stage node is the leaf node (Step B5; Yes), it means a case of the decision tree node processing block 200-H. In this case, the Step A2 ends.
  • Step A3 After the Step A2 is completed, the decision tree node processing block 200-H outputs the output data 2003 including the calculated address value and the command to the rule processing block 220-1 of the rule pipeline block 22.
  • Step A4 Subsequently, the rule pipeline block 22 makes a comparison between the search key and each rule in the rule list (i.e. performs matching processing).
  • Fig. 16 is a flow chart showing processing of this Step A4.
  • Step C1 The matching processing starts from the first rule in the rule list. More specifically, the first-stage rule processing block 220-1 initiates the procedure.
  • Step C2 The comparison and update processing block 2200 confirms that the processing type is LOOKUP and then uses the input address value to read out the entry information from the entry memory block 2201. Then, the comparison and update processing block 2200 checks the validation flag.
  • Fig. 17 is a conceptual diagram showing a configuration example of the entry information.
  • the entry information includes the validation flag indicating whether the rule is validated or not, the rule ID, the rule and the priority. If an action associated with the rule is retained together, the action may be further added. In the present example, the action is managed by a different device from the packet classifier 1, and after the packet classifier 1 has completed the search, the action is obtained by the use of the matching rule ID.
  • Although the rule ID is included in the entry information in the example shown in Fig. 17, the rule ID need not be included in the entry information.
  • In that case, the rule ID may be generated based on the decision tree processing block ID, the leaf node ID, the order in the rule list and the like.
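  • The sketch below illustrates the entry information of Fig. 17 and one possible way to derive a rule ID from the entry's position instead of storing it; the field widths used for the composition are assumptions of the sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EntryInfo:
    valid: bool                    # validation flag: rule validated or invalidated
    rule: object                   # the rule itself (e.g. per-field (A, B) pairs)
    priority: int
    rule_id: Optional[int] = None  # may be omitted and derived instead

def derived_rule_id(tree_id: int, leaf_id: int, slot: int,
                    leaf_bits: int = 12, slot_bits: int = 4) -> int:
    # Compose an ID from the decision tree processing block ID, the leaf node
    # ID and the order (slot) within the rule list.
    return (tree_id << (leaf_bits + slot_bits)) | (leaf_id << slot_bits) | slot

print(hex(derived_rule_id(tree_id=2, leaf_id=0x12A, slot=3)))    # 0x212a3
```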
  • Step C4 If the rule is validated (Step C3; Yes), the comparison and update processing block 2200 makes a comparison between the search key and the read-out rule.
  • a method of the comparison is disclosed, for example, in Non Patent Literature 2.
  • Step C6 If the search key matches the rule (Step C5; Yes), the comparison and update processing block 2200 compares the priority between this rule and a current matching rule.
  • the current matching rule means a rule having the highest priority among rules that have been matched in or before the former-stage rule processing block 220.
  • the rule used for the comparison in the current rule processing block 220 is referred to as a comparison rule.
  • Step C8 If the priority of the comparison rule is higher (Step C7; Yes), the comparison and update processing block 2200 sets the comparison rule as a new current matching rule. After that, the procedure proceeds to Step C9.
  • Step C3 If the read-out rule is not validated (Step C3; No) or if the search key does not match the rule (Step C5; No) or if the priority of the comparison rule is lower than the priority of the matching rule (Step C7; No), the procedure proceeds to the Step C9 as well.
  • If the current rule is the last rule in the rule list, namely, if the processing is being performed by the rule processing block 220-B (Step C9; Yes), the rule processing block 220-B outputs the rule ID and the priority of the matching rule as well as the command to the result output block 5 (Step C11). Then, the Step A4 ends. It should be noted that if there is no matching rule, the absence of a matching rule is represented by outputting a special value, for example, by setting all bit values of the rule ID to '1'.
  • Step A5 The result output block 5 receives the search results respectively from the plurality of decision tree processing blocks 2-1 to 2-N.
  • the result output block 5 compares the priority between the matching rules to select a matching rule having the highest priority. Then, the result output block 5 outputs the output data 7 indicating the selection result (for example, the rule ID of the matching rule having the highest priority) as a final result of the LOOKUP processing.
  • If no decision tree processing block reports a matching rule, the absence of a matching rule is represented by outputting the above-mentioned special-value rule ID.
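  • The final selection performed by the result output block 5 can be illustrated as follows; the 32-bit width assumed for the special "all bits set to 1" rule ID is an assumption of the sketch.

```python
from typing import List, Tuple

NO_MATCH = 0xFFFFFFFF                    # assumed width; "all rule ID bits set to 1"

def final_result(per_tree: List[Tuple[int, int]]) -> int:
    # per_tree holds one (rule ID, priority) pair per decision tree processing
    # block; NO_MATCH marks a block that found no matching rule.
    hits = [(rid, prio) for rid, prio in per_tree if rid != NO_MATCH]
    if not hits:
        return NO_MATCH
    return max(hits, key=lambda h: h[1])[0]   # rule ID of the highest-priority match

print(hex(final_result([(NO_MATCH, 0), (0x212A3, 5), (0x0F001, 2)])))   # 0x212a3
```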
  • INSERT processing Next, the INSERT processing will be described.
  • Fig. 18 is a flow chart showing the INSERT processing in the present exemplary embodiment.
  • Step A1 An input data 6 is input to the packet classifier 1.
  • the input data 6 includes the type data indicating the INSERT processing and a new rule to be added.
  • the command input block 4 receives the input data 6.
  • the command input block 4 generates an internal command (here, an insertion command) based on the input data 6 and outputs the internal command simultaneously to the decision tree processing blocks 2-1 to 2-N.
  • Step A6 When receiving the insertion command from the command input block 4, each decision tree processing block 2 uses the specified new rule to execute processing of following its own decision tree.
  • Fig. 19 is a flow chart showing the processing of this Step A6.
  • Step B1 A current handling node is first set to the root node of the decision tree, as in the case of the LOOKUP processing.
  • Step B7 The child node determination block 2000 confirms that the processing type is INSERT and then refers to the end flag in the command. If the end flag is '1' (Step B7; Yes), the procedure proceeds to Step B5. On the other hand, if the end flag is '0' (Step B7; No), the procedure proceeds to Step B2.
  • Step B2 The child node determination block 2000 reads out the node information from the node information memory block 2001, as in the case of the LOOKUP processing.
  • Step B8 The child node determination block 2000 determines whether or not the duplicate of rule would occur, based on the read-out node information, the bit sequence of each field of the new rule and the effective bit length. Whether or not the duplicate of rule occurs can be determined by checking whether or not a wildcard is included in the bits referred to when the address value is calculated.
  • In the mask bit data of a field defined by Prefix Match or Wildcard Match, '1' means "validated" and '0' means "invalidated" and is treated as a wildcard.
  • If no wildcard is included in the referenced bits of any field, as in the example described above, it is found that the duplicate of rule does not occur.
  • Suppose, on the other hand, a case where the bit sequence of the effective bits of a field is "0100", the effective bit length is 4 and the division number is 3, and "01*" is obtained as the referenced bits.
  • In this case, the bit sequence indicating the post-division region of the child node includes a wildcard, which means that the duplicate of rule occurs.
  • If the field value is expressed by Range Match, the bit sequences of the effective bits of the lower limit value and of the upper limit value are considered, and it is checked whether or not the bits that are referred to have the same value in both.
  • For example, suppose that the effective bit length is 3 and that the lower three bits of the lower limit value and the upper limit value are "000" and "011", respectively.
  • If the division number is 1, the first bits of the respective effective bit sequences are both '0', and therefore it is determined that the duplicate of rule does not occur when the region division is performed.
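  • The duplicate-of-rule check of Steps B8 and B9 can be sketched as below, using the (target, mask) encoding for Prefix/Wildcard Match and the (lower, upper) encoding for Range Match described earlier; the helper names are hypothetical.

```python
def prefix_field_duplicates(mask: int, eff_len: int, k: int) -> bool:
    # Prefix/Wildcard Match: the top k of the eff_len effective bits are the
    # ones referenced for the region division; a mask bit '0' there is a
    # wildcard, so the rule would fall into more than one child region.
    referenced = (mask >> (eff_len - k)) & ((1 << k) - 1)
    return referenced != (1 << k) - 1

def range_field_duplicates(low: int, high: int, eff_len: int, k: int) -> bool:
    # Range Match: the rule stays within a single child region only if the
    # lower and upper limits agree on the referenced bits.
    lo = (low  & ((1 << eff_len) - 1)) >> (eff_len - k)
    hi = (high & ((1 << eff_len) - 1)) >> (eff_len - k)
    return lo != hi

# Range example from the text: effective bit length 3, lower bits "000" and
# "011", division number 1 -> the first bits are both '0', no duplication.
print(range_field_duplicates(0b000, 0b011, eff_len=3, k=1))   # False
print(prefix_field_duplicates(mask=0b1011, eff_len=4, k=3))   # True: a '*' is referenced
```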
  • Step B10 If the duplicate of rule occurs (Step B9; Yes), this decision tree processing block 2 is excluded from the entry addition target candidate. In this case, the child node determination block 2000 sets the end flag in the command to '1'. After that, the procedure proceeds to Step B5.
  • Steps B3, B4 On the other hand, if the duplicate of rule does not occur (Step B9; No), the calculation of the next-stage address value (Step B3) and the update of the effective bit length (Step B4) are executed, as in the case of the LOOKUP processing. After that, the procedure proceeds to Step B5.
  • Step A7 After the Step A6 is completed, the decision tree node processing block 200-H outputs the output data 2004 including the calculated address value and the command to the number-of-entry counting block 21.
  • the count processing block 210 confirms that the processing type is INSERT and then refers to the end flag. If the end flag is '0', this decision tree processing block 2 is the entry addition target candidate.
  • the count processing block 210 uses the input address value to read out the number-of-entry information (i.e. the number of valid rule entries) regarding the addition-target leaf node from the number-of-entry memory block 211. Then, the count processing block 210 outputs the output data 216 indicating the read-out number of valid rule entries to the entry addition target determination block 3.
  • On the other hand, if the end flag is '1', this decision tree processing block 2 is not an entry addition target candidate.
  • In this case, the count processing block 210 outputs a processing end signal to the entry addition target determination block 3.
  • Alternatively, the count processing block 210 may set the number of valid rule entries to the maximum value and then notify the entry addition target determination block 3 of that number of valid rule entries.
  • Step A9 The entry addition target determination block 3 recognizes the entry addition target candidate based on the information notified from the plurality of decision tree processing blocks 2-1 to 2-N and selects an entry addition target from the entry addition target candidate.
  • Fig. 20 is a flow chart showing processing of this Step A9.
  • Steps D1, D2, D3 The entry addition target determination block 3 receives the information indicating the number of valid rule entries from the entry addition target candidate (Step D1). Next, the entry addition target determination block 3 checks whether or not there is an entry addition target candidate in which the number of valid rule entries is less than the maximum number B (list size) of the rule list (Step D2). If there is no entry addition target candidate in which the number of valid rule entries is less than the list size (Step D2; No), the entry addition target determination block 3 determines that there is no entry addition target (Step D3).
  • Steps D4, D5 On the other hand, if there is an entry addition target candidate in which the number of valid rule entries is less than the list size (Step D2; Yes), the entry addition target determination block 3 selects an entry addition target candidate in which the number of valid entries is smallest, as the entry addition target (Step D4). After that, the entry addition target determination block 3 notifies each decision tree processing block 2 of true/false of the entry addition (Step D5). For example, the entry addition target determination block 3 instructs the entry addition target to add the new rule and instructs the other decision tree processing blocks 2 not to execute the rule addition processing. It should be noted that although the entry addition target candidate in which the number of valid rule entries is smallest is selected as the entry addition target in the above-described example, the entry addition target may be selected in accordance with another policy.
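  • The selection of Steps D1 to D5 can be sketched as follows; representing a non-candidate block by None (instead of a processing end signal or a maximum count value) is a simplification of this sketch.

```python
from typing import List, Optional

def select_entry_addition_target(counts: List[Optional[int]],
                                 list_size_b: int) -> Optional[int]:
    # counts[i] is the number of valid rule entries reported by decision tree
    # processing block i, or None if that block is not a candidate.
    eligible = [(n, i) for i, n in enumerate(counts)
                if n is not None and n < list_size_b]        # Step D2
    if not eligible:
        return None                                          # Step D3: no target
    return min(eligible)[1]                                  # Step D4: fewest entries

# Step D5: the block whose index is returned is instructed to add the rule;
# all other blocks are instructed not to execute the addition.
print(select_entry_addition_target([None, 3, 7, 3], list_size_b=8))      # 1
```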
  • Step A10 The number-of-entry counting block 21 of the decision tree processing block 2 being the entry addition target adds "1" to the number of valid rule entries regarding the addition-target leaf node and writes it in the number-of-entry memory block 211.
  • As a result, the number-of-entry memory block 211 is updated to the latest condition.
  • On the other hand, the number-of-entry counting block 21 of each of the other decision tree processing blocks 2 sets the end flag in the command to '1'.
  • Step A11 Moreover, the number-of-entry counting block 21 of each decision tree processing block 2 outputs the output data 214 including the address value and the insertion command to the rule processing block 220-1 of the rule pipeline block 22.
  • Step A12 The rule pipeline block 22 performs rule addition processing.
  • Fig. 21 is a flow chart showing processing of this Step A12.
  • Step C12 The Step A12 starts from the first rule in the rule list. More specifically, the first-stage rule processing block 220-1 initiates the procedure.
  • Step C13 The comparison and update processing block 2200 confirms that the processing type is INSERT and then refers to the end flag in the command. If the end flag is '1' (Step C13; Yes), the procedure proceeds to Step C9. On the other hand, if the end flag is '0' (Step C13; No), the procedure proceeds to Step C2.
  • Step C2 As in the case of the LOOKUP processing, the comparison and update processing block 2200 uses the input address value to read out the entry information from the entry memory block 2201. Then, the comparison and update processing block 2200 checks the validation flag.
  • Step C15 If the rule is validated (Step C3; Yes), it is not allowed to write the new rule to this entry. In this case, the rule processing block 220-i outputs the output data 2204 including the address value and the command to the next-stage rule processing block 220-(i+1). Then, the procedure returns to the Step C13 and the processing is executed with respect to the next rule in the rule list.
  • Step C14 If the rule is not validated (Step C3; No), it is possible to write the new rule to this entry (i.e. free entry).
  • In this case, the comparison and update processing block 2200 changes the rule information in the entry information to that of the new rule, sets the validation flag to '1' and writes it in the entry memory block 2201.
  • Moreover, the comparison and update processing block 2200 sets the end flag in the command to '1' and sets the result flag in the command to '1' (addition succeeded). After that, the procedure proceeds to Step C9.
  • Step C9 As in the case of the LOOKUP processing, it is determined whether or not the current rule is the last rule in the rule list. If the current rule is not the last rule (Step C9; No), the procedure proceeds to the above-mentioned Step C15. On the other hand, if the current rule is the last rule (Step C9; Yes), the rule processing block 220-B outputs the current command to the result output block 5 (Step C16). Then, the Step A12 ends.
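As a rough illustration of the rule addition walk of Steps C12 to C16, the sketch below scans a software model of a leaf node's rule list and writes the new rule to the first free entry; the list-of-dicts representation and the function name are assumptions, and the actual embodiment performs this scan as B pipelined rule processing blocks rather than a software loop.

```python
# Hypothetical software model of the rule addition walk (Steps C12-C16).
# Entries are dicts with 'valid' and 'rule' fields.

def insert_rule(rule_list, new_rule):
    """Write the new rule to the first free entry; returns (result_flag, index)."""
    for i, entry in enumerate(rule_list):
        if entry["valid"]:
            continue                  # Step C15: occupied, go to the next entry.
        entry["rule"] = new_rule      # Step C14: free entry found, write the rule
        entry["valid"] = True         # and validate it (end/result flags -> '1').
        return True, i
    return False, None                # no free entry: the addition fails.

if __name__ == "__main__":
    B = 4
    leaf = [{"valid": False, "rule": None} for _ in range(B)]
    leaf[0]["valid"], leaf[0]["rule"] = True, "R7"
    print(insert_rule(leaf, "Rnew"))  # -> (True, 1)
```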
  • Step A13 The result output block 5 receives the command from each of the plurality of decision tree processing blocks 2-1 to 2-N.
  • the result output block 5 refers to the result flag in the received command to check whether the new rule is added or not. Then, the result output block 5 outputs the output data 7 indicating the result as a final result of the INSERT processing.
  • an entry ID indicating a location to which the new rule is added may be output as the final result of the INSERT processing.
  • the entry ID can be generated based on the ID of the decision tree processing block being the entry addition target, the leaf node ID, an order in the rule list and the like, as in the case of generating the rule ID when the entry information does not include the rule ID.
  • Alternatively, a signal indicating that the rule addition has failed may be output.
  • As such a signal, a special value (e.g. all bits of the entry ID set to '1') may be used.
  • In the foregoing example, the end flag and the result flag each have a width of 1 bit. Instead, the bit width of the end flag may be increased.
  • In that case, the end flag can represent a cause of processing termination such as "the processing has ended due to occurrence of the duplicate of rule" and "the processing has ended due to absence of the free region in the rule list of the leaf node". By referring to such an end flag, the reason why the new rule has not been added can also be output as the final result of the INSERT processing (a possible encoding is sketched below).
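One possible, purely hypothetical encoding of such a widened end flag is sketched below; the specific code values and names are assumptions, since the description only states that the termination cause can be distinguished.

```python
# Hypothetical encoding of a widened end flag; the code values are assumptions.
from enum import IntEnum

class EndCode(IntEnum):
    RUNNING = 0            # processing has not terminated yet
    ADDED = 1              # the new rule was written to a free entry
    DUPLICATE_OF_RULE = 2  # terminated because the duplicate of rule occurred
    RULE_LIST_FULL = 3     # terminated because the rule list had no free region

def termination_reason(code):
    return {
        EndCode.DUPLICATE_OF_RULE: "ended due to occurrence of the duplicate of rule",
        EndCode.RULE_LIST_FULL: "ended due to absence of a free region in the rule list",
    }.get(code, "no failure cause to report")

print(termination_reason(EndCode.RULE_LIST_FULL))
```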
  • Fig. 22 is a flow chart showing the DELETE processing in the present exemplary embodiment.
  • Step A1 An input data 6 is input to the packet classifier 1.
  • the input data 6 includes the type data indicating the DELETE processing and an existing rule to be deleted.
  • the command input block 4 receives the input data 6.
  • the command input block 4 generates an internal command (here, a deletion command) based on the input data 6 and outputs the internal command simultaneously to the decision tree processing blocks 2-1 to 2-N.
  • Step A6 As in the case of the INSERT processing, each decision tree processing block 2 uses the specified existing rule to follow its own decision tree.
  • Step A3 After the Step A6 is completed, the decision tree node processing block 200-H outputs the output data 2003 including the calculated address value and the command to the rule processing block 220-1 of the rule pipeline block 22.
  • Step A14 The rule pipeline block 22 performs rule deletion processing.
  • Fig. 23 is a flow chart showing processing of this Step A14.
  • Step C1 The Step A14 starts from the first rule in the rule list. More specifically, the first-stage rule processing block 220-1 initiates the procedure.
  • Step C13 The comparison and update processing block 2200 confirms that the processing type is DELETE and then refers to the end flag in the command. If the end flag is '1' (Step C13; Yes), the procedure proceeds to Step C9. On the other hand, if the end flag is '0' (Step C13; No), the procedure proceeds to Step C2.
  • Steps C2, C3 As in the case of the LOOKUP processing, the comparison and update processing block 2200 uses the input address value to read out the entry information from the entry memory block 2201. Then, the comparison and update processing block 2200 checks the validation flag. If the rule is invalidated (Step C3; No), this entry is not the deletion target. In this case, the procedure proceeds to Step C9. On the other hand, if the rule is validated (Step C3; Yes), the procedure proceeds to Step C17.
  • Step C17 The comparison and update processing block 2200 compares the read-out rule and the rule designated as the deletion target. That is, the comparison and update processing block 2200 determines whether or not the rule designated as the deletion target matches the read-out rule. Here, it is determined whether or not they both completely match. In a case of mismatch (Step C5; No), the procedure proceeds to Step C9. On the other hand, in a case of complete match (Step C5; Yes), the procedure proceeds to Step C18.
  • Step C18 The comparison and update processing block 2200 sets the validation flag in the entry information to '0' and writes it in the entry memory block 2201. As a result, this entry (rule) is invalidated. This processing is equivalent to the deletion of the existing rule. Moreover, the comparison and update processing block 2200 sets the end flag in the command to '1' and sets the result flag in the command to '1' (deletion succeeded). After that, the procedure proceeds to Step C9.
  • Steps C9, C15 As in the case of the LOOKUP processing, it is determined whether or not the current rule is the last rule in the rule list. If the current rule is not the last rule (Step C9; No), the rule processing block 220-i outputs the output data 2204 including the address value and the command to the next-stage rule processing block 220-(i+1) (Step C15). Then, the procedure returns to the Step C13 and the processing is executed with respect to the next rule in the rule list. On the other hand, if the current rule is the last rule in the rule list (Step C9; Yes), the Step A14 ends.
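The sketch below models the rule deletion walk of Steps C1 to C18 in Python for illustration; the data layout and names are assumptions, and the hardware performs the same scan as B pipelined rule processing stages.

```python
# Hypothetical software model of the rule deletion walk (Steps C1-C18): scan the
# rule list, compare each valid entry against the deletion target and invalidate
# the first complete match.

def delete_rule(rule_list, target_rule):
    """Invalidate the entry holding target_rule; returns (result_flag, index)."""
    for i, entry in enumerate(rule_list):
        if not entry["valid"]:
            continue                   # Step C3; No: not a deletion target.
        if entry["rule"] != target_rule:
            continue                   # Step C5; No: mismatch, try the next entry.
        entry["valid"] = False         # Step C18: invalidation = deletion.
        return True, i                 # end flag / result flag are set to '1'.
    return False, None                 # the deletion-target rule was not found.

if __name__ == "__main__":
    leaf = [{"valid": True, "rule": "R7"}, {"valid": True, "rule": "R9"}]
    print(delete_rule(leaf, "R9"))     # -> (True, 1)
    print(leaf[1])                     # -> {'valid': False, 'rule': 'R9'}
```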
  • Step A15 After the Step A14 is completed, the rule processing block 220-B outputs the output data 2205 including the address value and the command to the number-of-entry counting block 21.
  • the count processing block 210 confirms that the processing type is DELETE and then refers to the end flag and the result flag. If the end flag and the result flag are both '1', the count processing block 210 uses the input address value to read out the number-of-entry information (number of valid rule entries) regarding the deletion-target leaf node from the number-of-entry memory block 211. Then, the count processing block 210 subtracts 1 from the number of valid rule entries and then writes it in the number-of-entry memory block 211. As a result, the number-of-entry memory block 211 is updated to the latest condition.
  • Step A17 Lastly, the count processing block 210 outputs an output data 215 including the command to the result output block 5.
  • the result output block 5 refers to the result flags in the commands respectively received from the decision tree processing blocks 2-1 to 2-N to check whether the rule is deleted or not. Then, the result output block 5 outputs the output data 7 indicating the result as a final result of the DELETE processing.
  • an entry ID indicating a location from which the deletion rule is deleted may be output as the final result of the DELETE processing, as in the case of the entry ID indicating a location to which the new rule is added in the INSERT processing.
  • a signal indicating the absence of the deletion-target rule or a special value where all bits of the entry ID are set to '1' may be output in order to notify that the deletion-target rule does not exist.
  • the command input block 4 externally receives the processing type, the search key and the addition/deletion target entry as the input data 6 and generates the internal command.
  • the command input block 4 may directly receive an arrived packet itself or a header data of the arrived packet as the input data 6, extract the search key and then generate the internal command.
  • In the present exemplary embodiment, the new rule is added to a rule list whose number of valid rule entries is the smallest.
  • Therefore, unevenness of the number of rules among the plurality of decision trees can be suppressed, which is preferable.
  • Moreover, the processing time required for the dynamic update of entry such as the INSERT processing and the DELETE processing becomes constant.
  • the one unit cycle is desirably one clock cycle. Strictly speaking, the number of cycles before input of a next command varies depending on which processing command (the LOOKUP processing, the INSERT processing or the DELETE processing) is input next. However, in the estimation here, the maximum value of the number of cycles before input of a next processing command is considered.
  • the processing by the command input block 4, the H-stages processing by the decision tree pipeline block 20, the B-stages processing by the rule pipeline block 22 of the decision tree processing block 2, and the processing by the result output block 5 can be executed in a total of H+B+2 unit cycles. Moreover, since information in the memory is not updated in the LOOKUP processing, a next command can be input at a cycle immediately following the input of the LOOKUP command.
  • a next command can be input after 3 unit cycles have passed from the input of the INSERT command.
  • a next command can be input after B+3 unit cycles have passed from the input of the DELETE command.
  • each processing can be executed in a fixed period of time; for example, the LOOKUP processing can be executed in H+B+2 unit cycles, the INSERT processing can be executed in H+B+6 unit cycles, and the DELETE processing can be executed in H+B+5 unit cycles.
  • the problem that the processing time varies due to the duplicate of rule as in the conventional technique is not caused.
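The cycle counts quoted above can be checked with a small helper like the following; H (number of decision tree stages) and B (rule list size) are free design parameters, and the example values are arbitrary illustrations.

```python
# Quick check of the first-embodiment cycle counts quoted above.

def first_embodiment_latency(H, B):
    return {"LOOKUP": H + B + 2, "INSERT": H + B + 6, "DELETE": H + B + 5}

def next_command_gap(processing, B):
    # Minimum unit cycles before the next command can be input.
    return {"LOOKUP": 1, "INSERT": 3, "DELETE": B + 3}[processing]

if __name__ == "__main__":
    H, B = 6, 8
    print(first_embodiment_latency(H, B))   # {'LOOKUP': 16, 'INSERT': 20, 'DELETE': 19}
    print(next_command_gap("DELETE", B))    # 11
```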
  • Configuration and Summary Fig. 24 is a block diagram showing a configuration of a packet classifier 1 according to the second exemplary embodiment. As compared with the configuration of the first exemplary embodiment shown in Fig. 4, the decision tree processing blocks 2-1 to 2-N are replaced by decision tree processing blocks 9-1 to 9-N, respectively.
  • Fig. 25 is a block diagram showing a configuration of the decision tree processing block 9 according to the second exemplary embodiment. As compared with the configuration of the decision tree processing block 2 shown in Fig. 5, the number-of-entry counting block 21 is omitted. Also, the rule processing block 220-B of the rule pipeline block 22 outputs an output data 26 to the entry addition target determination block 3. Moreover, the rule processing block 220-1 receives an input data 27 from the entry addition target determination block 3.
  • the rule pipeline block 22 of each entry addition target candidate temporarily adds the new rule to the rule list managed by the addition-target leaf node.
  • the rule pipeline block 22 counts the number of valid rule entries in the rule list.
  • the rule pipeline block 22 instead of the number-of-entry counting block 21 notifies the entry addition target determination block 3 of the number of valid rule entries.
  • the last-stage rule processing block 220-B of the rule pipeline block 22 outputs an output data 26 indicating the number of valid rule entries obtained through the temporal addition processing to the entry addition target determination block 3.
  • the entry addition target determination block 3 selects one entry addition target from the entry addition target candidates, as in the case of the first exemplary embodiment. Then, the entry addition target determination block 3 sends information (input data 27) indicating the selection result to all the entry addition target candidates.
  • Each entry addition target candidate other than the selected entry addition target performs "added-entry invalidation processing".
  • the entry addition target candidate invalidates the new rule that has been temporarily added by the above-mentioned temporal addition processing.
  • the added-entry invalidation processing can be achieved in a similar manner to the DELETE processing.
  • the same processing result as in the first exemplary embodiment can be obtained without using the number-of-entry counting block 21. Since the number-of-entry counting block 21 is omitted, a total memory region can be reduced.
  • LOOKUP processing is the same as in the case of the first exemplary embodiment.
  • Fig. 26 is a flow chart showing the INSERT processing in the present exemplary embodiment.
  • the Steps A1, A6 and A13 are the same as in the case of the INSERT processing in the first exemplary embodiment.
  • Step A3 After the Step A6 is completed, the Step A3 instead of the Step A7 is executed as in the case of the LOOKUP processing. That is, the decision tree node processing block 200-H outputs the output data 2003 including the calculated address value and the command to the rule processing block 220-1 of the rule pipeline block 22.
  • Step A18 The rule pipeline block 22 executes the above-mentioned "temporal addition processing".
  • Fig. 27 is a flow chart showing processing of this Step A18.
  • Step C12 The Step A18 starts from the first rule in the rule list. More specifically, the first-stage rule processing block 220-1 initiates the procedure. Here, the number of valid rule entries is set to an initial value 0.
  • Steps C2, C3, C19 As in the case of the first exemplary embodiment, the rule processing block 220 reads out the entry information and checks the validation flag (Step C2). If the rule is validated (Step C3; Yes), the rule processing block 220 adds "1" to the number of valid rule entries (Step C19). After that, the procedure proceeds to Step C15.
  • Step C15 The rule processing block 220-i outputs an output data 2204 including the address value and the command as well as the number of valid rule entries to the next-stage rule processing block 220-(i+1). Then, the procedure returns to the Step C2, and the processing is executed with respect to the next rule in the rule list.
  • Step C13 If the rule is not validated (Step C3; No), the rule processing block 220 refers to the end flag in the command. If the end flag is '1' (Step C13; Yes), the processing proceeds to the above-mentioned Step C15. On the other hand, if the end flag is '0' (Step C13; No), the procedure proceeds to Step C14.
  • Step C14 As in the case of the first exemplary embodiment, the rule processing block 220 writes the new rule to this entry (i.e. free entry) and sets the validation flag to '1'. Moreover, the rule processing block 220 sets the end flag in the command to '1' and sets the result flag in the command to '1' (addition succeeded). After that, the procedure proceeds to Step C9.
  • Steps C9, C20 As in the case of the first exemplary embodiment, it is determined whether or not the current rule is the last rule in the rule list. If the current rule is not the last rule (Step C9; No), the procedure proceeds to the above-mentioned Step C15. On the other hand, if the current rule is the last rule (Step C9; Yes), the rule processing block 220-B outputs the current command and the number of valid rule entries to the entry addition target determination block 3 (Step C20). Then, the Step A18 ends.
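A software sketch of the temporal addition processing (Steps C12 to C20) is given below; the representation is assumed, and, under a literal reading of the flow chart, the count reported to the entry addition target determination block covers the entries that were already valid, the temporarily added rule itself not being counted at its own stage.

```python
# Hypothetical sketch of the temporal addition processing (Steps C12-C20): in a
# single pass over the rule list, count the valid entries and write the new rule
# to the first free entry.

def temporal_addition(rule_list, new_rule):
    """Returns (result_flag, added_index, valid_entry_count); the count covers
    entries that were already valid before this temporal addition."""
    valid_entries = 0
    added_index = None
    for i, entry in enumerate(rule_list):
        if entry["valid"]:
            valid_entries += 1         # Step C19: count a valid rule entry.
        elif added_index is None:
            entry["rule"] = new_rule   # Step C14: temporarily add the new rule
            entry["valid"] = True      # to the first free entry found.
            added_index = i
    return added_index is not None, added_index, valid_entries

if __name__ == "__main__":
    leaf = [{"valid": True, "rule": "R3"}, {"valid": False, "rule": None},
            {"valid": True, "rule": "R5"}]
    print(temporal_addition(leaf, "Rnew"))   # -> (True, 1, 2)
```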
  • Step A9 Next, the entry addition target determination block 3 selects an entry addition target from the entry addition target candidates in a similar manner to the first exemplary embodiment. Then, the entry addition target determination block 3 sends information indicating the selection result to all the entry addition target candidates.
  • Step A14 The rule pipeline block 22 executes the above-mentioned "added-entry invalidation processing".
  • the added-entry invalidation processing is performed in a similar manner to the DELETE processing.
  • Here, the deletion-target rule is the new rule that has been added by the temporal addition processing. It should be noted that "11", which is not used in the first exemplary embodiment, may be used as the processing type of the command for performing the added-entry invalidation processing. The end flag and the result flag at this time are initialized.
  • DELETE processing Fig. 28 is a flow chart showing the DELETE processing in the present exemplary embodiment.
  • the number-of-entry counting block 21 is omitted. Therefore, the Steps A15 and A16 relating to the number-of-entry counting block 21 can be omitted, as compared with the DELETE processing in the first exemplary embodiment shown in Fig. 22.
  • the other processing is the same as in the case of the first exemplary embodiment.
  • the LOOKUP processing is the same as in the case of the first exemplary embodiment.
  • the INSERT processing includes the processing by the command input block 4, the H-stages processing by the decision tree pipeline block 20, the B-stages read processing and the one-time write processing by the rule pipeline block 22 of the decision tree processing block 2, the processing by the entry addition target determination block 3, the B-stages read processing and the one-time write processing by the rule pipeline block 22, and the processing by the result output block 5. Therefore, it can be executed in a total of H+2B+5 unit cycles. Moreover, unless the invalidation of the added entry in the decision trees other than the addition-target decision tree is completed, it is not possible to execute subsequent processing without waiting. Therefore, a next command can be input after B+3 unit cycles have passed from the input of the INSERT command.
  • the DELETE processing includes the processing by the command input block 4, the H-stages processing by the decision tree pipeline block 20, the B-stages read processing and the one-time write processing by the rule pipeline block 22 of the decision tree processing block 2, and the processing by the result output block 5. Therefore, it can be executed in a total of H+B+3 unit cycles. Since the deletion-target entry is deleted in order, only the one-time write processing in the rule pipeline block 22 that performs the entry deletion needs to be considered. Thus, a next command can be input after 1 unit cycle has passed from the input of the DELETE command.
  • each processing can be executed in a fixed period of time; for example, the LOOKUP processing can be executed in H+B+2 unit cycles, the INSERT processing can be executed in H+2B+5 unit cycles, and the DELETE processing can be executed in H+B+3 unit cycles.
  • the problem that the processing time varies due to the duplicate of rule as in the conventional technique is not caused.
  • it is possible to input a next command at the subsequent unit cycle in the case of the LOOKUP processing, after B+3 unit cycles have passed in the case of the INSERT processing, and after 1 unit cycle has passed in the case of the DELETE processing. Therefore, each processing can be executed in series although slight processing overhead is required.
  • Third Exemplary Embodiment A third exemplary embodiment of the present invention is different from the second exemplary embodiment in the configuration of the command and the INSERT processing using the command.
  • the configuration, the LOOKUP processing and the DELETE processing are the same as those in the second exemplary embodiment, and an overlapping description will be omitted as appropriate.
  • Fig. 29 is a conceptual diagram showing a configuration example of the command in the third exemplary embodiment.
  • the command section is further provided with a field of "entry addition list ID".
  • Fig. 30 is a flow chart showing the INSERT processing in the present exemplary embodiment.
  • Step A19 instead of the Step A18 in the second exemplary embodiment is executed as the temporal addition processing.
  • Step A20 instead of the Step A14 in the second exemplary embodiment is executed as the added-entry invalidation processing.
  • the other processing is the same as in the case of the second exemplary embodiment.
  • Step A19 Fig. 31 is a flow chart showing processing of the Step A19.
  • Step C21 is added between the Step C14 and the Step C9, as compared with the Step A18 (refer to Fig. 27) in the second exemplary embodiment. In the Step C21, the rule processing block 220 that has written the new rule records its own ID in the "entry addition list ID" field of the command.
  • the other processing is the same as in the Step A18.
  • Step A20 Fig. 32 is a flow chart showing processing of the Step A20.
  • Step C1 The Step A20 starts from the first rule in the rule list. More specifically, the first-stage rule processing block 220-1 initiates the procedure.
  • Step C22 The rule processing block 220 determines whether or not its own ID matches the entry addition list ID in the input command. In a case of match (Step C22; Yes), the procedure proceeds to Step C2. On the other hand, in a case of mismatch (Step C22; No), the procedure proceeds to Step C9.
  • Steps C2, C18 The rule processing block 220 uses the input address value to read out the entry information from the entry memory block 2201 (Step C2). Moreover, the rule processing block 220 sets the validation flag in the entry information to '0' and writes it in the entry memory block 2201. As a result, this entry (rule) is invalidated. Furthermore, the rule processing block 220 sets the end flag in the command to '1' and sets the result flag in the command to '1' (deletion succeeded). After that, the procedure proceeds to Step C9.
  • Steps C9, C15 As in the case of the foregoing exemplary embodiments, it is determined whether or not the current rule is the last rule in the rule list. If the current rule is not the last rule (Step C9; No), the rule processing block 220-i outputs the output data 2204 including the address value and the command to the next-stage rule processing block 220-(i+1) (Step C15). Then, the procedure returns to the Step C22 and the processing is executed with respect to the next rule in the rule list. On the other hand, if the current rule is the last rule in the rule list (Step C9; Yes), the Step A20 ends.
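The sketch below illustrates, under assumed data structures, how the entry addition list ID lets the added-entry invalidation processing skip the read/comparison of all other entries (Steps C22, C2 and C18).

```python
# Hypothetical sketch of the added-entry invalidation processing of the third
# exemplary embodiment: only the stage whose ID matches the entry addition list
# ID touches its entry; every other stage skips rule matching entirely.

def invalidate_added_entry(rule_list, entry_addition_list_id):
    """Index i plays the role of rule processing block 220-(i+1)."""
    for i, entry in enumerate(rule_list):
        if i != entry_addition_list_id:
            continue                   # Step C22; No: skip read/comparison.
        entry["valid"] = False         # Steps C2, C18: invalidate the added entry.
        return True                    # end flag / result flag are set to '1'.
    return False

if __name__ == "__main__":
    leaf = [{"valid": True, "rule": "R3"}, {"valid": True, "rule": "Rnew"}]
    print(invalidate_added_entry(leaf, entry_addition_list_id=1))  # -> True
    print(leaf[1]["valid"])                                        # -> False
```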
  • the same effects as in the case of the second exemplary embodiment can be obtained. Furthermore, in the added-entry invalidation processing, the rule matching processing (read/comparison processing) regarding rules other than the invalidation-target rule can be skipped. Therefore, processing time and power consumption can be reduced.
  • Fourth Exemplary Embodiment Fig. 33 is a block diagram showing a configuration of a packet classifier according to a fourth exemplary embodiment of the present invention.
  • the packet classification processing is achieved by a computer executing a software program. More specifically, the packet classifier according to the present exemplary embodiment has a program processing device 10 and a packet classification program 11.
  • the program processing device 10 is achieved by a CPU of a host such as a server or a PC.
  • the packet classification program 11 is a computer program executed by the program processing device 10 and controls an operation of the program processing device 10. By executing the packet classification program 11, the program processing device 10 can have the functions of the packet classifier 1 described in the foregoing exemplary embodiments.
  • the packet classification program 11 may be recorded on a tangible computer-readable recording medium.
  • a multi-core processor provided with a plurality of CPU cores may be used as the program processing device 10.
  • the respective CPU cores may be assigned to the command input block 4, the decision tree processing blocks 2-1 to 2-N, the entry addition target determination block 3, the result output block 5 and the decision tree node processing blocks 200, the number-of-entry counting block 21 and the rule processing blocks 220 of the decision tree processing block 2, which enables further faster processing.
  • a packet classifier comprising: a plurality of decision tree processing blocks respectively configuring a plurality of decision trees for use in packet classification; a command input block configured to input a command simultaneously to said plurality of decision tree processing blocks; and an entry addition target determination block connected to said plurality of decision tree processing blocks, wherein a single rule is managed by a single leaf node in any one of said plurality of decision trees, wherein if said command is a lookup command, each of said plurality of decision tree processing blocks uses its own decision tree to determine whether or not a search key matches any rule, wherein if said command is an insertion command for adding a new rule to a decision tree, each of said plurality of decision tree processing blocks determines whether or not said new rule can be managed by a single leaf node in its own decision tree, wherein if said new rule can be managed by a single leaf node in a decision tree processing block, this decision tree processing block is an entry addition target candidate and this single leaf node is an addition-target leaf node, wherein said entry addition target determination block selects, from said entry addition target candidate, an entry addition target being a target to which said new rule is added, and wherein said selected entry addition target adds said new rule to a rule list managed by said addition-target leaf node in its own decision tree.
  • each of said plurality of decision tree processing blocks comprises a number-of-entry memory block configured to retain the number-of-entry of valid rules managed by each leaf node in its own decision tree, wherein said entry addition target candidate reads out said number-of-entry regarding said addition-target leaf node from said number-of-entry memory block and notifies said entry addition target determination block of said read-out number-of-entry, and wherein said entry addition target updates said number-of-entry memory block by adding 1 to said number-of-entry regarding said addition-target leaf node.
  • each of said plurality of decision tree processing blocks determines whether or not said existing rule is being managed by a single leaf node in its own decision tree, wherein if said existing rule is being managed by a single leaf node in a decision tree processing block, this decision tree processing block is an entry deletion target and this single leaf node is a deletion-target leaf node, wherein said entry deletion target invalidates said existing rule and updates said number-of-entry memory block by subtracting 1 from said number-of-entry regarding said deletion-target leaf node.
  • each of said entry addition target candidate performs temporal addition processing that temporarily adds said new rule to a rule list managed by said addition-target leaf node, and counts the number-of-entry of valid rules in this rule list during said temporal addition processing, wherein each of said entry addition target candidate notifies said entry addition target determination block of said number-of-entry obtained in said temporal addition processing, and wherein each of said entry addition target candidate other than said selected entry addition target performs added-entry invalidation processing that invalidates said temporarily added new rule.
  • each of said entry addition target candidate writes location information in said insertion command, said location information indicating a location in said rule list to which said new rule is added, wherein each of said entry addition target candidate other than said selected entry addition target performs said added-entry invalidation processing with reference to said location information without performing rule matching.
  • each of said plurality of decision tree processing blocks comprises: a decision tree pipeline block comprising decision tree node processing blocks whose number of stages is equal to a number of stages of the corresponding decision tree and using said decision tree node processing blocks to perform pipeline processing for searching in the decision tree; and a rule pipeline block comprising rule processing blocks whose number of stages is equal to a number of rules that can be managed in each leaf node and using said rule processing blocks to perform pipeline processing for search for, addition and deletion of a rule.
  • a packet classification method that uses a plurality of decision trees, wherein a single rule is managed by a single leaf node in any one of said plurality of decision trees, wherein said packet classification method comprises: inputting a command simultaneously to said plurality of decision trees; if said command is a lookup command, determining in each of said plurality of decision trees whether or not a search key matches any rule; if said command is an insertion command for adding a new rule to a decision tree, determining in each of said plurality of decision trees whether or not said new rule can be managed by a single leaf node, wherein if said new rule can be managed by a single leaf node in a decision tree, this decision tree is an entry addition target candidate and this single leaf node is an addition-target leaf node; selecting, from said entry addition target candidate, an entry addition target being a target to which said new rule is added; and adding said new rule to a rule list managed by said addition-target leaf node of said entry addition target.
  • a packet classification program which causes a computer to perform packet classification processing that uses a plurality of decision trees, wherein a single rule is managed by a single leaf node in any one of said plurality of decision trees, wherein said packet classification processing comprises: inputting a command simultaneously to said plurality of decision trees; if said command is a lookup command, determining in each of said plurality of decision trees whether or not a search key matches any rule; if said command is an insertion command for adding a new rule to a decision tree, determining in each of said plurality of decision trees whether or not said new rule can be managed by a single leaf node, wherein if said new rule can be managed by a single leaf node in a decision tree, this decision tree is an entry addition target candidate and this single leaf node is an addition-target leaf node; selecting, from said entry addition target candidate, an entry addition target being a target to which said new rule is added; and adding said new rule to a rule list managed by said addition-target leaf node of said entry addition target.

Abstract

A packet classifier has: a plurality of decision tree processing blocks respectively configuring a plurality of decision trees; a command input block configured to input a command simultaneously to the decision tree processing blocks; and an entry addition target determination block. A single rule is managed by a single leaf node in any one decision tree. If the command is an insertion command for adding a new rule, each decision tree processing block determines whether or not the new rule can be managed by a single leaf node in its own decision tree. If the new rule can be managed by a single leaf node in a decision tree processing block, this decision tree processing block is an entry addition target candidate and this single leaf node is an addition-target leaf node. The entry addition target determination block selects an entry addition target from the entry addition target candidate. The selected entry addition target adds the new rule to a rule list managed by the addition-target leaf node.

Description

PACKET CLASSIFIER, PACKET CLASSIFICATION METHOD AND PACKET CLASSIFICATION PROGRAM
The present invention relates to packet classification.
Packet classification is an important technique for classifying, in a router and a switch on a network, packets into a packet sequence called a flow having a set of attributes. The packet classification plays an important role for achieving a network application having something extra such as provision of QoS (Quality of Service) with respect to individual flow and security of firewall and the like.
In the packet classification, a "rule (sometimes called a filter)" is defined by using one or more fields included in a packet header. For example, the rule is defined by a plurality of header fields such as a source IP address, a destination IP address and a protocol number specified in an IP (Internet Protocol) header of a packet as well as a source port number and a destination port number specified in a TCP (Transmission Control Protocol) / UDP (User Datagram Protocol) header. The packet classification where a rule is defined by using a plurality of header fields in this manner is especially called Multi-Field Packet Classification.
Typically, a plurality of rules are defined. When a packet is received, field values used in the definition of the rule are extracted from the packet header. A combination of the extracted field values is compared with a plurality of rules to determine which rule matches, and thereby the packet is classified into a flow. Here, the combination of the field values extracted from the received packet header is referred to as a "search key". Also, in general, priority and action are defined for each rule. If the search key matches two or more rules among the plurality of rules, a rule with a higher priority is selected. Moreover, how to treat the received packet if it matches the rule (e.g. discard, transmit to port 1, and the like) is defined in the action.
Each field in the rule is defined by such methods as Exact Match wherein the field is defined as a specific value, Prefix Match wherein some upper bits are specified while some lower bits are defined as indefinite by the use of wildcard '*', Range Match wherein the field is defined as a range between two specific values, and Wildcard Match wherein the field is defined by using wildcard in units of an individual bit. For example, let us consider a field of 8 bits. In the case of Exact Match, the field is specified by a specific value such as "00110101". In the case of Prefix Match, the field is specified by a value such as "0011****" that starts from 4 bits of "0011". In the case of Range Match, the field is specified by a range such as [3-64] that allows the field of 8 bits within a range from 3 to 64 in decimal number. In the case of Wildcard Match, the wildcard is usable in units of a bit as in "0**10*01".
With regard to the multi-field packet classification technique, increase in a size of a rule set and improvement in a link speed cause a technical problem of how to achieve high-speed processing in a high-speed router or a high-speed switch. At present, in order to achieve the high-speed processing, a method based on a ternary content addressable memory (TCAM) is used in many cases.
However, the TCAM has disadvantages such as high costs, high power consumption and a large circuit size. Moreover, in the case where Range Match is used, the rule needs to be divided into rules using Prefix Match, which causes a problem that the number of rules is increased.
Meanwhile, in order to avoid the problem of high costs and high power consumption in the case of TCAM, various multi-field packet classification methods which use a static random access memory (SRAM) or a dynamic random access memory (DRAM) with lower costs and lower power consumption have been proposed.
For example, Non Patent Literature 1 (NPL1) proposes a method using a "decision tree". In the method using a decision tree, matching is not performed with respect to all rules, but matching is performed with respect to only a small number of rules which have possibility of matching with the search key. As a result, a time required for search processing is reduced. The method using a decision tree will be briefly described with reference to Figs. 1 to 3.
Fig. 1 shows an example of a rule set consisting of 16 rules from R0 to R15 that are defined by using two fields X and Y both having 4-bit lengths. The fields X and Y each correspond to an actual packet header field such as a source IP address and a source port number. The field X is expressed by a binary number, and '*' denotes the wildcard. The field Y is expressed by Range Match, and a and b in "[a : b]" respectively indicate a lower limit value and an upper limit value (decimal notation). It should be noted that the priority and the action added to each rule are omitted here.
Fig. 2 illustrates the rule set shown in Fig. 1 on a 2-dimensional space consisting of two axes of the fields X and Y. Respective digits on the X and Y axes are expressed by decimal notation. According to the method using a decision tree, the multi-dimensional space as shown in Fig. 2 is divided with focusing on the plurality of dimensions. A decision tree is constructed by repeating the region division until the number of rules existing in a post-division region becomes equal to or less than a certain threshold value. Here, a group of rules managed by the post-division region is referred to as a "rule list".
Fig. 3 shows an example of the decision tree that is constructed with respect to the rule set shown in Fig. 1. It should be noted that in the decision tree shown in Fig. 3, the threshold value of the number of rules within the post-division region is set to 2. As shown in Fig. 3, the whole space is first divided evenly in the respective X and Y directions. That is, the whole space (X, Y) = ([0:15], [0:15]) is divided into four regions: a region 0 ([0:7], [0:7]), a region 1 ([0:7], [8:15]), a region 2 ([8:15], [0:7]) and a region 3 ([8:15], [8:15]). The rule lists managed by the respective regions are as follows: [R7, R8, R9, R11] (region 0), [R0, R6, R9, R10, R11, R12] (region 1), [R1, R2, R3, R4, R5, R13, R14] (region 2) and [R10, R14, R15] (region 3). Since more rules than 2 being the threshold value are still managed in each region, the region division is further performed with respect to each region until the number of rules becomes equal to or less than the threshold value. In the example shown in Fig. 3, the whole space is eventually divided into 24 regions. It should be noted that algorithm for constructing a decision tree is described in Non Patent Literature 1 and Non Patent Literature 2, the description of which is omitted here.
When performing the packet classification, the decision tree is followed with reference to the search key, and all rules managed by a leaf node at a point of arrival, whose number is equal to or less than the threshold value, are used for the matching. As an example, let us consider the packet classification with respect to a packet where the search key is X = 0111 and Y = 1001. According to the decision tree shown in Fig. 3, the whole space is divided into the above-mentioned four regions at the root node, and the packet is found to belong to the region 1 ([0:7], [8:15]) among the four regions. Subsequently, at the node of the region 1, the space is further divided evenly in the respective X and Y directions. That is, the region 1 is divided into four regions: a region 10 ([0:3], [8:11]), a region 11 ([0:3], [12:15]), a region 12 ([4:7], [8:11]) and a region 13 ([4:7], [12:15]). The packet is found to belong to the region 12 among the four regions. Furthermore, the region 12 is divided into four regions, and the packet is found to belong to a region 122 ([6:7], [8:9]). Then, the matching is performed with respect to two rules R9 and R10 managed in the region 122, and a matching rule is selected. It should be noted that in the present example, the packet matches both of the rules R9 and R10 and therefore a rule is selected depending on the priority added to each rule, which is omitted in Fig. 3.
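For readers who prefer code, the following self-contained Python sketch reproduces the general idea of the decision tree described above: the (X, Y) space is split evenly into four child regions until at most a threshold number of rules remain, and a lookup follows the tree and matches only against the rule list of the leaf it reaches. The tiny rule set, the range-only rule representation and the convention that a smaller priority value means a higher priority are invented for illustration and are not the rule set of Fig. 1.

```python
# Toy model of a decision tree built by even region division, for illustration only.

THRESHOLD = 2

def overlaps(rule, region):
    (qx, qy) = region
    return (rule["x"][0] <= qx[1] and qx[0] <= rule["x"][1]
            and rule["y"][0] <= qy[1] and qy[0] <= rule["y"][1])

def build(rules, region):
    (x0, x1), (y0, y1) = region
    here = [r for r in rules if overlaps(r, region)]
    if len(here) <= THRESHOLD or (x0 == x1 and y0 == y1):
        return {"leaf": True, "rules": here, "region": region}
    xm, ym = (x0 + x1) // 2, (y0 + y1) // 2     # even split in X and Y
    children = [((x0, xm), (y0, ym)), ((x0, xm), (ym + 1, y1)),
                ((xm + 1, x1), (y0, ym)), ((xm + 1, x1), (ym + 1, y1))]
    return {"leaf": False, "region": region,
            "children": [build(here, c) for c in children]}

def lookup(node, x, y):
    while not node["leaf"]:
        node = next(c for c in node["children"]
                    if c["region"][0][0] <= x <= c["region"][0][1]
                    and c["region"][1][0] <= y <= c["region"][1][1])
    hits = [r for r in node["rules"]            # match only the leaf's rule list
            if r["x"][0] <= x <= r["x"][1] and r["y"][0] <= y <= r["y"][1]]
    return min(hits, key=lambda r: r["prio"])["name"] if hits else None

if __name__ == "__main__":
    rules = [{"name": "Ra", "x": (0, 7),  "y": (8, 15), "prio": 1},
             {"name": "Rb", "x": (4, 7),  "y": (8, 11), "prio": 0},
             {"name": "Rc", "x": (8, 15), "y": (0, 15), "prio": 2}]
    tree = build(rules, ((0, 15), (0, 15)))
    print(lookup(tree, 7, 9))   # X = 0111, Y = 1001 -> 'Rb' (higher priority than 'Ra')
```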
According to the method using a decision tree as described above, the region division may result in a possibility that an identical rule is managed in a plurality of post-division regions, and this is hereinafter referred to as "duplicate of rule". For example, in Fig. 3, such rules as R7 and R9 are duplicated. If the duplicate of rule occurs, a memory usage is increased in order to manage the duplicated rule itself or address values for the duplicated rules, and apparently more rules than an actual rule set must be treated. That is to say, the duplicate of rule causes an increase in the amount of data in the decision tree. The following are known as techniques for suppressing the duplicate of rule.
According to Non Patent Literature 1, a node other than a leaf node also has a rule list. If an identical rule appears to be duplicated to all child nodes (post-division regions) as a result of the region division, the rule is managed in the rule list of this node. However, this method cannot prevent the duplicate of rule when a rule is duplicated to a plurality of, but not all, child nodes.
Non Patent Literature 2 (NPL2) proposes hardware architecture in which a packet classification method using a decision tree is executed at high speed by hardware using pipeline processing. Also in this method, a node other than a leaf node is similarly provided with a rule list. If an identical rule appears to be duplicated to a plurality of child nodes as a result of the region division, the rule is managed in the rule list of this node. However, if the number of rules that appear to be duplicated exceeds a limit number of rules that can be managed in the rule list of a single node, the duplicate of rule is caused.
Non Patent Literature 3 (NPL3) also proposes a method in which the packet classification is executed by hardware using pipeline processing, as in the case of Non Patent Literature 2. However, only leaf nodes manage rules in the decision tree, which is different from the cases of Non Patent Literature 1 and Non Patent Literature 2. According to this method, a plurality of decision trees are prepared, and each rule is managed in any one decision tree in which the duplicate of the rule does not occur. As a result, the duplicate of rule is prevented from occurring.
[NPL1] Sumeet Singh, Florin Baboescu, George Varghese, Jia Wang, "Packet Classification Using Multidimensional Cutting", Proceedings of the ACM SIGCOMM 2003 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, 2003, pp. 213 - 224.
[NPL2] Weirong Jiang, Viktor K. Prasanna, "Large-Scale Wire-Speed Packet Classification on FPGAs", Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, 2009, pp. 219 - 228.
[NPL3] Weirong Jiang, Viktor K. Prasanna, Norio Yamagaki, "Decision Forest: A Scalable Architecture for Flexible Flow Matching on FPGA", Proceedings of 2010 International Conference on Field Programmable Logic and Application, 2010, pp. 394 - 399.
Regarding the packet classification using a decision tree, it is also important to add a new rule to the decision tree and to delete an existing rule from the decision tree. Such rule addition/deletion processing is hereinafter referred to as "dynamic update of entry".
In the case of Non Patent Literature 3, a concrete procedure for performing the dynamic update of entry is not described at all. It is therefore considered to be necessary to previously determine which rule in which node in which decision tree is to be updated. That is, "preprocessing" is separately required for performing the dynamic update of entry, which is a problem. Particularly, since Non Patent Literature 3 presupposes that an optimum decision tree is constructed with respect to a given rule set, it is considered to be necessary to change the whole configuration of the decision tree. In this case, a time required for the preprocessing of the dynamic update of entry and a time required for the entry update are both increased, which is a problem.
An object of the present invention is to provide a technique that can achieve the dynamic update of entry without performing preprocessing, in packet classification that uses a decision tree.
In an aspect of the present invention, a packet classifier is provided. The packet classifier has: a plurality of decision tree processing blocks respectively configuring a plurality of decision trees for use in packet classification; a command input block configured to input a command simultaneously to the plurality of decision tree processing blocks; and an entry addition target determination block connected to the plurality of decision tree processing blocks. A single rule is managed by a single leaf node in any one of the plurality of decision trees. If the command is a lookup command, each of the plurality of decision tree processing blocks uses its own decision tree to determine whether or not a search key matches any rule. If the command is an insertion command for adding a new rule to a decision tree, each of the plurality of decision tree processing blocks determines whether or not the new rule can be managed by a single leaf node in its own decision tree. If the new rule can be managed by a single leaf node in a decision tree processing block, this decision tree processing block is an entry addition target candidate and this single leaf node is an addition-target leaf node. The entry addition target determination block selects, from the entry addition target candidate, an entry addition target being a target to which the new rule is added. The selected entry addition target adds the new rule to a rule list managed by the addition-target leaf node in its own decision tree.
In another aspect of the present invention, a packet classification method that uses a plurality of decision trees is provided. A single rule is managed by a single leaf node in any one of the plurality of decision trees. The packet classification method includes: (A) inputting a command simultaneously to the plurality of decision trees; (B) if the command is a lookup command, determining in each of the plurality of decision trees whether or not a search key matches any rule; (C) if the command is an insertion command for adding a new rule to a decision tree, determining in each of the plurality of decision trees whether or not the new rule can be managed by a single leaf node, wherein if the new rule can be managed by a single leaf node in a decision tree, this decision tree is an entry addition target candidate and this single leaf node is an addition-target leaf node; (D) selecting, from the entry addition target candidate, an entry addition target being a target to which the new rule is added; and (E) adding the new rule to a rule list managed by the addition-target leaf node of the entry addition target.
In still another aspect of the present invention, a packet classification program which causes a computer to perform packet classification processing that uses a plurality of decision trees is provided. A single rule is managed by a single leaf node in any one of the plurality of decision trees. The packet classification processing includes: (A) inputting a command simultaneously to the plurality of decision trees; (B) if the command is a lookup command, determining in each of the plurality of decision trees whether or not a search key matches any rule; (C) if the command is an insertion command for adding a new rule to a decision tree, determining in each of the plurality of decision trees whether or not the new rule can be managed by a single leaf node, wherein if the new rule can be managed by a single leaf node in a decision tree, this decision tree is an entry addition target candidate and this single leaf node is an addition-target leaf node; (D) selecting, from the entry addition target candidate, an entry addition target being a target to which the new rule is added; and (E) adding the new rule to a rule list managed by the addition-target leaf node of the entry addition target.
According to the present invention, it is possible to achieve the dynamic update of entry without performing preprocessing, in packet classification that uses a decision tree.
Fig. 1 is a conceptual diagram showing an example of a rule set.
Fig. 2 is a conceptual diagram showing a case where the rule set shown in Fig. 1 is arranged on a 2-dimensional space consisting of two axes of fields X and Y.
Fig. 3 is a conceptual diagram showing an example of a decision tree with respect to the rule set shown in Fig. 1.
Fig. 4 is a block diagram showing a configuration of a packet classifier according to a first exemplary embodiment of the present invention.
Fig. 5 is a block diagram showing a configuration of each decision tree processing block of the packet classifier according to the first exemplary embodiment.
Fig. 6 is a conceptual diagram showing a correspondence relationship between decision tree node processing blocks and respective nodes of the decision tree in the first exemplary embodiment.
Fig. 7 is a block diagram showing a configuration of a decision tree node processing block of the decision tree processing block according to the first exemplary embodiment.
Fig. 8 is a block diagram showing a configuration of a number-of-entry counting block of the decision tree processing block according to the first exemplary embodiment.
Fig. 9 is a conceptual diagram showing a correspondence relationship between rule processing blocks and rules included in a rule list of each leaf node of the decision tree in the first exemplary embodiment.
Fig. 10 is a block diagram showing a configuration of the rule processing block of the decision tree processing block according to the first exemplary embodiment.
Fig. 11 is a flow chart showing LOOKUP processing in the first exemplary embodiment.
Fig. 12 is a conceptual diagram showing a configuration example of a command in the first exemplary embodiment.
Fig. 13 is a flow chart showing processing of Step A2.
Fig. 14 is a conceptual diagram showing a configuration example of node information in the first exemplary embodiment.
Fig. 15 is a diagram for explaining an address calculation method in the first exemplary embodiment.
Fig. 16 is a flow chart showing processing of Step A4.
Fig. 17 is a conceptual diagram showing a configuration example of entry information in the first exemplary embodiment.
Fig. 18 is a flow chart showing INSERT processing in the first exemplary embodiment.
Fig. 19 is a flow chart showing processing of Step A6.
Fig. 20 is a flow chart showing processing of Step A9.
Fig. 21 is a flow chart showing processing of Step A12.
Fig. 22 is a flow chart showing DELETE processing in the first exemplary embodiment.
Fig. 23 is a flow chart showing processing of Step A14.
Fig. 24 is a block diagram showing a configuration of a packet classifier according to a second exemplary embodiment of the present invention.
Fig. 25 is a block diagram showing a configuration of a decision tree processing block of the packet classifier according to the second exemplary embodiment.
Fig. 26 is a flow chart showing INSERT processing in the second exemplary embodiment.
Fig. 27 is a flow chart showing processing of Step A18.
Fig. 28 is a flow chart showing DELETE processing in the second exemplary embodiment.
Fig. 29 is a conceptual diagram showing a configuration example of a command in a third exemplary embodiment of the present invention.
Fig. 30 is a flow chart showing INSERT processing in the third exemplary embodiment.
Fig. 31 is a flow chart showing processing of Step A19.
Fig. 32 is a flow chart showing processing of Step A20.
Fig. 33 is a block diagram showing a configuration of a packet classifier according to a fourth exemplary embodiment of the present invention.
Exemplary embodiments of the present invention will be described with reference to the attached drawings.
1. First Exemplary Embodiment
1-1. Summary
Fig. 4 is a block diagram showing a configuration of a packet classifier 1 according to a first exemplary embodiment of the present invention. The packet classifier 1 performs packet classification by using a decision tree. That is, the packet classifier 1 uses a decision tree to determine which rule matches a search key extracted from a packet header. Moreover, the packet classifier 1 according to the present exemplary embodiment automatically performs dynamic update of entry (addition of a new rule, deletion of an existing rule) in response to an input command.
In the present exemplary embodiment, the packet classifier 1 is achieved by a hardware circuit. More specifically, as shown in Fig. 4, the packet classifier 1 has one or a plurality of decision tree processing blocks 2 (2-1 to 2-N: N is an integer equal to or larger than 2), an entry addition target determination block 3, a command input block 4 and a result output block 5. An input data 6 is input to the packet classifier 1, and an output data 7 is output from the packet classifier 1.
The input data 6 depends on processing that the packet classifier 1 should execute. The processing executed by the packet classifier 1 includes: (1) processing of searching for a rule that matches a search key (hereinafter referred to as "LOOKUP processing"), (2) processing of adding a new rule (new entry) to a decision tree (hereinafter referred to as "INSERT processing"), (3) processing of invalidating an existing rule (existing entry) (hereinafter referred to as "DELETE processing"), and (4) Configuration processing that previously writes setting values to a memory and a configuration register included in the packet classifier 1. Here, the Configuration processing, which is executed at a time of initialization, sets parameters regarding a configuration in a configuration register and a memory installed in each block included in the packet classifier 1. In the following description, only the LOOKUP processing, the INSERT processing and the DELETE processing are considered. The input data 6 includes a type data indicating a processing type and a data used in the processing. In the case of the LOOKUP processing, the input data 6 includes the type data indicating the LOOKUP processing and a search key. In the case of the INSERT processing, the input data 6 includes the type data indicating the INSERT processing and a new rule to be added. In the case of the DELETE processing, the input data 6 includes the type data indicating the DELETE processing and an existing rule to be invalidated.
The command input block 4 receives the input data 6 and generates a command depending on the input data 6. The command includes a type data indicating a processing type and a data used in the processing, as in the case of the input data 6. A lookup command that instructs execution of the LOOKUP processing includes the type data indicating the LOOKUP processing and a search key. An insertion command that instructs execution of the INSERT processing includes the type data indicating the INSERT processing and a new rule to be added. A deletion command that instructs execution of the DELETE processing includes the type data indicating the DELETE processing and an existing rule to be invalidated. The command input block 4 inputs the generated command simultaneously to the plurality of decision tree processing blocks 2-1 to 2-N.
The plurality of decision tree processing blocks 2-1 to 2-N respectively configure a plurality of decision trees for use in the packet classification. Here, the plurality of decision trees are so configured as to prevent the duplicate of rule from occurring, as in the case of the Non Patent Literature 3. That is, each rule is managed in a decision tree where the duplicate of the rule does not occur. In other words, a single rule is managed by only a single leaf node in any one of the plurality of decision trees. Each decision tree processing block 2 executes the LOOKUP processing, the INSERT processing or the DELETE processing depending on the command input from the command input block 4. It should be noted that the plurality of decision tree processing blocks 2-1 to 2-N can execute respective processing concurrently and in parallel.
<LOOKUP processing>
In response to the lookup command, the plurality of decision tree processing blocks 2-1 to 2-N execute the LOOKUP processing concurrently and in parallel. More specifically, each decision tree processing block 2 uses its own decision tree to determine whether or not a search key matches any rule managed by any leaf node. If there is a rule that matches the search key, the decision tree processing block 2 notifies the result output block 5 of the matching rule. If the search key matches a plurality of rules included in the rule list in the leaf node, the decision tree processing block 2 notifies the result output block 5 of a rule having the highest priority among the plurality of matching rules. On the other hand, if there is no rule that matches the search key, the decision tree processing block 2 notifies the result output block 5 of absence of the matching rule. The result output block 5 receives the search results respectively from the plurality of decision tree processing blocks 2-1 to 2-N and selects a matching rule having the highest priority. Then, the result output block 5 outputs an output data 7 indicating the selection result as a final result of the LOOKUP processing.
<INSERT processing>
In response to the insertion command, the plurality of decision tree processing blocks 2-1 to 2-N execute the INSERT processing concurrently and in parallel. More specifically, each decision tree processing block 2 first determines whether or not the duplicate of rule occurs when a new rule is added to its own decision tree. In other words, each decision tree processing block 2 determines whether or not the new rule can be managed by only a single leaf node in its own decision tree. If the duplicate of rule occurs, this decision tree processing block 2 is excluded from a target of addition of new rule. On the other hand, if the duplicate of rule does not occur, namely, if the new rule can be managed by only a single leaf node in its own decision tree, this decision tree processing block 2 is an "entry addition target candidate" and this single leaf node is an "addition-target leaf node".
The decision tree processing block 2 being a target to which the new rule is added is an "entry addition target". The entry addition target is selected from the above-mentioned entry addition target candidate. It is the entry addition target determination block 3 that selects the entry addition target. As shown in Fig. 4, the entry addition target determination block 3 is connected to each of the plurality of decision tree processing blocks 2-1 to 2-N. Based on information notified from the plurality of decision tree processing blocks 2-1 to 2-N, the entry addition target determination block 3 recognizes the entry addition target candidates and selects the entry addition target from the entry addition target candidates.
For example, each entry addition target candidate notifies the entry addition target determination block 3 of the number of valid rule entries (number of valid entries) managed by the addition-target leaf node. Then, the entry addition target determination block 3 refers to the number of valid entries received from each entry addition target candidate to select the entry addition target in accordance with a certain policy. For example, the entry addition target determination block 3 selects, as the entry addition target, the entry addition target candidate in which the number of valid entries is smallest. In this case, unevenness of the number of rules between the plurality of decision trees can be suppressed, which is preferable.
The entry addition target determination block 3 notifies each decision tree processing block 2 of the result of the selection of the entry addition target. For example, the entry addition target determination block 3 instructs the entry addition target to add the new rule and instructs the other decision tree processing blocks 2 not to execute the rule addition processing. Then, the decision tree processing block 2 being the entry addition target adds the new rule to the rule list managed by the addition-target leaf node in its own decision tree.
In this manner, the INSERT processing is executed. The result output block 5 receives the processing results from the plurality of decision tree processing blocks 2-1 to 2-N and outputs the output data 7 indicating the final result of the INSERT processing.
<DELETE processing>
In response to the deletion command, the plurality of decision tree processing blocks 2-1 to 2-N execute the DELETE processing concurrently and in parallel. More specifically, each decision tree processing block 2 determines whether or not an existing rule as a deletion target is being managed by a single leaf node in its own decision tree. If the existing rule is being managed by a single leaf node, this decision tree processing block 2 is an "entry deletion target" and this single leaf node is a "deletion-target leaf node". The decision tree processing block 2 being the entry deletion target invalidates the existing rule being managed by the deletion-target leaf node.
In this manner, the DELETE processing is executed. The result output block 5 receives the processing results from the plurality of decision tree processing blocks 2-1 to 2-N and outputs the output data 7 indicating the final result of the DELETE processing.
According to the present exemplary embodiment, as described above, the duplicate of rule is prevented from occurring. In particular, in the dynamic update of entry, the INSERT processing is so executed as to prevent the duplicate of rule from occurring. As a comparative example, let us consider a case where the dynamic update of entry is performed with respect to a decision tree in which the duplicate of rule is allowed. In the dynamic update of entry, it is necessary to rewrite a data of the update-target entry at a data storage region in a memory. If the duplicate of rule exists, the data rewriting must be repeated as many times as the number of duplicates. Therefore, regarding the INSERT processing or the DELETE processing, the processing time is not always constant and, in some cases, the processing time increases greatly. According to the present exemplary embodiment, such a problem can be solved. That is, the processing time required for the dynamic update of entry such as the INSERT processing and the DELETE processing becomes constant.
Furthermore, according to the present exemplary embodiment, at the time of the INSERT processing, each of the plurality of decision tree processing blocks 2-1 to 2-N receives the insertion command and, in response to the insertion command, automatically determines whether or not it is an entry addition target candidate. The entry addition target determination block 3 automatically determines an appropriate entry addition target based on information notified from at least the entry addition target candidate. Then, the entry addition target automatically adds the new rule to the addition-target leaf node. Moreover, at the time of the DELETE processing, each of the plurality of decision tree processing blocks 2-1 to 2-N receives the deletion command and, in response to the deletion command, automatically determines whether or not it is an entry deletion target. Then, the entry deletion target automatically invalidates a specified rule in the deletion-target leaf node. In this manner, the dynamic update of entry is automatically performed in response to the command. There is no need to perform "preprocessing" that previously determines which rule in which node in which decision tree is to be updated. That is, according to the present exemplary embodiment, it is possible to achieve the dynamic update of entry without performing the preprocessing, in the packet classification that uses the decision tree.
It should be noted that the packet classifier 1 according to the present exemplary embodiment is provided, for example, in a network device such as a switch or a router, or in a NIC (Network Interface Card) installed in a server as an extension card or an on-board card. In this case, the packet classifier 1 is connected to a control block. The control block has functions of analyzing a packet header of a received packet, extracting a search key from the packet header and making the packet classifier 1 execute the LOOKUP processing using the search key. Furthermore, the control block has functions of making the packet classifier 1 execute the INSERT processing that adds an externally-specified new rule and making the packet classifier 1 execute the DELETE processing that deletes an externally-specified existing rule. The packet classifier 1 receives the input data 6 depending on the processing from the control block and outputs the output data 7 to the control block.
1-2. Configuration example
Next, a configuration example of the packet classifier 1 according to the present exemplary embodiment will be described in more detail. Fig. 5 is a block diagram showing a configuration of each decision tree processing block 2 of the packet classifier 1 according to the present exemplary embodiment. As shown in Fig. 5, the decision tree processing block 2 has a decision tree pipeline block 20, a number-of-entry counting block 21 and a rule pipeline block 22. An input data input from the command input block 4 to the decision tree processing block 2 is an input data 23. A data output from the decision tree processing block 2 to the result output block 5 is an output data 24. A data communicated between the decision tree processing block 2 and the entry addition target determination block 3 is an input-output data 25.
1-2-1. Decision tree pipeline block 20
The decision tree pipeline block 20 searches the decision tree by pipeline processing. More specifically, the decision tree pipeline block 20 has decision tree node processing blocks 200-1 to 200-H whose number of stages is equal to the depth H of the corresponding decision tree. As shown in Fig. 6, a decision tree node processing block 200-i (i = 1, 2, ..., H) corresponds to the nodes at depth j = i-1 (j = 0, 1, ..., H-1) from the root node in the decision tree and performs processing regarding the corresponding nodes. By using the decision tree node processing blocks 200-1 to 200-H, the decision tree pipeline block 20 performs the searching processing from the root node towards the leaf node one node at a time in the decision tree. Preferably, processing of one node in the decision tree is performed every one clock cycle.
Fig. 7 is a block diagram showing a configuration of the decision tree node processing block 200 according to the present exemplary embodiment. As shown in Fig. 7, the decision tree node processing block 200 has a child node determination block 2000 and a node information memory block 2001.
The node information memory block 2001 is configured by a storage medium such as a memory and a register and stores node information. For each of the nodes located at the same depth as seen from the root node, the node information indicates the region division information of that node, namely, information on the child nodes (post-division regions) managed by that node.
The decision tree node processing block 200 receives an input data 2002 from the former-stage decision tree node processing block 200. Note that the decision tree node processing block 200-1 receives an input data 2002 (= input data 23) from the command input block 4. The input data 2002 includes a command depending on the processing and an address value used for accessing the node information memory block 2001. The input data 2002 from the former-stage decision tree node processing block 200 further includes information of an "effective bit length" described later.
The child node determination block 2000 receives the input data 2002 and refers to the address value specified in the input data 2002 to read out the node information from the node information memory block 2001. Then, based on the node information, the effective bit length and the command, the child node determination block 2000 calculates an address value at which the node information of child nodes of the node is stored. Moreover, the child node determination block 2000 updates the effective bit length. The calculation of the address value and the update of the effective bit length will be described later in detail. An output data 2003 includes the command included in the input data 2002, the calculated new address value and the post-update effective bit length. The child node determination block 2000 outputs the output data 2003 to the next-stage decision tree node processing block 200.
The last-stage decision tree node processing block 200-H is as follows. In the case of the LOOKUP processing or the DELETE processing, the address value calculated by the child node determination block 2000 is an address value used for accessing an entry memory block 2201 (discussed below) of a rule processing block 220 of the rule pipeline block 22. The decision tree node processing block 200-H outputs the output data 2003 to a rule processing block 220-1. In the case of the INSERT processing, the address value calculated by the child node determination block 2000 is an address value used for accessing a number-of-entry memory block 211 (discussed below) of the number-of-entry counting block 21. An output data 2004 includes the command included in the input data 2002 and the calculated address value. The child node determination block 2000 outputs the output data 2004 to the number-of-entry counting block 21.
1-2-2. Number-of-entry counting block 21
The number-of-entry counting block 21 manages a number-of-entry of valid rules with respect to each of all leaf nodes in the decision tree. It should be noted that the processing by the number-of-entry counting block 21 is executed in a processing cycle corresponding to the depth H of the decision tree.
Fig. 8 is a block diagram showing a configuration of the number-of-entry counting block 21 according to the present exemplary embodiment. As shown in Fig. 8, the number-of-entry counting block 21 has a count processing block 210 and a number-of-entry memory block 211.
The number-of-entry memory block 211 is configured by a storage medium such as a memory and a register and stores number-of-entry information. The number-of-entry information indicates the number-of-entry of valid rules managed by each leaf node in the decision tree.
In the case of the INSERT processing, the count processing block 210 receives an input data 212 (= the above-mentioned output data 2004) from the decision tree node processing block 200-H of the decision tree pipeline block 20. The input data 212 includes the insertion command and the address value used for accessing the number-of-entry memory block 211. The count processing block 210 refers to the address value to read out the number-of-entry information (number of valid rule entries) regarding the addition-target leaf node from the number-of-entry memory block 211. Then, the count processing block 210 outputs an output data 216 indicating the read-out number of valid rule entries to the above-mentioned entry addition target determination block 3.
Moreover, the count processing block 210 receives an input data 216 from the entry addition target determination block 3. If the input data 216 indicates "addition of the new rule to the decision tree", the count processing block 210 adds "1" to the number of valid rule entries regarding the above-mentioned addition-target leaf node and then writes it in the number-of-entry memory block 211. As a result, the number-of-entry memory block 211 is updated to the latest condition. Furthermore, the count processing block 210 outputs an output data 214 to a rule processing block 220-1 of the rule pipeline block 22. The output data 214 includes the insertion command and an address value used for accessing an entry memory block 2201 (discussed below) of the rule processing block 220-1.
In the case of the DELETE processing, the count processing block 210 receives an input data 213 from a rule processing block 220-B of the rule pipeline block 22. The count processing block 210 refers to an address value specified in the input data 213 to read out the number-of-entry information (number of valid rule entries) regarding the deletion-target leaf node from the number-of-entry memory block 211. Then, the count processing block 210 subtracts "1" from the number of valid rule entries and writes it in the number-of-entry memory block 211. As a result, the number-of-entry memory block 211 is updated to the latest condition. Moreover, the count processing block 210 outputs an output data 215 indicating completion of the DELETE processing as the output data 24 to the result output block 5.
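As a non-limiting illustration, the bookkeeping performed by the count processing block 210 and the number-of-entry memory block 211 can be sketched in software as follows. The class and method names are illustrative assumptions and do not appear in the figures; the actual block is a hardware pipeline stage.

    class EntryCounter:
        """Software sketch of the number-of-entry counting block 21.

        One counter of valid rule entries is kept per leaf node, indexed by the
        address value computed by the last decision tree node processing stage.
        """

        def __init__(self, num_leaf_nodes):
            # number-of-entry memory block 211: one counter per leaf node
            self.valid_entries = [0] * num_leaf_nodes

        def read_for_insert(self, leaf_addr):
            # INSERT: report the current count so that the entry addition target
            # determination block 3 can pick the least-loaded candidate
            return self.valid_entries[leaf_addr]

        def commit_insert(self, leaf_addr):
            # executed only by the block selected as the entry addition target
            self.valid_entries[leaf_addr] += 1

        def commit_delete(self, leaf_addr):
            # DELETE: decrement after the rule pipeline reports a successful
            # invalidation (end flag and result flag both set)
            self.valid_entries[leaf_addr] -= 1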
1-2-3. Rule pipeline block 22
The rule pipeline block 22 performs rule processing (search for a rule matching a search key, addition of a new rule, and deletion of an existing rule) based on pipeline control. More specifically, the rule pipeline block 22 has rule processing blocks 220-1 to 220-B whose number of stages is equal to the maximum number B of rules (i.e. a size of the rule list) that can be managed in each leaf node. As shown in Fig. 9, a rule processing block 220-i (i = 1, 2, ..., B) is associated with a rule j (j = 1, 2, ..., B) included in the rule list of the corresponding leaf node and executes the rule processing with respect to the rule j. By using the rule processing blocks 220-1 to 220-B, the rule pipeline block 22 executes the rule processing with respect to each rule included in the rule list of the leaf node. Preferably, the rule processing with respect to one rule is performed every one clock cycle.
Fig. 10 is a block diagram showing a configuration of the rule processing block 220 according to the present exemplary embodiment. As shown in Fig. 10, the rule processing block 220 has a comparison and update processing block 2200 and an entry memory block 2201.
The entry memory block 2201 is configured by a storage medium such as a memory and a register and stores entry information. The entry information includes a validation flag indicating whether the rule is validated or invalidated, a rule ID, the rule and the like.
In the case of the LOOKUP processing or the DELETE processing, the rule processing block 220-1 receives an input data 2202 (= the above-mentioned output data 2003) from the decision tree node processing block 200-H of the decision tree pipeline block 20. The input data 2202 includes the command depending on the processing and the address value used for accessing the entry memory block 2201.
In the case of the INSERT processing, the rule processing block 220-1 receives an input data 2203 (= the above-mentioned output data 214) from the number-of-entry counting block 21. The input data 2203 includes the insertion command and the address value used for accessing the entry memory block 2201. The input data 2203 is hereinafter treated in common with the above-mentioned input data 2202.
The comparison and update processing block 2200 receives the input data 2202 and refers to the address value specified in the input data 2202 to read out the entry information from the entry memory block 2201. Then, based on the read-out entry information, the comparison and update processing block 2200 executes the rule processing depending on the command. Details of the rule processing will be described later. Moreover, the comparison and update processing block 2200 outputs the received input data 2202 as an output data 2204 to the next-stage rule processing block 220.
Each of the rule processing blocks 220-2 to 220-B receives the input data 2202 (= the above-mentioned output data 2204) from the former-stage rule processing block 220. The processing by the comparison and update processing block 2200 is the same as that described above. It should be noted that in the case of the DELETE processing, the comparison and update processing block 2200 of the last-stage rule processing block 220-B outputs an output data 2205 (= the above-mentioned input data 213) instead of the output data 2204 to the number-of-entry counting block 21. Content of the output data 2205 is the same as that of the output data 2204.
1-3. Operation
Next, an operation of the packet classifier 1 according to the present exemplary embodiment will be described in detail.
1-3-1. LOOKUP processing
First, the LOOKUP processing will be described. Fig. 11 is a flow chart showing the LOOKUP processing in the present exemplary embodiment.
Step A1:
An input data 6 is input to the packet classifier 1. The input data 6 includes the type data indicating the LOOKUP processing and a search key extracted from a search-target packet. The command input block 4 receives the input data 6. The command input block 4 generates an internal command (here, a lookup command) based on the input data 6 and outputs the internal command simultaneously to the decision tree processing blocks 2-1 to 2-N.
Fig. 12 is a conceptual diagram showing a configuration example of the internal command. A command section includes a processing type, an end flag and a result flag. For example, the processing type is a 2-bit data that indicates the LOOKUP processing if it is "00", the INSERT processing if it is "01" and the DELETE processing if it is "10". The end flag indicates "not yet end" if it is '0' and "end" if it is '1'. The result flag is set to '1' if addition or deletion of an entry is completed. It should be noted that the end flag and the result flag are referred to only in the cases of the INSERT processing and the DELETE processing, and a method of utilization thereof will be described later. Stored in the latter part of the internal command is a search key (LOOKUP processing), a new entry to be added (INSERT processing) or an existing entry to be deleted (DELETE processing).
Here, various expression methods can be used for the entry (rule), in consideration of Prefix Match, Range Match and Wildcard Match. For example, data sequences A and B, each having the same length as the search key, are prepared. In the case of Prefix Match or Wildcard Match, a target data is assigned to A and a mask bit data is assigned to B. In the case of Range Match, the lower limit value is assigned to A and the upper limit value is assigned to B. In this case, an entry length of the rule is twice the length of the search key. In consideration of this situation, the latter part of the internal command is allowed to differ in length between the search key at the time of the LOOKUP processing and the addition/deletion entry at the time of the INSERT/DELETE processing. Note that in the case of the addition entry, its priority is also specified.
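As a non-limiting illustration, the command format of Fig. 12 and the above-mentioned A/B expression of an entry can be sketched as the following data structures. The field names are illustrative assumptions; bit widths and packing into an actual hardware word are omitted.

    from dataclasses import dataclass
    from enum import Enum

    class Op(Enum):              # the 2-bit processing type: "00", "01", "10" in the text
        LOOKUP = 0b00
        INSERT = 0b01
        DELETE = 0b10

    @dataclass
    class Entry:
        """A/B expression of one rule, one bit string per header field.

        Prefix/Wildcard Match: a = target bits, b = mask bits ('1' = significant bit).
        Range Match:           a = lower limit,  b = upper limit.
        The entry is therefore twice as long as the corresponding search key."""
        a: list             # one bit string per field
        b: list             # one bit string per field
        priority: int = 0   # specified only for an addition entry

    @dataclass
    class Command:
        """Internal command of Fig. 12 (field names are assumptions)."""
        op: Op
        end_flag: bool = False      # referred to only for INSERT/DELETE
        result_flag: bool = False   # set to '1' when addition/deletion has completed
        payload: object = None      # search key (LOOKUP) or Entry (INSERT/DELETE)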
Step A2:
When receiving the lookup command from the command input block 4, each decision tree processing block 2 uses the specified search key to execute processing of following its own decision tree. Fig. 13 is a flow chart showing the processing of this Step A2.
Step B1:
The processing of following the decision tree starts from the root node of the decision tree. For that purpose, a current handling node is first set to the root node of the decision tree. More specifically, the lookup command is input to the first-stage decision tree node processing block 200-1 of the decision tree pipeline block 20.
Step B2:
The child node determination block 2000 confirms that the processing type is LOOKUP and then reads out the node information from the node information memory block 2001.
Fig. 14 is a conceptual diagram showing a configuration example of the node information. The node information is information of the region division regarding the corresponding node. As shown in Fig. 14, the node information includes C pairs of a field identifier (field ID) and a division number (k) as well as a base address. Here, C is the maximum number of fields that can be used for the region division regarding a single node of the decision tree. The field ID is predetermined with respect to each of the fields constituting the search key or the rule. A range of each field is divided into 2^k sections (k is a natural number), and the division number k represents the exponent. For example, in a case where a range of each of fields X and Y is evenly divided as in the foregoing example shown in Figs. 1 to 3, the division number k for the field X is 1 and the division number k for the field Y is also 1. The maximum value of k is predetermined. The base address represents a minimum address value of a memory region in which the node information of child nodes of the node is stored. More specifically, the base address is a minimum address value in the node information memory block 2001 of the next-stage decision tree node processing block 200 in which the node information of child nodes of the node is stored. The node information of all child nodes of the node is stored in a memory region following the base address.
Step B3:
Based on the read-out node information, a bit sequence of each field of the search key and the effective bit length, the child node determination block 2000 calculates an address value of the memory region in which the node information of the next-stage child node is stored. More specifically, the next-stage address value is calculated in the following manner (refer also to Fig. 15).
As an example, let us consider the following case. The node information includes division information of two fields (field ID, division number) = (0001, 01), (0100, 11) and the base address = 00001000. As for the search key, a field value (field ID = 0001) is "0101", a field value (field ID = 0100) is "0110", and respective effective bit lengths are 3 and 4. Here, the effective bit length means a bit length of lower bits of the bit sequence of each field that are used for the region division. That is, a bit sequence of lower bits having a length specified by the effective bit length in the field value is referred to. Let us explain a case of the field ID = 0001. Since the effective bit length is 3, the lower three bits "101" of the field value are referred to. Then, since the division number k = 1, the upper one bit of the effective bits is referred to and thereby '1' is obtained (see Fig. 15). Similarly, regarding the field ID = 0100, the effective bit length is 4, the division number k is 3, and accordingly "011" is obtained. A combination of the obtained two values yields "1011". Then, "1011" is added to the base address "00001000" and thereby "00010011" is obtained. This value is the address value of the memory region in which the node information of the next-stage child node is stored.
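As a non-limiting illustration, the address calculation of this Step B3 and the effective bit length update of the subsequent Step B4 can be sketched as follows, reproducing the numerical example given above. The function name and the data layout (bit strings per field) are illustrative assumptions.

    def child_address(base_addr, fields):
        """Step B3 sketch: compute the address of the child node's node information.

        fields: list of (field_value_bits, effective_bit_length, division_number_k).
        For each field, the lower effective bits are selected and the upper k of
        those bits index the post-division region; the concatenation of all
        selected bits is added to the base address."""
        index_bits = ""
        for value_bits, eff_len, k in fields:
            effective = value_bits[-eff_len:]   # lower 'eff_len' bits of the field value
            index_bits += effective[:k]         # upper k bits of the effective part
        return base_addr + int(index_bits, 2)

    # The example from the text: base address "00001000",
    # field ID 0001: value "0101", effective bit length 3, k = 1 -> '1'
    # field ID 0100: value "0110", effective bit length 4, k = 3 -> "011"
    addr = child_address(0b00001000, [("0101", 3, 1), ("0110", 4, 3)])
    assert addr == 0b00010011                   # "1011" added to the base address

    # Step B4 sketch: each effective bit length is reduced by the division number k
    new_eff = [eff - k for (_, eff, k) in [("0101", 3, 1), ("0110", 4, 3)]]
    assert new_eff == [2, 1]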
Step B4:
Subsequently, the child node determination block 2000 updates the effective bit length by subtracting the division number k from the input effective bit length. In the case of the above-mentioned example, the effective bit length regarding the field ID = 0001 is updated to 3 - 1 = 2, and the effective bit length regarding the field ID = 0100 is updated to 4 - 3 = 1. It should be noted that the effective bit length is not input to the first decision tree node processing block 200-1 and so the field length of each field is previously set as the effective bit length in this case. As for the subsequent decision tree node processing block 200, the effective bit length is input from the former-stage decision tree node processing block 200.
Steps B5, B6:
Next, it is determined whether or not the child node is a leaf node. More specifically, each decision tree node processing block 200 can make this determination from its own position in the pipeline, because the child node of a node treated by the decision tree node processing block 200-H is always a leaf node (refer to Fig. 6).
If the next-stage node is not the leaf node (Step B5; No), it means a case of the decision tree node processing block 200-i (i = 1, 2, ..., H-1). The decision tree node processing block 200-i outputs the output data 2003 to the next-stage decision tree node processing block 200-(i+1) (Step B6). The output data 2003 includes the command included in the input data 2002, the calculated new address value and the post-update effective bit length. Then, the procedure returns back to the Step B2, and the next-stage node becomes the handling node.
If the next-stage node is the leaf node (Step B5; Yes), it means a case of the decision tree node processing block 200-H. In this case, the Step A2 ends.
Step A3:
After the Step A2 is completed, the decision tree node processing block 200-H outputs the output data 2003 including the calculated address value and the command to the rule processing block 220-1 of the rule pipeline block 22.
Step A4:
Subsequently, the rule pipeline block 22 makes a comparison between the search key and each rule in the rule list (i.e. performs matching processing). Fig. 16 is a flow chart showing processing of this Step A4.
Step C1:
The matching processing starts from the first rule in the rule list. More specifically, the first-stage rule processing block 220-1 initiates the procedure.
Step C2:
The comparison and update processing block 2200 confirms that the processing type is LOOKUP and then uses the input address value to read out the entry information from the entry memory block 2201. Then, the comparison and update processing block 2200 checks the validation flag.
Fig. 17 is a conceptual diagram showing a configuration example of the entry information. The entry information includes the validation flag indicating whether the rule is validated or not, the rule ID, the rule and the priority. If an action associated with the rule is retained together, the action may be further added. In the present example, the action is managed by a different device from the packet classifier 1, and after the packet classifier 1 has completed the search, the action is obtained by the use of the matching rule ID. It should be noted that although the rule ID is included in the entry information in the case of the example shown in Fig. 17, the rule ID may not be included in the entry information. For example, the rule ID may be generated based on the decision tree processing block ID, the leaf node ID, an order in the rule list and the like. More specifically, a binary number obtained by coupling "i" of the decision tree processing block 2-i (i = 1, 2, ..., N), "j" being an ID of the leaf node at a point of arrival in the decision tree processing block 2-i and "k" of the rule processing block 220-k (k = 1, 2, ..., B) may be used as the rule ID.
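As a non-limiting illustration, the generation of a rule ID by coupling the decision tree processing block index, the leaf node ID and the position in the rule list can be sketched as follows. The bit widths are illustrative assumptions chosen so that the leaf node ID and the slot index fit into fixed-width fields.

    def make_rule_id(tree_block_i, leaf_node_j, rule_slot_k, leaf_bits, slot_bits):
        """Couple "i" of decision tree processing block 2-i, "j" of the leaf node
        reached in that block and "k" of rule processing block 220-k into one
        binary rule ID, as suggested in the text."""
        return (tree_block_i << (leaf_bits + slot_bits)) | (leaf_node_j << slot_bits) | rule_slot_k

    # e.g. decision tree processing block 2-3, leaf node 5, rule processing block 220-2
    rid = make_rule_id(3, 5, 2, leaf_bits=10, slot_bits=4)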
Step C4:
If the rule is validated (Step C3; Yes), the comparison and update processing block 2200 makes a comparison between the search key and the read-out rule. A method of the comparison is disclosed, for example, in Non Patent Literature 2.
Step C6:
If the search key matches the rule (Step C5; Yes), the comparison and update processing block 2200 compares the priority between this rule and a current matching rule. Here, the current matching rule means a rule having the highest priority among rules that have been matched in or before the former-stage rule processing block 220. On the other hand, the rule used for the comparison in the current rule processing block 220 is referred to as a comparison rule.
Step C8:
If the priority of the comparison rule is higher (Step C7; Yes), the comparison and update processing block 2200 sets the comparison rule as a new current matching rule. After that, the procedure proceeds to Step C9.
Steps C9, C10, C11:
It is determined whether or not the current rule is the last rule in the rule list. More specifically, if the rule processing block 220-i (i = 1, 2, ..., B-1) is executing the processing, it is not the last rule (Step C9; No). In this case, the rule processing block 220-i outputs the output data 2204 including the address value and the command to the next-stage rule processing block 220-(i+1) (Step C10). Then, the procedure returns back to the Step C2 and the matching processing is executed with respect to the next rule in the rule list.
If the read-out rule is not validated (Step C3; No) or if the search key does not match the rule (Step C5; No) or if the priority of the comparison rule is lower than the priority of the matching rule (Step C7; No), the procedure proceeds to the Step C9 as well.
If the current rule is the last rule in the rule list, namely, if the processing is being performed by the rule processing block 220-B (Step C9; Yes), the rule processing block 220-B outputs the rule ID and the priority of the matching rule as well as the command to the result output block 5 (Step C11). Then, the Step A4 ends. It should be noted that if there is no matching rule, the absence of the matching rule is represented by outputting a special value, for example, by setting all bit values of the rule ID to "1".
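As a non-limiting illustration, the matching processing of the Steps C1 to C11 can be condensed into the following sequential sketch; in the hardware, each loop iteration corresponds to one rule processing block 220-i executed in the pipeline. The function 'matches' stands for the field-by-field comparison of Non Patent Literature 2 and is assumed rather than defined here; a larger priority value is assumed to mean a higher priority.

    NO_MATCH = -1   # stands for the special rule ID with all bits set to '1'

    def lookup_rule_list(search_key, rule_list, matches):
        """Steps C1-C11 as a sequential sketch (one iteration per rule
        processing block 220-i). rule_list holds dicts with the keys
        'valid', 'rule_id', 'rule' and 'priority'."""
        best_id, best_prio = NO_MATCH, None
        for entry in rule_list:                          # rule list of the leaf node
            if not entry["valid"]:                       # Step C3: skip invalidated entries
                continue
            if not matches(search_key, entry["rule"]):   # Steps C4, C5
                continue
            # Steps C6-C8: keep the rule with the highest priority seen so far
            if best_prio is None or entry["priority"] > best_prio:
                best_id, best_prio = entry["rule_id"], entry["priority"]
        return best_id, best_prio                        # Step C11: reported to the result output block 5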
Step A5:
The result output block 5 receives the search results respectively from the plurality of decision tree processing blocks 2-1 to 2-N. The result output block 5 compares the priority between the matching rules to select a matching rule having the highest priority. Then, the result output block 5 outputs the output data 7 indicating the selection result (for example, the rule ID of the matching rule having the highest priority) as a final result of the LOOKUP processing. Here, if there is no matching rule, the absence of the matching rule is represented by outputting the above-mentioned special-value rule ID.
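The selection performed by the result output block 5 in this Step A5 can be sketched as follows, under the same assumptions as the preceding sketch (a larger priority value means a higher priority, and the special all-'1' rule ID is represented by the constant NO_MATCH).

    NO_MATCH = -1   # same convention as in the previous sketch

    def final_lookup_result(per_tree_results):
        """Step A5 sketch: pick the matching rule with the highest priority
        among the results of the decision tree processing blocks 2-1 to 2-N.
        Each element of per_tree_results is (rule_id, priority)."""
        best_id, best_prio = NO_MATCH, None
        for rule_id, priority in per_tree_results:
            if rule_id == NO_MATCH:          # this decision tree found no matching rule
                continue
            if best_prio is None or priority > best_prio:
                best_id, best_prio = rule_id, priority
        return best_id                       # NO_MATCH if no decision tree reported a match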
1-3-2. INSERT processing
Next, the INSERT processing will be described. Fig. 18 is a flow chart showing the INSERT processing in the present exemplary embodiment.
Step A1:
An input data 6 is input to the packet classifier 1. The input data 6 includes the type data indicating the INSERT processing and a new rule to be added. The command input block 4 receives the input data 6. The command input block 4 generates an internal command (here, an insertion command) based on the input data 6 and outputs the internal command simultaneously to the decision tree processing blocks 2-1 to 2-N.
Step A6:
When receiving the insertion command from the command input block 4, each decision tree processing block 2 uses the specified new rule to execute processing of following its own decision tree. Fig. 19 is a flow chart showing the processing of this Step A6.
Step B1:
A current handling node is first set to the root node of the decision tree, as in the case of the LOOKUP processing.
Step B7:
The child node determination block 2000 confirms that the processing type is INSERT and then refers to the end flag in the command. If the end flag is '1' (Step B7; Yes), the procedure proceeds to Step B5. On the other hand, if the end flag is '0' (Step B7; No), the procedure proceeds to Step B2.
Step B2:
The child node determination block 2000 reads out the node information from the node information memory block 2001, as in the case of the LOOKUP processing.
Step B8:
The child node determination block 2000 determines whether or not the duplicate of rule occurs, based on the read-out node information, a bit sequence of each field of the new rule and the effective bit length. Whether or not the duplicate of rule occurs can be determined by checking whether or not wildcard is included when an address value is calculated.
As an example, let us consider the following case. The node information includes region division information of two fields (field ID, division number) = (0001, 01), (0100, 11) and the base address = 00001000. As for the addition-target rule, respective fields of the field ID = 0001 and the field ID = 0100 are set to (field value, mask bit sequence) = ("0101", "1111") and ("0110", "1111") and respective effective bit lengths are 3 and 4. Regarding each bit of the mask bit sequence, '1' means "validated", and '0' means "invalidated" and is treated as wildcard. In the above-mentioned case, no wildcard is included in each field, and it is found that the duplicate of rule does not occur. It is therefore possible to calculate an address value of the next child node, as in the Step B3 in the case of LOOKUP. As another example, let us consider a case where the node information and the effective bit length are the same as those in the above-mentioned case but respective fields of the field ID = 0001 and the field ID = 0100 in the addition target rule are set to (field value, mask bit sequence) = ("0101", "1111") and ("0110", "1100"). In this case, the respective field values are "0101" and "01**" when the mask bit sequence is taken into consideration. An address value of the child node is calculated based on the node information. With regard to the field ID = 0001, the effective bit length is 3, the upper one bit of the lower three bits "101" is referred to, and thus '1' is obtained. On the other hand, with regard to the field ID = 0100, the effective bit length is 4, the division number is 3, and thus "01*" is obtained. In this case, the bit sequence indicating the post-division region of the child node includes wildcard, which means that the duplicate of rule occurs.
In a case where the field value is expressed by Range Match, respective bit sequences each having the effective bit length of the lower limit value and the upper limit value are considered, and it is checked whether or not the bit sequences that are referred to have the same value. For example, in the above-mentioned example, the field ID = 0001 is defined by the lower limit value = 0000 and the upper limit value = 0011. The effective bit length is 3, and the lower three bits of the lower limit value and the upper limit value are "000" and "011", respectively. The division number is 1, and the first bits of the respective effective bits both are '0'. Therefore, it is determined that the duplicate of rule does not occur when the region division is performed. On the other hand, if the field ID = 0001 is defined by the lower limit value 0000 and the upper limit value 0111, the reference bits are '0' and '1'. In this case, it is determined that the duplicate of rule occurs.
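As a non-limiting illustration, the duplicate check of this Step B8 can be sketched as follows, reproducing the examples given above for both the mask expression and Range Match. The function name and the tuple layout are illustrative assumptions.

    def causes_duplicate(field, eff_len, k):
        """Step B8 sketch: does the region division of this node split the rule?

        field is either ('mask', value_bits, mask_bits) for Prefix/Wildcard Match
        or ('range', lower_bits, upper_bits) for Range Match. The upper k bits of
        the lower 'eff_len' bits select the child region; if those bits contain a
        wildcard ('0' mask bit), or differ between the range limits, the rule
        would have to be stored in more than one child node (duplicate of rule)."""
        kind, a, b = field
        if kind == "mask":
            mask_ref = b[-eff_len:][:k]      # mask bits actually referred to
            return "0" in mask_ref           # a '0' mask bit is a wildcard -> duplicate
        else:  # "range"
            lo_ref = a[-eff_len:][:k]
            hi_ref = b[-eff_len:][:k]
            return lo_ref != hi_ref          # the limits fall into different regions

    # Examples from the text (field ID = 0100, effective bit length 4, k = 3):
    assert not causes_duplicate(("mask", "0110", "1111"), 4, 3)
    assert causes_duplicate(("mask", "0110", "1100"), 4, 3)
    # Range Match examples (field ID = 0001, effective bit length 3, k = 1):
    assert not causes_duplicate(("range", "0000", "0011"), 3, 1)
    assert causes_duplicate(("range", "0000", "0111"), 3, 1)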
Step B10:
If the duplicate of rule occurs (Step B9; Yes), this decision tree processing block 2 is excluded from the entry addition target candidate. In this case, the child node determination block 2000 sets the end flag in the command to '1'. After that, the procedure proceeds to Step B5.
Steps B3, B4:
On the other hand, if the duplicate of rule does not occur (Step B9; No), calculation of the next-stage address value (Step B3) and update of the effective bit length (Step B4) are executed, as in the case of the LOOKUP processing. After that, the procedure proceeds to Step B5.
Steps B5, B6:
The Steps B5 and B6 are executed as in the case of the LOOKUP processing. That is, if the next-stage node is not the leaf node (Step B5; No), the decision tree node processing block 200-i (i = 1, 2, ... , H-1) outputs the output data 2003 to the next-stage decision tree node processing block 200-(i+1) (Step B6). Then, the procedure returns back to the Step B7, and the next-stage node becomes the handling node. On the other hand, if the next-stage node is the leaf node (Step B5; Yes), the Step A6 ends.
Step A7:
After the Step A6 is completed, the decision tree node processing block 200-H outputs the output data 2004 including the calculated address value and the command to the number-of-entry counting block 21.
Step A8:
The count processing block 210 of the number-of-entry counting block 21 receives the input data 212 (= the above-mentioned output data 2004) from the decision tree node processing block 200-H. The count processing block 210 confirms that the processing type is INSERT and then refers to the end flag. If the end flag is '0', this decision tree processing block 2 is the entry addition target candidate. In this case, the count processing block 210 uses the input address value to read out the number-of-entry information (i.e. the number of valid rule entries) regarding the addition-target leaf node from the number-of-entry memory block 211. Then, the count processing block 210 outputs the output data 216 indicating the read-out number of valid rule entries to the entry addition target determination block 3.
On the other hand, if the end flag is '1', this decision tree processing block 2 is not the entry addition target candidate. In this case, the count processing block 210 outputs a processing end signal to the entry addition target determination block 3. Alternatively, the count processing block 210 may set the number of valid rule entries to the maximum value and then notify the entry addition target determination block 3 of the number of valid rule entries.
Step A9:
The entry addition target determination block 3 recognizes the entry addition target candidate based on the information notified from the plurality of decision tree processing blocks 2-1 to 2-N and selects an entry addition target from the entry addition target candidate. Fig. 20 is a flow chart showing processing of this Step A9.
Steps D1, D2, D3:
The entry addition target determination block 3 receives the information indicating the number of valid rule entries from the entry addition target candidate (Step D1). Next, the entry addition target determination block 3 checks whether or not there is an entry addition target candidate in which the number of valid rule entries is less than the maximum number B (list size) of the rule list (Step D2). If there is no entry addition target candidate in which the number of valid rule entries is less than the list size (Step D2; No), the entry addition target determination block 3 determines that there is no entry addition target (Step D3).
Steps D4, D5:
On the other hand, if there is an entry addition target candidate in which the number of valid rule entries is less than the list size (Step D2; Yes), the entry addition target determination block 3 selects an entry addition target candidate in which the number of valid entries is smallest, as the entry addition target (Step D4). After that, the entry addition target determination block 3 notifies each decision tree processing block 2 of true/false of the entry addition (Step D5). For example, the entry addition target determination block 3 instructs the entry addition target to add the new rule and instructs the other decision tree processing blocks 2 not to execute the rule addition processing. It should be noted that although the entry addition target candidate in which the number of valid rule entries is smallest is selected as the entry addition target in the above-described example, the entry addition target may be selected in accordance with another policy.
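As a non-limiting illustration, the selection of the Steps D1 to D5 under the smallest-number-of-valid-entries policy can be sketched as follows. The function name and the use of a dictionary are illustrative assumptions.

    def select_addition_target(candidates, list_size):
        """Steps D1-D5 sketch. candidates maps a decision tree processing block
        index to the number of valid rule entries in its addition-target leaf
        node (blocks whose end flag is '1' are simply absent, or report the
        maximum value). Returns the index of the entry addition target, or
        None if every candidate rule list is already full."""
        usable = {i: n for i, n in candidates.items() if n < list_size}  # Step D2
        if not usable:
            return None                                                  # Step D3
        return min(usable, key=usable.get)                               # Step D4

    # e.g. three candidates with 7, 2 and 8 valid entries and a list size B = 8
    assert select_addition_target({1: 7, 2: 2, 3: 8}, list_size=8) == 2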
Step A10:
The number-of-entry counting block 21 of the decision tree processing block 2 being the entry addition target adds "1" to the number of valid rule entries regarding the addition-target leaf node and writes it in the number-of-entry memory block 211. Thus, the number-of-entry memory block 211 is updated to the latest condition. The number-of-entry counting block 21 of each of the other decision tree processing blocks 2 sets the end flag in the command to '1'.
Step A11:
Moreover, the number-of-entry counting block 21 of each decision tree processing block 2 outputs the output data 214 including the address value and the insertion command to the rule processing block 220-1 of the rule pipeline block 22.
Step A12:
The rule pipeline block 22 performs rule addition processing. Fig. 21 is a flow chart showing processing of this Step A12.
Step C12:
The Step A12 starts from the first rule in the rule list. More specifically, the first-stage rule processing block 220-1 initiates the procedure.
Step C13:
The comparison and update processing block 2200 confirms that the processing type is INSERT and then refers to the end flag in the command. If the end flag is '1' (Step C13; Yes), the procedure proceeds to Step C9. On the other hand, if the end flag is '0' (Step C13; No), the procedure proceeds to Step C2.
Step C2:
As in the case of the LOOKUP processing, the comparison and update processing block 2200 uses the input address value to read out the entry information from the entry memory block 2201. Then, the comparison and update processing block 2200 checks the validation flag.
Step C15:
If the rule is validated (Step C3; Yes), it is not allowed to write the new rule to this entry. In this case, the rule processing block 220-i outputs the output data 2204 including the address value and the command to the next-stage rule processing block 220-(i+1). Then, the procedure returns back to the Step C13 and the processing is executed with respect to the next rule in the rule list.
Step C14:
If the rule is not validated (Step C3; No), it is possible to write the new rule to this entry (i.e. free entry). In this case, the comparison and update processing block 2200 changes the rule information in the entry information to that of the new rule, sets the validation flag to '1' and writes it in the entry memory block 2201. Moreover, the comparison and update processing block 2200 sets the end flag in the command to '1' and sets the result flag in the command to '1' (addition succeeded). After that, the procedure proceeds to Step C9.
Step C9:
As in the case of the LOOKUP processing, it is determined whether or not the current rule is the last rule in the rule list. If the current rule is not the last rule (Step C9; No), the procedure proceeds to the above-mentioned Step C15. On the other hand, if the current rule is the last rule (Step C9; Yes), the rule processing block 220-B outputs the current command to the result output block 5 (Step C16). Then, the Step A12 ends.
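As a non-limiting illustration, the rule addition of the Steps C12 to C16 can be condensed into the following sequential sketch; in the hardware, each loop iteration corresponds to one rule processing block 220-i, and the returned value corresponds to setting the end flag and the result flag.

    def insert_rule(rule_list, new_rule, priority):
        """Steps C12-C16 sketch: write the new rule into the first invalidated
        entry of the addition-target leaf node's rule list. Returns the slot
        index used, or -1 if the list has no free entry (the result flag
        then stays '0')."""
        for slot, entry in enumerate(rule_list):
            if not entry["valid"]:                                      # Step C3: free entry found
                entry.update(valid=True, rule=new_rule, priority=priority)  # Step C14
                return slot
        return -1                                                       # no free region in the rule list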
Step A13:
The result output block 5 receives the command from each of the plurality of decision tree processing blocks 2-1 to 2-N. The result output block 5 refers to the result flag in the received command to check whether the new rule is added or not. Then, the result output block 5 outputs the output data 7 indicating the result as a final result of the INSERT processing.
Here, an entry ID indicating a location to which the new rule is added may be output as the final result of the INSERT processing. The entry ID can be generated based on the ID of the decision tree processing block being the entry addition target, the leaf node ID, an order in the rule list and the like, as in the case of generating the rule ID when the entry information does not include the rule ID. Here, if there is no free region for adding the new rule and the new rule cannot be added, a signal indicating that the rule addition has resulted in failure may be output. Alternatively, a special value (e.g. all bits of the entry ID are set to '1') may be output, which can omit the signal indicating that the rule addition has resulted in failure. Also, in the INSERT processing, the end flag and the result flag, each of which is 1 bit, are used in the foregoing example. Instead, a bit width of the end flag may be increased. In this case, for example, the end flag can represent a cause of processing termination such as "the processing has ended due to occurrence of the duplicate of rule" and "the processing has ended due to absence of the free region in the rule list of the leaf node". By referring to such an end flag, the reason why the new rule has not been added can also be output as the final result of the INSERT processing.
1-3-3. DELETE processing
Next, the DELETE processing will be described. Fig. 22 is a flow chart showing the DELETE processing in the present exemplary embodiment.
Step A1:
An input data 6 is input to the packet classifier 1. The input data 6 includes the type data indicating the DELETE processing and an existing rule to be deleted. The command input block 4 receives the input data 6. The command input block 4 generates an internal command (here, a deletion command) based on the input data 6 and outputs the internal command simultaneously to the decision tree processing blocks 2-1 to 2-N.
Step A6:
As in the case of the INSERT processing, each decision tree processing block 2 uses the specified existing rule to execute processing of following its own decision tree.
Step A3:
After the Step A6 is completed, the decision tree node processing block 200-H outputs the output data 2003 including the calculated address value and the command to the rule processing block 220-1 of the rule pipeline block 22.
Step A14:
The rule pipeline block 22 performs rule deletion processing. Fig. 23 is a flow chart showing processing of this Step A14.
Step C1:
The Step A14 starts from the first rule in the rule list. More specifically, the first-stage rule processing block 220-1 initiates the procedure.
Step C13:
The comparison and update processing block 2200 confirms that the processing type is DELETE and then refers to the end flag in the command. If the end flag is '1' (Step C13; Yes), the procedure proceeds to Step C9. On the other hand, if the end flag is '0' (Step C13; No), the procedure proceeds to Step C2.
Steps C2, C3:
As in the case of the LOOKUP processing, the comparison and update processing block 2200 uses the input address value to read out the entry information from the entry memory block 2201. Then, the comparison and update processing block 2200 checks the validation flag. If the rule is invalidated (Step C3; No), this entry is not the deletion target. In this case, the procedure proceeds to Step C9. On the other hand, if the rule is validated (Step C3; Yes), the procedure proceeds to Step C17.
Step C17:
The comparison and update processing block 2200 compares the read-out rule and the rule designated as the deletion target. That is, the comparison and update processing block 2200 determines whether or not the rule designated as the deletion target matches the read-out rule. Here, it is determined whether or not they both completely match. In a case of mismatch (Step C5; No), the procedure proceeds to Step C9. On the other hand, in a case of complete match (Step C5; Yes), the procedure proceeds to Step C18.
Step C18:
The comparison and update processing block 2200 sets the validation flag in the entry information to '0' and writes it in the entry memory block 2201. As a result, this entry (rule) is invalidated. This processing is equivalent to the deletion of the existing rule. Moreover, the comparison and update processing block 2200 sets the end flag in the command to '1' and sets the result flag in the command to '1' (deletion succeeded). After that, the procedure proceeds to Step C9.
Steps C9, C15:
As in the case of the LOOKUP processing, it is determined whether or not the current rule is the last rule in the rule list. If the current rule is not the last rule (Step C9; No), the rule processing block 220-i outputs the output data 2204 including the address value and the command to the next-stage rule processing block 220-(i+1) (Step C15). Then, the procedure returns back to the Step C13 and the processing is executed with respect to the next rule in the rule list. On the other hand, if the current rule is the last rule in the rule list (Step C9; Yes), the Step A14 ends.
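As a non-limiting illustration, the rule deletion of the Steps C13 to C18 can be condensed into the following sequential sketch, under the same representation of the rule list as in the INSERT sketch.

    def delete_rule(rule_list, target_rule):
        """Steps C2-C18 sketch for DELETE: invalidate the entry whose rule
        completely matches the designated deletion target. Returns the slot
        index that was invalidated, or -1 if the rule is not managed here
        (the result flag then stays '0')."""
        for slot, entry in enumerate(rule_list):
            if entry["valid"] and entry["rule"] == target_rule:  # Steps C3, C17
                entry["valid"] = False                           # Step C18: invalidate the entry
                return slot
        return -1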
Step A15:
After the Step A14 is completed, the rule processing block 220-B outputs the output data 2205 including the address value and the command to the number-of-entry counting block 21.
Step A16:
The count processing block 210 of the number-of-entry counting block 21 receives the input data 213 (= the above-mentioned output data 2205) from the rule processing block 220-B. The count processing block 210 confirms that the processing type is DELETE and then refers to the end flag and the result flag. If the end flag and the result flag both are '1', the count processing block 210 uses the input address value to read out the number-of-entry information (number of valid rule entries) regarding the deletion-target leaf node from the number-of-entry memory block 211. Then, the count processing block 210 subtracts 1 from the number of valid rule entries and then writes it in the number-of-entry memory block 211. As a result, the number-of-entry memory block 211 is updated to the latest condition.
Step A17:
Lastly, the count processing block 210 outputs an output data 215 including the command to the result output block 5. The result output block 5 refers to the result flags in the commands respectively received from the decision tree processing blocks 2-1 to 2-N to check whether the rule is deleted or not. Then, the result output block 5 outputs the output data 7 indicating the result as a final result of the DELETE processing.
Here, an entry ID indicating a location from which the deletion rule is deleted may be output as the final result of the DELETE processing, as in the case of the entry ID indicating a location to which the new rule is added in the INSERT processing. If the deletion-target rule is not found in the managed rules for some reason, a signal indicating the absence of the deletion-target rule, or a special value where all bits of the entry ID are set to '1', may be output.
1-3-4. Supplemental
According to the operation in the present exemplary embodiment, in the case of the INSERT/DELETE processing, when the decision tree pipeline block 20 performs the processing of following the decision tree or when the rule pipeline block 22 performs the rule addition/deletion processing, all pipeline stages are traversed in series even after the processing has effectively been completed. However, each processing block in the pipeline executes no processing at all once the processing is completed. Therefore, at the timing when the processing is completed, the corresponding processing block may output a processing completion signal and the command directly to the number-of-entry counting block 21 or the result output block 5. In that case, each block waits such that the processing timings of the respective decision tree processing blocks are synchronized with each other.
It is assumed that a rule is added to only one of the plurality of decision trees in the INSERT processing and only one deletion-target rule is deleted in the DELETE processing. In the present exemplary embodiment, processing in a case where the same entry as an existing entry is added is not described in detail. However, in the INSERT processing, the new addition entry just needs to be compared with the valid entries of all the decision trees, including those other than the addition-target decision tree. This can prevent overlapping with the existing entry from occurring. Such processing can be achieved by utilizing the above-described LOOKUP processing. By checking the addition-target entry against the existing entries at the time of the INSERT processing, it is prevented that the deletion-target rule is deleted from a plurality of locations at the time of the DELETE processing.
In the above-described embodiment, the command input block 4 externally receives the processing type, the search key and the addition/deletion target entry as the input data 6 and generates the internal command. Alternatively, the command input block 4 may directly receive an arrived packet itself or a header data of the arrived packet as the input data 6, extract the search key and then generate the internal command.
1-4. Effects
According to the present exemplary embodiment, it is possible to achieve the dynamic update of entry without performing preprocessing, in the packet classification that uses the decision tree.
Moreover, in the INSERT processing according to the present exemplary embodiment, the new rule is added to an entry list whose number of valid rule entries is smallest. As a result, unevenness of the number of rules between the plurality of decision trees can be suppressed, which is preferable.
Furthermore, according to the present exemplary embodiment, the processing time required for the dynamic update of entry such as the INSERT processing and the DELETE processing becomes constant.
Let us approximately estimate the number of cycles required for each processing and the number of cycles before input of a next command to the packet classifier 1. In the estimation here, let us assume that each of the read-related processing and the write-related processing can be executed in one unit cycle in each of the command input block 4, the entry addition target determination block 3, the result output block 5, the decision tree node processing block 200, the number-of-entry counting block 21 and the rule processing block 220. Here, the one unit cycle is desirably one clock cycle. Strictly speaking, the number of cycles before input of a next command varies depending on which command follows the LOOKUP processing, the INSERT processing or the DELETE processing. However, in the estimation here, the maximum value of the number of cycles before input of a next processing command is considered.
In the case of the LOOKUP processing, only reading is performed with respect to the memory region in the packet classifier 1. Therefore, the processing by the command input block 4, the H-stages processing by the decision tree pipeline block 20 and the B-stages processing by the rule pipeline block 22 of the decision tree processing block 2 and the processing by the result output block 5 can be executed in a total of H+B+2 unit cycles. Moreover, since information in the memory is not updated in the LOOKUP processing, a next command can be input at a cycle immediately following the input of the LOOKUP command.
In the case of the INSERT processing, both reading and writing are performed with respect to the memory region in the packet classifier 1. Therefore, the processing by the command input block 4, the H-stages processing by the decision tree pipeline block 20 and the read processing by the number-of-entry counting block 21 of the decision tree processing block 2, the processing by the entry addition target determination block 3, the write processing by the number-of-entry counting block 21, the B-stages read processing and the one-time write processing by the rule pipeline block 22 and the processing by the result output block 5 can be executed in a total of H+B+6 unit cycles. Moreover, unless the write processing to the number-of-entry memory block 211 by the number-of-entry counting block 21 is completed, a next command, especially another INSERT command, cannot be executed correctly. Therefore, a next command can be input after 3 unit cycles have passed from the input of the INSERT command.
In the case of the DELETE processing, both reading and writing are performed with respect to the memory region in the packet classifier 1, as in the case of the INSERT processing. Therefore, the processing by the command input block 4, the H-stages processing by the decision tree pipeline block 20 and the B-stages read processing and the one-time write processing by the rule pipeline block 22 of the decision tree processing block 2, the one-time read processing and the one-time write processing by the number-of-entry counting block 21 and the processing by the result output block 5 can be executed in a total of H+B+5 unit cycles. Moreover, unless the write processing to the number-of-entry memory block 211 by the number-of-entry counting block 21 is completed, a next command, especially another INSERT command, cannot be executed correctly. Therefore, a next command can be input after B+3 unit cycles have passed from the input of the DELETE command.
As described above, according to the present exemplary embodiment, each processing can be executed in a fixed period of time; for example, the LOOKUP processing can be executed in H+B+2 unit cycles, the INSERT processing in H+B+6 unit cycles, and the DELETE processing in H+B+5 unit cycles. The problem of the conventional technique, in which the processing time varies due to rule duplication, does not arise. Moreover, a next command can be input at the immediately following unit cycle in the case of the LOOKUP processing, after 3 unit cycles have passed in the case of the INSERT processing, and after B+3 unit cycles have passed in the case of the DELETE processing. Therefore, each processing can be executed with only slight processing overhead.
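As a purely illustrative check of these figures, the following Python sketch computes the latencies and the minimum command intervals from the estimates above; the concrete values H = 8 and B = 16 are assumptions chosen only for the example.

def first_embodiment_cycles(H, B):
    # Latency of each command and minimum gap before the next command,
    # as derived above for the first exemplary embodiment.
    latency = {"LOOKUP": H + B + 2, "INSERT": H + B + 6, "DELETE": H + B + 5}
    next_command_gap = {"LOOKUP": 1, "INSERT": 3, "DELETE": B + 3}
    return latency, next_command_gap

latency, gap = first_embodiment_cycles(H=8, B=16)
print(latency)  # {'LOOKUP': 26, 'INSERT': 30, 'DELETE': 29}
print(gap)      # {'LOOKUP': 1, 'INSERT': 3, 'DELETE': 19}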
2. Second Exemplary Embodiment
Next, a second exemplary embodiment of the present invention will be described. It should be noted that an overlapping description with the first exemplary embodiment is omitted as appropriate.
2-1. Configuration and Summary
Fig. 24 is a block diagram showing a configuration of a packet classifier 1 according to the second exemplary embodiment. As compared with the configuration of the first exemplary embodiment shown in Fig. 4, the decision tree processing blocks 2-1 to 2-N are replaced by decision tree processing blocks 9-1 to 9-N, respectively.
Fig. 25 is a block diagram showing a configuration of the decision tree processing block 9 according to the second exemplary embodiment. As compared with the configuration of the decision tree processing block 2 shown in Fig. 5, the number-of-entry counting block 21 is omitted. Also, the rule processing block 220-B of the rule pipeline block 22 outputs output data 26 to the entry addition target determination block 3. Moreover, the rule processing block 220-1 receives input data 27 from the entry addition target determination block 3.
According to the present exemplary embodiment, in the INSERT processing, all the entry addition target candidates first add the new rule to their decision trees. This processing is hereinafter referred to as "temporal addition processing". More specifically, in the temporal addition processing, the rule pipeline block 22 of each entry addition target candidate temporarily adds the new rule to the rule list managed by the addition-target leaf node. During the temporal addition processing, the rule pipeline block 22 counts the number of valid rule entries in the rule list. Then, the rule pipeline block 22, instead of the number-of-entry counting block 21, notifies the entry addition target determination block 3 of the number of valid rule entries. More specifically, the last-stage rule processing block 220-B of the rule pipeline block 22 outputs output data 26 indicating the number of valid rule entries obtained through the temporal addition processing to the entry addition target determination block 3.
The entry addition target determination block 3 selects one entry addition target from the entry addition target candidates, as in the case of the first exemplary embodiment. Then, the entry addition target determination block 3 sends information (input data 27) indicating the selection result to all the entry addition target candidates.
Each entry addition target candidate other than the selected entry addition target performs "added-entry invalidation processing". In the added-entry invalidation processing, the entry addition target candidate invalidates the new rule that has been temporarily added by the above-mentioned temporal addition processing. The added-entry invalidation processing can be achieved in a similar manner to the DELETE processing.
In this manner, according to the present exemplary embodiment, the same processing result as in the first exemplary embodiment can be obtained without using the number-of-entry counting block 21. Since the number-of-entry counting block 21 is omitted, the total memory region can be reduced.
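The overall INSERT flow of this embodiment can be summarized by the following Python sketch; the candidate objects and their temporarily_add()/invalidate_added_entry() methods are assumptions introduced only to illustrate the flow, not the actual block interfaces.

def insert_with_temporal_addition(candidates, new_rule):
    # Temporal addition processing: every candidate temporarily adds the new
    # rule and reports the number of valid rule entries in its rule list.
    counts = {cand: cand.temporarily_add(new_rule) for cand in candidates}

    # Entry addition target determination: keep the least-loaded candidate.
    target = min(counts, key=counts.get)

    # Added-entry invalidation processing on every non-selected candidate.
    for cand in candidates:
        if cand is not target:
            cand.invalidate_added_entry(new_rule)
    return target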
2-2. Operation
Next, an operation of the packet classifier 1 according to the present exemplary embodiment will be described in detail.
2-2-1. LOOKUP processing
The LOOKUP processing is the same as in the case of the first exemplary embodiment.
2-2-2. INSERT processing
Next, the INSERT processing will be described. Fig. 26 is a flow chart showing the INSERT processing in the present exemplary embodiment. The Steps A1, A6 and A13 are the same as in the case of the INSERT processing in the first exemplary embodiment.
Step A3:
After the Step A6 is completed, the Step A3 instead of the Step A7 is executed as in the case of the LOOKUP processing. That is, the decision tree node processing block 200-H outputs the output data 2003 including the calculated address value and the command to the rule processing block 220-1 of the rule pipeline block 22.
Step A18:
The rule pipeline block 22 executes the above-mentioned "temporal addition processing". Fig. 27 is a flow chart showing processing of this Step A18.
Step C12:
The Step A18 starts from the first rule in the rule list. More specifically, the first-stage rule processing block 220-1 initiates the procedure. Here, the number of valid rule entries is initialized to 0.
Steps C2, C3, C19:
As in the case of the first exemplary embodiment, the rule processing block 220 reads out the entry information and checks the validation flag (Step C2). If the rule is validated (Step C3; Yes), the rule processing block 220 adds "1" to the number of valid rule entries (Step C19). After that, the procedure proceeds to Step C15.
Step C15:
The rule processing block 220-i outputs output data 2204 including the address value and the command, as well as the number of valid rule entries, to the next-stage rule processing block 220-(i+1). Then, the procedure returns to the Step C2, and the processing is executed with respect to the next rule in the rule list.
Step C13:
If the rule is not validated (Step C3; No), the rule processing block 220 refers to the end flag in the command. If the end flag is '1' (Step C13; Yes), the processing proceeds to the above-mentioned Step C15. On the other hand, if the end flag is '0' (Step C13; No), the procedure proceeds to Step C14.
Step C14:
As in the case of the first exemplary embodiment, the rule processing block 220 writes the new rule to this entry (i.e. free entry) and sets the validation flag to '1'. Moreover, the rule processing block 220 sets the end flag in the command to '1' and sets the result flag in the command to '1' (addition succeeded). After that, the procedure proceeds to Step C9.
Steps C9, C20:
As in the case of the first exemplary embodiment, it is determined whether or not the current rule is the last rule in the rule list. If the current rule is not the last rule (Step C9; No), the procedure proceeds to the above-mentioned Step C15. On the other hand, if the current rule is the last rule (Step C9; Yes), the rule processing block 220-B outputs the current command and the number of valid rule entries to the entry addition target determination block 3 (Step C20). Then, the Step A18 ends.
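The per-stage behavior of this Step A18 can be sketched in Python as follows; the entry and command objects and their fields are simplified stand-ins for the entry memory contents and the internal command format.

def temporal_addition_stage(entry, command, valid_count, new_rule):
    # Steps C2, C3, C19: a valid entry is simply counted.
    if entry.valid:
        valid_count += 1
    # Steps C13, C14: the first free entry receives the new rule.
    elif not command.end_flag:
        entry.rule = new_rule
        entry.valid = True
        command.end_flag = True
        command.result_flag = True   # addition succeeded
    # Steps C15 / C20: the address, command and count are passed to the next
    # stage, or to the entry addition target determination block at stage B.
    return entry, command, valid_count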
Step A9:
Next, the entry addition target determination block 3 selects an entry addition target from the entry addition target candidates in a similar manner to the first exemplary embodiment. Then, the entry addition target determination block 3 sends information indicating the selection result to all the entry addition target candidates.
Step A14:
The rule pipeline block 22 executes the above-mentioned "added-entry invalidation processing". The added-entry invalidation processing is performed in a similar manner to the DELETE processing. Here, the deletion-target rule is the new rule that has been added by the temporal addition processing. It should be noted that "11" that is not used in the first exemplary embodiment may be used as the processing type of the command for performing the added-entry invalidation processing. The end flag and the result flag at this time are initialized.
2-2-3. DELETE processing
Fig. 28 is a flow chart showing the DELETE processing in the present exemplary embodiment. According to the present exemplary embodiment, the number-of-entry counting block 21 is omitted. Therefore, the Steps A15 and A16 relating to the number-of-entry counting block 21 can be omitted, as compared with the DELETE processing in the first exemplary embodiment shown in Fig. 22. The other processing is the same as in the case of the first exemplary embodiment.
2-3. Effects
According to the present exemplary embodiment, the same effects as in the case of the first exemplary embodiment can be obtained. Furthermore, the memory region is reduced.
Let us approximately estimate the number of cycles required for each processing and the number of cycles before input of a next command to the packet classifier 1, in a similar manner to the first exemplary embodiment.
The LOOKUP processing is the same as in the case of the first exemplary embodiment.
The INSERT processing includes the processing by the command input block 4, the H-stages processing by the decision tree pipeline block 20 and the B-stages read processing and the one-time write processing by the rule pipeline block 22 of the decision tree processing block 2, the processing by the entry addition target determination block 3, the B-stages read processing and the one-time write processing by the rule pipeline block 22, and the processing by the result output block 5. Therefore, it can be executed in a total of H+2B+5 unit cycles. Moreover, unless the invalidation of the added entry in the decision trees other than the addition-target decision tree is completed, subsequent processing cannot be executed without waiting. Therefore, a next command can be input after B+3 unit cycles have passed from the input of the INSERT command.
The DELETE processing includes the processing by the command input block 4, the H-stages processing by the decision tree pipeline block 20 and the B-stages read processing and the one-time write processing by the rule pipeline block 22 of the decision tree processing block 2, and the processing by the result output block 5. Therefore, it can be executed in a total of H+B+3 unit cycles. Since the deletion-target entry is deleted in pipeline order, only the one-time write processing by the rule pipeline block 22 that performs the entry deletion needs to be considered. Thus, a next command can be input after 1 unit cycle has passed from the input of the DELETE command.
As described above, according to the present exemplary embodiment, each processing can be executed in a fixed period of time; for example, the LOOKUP processing can be executed in H+B+2 unit cycles, the INSERT processing in H+2B+5 unit cycles, and the DELETE processing in H+B+3 unit cycles. The problem of the conventional technique, in which the processing time varies due to rule duplication, does not arise. Moreover, a next command can be input at the immediately following unit cycle in the case of the LOOKUP processing, after B+3 unit cycles have passed in the case of the INSERT processing, and after 1 unit cycle has passed in the case of the DELETE processing. Therefore, each processing can be executed in series with only slight processing overhead.
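For comparison with the first embodiment, the same style of illustrative estimate (again with the assumed values H = 8 and B = 16) is:

def second_embodiment_cycles(H, B):
    latency = {"LOOKUP": H + B + 2, "INSERT": H + 2 * B + 5, "DELETE": H + B + 3}
    next_command_gap = {"LOOKUP": 1, "INSERT": B + 3, "DELETE": 1}
    return latency, next_command_gap

print(second_embodiment_cycles(H=8, B=16))
# ({'LOOKUP': 26, 'INSERT': 45, 'DELETE': 27}, {'LOOKUP': 1, 'INSERT': 19, 'DELETE': 1})

In these illustrative numbers, the DELETE path becomes shorter while the INSERT path becomes longer, reflecting the removal of the number-of-entry counting block 21 and the added temporal addition and invalidation steps.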
3. Third Exemplary Embodiment
A third exemplary embodiment of the present invention is different from the second exemplary embodiment in the configuration of the command and the INSERT processing using the command. The block configuration, the LOOKUP processing and the DELETE processing are the same as those in the second exemplary embodiment, and an overlapping description will be omitted as appropriate.
Fig. 29 is a conceptual diagram showing a configuration example of the command in the third exemplary embodiment. According to the present exemplary embodiment, as shown in Fig. 29, the command section is further provided with a field of "entry addition list ID". The entry addition list ID is location information indicating the location (stage) to which the new rule is added by the above-described temporal addition processing. That is, when the new rule is added in the rule processing block 220-i (i = 1, 2, ..., B), the entry addition list ID is set to "i".
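A simplified representation of this command is shown below. Only the fields discussed here are included; the field names follow the description, while the Python dataclass form and the use of 0 for "not yet added" are assumptions of the sketch rather than the exact bit layout of Fig. 29.

from dataclasses import dataclass

@dataclass
class InternalCommand:
    processing_type: int         # e.g. LOOKUP / INSERT / DELETE / added-entry invalidation
    end_flag: bool               # set once the rule has been added or deleted
    result_flag: bool            # set when the operation has succeeded
    entry_addition_list_id: int  # stage i (1..B) where the new rule was
                                 # temporarily added; 0 means "not yet added"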
Fig. 30 is a flow chart showing the INSERT processing in the present exemplary embodiment. According to the present exemplary embodiment, Step A19 instead of the Step A18 in the second exemplary embodiment is executed as the temporal addition processing. Moreover, Step A20 instead of the Step A14 in the second exemplary embodiment is executed as the added-entry invalidation processing. The other processing is the same as in the case of the second exemplary embodiment.
Step A19:
Fig. 31 is a flow chart showing processing of the Step A19. Step C21 is added between the Step C14 and the Step C9, as compared with the Step A18 (refer to Fig. 27) in the second exemplary embodiment. In the Step C21, the rule processing block 220-i sets the entry addition list ID in the command to the ID (= i) of this rule processing block. The other processing is the same as in the Step A18.
Step A20:
Fig. 32 is a flow chart showing processing of the Step A20.
Step C1:
The Step A20 starts from the first rule in the rule list. More specifically, the first-stage rule processing block 220-1 initiates the procedure.
Step C22:
The rule processing block 220 determines whether or not its own ID matches the entry addition list ID in the input command. If they match (Step C22; Yes), the procedure proceeds to Step C2. On the other hand, if they do not match (Step C22; No), the procedure proceeds to Step C9.
Steps C2, C18:
The rule processing block 220 uses the input address value to read out the entry information from the entry memory block 2201 (Step C2). Moreover, the rule processing block 220 sets the validation flag in the entry information to '0' and writes it in the entry memory block 2201. As a result, this entry (rule) is invalidated. Furthermore, the rule processing block 220 sets the end flag in the command to '1' and sets the result flag in the command to '1' (deletion succeeded). After that, the procedure proceeds to Step C9.
Steps C9, C15:
As in the case of the foregoing exemplary embodiments, it is determined whether or not the current rule is the last rule in the rule list. If the current rule is not the last rule (Step C9; No), the rule processing block 220-i outputs the output data 2204 including the address value and the command to the next-stage rule processing block 220-(i+1) (Step C15). Then, the procedure returns to the Step C22, and the processing is executed with respect to the next rule in the rule list. On the other hand, if the current rule is the last rule in the rule list (Step C9; Yes), the Step A20 ends.
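The per-stage behavior of this Step A20 can be sketched as follows; only the stage whose ID equals the entry addition list ID accesses its entry memory, while every other stage merely forwards the command. The entry_memory interface and the argument names are assumptions of the sketch.

def invalidation_stage(stage_id, entry_memory, address, command):
    # Step C22: only the stage that temporarily added the rule does any work.
    if stage_id == command.entry_addition_list_id:
        entry = entry_memory.read(address)    # Step C2
        entry.valid = False                   # Step C18: invalidate the entry
        entry_memory.write(address, entry)
        command.end_flag = True
        command.result_flag = True            # deletion succeeded
    # Steps C9, C15: the address and command are forwarded to the next stage,
    # or the processing ends at the last stage.
    return command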
According to the present exemplary embodiment, the same effects as in the case of the second exemplary embodiment can be obtained. Furthermore, in the added-entry invalidation processing, the rule matching processing (read/comparison processing) regarding rules other than the invalidation-target rule can be skipped. Therefore, processing time and power consumption can be reduced.
4. Fourth Exemplary Embodiment
Fig. 33 is a block diagram showing a configuration of a packet classifier according to a fourth exemplary embodiment of the present invention. In the fourth exemplary embodiment, the packet classification processing is achieved by a computer executing a software program. More specifically, the packet classifier according to the present exemplary embodiment has a program processing device 10 and a packet classification program 11. The program processing device 10 is achieved by a CPU of a host such as a server or a PC. The packet classification program 11 is a computer program executed by the program processing device 10 and controls an operation of the program processing device 10. By executing the packet classification program 11, the program processing device 10 can provide the functions of the packet classifier 1 described in the foregoing exemplary embodiments.
It should be noted that the packet classification program 11 may be recorded on a tangible computer-readable recording medium.
A multi-core processor provided with a plurality of CPU cores (or a many-core processor provided with a larger number of CPU cores) may be used as the program processing device 10. In this case, the respective CPU cores may be assigned to the command input block 4, the decision tree processing blocks 2-1 to 2-N, the entry addition target determination block 3, the result output block 5, and the decision tree node processing blocks 200, the number-of-entry counting block 21 and the rule processing blocks 220 of the decision tree processing block 2, which enables even faster processing.
While the exemplary embodiments of the present invention have been described above with reference to the attached drawings, the present invention is not limited to these exemplary embodiments and can be modified as appropriate by those skilled in the art without departing from the spirit and scope of the present invention.
While a part or the whole of the above-described exemplary embodiments may be described as the following supplementary notes, the present invention is not limited to them.
(Supplementary note 1)
A packet classifier comprising:
a plurality of decision tree processing blocks respectively configuring a plurality of decision trees for use in packet classification;
a command input block configured to input a command simultaneously to said plurality of decision tree processing blocks; and
an entry addition target determination block connected to said plurality of decision tree processing blocks,
wherein a single rule is managed by a single leaf node in any one of said plurality of decision trees,
wherein if said command is a lookup command, each of said plurality of decision tree processing blocks uses its own decision tree to determine whether or not a search key matches any rule,
wherein if said command is an insertion command for adding a new rule to a decision tree, each of said plurality of decision tree processing blocks determines whether or not said new rule can be managed by a single leaf node in its own decision tree,
wherein if said new rule can be managed by a single leaf node in a decision tree processing block, this decision tree processing block is an entry addition target candidate and this single leaf node is an addition-target leaf node,
wherein said entry addition target determination block selects, from said entry addition target candidate, an entry addition target being a target to which said new rule is added, and
wherein said selected entry addition target adds said new rule to a rule list managed by said addition-target leaf node in its own decision tree.
(Supplementary note 2)
The packet classifier according to Supplementary note 1,
wherein said entry addition target candidate notifies said entry addition target determination block of a number-of-entry of valid rules managed by said addition-target leaf node, and
wherein said entry addition target determination block selects said entry addition target based on said number-of-entry.
(Supplementary note 3)
The packet classifier according to Supplementary note 2,
wherein said entry addition target determination block selects, as said entry addition target, said entry addition target candidate in which said number-of-entry is smallest.
(Supplementary note 4)
The packet classifier according to Supplementary note 2 or 3,
wherein each of said plurality of decision tree processing blocks comprises a number-of-entry memory block configured to retain the number-of-entry of valid rules managed by each leaf node in its own decision tree,
wherein said entry addition target candidate reads out said number-of-entry regarding said addition-target leaf node from said number-of-entry memory block and notifies said entry addition target determination block of said read-out number-of-entry, and
wherein said entry addition target updates said number-of-entry memory block by adding 1 to said number-of-entry regarding said addition-target leaf node.
(Supplementary note 5)
The packet classifier according to Supplementary note 4,
wherein if said command is a deletion command for deleting an existing rule, each of said plurality of decision tree processing blocks determines whether or not said existing rule is being managed by a single leaf node in its own decision tree,
wherein if said existing rule is being managed by a single leaf node in a decision tree processing block, this decision tree processing block is an entry deletion target and this single leaf node is a deletion-target leaf node,
wherein said entry deletion target invalidates said existing rule and updates said number-of-entry memory block by subtracting 1 from said number-of-entry regarding said deletion-target leaf node.
(Supplementary note 6)
The packet classifier according to Supplementary note 2 or 3,
wherein each of said entry addition target candidate performs temporal addition processing that temporarily adds said new rule to a rule list managed by said addition-target leaf node, and counts the number-of-entry of valid rules in this rule list during said temporal addition processing,
wherein each of said entry addition target candidate notifies said entry addition target determination block of said number-of-entry obtained in said temporal addition processing, and
wherein each of said entry addition target candidate other than said selected entry addition target performs added-entry invalidation processing that invalidates said temporarily added new rule.
(Supplementary note 7)
The packet classifier according to Supplementary note 6,
wherein in said temporal addition processing, each of said entry addition target candidate writes location information in said insertion command, said location information indicating a location in said rule list to which said new rule is added,
wherein each of said entry addition target candidate other than said selected entry addition target performs said added-entry invalidation processing with reference to said location information without performing rule matching.
(Supplementary note 8)
The packet classifier according to any one of Supplementary notes 1 to 7,
wherein each of said plurality of decision tree processing blocks comprises:
a decision tree pipeline block comprising decision tree node processing blocks whose number of stages is equal to a number of stages of the corresponding decision tree and using said decision tree node processing blocks to perform pipeline processing for searching in the decision tree; and
a rule pipeline block comprising rule processing blocks whose number of stages is equal to a number of rules that can be managed in each leaf node and using said rule processing blocks to perform pipeline processing for search, addition and deletion of a rule.
(Supplementary note 9)
A packet classification method that uses a plurality of decision trees,
wherein a single rule is managed by a single leaf node in any one of said plurality of decision trees,
wherein said packet classification method comprises:
inputting a command simultaneously to said plurality of decision trees;
if said command is a lookup command, determining in each of said plurality of decision trees whether or not a search key matches any rule;
if said command is an insertion command for adding a new rule to a decision tree, determining in each of said plurality of decision trees whether or not said new rule can be managed by a single leaf node,
wherein if said new rule can be managed by a single leaf node in a decision tree, this decision tree is an entry addition target candidate and this single leaf node is an addition-target leaf node;
selecting, from said entry addition target candidate, an entry addition target being a target to which said new rule is added; and
adding said new rule to a rule list managed by said addition-target leaf node of said entry addition target.
(Supplementary note 10)
A packet classification program which causes a computer to perform packet classification processing that uses a plurality of decision trees,
wherein a single rule is managed by a single leaf node in any one of said plurality of decision trees,
wherein said packet classification processing comprises:
inputting a command simultaneously to said plurality of decision trees;
if said command is a lookup command, determining in each of said plurality of decision trees whether or not a search key matches any rule;
if said command is an insertion command for adding a new rule to a decision tree, determining in each of said plurality of decision trees whether or not said new rule can be managed by a single leaf node,
wherein if said new rule can be managed by a single leaf node in a decision tree, this decision tree is an entry addition target candidate and this single leaf node is an addition-target leaf node;
selecting, from said entry addition target candidate, an entry addition target being a target to which said new rule is added; and
adding said new rule to a rule list managed by said addition-target leaf node of said entry addition target.
This application is based upon and claims the benefit of priority from Japanese patent application No. 2010-279693 filed on December 15, 2010, the disclosure of which is incorporated herein in its entirety by reference.
Explanation of Reference Numerals
1 packet classifier
2 decision tree processing block
3 entry addition target determination block
4 command input block
5 result output block
9 decision tree processing block
10 program processing device
11 packet classification program
20 decision tree pipeline block
21 number-of-entry counting block
22 rule pipeline block
200 decision tree node processing block
210 count processing block
211 number-of-entry memory block
220 rule processing block
2000 child node determination block
2001 node information memory block
2200 comparison and update processing block
2201 entry memory block

Claims (10)

  1. A packet classifier comprising:
    a plurality of decision tree processing blocks respectively configuring a plurality of decision trees for use in packet classification;
    a command input block configured to input a command simultaneously to said plurality of decision tree processing blocks; and
    an entry addition target determination block connected to said plurality of decision tree processing blocks,
    wherein a single rule is managed by a single leaf node in any one of said plurality of decision trees,
    wherein if said command is a lookup command, each of said plurality of decision tree processing blocks uses its own decision tree to determine whether or not a search key matches any rule,
    wherein if said command is an insertion command for adding a new rule to a decision tree, each of said plurality of decision tree processing blocks determines whether or not said new rule can be managed by a single leaf node in its own decision tree,
    wherein if said new rule can be managed by a single leaf node in a decision tree processing block, this decision tree processing block is an entry addition target candidate and this single leaf node is an addition-target leaf node,
    wherein said entry addition target determination block selects, from said entry addition target candidate, an entry addition target being a target to which said new rule is added, and
    wherein said selected entry addition target adds said new rule to a rule list managed by said addition-target leaf node in its own decision tree.
  2. The packet classifier according to claim 1,
    wherein said entry addition target candidate notifies said entry addition target determination block of a number-of-entry of valid rules managed by said addition-target leaf node, and
    wherein said entry addition target determination block selects said entry addition target based on said number-of-entry.
  3. The packet classifier according to claim 2,
    wherein said entry addition target determination block selects, as said entry addition target, said entry addition target candidate in which said number-of-entry is smallest.
  4. The packet classifier according to claim 2 or 3,
    wherein each of said plurality of decision tree processing blocks comprises a number-of-entry memory block configured to retain the number-of-entry of valid rules managed by each leaf node in its own decision tree,
    wherein said entry addition target candidate reads out said number-of-entry regarding said addition-target leaf node from said number-of-entry memory block and notifies said entry addition target determination block of said read-out number-of-entry, and
    wherein said entry addition target updates said number-of-entry memory block by adding 1 to said number-of-entry regarding said addition-target leaf node.
  5. The packet classifier according to claim 4,
    wherein if said command is a deletion command for deleting an existing rule, each of said plurality of decision tree processing blocks determines whether or not said existing rule is being managed by a single leaf node in its own decision tree,
    wherein if said existing rule is being managed by a single leaf node in a decision tree processing block, this decision tree processing block is an entry deletion target and this single leaf node is a deletion-target leaf node,
    wherein said entry deletion target invalidates said existing rule and updates said number-of-entry memory block by subtracting 1 from said number-of-entry regarding said deletion-target leaf node.
  6. The packet classifier according to claim 2 or 3,
    wherein each of said entry addition target candidate performs temporal addition processing that temporarily adds said new rule to a rule list managed by said addition-target leaf node, and counts the number-of-entry of valid rules in this rule list during said temporal addition processing,
    wherein each of said entry addition target candidate notifies said entry addition target determination block of said number-of-entry obtained in said temporal addition processing, and
    wherein each of said entry addition target candidate other than said selected entry addition target performs added-entry invalidation processing that invalidates said temporarily added new rule.
  7. The packet classifier according to claim 6,
    wherein in said temporal addition processing, each of said entry addition target candidate writes location information in said insertion command, said location information indicating a location in said rule list to which said new rule is added,
    wherein each of said entry addition target candidate other than said selected entry addition target performs said added-entry invalidation processing with reference to said location information without performing rule matching.
  8. The packet classifier according to any one of claims 1 to 7,
    wherein each of said plurality of decision tree processing blocks comprises:
    a decision tree pipeline block comprising decision tree node processing blocks whose number of stages is equal to a number of stages of the corresponding decision tree and using said decision tree node processing blocks to perform pipeline processing for searching in the decision tree; and
    a rule pipeline block comprising rule processing blocks whose number of stages is equal to a number of rules that can be managed in each leaf node and using said rule processing blocks to perform pipeline processing for search, addition and deletion of a rule.
  9. A packet classification method that uses a plurality of decision trees,
    wherein a single rule is managed by a single leaf node in any one of said plurality of decision trees,
    wherein said packet classification method comprises:
    inputting a command simultaneously to said plurality of decision trees;
    if said command is a lookup command, determining in each of said plurality of decision trees whether or not a search key matches any rule;
    if said command is an insertion command for adding a new rule to a decision tree, determining in each of said plurality of decision trees whether or not said new rule can be managed by a single leaf node,
    wherein if said new rule can be managed by a single leaf node in a decision tree, this decision tree is an entry addition target candidate and this single leaf node is an addition-target leaf node;
    selecting, from said entry addition target candidate, an entry addition target being a target to which said new rule is added; and
    adding said new rule to a rule list managed by said addition-target leaf node of said entry addition target.
  10. A packet classification program which causes a computer to perform packet classification processing that uses a plurality of decision trees,
    wherein a single rule is managed by a single leaf node in any one of said plurality of decision trees,
    wherein said packet classification processing comprises:
    inputting a command simultaneously to said plurality of decision trees;
    if said command is a lookup command, determining in each of said plurality of decision trees whether or not a search key matches any rule;
    if said command is an insertion command for adding a new rule to a decision tree, determining in each of said plurality of decision trees whether or not said new rule can be managed by a single leaf node,
    wherein if said new rule can be managed by a single leaf node in a decision tree, this decision tree is an entry addition target candidate and this single leaf node is an addition-target leaf node;
    selecting, from said entry addition target candidate, an entry addition target being a target to which said new rule is added; and
    adding said new rule to a rule list managed by said addition-target leaf node of said entry addition target.