WO2016125501A1 - Data processing device, information entry management method, and recording medium having information entry management program recorded thereon - Google Patents

Data processing device, information entry management method, and recording medium having information entry management program recorded thereon

Info

Publication number
WO2016125501A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
search
search tree
node
information entry
Prior art date
Application number
PCT/JP2016/000593
Other languages
English (en)
Japanese (ja)
Inventor
義和 渡邊
Original Assignee
日本電気株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社 (NEC Corporation)
Priority to JP2016573232A (patent JP6652070B2)
Publication of WO2016125501A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor

Definitions

  • the present invention relates to a data processing device, an information entry management method, and an information entry management program, and more particularly to a data processing device, an information entry management method, and an information entry management program for processing a received packet as an example of data processing.
  • a packet processing device that transfers or discards packets according to certain rules, such as an Ethernet (registered trademark) switch, a router, or an OpenFlow switch, generally receives and processes packets. Specifically, the following processing is executed.
  • the process (2) is generally called packet classification, and is a process for searching for a rule with the highest priority that matches a received packet from a rule set given in advance.
  • the rule includes a match condition (for example, a range of suitable addresses) and an action that is a process executed on a packet that meets the match condition.
  • the rule may include priority and statistical information.
  • Non-Patent Document 1 ("EffiCuts: Optimizing Packet Classification for Memory and Throughput" by Balajee Vamanan et al.) discloses a technique for performing such a rule search using a search tree (decision tree).
  • the technique described in Non-Patent Document 1 treats the d field rule search problem as a subspace search problem in the d-dimensional space.
  • the technique described in Non-Patent Document 1 is a technique for finding a rule that matches a received packet by cutting the search space and narrowing the search area each time the search tree is traced.
  • the nodes constituting the search tree each represent a partial space of the d-dimensional space.
  • the root node represents the entire region (initial search region) that can be taken in the d-dimensional space.
  • Each rule is associated with every node having a subspace that overlaps the d-dimensional subspace represented by the rule. If the number of rules associated with (included in) a certain node is greater than a predetermined number (Binth), the node is cut in one or more dimensions to create child nodes (that is, the node becomes an intermediate node). On the other hand, when the number of rules associated with (included in) a certain node is equal to or less than the predetermined number (Binth), the node is a leaf node.
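As an illustrative aside, the construction rule just described (cut a node until it holds at most Binth rules) can be sketched as follows. This is a simplified, hypothetical rendering, not the patent's implementation; the names `Node` and `BINTH` and the fixed cut dimension are assumptions.

```python
BINTH = 2  # maximum number of rules a leaf node may hold (illustrative value)

class Node:
    def __init__(self, space, rules):
        self.space = space      # dict: dimension -> (lo, hi) subspace bounds
        self.rules = rules      # rules whose match region overlaps this subspace
        self.children = []      # empty for leaf nodes

def overlaps(space, rule):
    """True if the rule's match region intersects the node's subspace."""
    return all(rule[d][0] <= space[d][1] and space[d][0] <= rule[d][1]
               for d in space)

def build(space, rules, cut_dim='X', cuts=2):
    node = Node(space, [r for r in rules if overlaps(space, r)])
    if len(node.rules) <= BINTH:      # few enough rules: the node is a leaf
        return node
    lo, hi = space[cut_dim]           # otherwise cut along one dimension
    width = (hi - lo + 1) // cuts
    for i in range(cuts):
        sub = dict(space)
        sub[cut_dim] = (lo + i * width, lo + (i + 1) * width - 1)
        node.children.append(build(sub, node.rules, cut_dim, cuts))
    return node
```

A real EffiCuts-style builder also chooses the cut dimension and the number of cuts per node heuristically; the fixed choices here only keep the sketch short.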
  • FIG. 23 is a schematic diagram illustrating an example of a rule set.
  • the example shown in FIG. 23 assumes a rule search over two fields (dimension X and dimension Y), and shows a case where the ranges that dimension X and dimension Y can take are 0 to 255 and 0 to 127, respectively.
  • each rectangle represents a two-dimensional subspace corresponding to the match condition of one of the rules R1 to R7. Where rectangles overlap, the rule whose rectangle is drawn fully visible (in front) has the higher priority.
  • FIG. 25 is a table showing an example of information related to the nodes of the search tree, and shows the partial space represented by each node of the search tree of FIG. 24 and the rules included in each node.
  • the partial space represented by the root node is the entire search space.
  • the subspace represented by the root node has dimensions X and Y in the ranges 0 ≤ X ≤ 255 and 0 ≤ Y ≤ 127, and the rules included in the root node are R1 to R7.
  • the search space is cut into four on the X axis and cut into two on the Y axis.
  • the position of leaf node 4 in the X-axis and Y-axis cuts (Cut) is the fourth and the first, respectively.
  • the subspace represented by leaf node 4 has dimensions X and Y in the ranges 192 ≤ X ≤ 255 and 0 ≤ Y ≤ 63, and the rules included in leaf node 4 are R5 and R7.
  • the packet processing apparatus examines the root node and selects node 1 (192 ≤ X ≤ 255), which represents the fourth partial space, because the X value of the received packet is 230.
  • the packet processing apparatus then examines node 1 and selects leaf node 4 (0 ≤ Y ≤ 63), which represents the first partial space, because the Y value of the received packet is 30.
  • the leaf node 4 includes the rules R5 and R7 as described above.
  • the packet processing device performs a linear search from these rules and finds, for example, rule R7 as a rule having the highest priority corresponding to the received packet.
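The traversal just described (X=230 selects node 1, Y=30 selects leaf node 4, and a linear scan of the leaf yields R7) can be sketched as follows. The tree is deliberately reduced to the one path of the example, and the dict-based node layout and priority values are assumptions for illustration.

```python
def classify(node, packet):
    """Descend the search tree, then linearly scan the reached leaf's rules."""
    while node['children']:                     # intermediate node: pick a child
        dim, ranges = node['cut']               # cut dimension and child subranges
        for child, (lo, hi) in zip(node['children'], ranges):
            if lo <= packet[dim] <= hi:
                node = child
                break
        else:
            break                               # no child subspace covers the value
    best = None                                 # leaf: keep highest-priority match
    for rule in node['rules']:
        if all(lo <= packet[d] <= hi for d, (lo, hi) in rule['match'].items()):
            if best is None or rule['priority'] > best['priority']:
                best = rule
    return best

# Leaf node 4 from the example: 192 <= X <= 255, 0 <= Y <= 63, rules R5 and R7.
# R7's exact match region and the priority numbers are illustrative assumptions.
leaf4 = {'children': [], 'rules': [
    {'name': 'R5', 'priority': 1, 'match': {'X': (192, 255), 'Y': (0, 63)}},
    {'name': 'R7', 'priority': 3, 'match': {'X': (224, 255), 'Y': (0, 63)}},
]}
node1 = {'children': [leaf4], 'cut': ('Y', [(0, 63)]), 'rules': []}
root = {'children': [node1], 'cut': ('X', [(192, 255)]), 'rules': []}
```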
  • Non-Patent Document 1 also discloses an example in which a rule set is divided into a plurality of sub-rule sets by classifying rules using their characteristics, and a search tree is constructed for each sub-rule set and used for search processing.
  • in this case, the rule search process for a received packet is executed on the search tree of each sub-rule set, and from the rules obtained (at most one per sub-rule set), the rule with the highest priority is selected.
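The selection step above reduces to choosing the highest-priority rule among the per-tree results; a minimal sketch, assuming a larger numerical value indicates a higher priority and `None` marks a tree that found no match:

```python
def best_rule(candidates):
    """Pick the highest-priority rule among per-search-tree results."""
    found = [r for r in candidates if r is not None]   # drop trees with no match
    return max(found, key=lambda r: r['priority']) if found else None
```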
  • Patent Document 1 (Japanese Patent Laid-Open No. 2000-41065) discloses a technique for improving search performance by storing (caching) the results of rule search processing using a search tree and reusing them.
  • a search result based on a search tree is registered in an exact match cache entry search table implemented as, for example, a hash table.
  • when a received packet hits a cache entry, the search result based on the search tree stored in the cache entry is used, and the search process using the search tree is omitted.
  • the search result is cached in the cache entry search table and used.
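As a hedged illustration of such a cache entry search table, the exact-match table can be modeled as a hash table (here, a Python dict) keyed by the packet header fields; the field names are assumptions, not taken from the patent:

```python
cache_table = {}

def cache_key(packet):
    # exact match: every header field participates in the key
    return (packet['src_ip'], packet['dst_ip'],
            packet['src_port'], packet['dst_port'], packet['proto'])

def lookup(packet):
    return cache_table.get(cache_key(packet))   # None on a cache miss

def insert(packet, search_result):
    # store the search-tree result so later identical packets skip the tree walk
    cache_table[cache_key(packet)] = search_result
```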
  • when the rule set is changed (that is, a rule is added to, changed in, or deleted from the search tree), the cached search results must also be updated appropriately. If they are not updated properly, received packets are processed according to search results from the old rule set, and communication processing/control by the rules is not performed correctly.
  • Patent Document 2 (Japanese Patent No. 4664623, "Router Device, Route Determination Method in Router Device") discloses a technique for reducing the load when a cache entry that caches a search result based on a search tree is deleted from the cache entry search table in accordance with a change in the search tree (routing table).
  • a counter value is provided in each routing entry, which represents a correspondence relationship between a destination address of a packet and a route stored in the routing table, and when the routing entry is updated, the counter value is incremented.
  • the device disclosed in Patent Document 2 receives a packet and processes the packet using a cache entry (including a routing table search result, that is, routing entry information) corresponding to the packet.
  • the device refers to the counter value of the routing entry to determine whether the routing entry has been updated.
  • the device deletes the cache entry when an update is detected. Accordingly, the apparatus can reduce the load for specifying the cache entry to be deleted, and can reduce the load for deleting the cache entry.
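The counter scheme of Patent Document 2 might be sketched as follows; this is a hedged simplification, and the class and field names are illustrative:

```python
class RoutingEntry:
    def __init__(self, next_hop):
        self.next_hop = next_hop
        self.counter = 0                # incremented on every update

    def update(self, next_hop):
        self.next_hop = next_hop
        self.counter += 1

class CacheEntry:
    def __init__(self, entry):
        self.entry = entry
        self.counter_snapshot = entry.counter   # counter value when cached

    def is_stale(self):
        # the routing entry was updated after this result was cached,
        # so the cache entry should be deleted rather than reused
        return self.entry.counter != self.counter_snapshot
```

Checking one counter per referenced entry at packet-reception time is exactly what becomes expensive when there are many search trees, which is the problem the present invention addresses.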
  • JP 2000-41065 A Japanese Patent No. 4664623
  • if the search result is cached and reused even in the packet classification method in which the rule set is divided into sub-rule sets, a search tree is constructed for each sub-rule set, and the trees are used for the search, an improvement in processing performance can be expected.
  • the cached information needs to be appropriately updated in accordance with the change to the search tree group.
  • however, the technique of Patent Document 2 is based on a single search tree.
  • if this technique is applied to a case where there are a plurality of search trees, then each time a packet is received, a process for confirming whether or not there has been an update is executed by referring to the counter value for each of the search trees. Therefore, when there are a plurality of search trees, the number of data references in the packet reception process increases and the processing performance deteriorates. Such a problem becomes more prominent particularly when the above processing is executed on a computer including a CPU (Central Processing Unit) and a memory with a hierarchical configuration including a cache memory.
  • moreover, a counter value exists for each node of the search tree, and different packets typically match rules contained in different nodes. For this reason, memory accesses to the counter values are made to discrete areas (addresses) in the memory address space.
  • the present invention has been made in view of such problems. That is, one main object of the present invention is to provide a data processing apparatus capable of reducing the load required for searching for information entries even when a plurality of search trees are used to search for the information entries used for data processing (for example, packet processing).
  • a data processing apparatus mainly employs the following configuration.
  • (1) A data processing device according to the present invention searches, as a first information entry that determines a method for processing data to be processed, a set of first information entries each including data identification information indicating its application target, using one or more search trees, and processes the data using the searched first information entry. The intermediate nodes and leaf nodes constituting each search tree are associated with the search space of the data identification information.
  • The device stores the search result of the first information entry by the search tree as a second information entry, and, when processing the data, if a second information entry related to the data is stored, processes the data using that second information entry. Such a data processing device includes the following means.
  • Search tree initialization means for grouping the one or more search trees and selecting a base search tree in each search tree group.
  • Search tree storage means for storing, for each of the intermediate nodes and leaf nodes constituting a search tree, update history information indicating the update history of that node, and shared update history information indicating the update history of any of the nodes constituting the search tree.
  • Updating means for, when the contents of a first information entry are manipulated, updating the update history information of the nodes associated with the search space overlapping the operation target data identification information (the data identification information of that first information entry), and updating the shared update history information of the corresponding nodes for the base search tree of the search tree group to which the search tree including the node belongs.
  • Information entry storage means for storing, as part of the second information entry, a cache of the node identifier indicating the node reached by the search in each search tree, the update history information of the node, and the shared update history information, obtained as a search result of the first information entry by the search tree.
  • Re-evaluation means for, when data processing is performed using a second information entry, comparing the cached shared update history information of the node related to the stored second information entry with the latest shared update history information of the node, for each base search tree; if they do not match, comparing, for each search tree of the search tree group to which that base search tree belongs, the cached update history information of each node related to the stored second information entry with the latest update history information of each node; and, if these do not match, determining that the second information entry stored in the information entry storage means is invalid.
  • (2) An information entry management method according to the present invention searches, as a first information entry that determines a method for processing data to be processed, a set of first information entries each including data identification information indicating its application target, using one or more search trees; associates the intermediate nodes and leaf nodes constituting each search tree with the search space of the data identification information; stores the search result of the first information entry by the search tree as a second information entry; and, when processing the data, if a second information entry related to the data is stored, processes the data using that second information entry.
  • The method groups the one or more search trees and selects a base search tree in each search tree group. For each of the intermediate nodes and leaf nodes constituting a search tree, it stores update history information indicating the update history of the node and shared update history information indicating the update history of any of the nodes constituting the search tree. When the contents of a first information entry are manipulated, it updates the update history information of the nodes associated with the search space overlapping the operation target data identification information (the data identification information of that first information entry), and updates the shared update history information of the corresponding nodes for the base search tree of the search tree group to which the search tree including the node belongs. It also stores, as part of the second information entry, a cache of the node identifier indicating the node reached by the search in each search tree, the update history information of the node, and the shared update history information obtained as a search result.
  • When performing data processing using the second information entry, the method compares the cached shared update history information of the node related to the stored second information entry with the latest shared update history information of the node, for each base search tree. If they do not match, it compares, for each search tree of the search tree group to which that base search tree belongs, the cached update history information of each node related to the stored second information entry with the latest update history information of each node. If these do not match, it determines that the stored second information entry is invalid, thereby re-evaluating the second information entry.
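The two-stage validity check described above can be sketched as follows. This is a hypothetical simplification assuming a single search tree group whose trees share one base search tree; names such as `shared_snapshot` are illustrative, not from the patent:

```python
def cache_entry_valid(cache, base_tree, group_trees):
    """Two-stage re-evaluation of a cached search result (second information entry)."""
    # Stage 1: a single comparison against the group's base search tree
    if base_tree['shared_history'] == cache['shared_snapshot']:
        return True                       # no node anywhere in the group was updated
    # Stage 2: only on a mismatch, compare each reached node's own update history
    for tree_id, (node_id, snapshot) in cache['node_snapshots'].items():
        if group_trees[tree_id]['nodes'][node_id] != snapshot:
            return False                  # a node this result depends on was updated
    return True                           # updates did not touch the reached nodes
```

The point of stage 1 is that the common case (no change anywhere in the group) costs one comparison instead of one per search tree, which is exactly the data-reference reduction the invention targets.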
  • An information entry management program implements the information entry management method described in (2) above as a program executable by a computer.
  • the object of the present invention is also achieved by a data processing apparatus having the above configuration, a computer-readable recording medium storing the information entry management program for realizing the data processing method by a computer, and the like.
  • FIG. 1 is a block configuration diagram showing an example of a block configuration of a data processing apparatus according to an embodiment of the present invention.
  • FIG. 2 is a block configuration diagram illustrating an example of a detailed block configuration of the search tree control unit of the data processing apparatus illustrated in FIG. 1.
  • FIG. 3 is a block configuration diagram illustrating an example of a detailed block configuration of the cache table control unit of the data processing apparatus illustrated in FIG. 1.
  • FIG. 4 is a table for explaining a configuration example of rules used in the data processing apparatus shown in FIG.
  • FIG. 5 is a table for explaining a configuration example of information related to a search tree and information related to a node used in the data processing apparatus shown in FIG. 1.
  • FIG. 6 is a table showing an example of information about the cache table and the cache entry stored in the cache table storage unit of the data processing apparatus shown in FIG.
  • FIG. 7 is a flowchart showing an example of the operation when the search tree initialization unit in the search tree control unit of the data processing apparatus shown in FIG. 1 initializes the search tree and rule set stored in the search tree storage unit and the rule storage unit, respectively.
  • FIG. 8 is a flowchart showing an example of the operation when the search tree initialization unit in the search tree control unit of the data processing apparatus shown in FIG. 1 performs the search tree construction process.
  • FIG. 9 is a flowchart showing an example of detailed operation of the search tree construction sub-process A executed in step S112 of the flowchart of FIG. 8.
  • FIG. 10 shows an example of an operation when the rule update unit in the search tree control unit of the data processing apparatus shown in FIG. 1 updates the search tree and the rule set stored in the search tree storage unit and the rule storage unit, respectively. It is a flowchart to show.
  • FIG. 11 is a flowchart showing an example of detailed operation of the rule update sub-process A1 executed in step S136 of the flowchart of FIG. 10.
  • FIG. 12 is a flowchart showing an example of detailed operation of the rule update sub-process A2 executed in step S143 of the flowchart of FIG.
  • FIG. 13 is a flowchart showing an example of detailed operation of the rule update sub-process B executed in step S137 of the flowchart of FIG. 10.
  • FIG. 14 is a flowchart showing an example of detailed operation of the rule update sub-process C executed in step S139 of the flowchart of FIG. 10.
  • FIG. 15 is a flowchart showing an example of the operation when the search tree search unit in the search tree control unit of the data processing apparatus shown in FIG. 1 searches for a rule corresponding to a given key with reference to the search tree and rule set stored in the search tree storage unit and the rule storage unit, respectively.
  • FIG. 16 is a flowchart showing an example of detailed operation of the rule search sub-process A executed in step S181 of the flowchart of FIG. 15.
  • FIG. 17 is a flowchart illustrating an example of an operation when the packet processing unit of the data processing apparatus illustrated in FIG. 1 receives a packet.
  • FIG. 18 is a flowchart showing an example of an operation when the cache entry creation unit in the cache table control unit of the data processing apparatus shown in FIG. 1 creates a cache entry.
  • FIG. 19 is a flowchart showing an example of an operation when the cache entry search unit in the cache table control unit of the data processing apparatus shown in FIG. 1 searches for a cache entry.
  • FIG. 20 is a flowchart showing an example of an operation when the cache entry reevaluation unit in the cache table control unit of the data processing apparatus shown in FIG. 1 reevaluates a cache entry.
  • FIG. 21A is an explanatory diagram (1/2) illustrating an example of a sub-rule set when the search tree initialization unit in the search tree control unit of the data processing apparatus illustrated in FIG. 1 divides the rule set based on the characteristics of the rules.
  • FIG. 21B is an explanatory diagram (2/2) illustrating an example of a sub-rule set when the search tree initialization unit in the search tree control unit of the data processing apparatus illustrated in FIG. 1 divides the rule set based on the characteristics of the rules.
  • FIG. 22 is a table showing an example of a search tree group created by the search tree initialization unit in the search tree control unit of the data processing apparatus shown in FIG.
  • FIG. 23 is a schematic diagram illustrating an example of a rule set.
  • FIG. 24 is a schematic diagram illustrating an example of a search tree.
  • FIG. 25 is a table showing an example of information related to nodes in the search tree.
  • FIG. 26 is a diagram illustrating a hardware configuration of an information processing apparatus capable of realizing the data processing apparatus according to the embodiment of the present invention.
  • Such an information entry management method may be implemented as an information entry management program executable by a computer.
  • Such an information entry management program may be recorded on a computer-readable recording medium.
  • drawing reference numerals attached to the following drawings are added to the respective elements for convenience as an example for facilitating understanding, and are not intended to limit the present invention to the illustrated embodiments.
  • a data processing apparatus according to the present embodiment searches for a first information entry that determines a method for processing data to be processed, from a set of first information entries each including data identification information indicating its application target, using one or a plurality of search trees.
  • when the data processing apparatus processes the data using the searched first information entry, the intermediate nodes and leaf nodes constituting each search tree are associated with the search space of the data identification information. The search result of the first information entry by the search tree is stored as a second information entry. When processing the data, if a second information entry related to the data is stored, the data processing apparatus processes the data using that second information entry.
  • the data processing apparatus includes a search tree initializing unit (search tree initializing means) that groups one or a plurality of the search trees and selects a base search tree in each search tree group.
  • the data processing apparatus includes a search tree storage unit (search tree storage means) that executes the following processing. That is, the search tree storage unit stores, for each of the intermediate nodes and leaf nodes constituting a search tree, update history information indicating the update history of that node, and shared update history information indicating the update history of any of the nodes constituting the search tree.
  • the data processing apparatus includes an update unit (update means) that executes the following processing.
  • that is, when the contents of a first information entry are manipulated, the updating unit updates the update history information of the nodes associated with the search space that overlaps the operation target data identification information (the data identification information of the manipulated first information entry).
  • in addition, the updating unit updates the shared update history information of the nodes associated with the search space overlapping the operation target data identification information, for the base search tree of the search tree group to which the search tree including the node belongs.
  • the data processing apparatus includes an information entry storage unit (information entry storage means) that executes the following processing. That is, the information entry storage unit stores, as part of the second information entry, a cache of the node identifier indicating the node reached by the search in each search tree, together with the update history information and the shared update history information of that node, obtained as a search result of the first information entry by the search tree.
  • the data processing apparatus includes a re-evaluation unit (re-evaluation means) that executes the following processing. That is, when data processing is performed using the second information entry, the re-evaluation unit compares the cached shared update history information of the node related to the second information entry stored in the information entry storage unit with the latest shared update history information of that node, for each of the base search trees. If the comparison finds a mismatch, the re-evaluation unit compares, for each search tree of the search tree group to which the base search tree including the node belongs, the cached update history information of each node related to the stored second information entry with the latest update history information of each node. If these do not match, the re-evaluation unit determines that the second information entry stored in the information entry storage unit is invalid.
  • FIG. 1 is a block configuration diagram showing an example of a block configuration of a data processing apparatus according to an embodiment of the present invention.
  • the data processing device 1 illustrated in FIG. 1 includes a search tree control unit 10, a search tree storage unit 20, a rule storage unit 30, a cache table control unit 40, a cache table storage unit 50, a packet processing unit 60, a communication IF unit 70, and a data processing device control unit 80.
  • the search tree control unit 10 manages search trees stored in the search tree storage unit 20 and rules stored in the rule storage unit 30. Specifically, the search tree control unit 10 is a part that performs, for example, initialization or search of the search tree, addition of rules, and the like.
  • the search tree storage unit 20 stores the search tree that is mainly used when searching for a rule (that is, a type of information entry: a first information entry) corresponding to a received packet (that is, a type of data to be processed).
  • the search tree stored in the search tree storage unit 20 is composed of one or a plurality of nodes.
  • the node is either an intermediate node or a leaf node.
  • the search tree storage unit 20 stores information regarding grouping of search trees.
  • the number of search trees stored in the search tree storage unit 20 is usually two or more, but may be one.
  • the search tree storage unit 20 stores a search tree using, for example, a memory.
  • Each search tree, each node, and each search tree group is assigned an identifier that uniquely identifies it, and the identifier is used for operations such as reference.
  • the rule storage unit 30 stores a set (rule set, sub-rule set) of rules (that is, a kind of information entry, first information entry) that is information for packet processing.
  • the rule storage unit 30 stores a set of rules using, for example, a memory.
  • the rule storage unit 30 stores a set of rules in a memory using, for example, a data structure such as an array or a hash table.
  • Each rule, each rule set, and each sub-rule set is assigned an identifier for uniquely identifying the rule, and the identifier is used for operations such as reference.
  • the cache table control unit 40 manages cache entries stored in the cache table storage unit 50. For example, the cache table control unit 40 creates and searches a cache entry.
  • the cache table storage unit 50 is a part that stores a cache entry (that is, a type of information entry, a second information entry) that includes (caches) a result of a search by a search tree.
  • the cache table storage unit 50 functions as, for example, information entry storage means.
  • the cache table storage unit 50 stores a cache entry using a memory, for example.
  • the cache table storage unit 50 stores a cache entry in a memory using, for example, a data structure of a hash table. Each cache entry is assigned an identifier for uniquely identifying the cache entry, and this identifier is used for operations such as reference.
  • the packet processing unit 60 receives a packet (that is, a type of data to be processed) from the communication IF unit 70, and processes the received packet. At that time, the packet processing unit 60 uses the search tree control unit 10 and the cache table control unit 40 to search for information entries (nodes, rules, cache entries) used for processing the packet, and uses the search results. Process the packet. Note that the packet processing unit 60 may function as a data processing execution unit that processes one type (packet) of data to be processed.
  • the communication IF unit 70 is an interface for connecting the data processing apparatus 1 to a communication network (not shown in FIG. 1), and transmits and receives packets according to the communication protocol (hereinafter simply referred to as "protocol") used in the communication network.
  • the data processing device control unit 80 controls the data processing device 1 to process the packet according to the rule set or sub-rule set. For example, when the data processing device 1 is activated, the data processing device control unit 80 instructs the search tree initialization unit 11 (FIG. 2) to initialize by passing an initial rule set.
  • the data processing device control unit 80 may read the initial rule set from a storage device (not shown in FIG. 1) included in the data processing device 1, or may receive the initial rule set from another control device (not shown in FIG. 1). Further, the data processing device control unit 80 instructs the search tree control unit 10 to update rules when it receives a rule update command from a user via a user interface (not shown in FIG. 1), and may likewise instruct the search tree control unit 10 to update rules when it receives a rule update command from another control device (not shown in FIG. 1).
  • FIG. 4 is a table for explaining an example of the configuration of rules used in the data processing apparatus 1 shown in FIG. 1, and shows an example of the configuration of rules stored in the rule storage unit 30 of the data processing apparatus 1 of FIG. 1.
  • each rule is configured to include a match condition and priority as a kind of data identification information indicating an application target, and further, an action, statistical information, and a deletion flag.
  • the match condition of each rule defines conditions on the field values of a packet that conforms to the rule. For example, in the present embodiment, the following five fields (a 5 tuple) are used as the match condition (a kind of data identification information).
  • these five fields are the destination IP address (start and end), the source IP address (start and end), the destination port number (start and end), the source port number (start and end), and the protocol number (start and end).
  • the (start and end) destination IP address is in the range of '0.0.0.0' to '255.255.255.255'.
  • the (start and end) source IP address is in the range of '10.0.0.0' to '10.255.255.255'.
  • the (start and end) destination port number is '80'.
  • the (start and end) transmission source port number is in the range of '0' to '65535'.
  • the protocol number is '17'.
  • the present invention is not limited to such a field configuration.
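The range-based match condition described above can be sketched in a few lines. This is an illustrative sketch only, not part of the patent disclosure; the field names, the integer encoding of IP addresses, and the `matches` helper are all assumptions made for the example.

```python
# Illustrative sketch: a rule's match condition as five [start, end] ranges
# (one per field of the 5 tuple), mirroring the example values in the text.
def matches(rule, packet):
    """True if every packet field value falls within the rule's range."""
    return all(lo <= packet[f] <= hi for f, (lo, hi) in rule.items())

def ip(a, b, c, d):
    """Encode a dotted-quad IP address as a single integer (for simplicity)."""
    return (a << 24) | (b << 16) | (c << 8) | d

# The example rule from the text: any destination IP, source IP in
# 10.0.0.0-10.255.255.255, destination port 80, any source port, protocol 17.
rule = {
    "dst_ip":   (ip(0, 0, 0, 0), ip(255, 255, 255, 255)),
    "src_ip":   (ip(10, 0, 0, 0), ip(10, 255, 255, 255)),
    "dst_port": (80, 80),
    "src_port": (0, 65535),
    "proto":    (17, 17),
}

packet = {"dst_ip": ip(192, 0, 2, 1), "src_ip": ip(10, 1, 2, 3),
          "dst_port": 80, "src_port": 4096, "proto": 17}
```

When several rules match a packet, the rule with the highest priority value would be adopted, as described below.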
  • the priority of each rule is used to select a rule to be adopted when a plurality of rules match a certain packet.
  • the priority of each rule is, for example, a numerical value (for example, a non-negative integer) representing the priority of the rule, as shown in the explanation column 32 of the table of FIG. 4, and a higher numerical value may indicate a higher priority.
  • the action of each rule defines the processing content (processing method) to be executed for a packet that conforms to the rule.
  • for example, the packet is output to the communication IF unit 70, as shown in the example column 33 of the table of FIG. 4.
  • the number of actions included in each rule is not limited to one, and may be zero or two or more.
  • An example rule action is shown below. However, the present embodiment is not limited to the following processing as a rule action.
  • a packet is output from the designated communication IF unit 70.
  • a VLAN (Virtual Local Area Network) tag of the packet is set or rewritten.
  • the statistical information of each rule represents a statistical value related to a packet group processed in accordance with the rule.
  • the statistical information of each rule is, for example, the total transmission packet size, the total reception packet size, the number of transmission packets, the number of reception packets, etc., as shown in the example column 33 of the table of FIG.
  • the deletion flag of each rule is a flag indicating whether or not the deletion process has been postponed because the rule is in use even though deletion of the rule has been requested.
  • the search tree storage unit 20 stores a set of search trees, a search tree group list, and a set of nodes.
  • FIG. 5 is a table for explaining a configuration example of the information related to the search trees and the information related to the nodes used in the data processing apparatus 1 shown in FIG. 1, that is, the information stored in the search tree storage unit 20 of the data processing apparatus 1 of FIG. 1.
  • the part (A) in FIG. 5 illustrates information related to the search tree and information related to the search tree group list.
  • information regarding the nodes is illustrated in part (B) of FIG. 5. Note that a node in the present embodiment is either an intermediate node or a leaf node.
  • in part (B) of FIG. 5, information common to both intermediate nodes and leaf nodes is shown in the "intermediate node / leaf node common" portion, information held only by intermediate nodes is shown in the "intermediate node" portion, and information held only by leaf nodes is shown in the "leaf node" portion.
  • the partial space of a node, shown in the item column 25 of part (B) of FIG. 5, is information on the partial space that the node represents as its search target.
  • the information of this partial space corresponds to the information described in the “subspace represented by the node” column of FIG. 25 in the example of the general search tree described above with reference to FIGS.
  • the rule match condition in the present embodiment is 5 tuples as described above, and therefore the number of dimensions of the search space in the present embodiment is 5 dimensions.
  • the "cut dimension" and the "cut number" shown in the item column 25 indicate how the partial space represented by the node is cut.
  • the “Cut dimension” and “Cut number” information regarding the root node is “X axis” and “4”, respectively.
  • the “Cut dimension” and “Cut number” information regarding the node 1 are “Y axis” and “2”, respectively.
  • a certain node may simultaneously execute cuts related to a plurality of dimensions. In this case, a set of “dimension to be cut” and “cut number” is stored for each of a plurality of dimensions.
  • information related to each search tree includes, as shown in the item column 22 and the description column 23, a root node identifier indicating the node serving as the root (root node), a sub-rule set identifier indicating the corresponding sub-rule set, and a search tree group identifier indicating the search tree group to which the search tree belongs.
  • the information related to each search tree group includes a search tree identifier list indicating the search trees belonging to the search tree group, and a base search tree identifier indicating the search tree serving as the base (base search tree).
  • the cache entry is used for storing and reusing a search result of a rule that matches a received packet using a search tree.
  • as the key, the field group used as the rule match condition in the received packet (in this embodiment, five fields, that is, the 5 tuple) is used; the value of this field group is referred to as a "key".
  • the data processing device 1 creates and uses a cache entry for each key.
  • a group of packets having the same key is referred to as a “flow”.
  • FIG. 6 is a table showing an example of the information about the cache table and the cache entries stored in the cache table storage unit 50 of the data processing apparatus 1 shown in FIG. 1; part (A) of FIG. 6 shows an example of the information regarding a cache entry.
  • the part (B) in FIG. 6 shows an example of the cache entry node state.
  • Each part of (A) and (B) in FIG. 6 shows a configuration example of information relating to the cache entry used in the present embodiment.
  • here, an example in which the cache entries are stored using an open-addressing hash table is shown.
  • however, the present embodiment is not limited to such an open-addressing hash table.
  • the bucket list (bucket list) of the cache table can be realized, for example, as an array of cache entries.
  • for example, the Mth cache entry of the Nth bucket can be referred to by accessing the element of the bucket list whose index is computed from N and M (for example, (N-1) * E + (M-1) when each bucket holds a fixed number E of entries).
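The bucket-list indexing just described can be sketched as follows. This is a minimal illustration under the assumption that every bucket holds a fixed number of entries E; the text does not fix E, so both E and the exact index formula are assumptions of this sketch.

```python
# Sketch of a flat bucket list for an open-addressing style cache table.
# Assumption (not stated in the text): each bucket holds exactly E entries,
# so the list can be a single array of buckets * E slots.
E = 4  # assumed number of cache entries per bucket

def entry_index(n, m, entries_per_bucket=E):
    """Index into the bucket list of the M-th entry of the N-th bucket
    (both N and M are 1-based, matching the text's wording)."""
    return (n - 1) * entries_per_bucket + (m - 1)
```

With this layout, scanning a bucket for a matching key means examining the E consecutive slots starting at `entry_index(n, 1)`.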
  • the information related to the cache entry includes information on the in-use flag, key, rule identifier, last use time, and node state list, as shown in the item column 51 in part (A) of FIG.
  • the in-use flag is a flag indicating whether or not the cache entry is in use. If the in-use flag is on, it indicates that the cache entry is in use.
  • the key is a key related to the flow represented by the cache entry, and is used as a search key for searching in the hash table.
  • the rule identifier is an identifier of a rule that matches a packet corresponding to the cache entry.
  • the last use time is information indicating the time when the cache entry was last used.
  • the node state list indicates, for each search tree, the node to which the flow represented by the cache entry belonged when the cache entry was last used, as shown in part (B) of FIG. 6.
  • that is, the node state list of the cache entry shown in part (B) of FIG. 6 stores, for each search tree, information on the node that is finally reached when a rule is searched for using the key of the cache entry.
  • the node state of the cache entry includes a search tree identifier, a node identifier, an update counter cache, and a shared update counter cache.
  • the search tree identifier of the node state indicates an identifier of the search tree indicated by the node state.
  • the node identifier of the node state is an identifier of a node to which the flow represented by the cache entry belongs in the search tree.
  • the update counter cache and the shared update counter cache in the node state respectively store the update counter value and the shared update counter value of the node when the cache entry is last used. Note that the shared update counter cache is used only for the nodes of the base search tree.
  • FIG. 2 is a block configuration diagram illustrating an example of a detailed block configuration of the search tree control unit 10 of the data processing device 1 illustrated in FIG. 1.
  • the search tree control unit 10 includes a search tree initialization unit 11, a rule update unit 12, a search tree search unit 13, and an update counter acquisition unit 14.
  • the search tree initialization unit 11 initializes the search trees and the information related to the rules, stored in the search tree storage unit 20 and the rule storage unit 30 respectively, based on the rule set used in the initial state (the initial rule set).
  • the initial rule set may be stored in advance in a storage device (not shown in FIG. 2) included in the data processing device 1. Further, the initial rule set may be given from another control device (not shown in FIG. 2).
  • the rule update unit 12 is a part that updates, for example in response to a request from the data processing device control unit 80 illustrated in FIG. 1, the information related to the search trees and the rules (a kind of information entry) stored in the search tree storage unit 20 and the rule storage unit 30, respectively.
  • the rule update unit 12 functions as an update unit, for example.
  • the rule update unit 12 receives the operation type (either addition, change, or deletion) and the rule included in the request from the data processing device control unit 80 as parameters.
  • when the operation type included in the request from the data processing device control unit 80 is rule update (rule change), a change of the priority and the action of the corresponding rule can be requested, but the match condition cannot be changed.
  • the search tree search unit 13 searches for a rule that matches the specified key, for example, in response to a request from the packet processing unit 60 shown in FIG.
  • the search tree search unit 13 receives a key (search key) to be used for the search from the packet processing unit 60 as a parameter related to the search.
  • the search tree search unit 13 returns to the request source the identifier of the highest-priority rule that matches the search key (when such a rule is detected) and a list (for each search tree) of the identifiers of the nodes (intermediate nodes or leaf nodes) finally reached in the search of each search tree.
  • the update counter acquisition unit 14 acquires the latest value of the update counter of the designated node and the latest value of the shared update counter, for example, in response to a request from the packet processing unit 60 illustrated in FIG.
  • FIG. 3 is a block configuration diagram showing an example of a detailed block configuration of the cache table control unit 40 of the data processing apparatus 1 shown in FIG.
  • the cache table control unit 40 includes a cache entry creation unit 41, a cache entry search unit 42, and a cache entry reevaluation unit 43.
  • the cache entry creation unit 41 creates a cache entry, for example in response to a request from the packet processing unit 60 of the data processing device 1 shown in FIG. 1. In addition, when there is no empty entry in the bucket corresponding to the designated key, the cache entry creation unit 41 deletes one cache entry in the bucket and then creates the new cache entry.
  • the cache entry search unit 42 searches for a cache entry in response to a request from the packet processing unit 60 of the data processing apparatus 1 shown in FIG. 1, for example.
  • the cache entry search unit 42 receives a key as a parameter, searches for a cache entry corresponding to the received key, and returns the searched cache entry to the request source.
  • the cache entry re-evaluation unit 43 re-evaluates the cache entry in response to a request from the packet processing unit 60 of the data processing device 1 shown in FIG. 1, for example.
  • the cache entry reevaluation unit 43 receives the identifier of the cache entry.
  • the re-evaluation of a cache entry checks whether or not the search tree search result stored in the specified cache entry (the rule that matches the flow represented by the cache entry) is still valid in the state of the latest rule set.
  • in order to confirm whether a certain cache entry is valid in the state of the latest rule set, the cache entry re-evaluation unit 43 uses the node state list stored in the cache entry together with the latest values of the update counter and the shared update counter of each node associated with that list.
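The counter comparison at the heart of this re-evaluation can be sketched as follows. This is a minimal sketch under assumed data shapes (the dictionary keys and the `is_base_tree` marker are inventions of the example, not structures defined by the text); it only shows the validity test, not the re-search performed when the test fails.

```python
# Sketch of the cache-entry validity check: the entry is still usable only
# if every node state's cached update counter equals the node's latest
# counter, and, for nodes of a base search tree, the cached shared update
# counter equals the latest shared update counter as well.
def cache_entry_valid(node_states, latest):
    """node_states: list of per-search-tree states cached at last use.
    latest: {(tree_id, node_id): (update_counter, shared_update_counter)}."""
    for st in node_states:
        upd, shared = latest[(st["tree"], st["node"])]
        if st["update_counter_cache"] != upd:
            return False          # rules near this node changed: re-search
        if st.get("is_base_tree") and st["shared_update_counter_cache"] != shared:
            return False          # a tree in the same group changed
    return True
```

If the check returns False, the cached rule identifier cannot be trusted and the rule must be searched again with the entry's key.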
  • FIG. 7 is a flowchart showing an example of the operation in which the search tree initialization unit 11 in the search tree control unit 10 of the data processing apparatus 1 shown in FIG. 1 initializes the search trees and the rule sets stored in the search tree storage unit 20 and the rule storage unit 30, respectively.
  • the operation of the search tree initialization process illustrated in the flowchart of FIG. 7 may be started, for example, when the data processing apparatus 1 is started, or may be started in response to an initialization request from the data processing apparatus control unit 80.
  • the search tree initialization unit 11 in the search tree control unit 10 is given a rule set (that is, an initial rule set) used in the initial state as a parameter.
  • the search tree initialization unit 11 first divides the initial rule set given as a parameter into sub-rule sets using the characteristics of each rule, and stores them in the rule storage unit 30 (step S101).
  • the division into the sub-rule sets may be performed using a known technique. For example, the division method described in Non-Patent Document 1 may be used.
  • for example, the search tree initialization unit 11 determines, for each field (of the 5 tuple) of each rule, whether the designation of the match condition is large or small.
  • the search tree initialization unit 11 then takes the group of rules having the same large/small determination result for every field as one sub-rule set.
  • for example, the search tree initialization unit 11 configures one sub-rule set from the group of rules in which only the destination IP address and destination port number fields are determined to be large and the remaining fields are determined to be small.
  • however, rule groups having four or more fields with small match conditions are collected together into one sub-rule set.
  • a configuration example of the sub-rule sets is illustrated in FIGS. 21A and 21B. FIG. 21B shows a continuation of FIG. 21A; that is, FIGS. 21A and 21B are to be understood as a series of drawings, and in the following both drawings are collectively referred to as FIG. 21. For convenience of explanation, legends having the same contents are provided at the top of FIGS. 21A and 21B; when viewing both figures together (that is, with the lower end of FIG. 21A connected to the upper end of FIG. 21B), the legend at the top of FIG. 21B may be treated as omitted. FIG. 21 illustrates an example of the sub-rule sets obtained when the search tree initialization unit 11 in the search tree control unit 10 of the data processing apparatus 1 illustrated in FIG. 1 divides the rule set based on the rule characteristics.
  • FIG. 21 shows a configuration example of the sub-rule sets in the case where each rule has five fields (a 5 tuple): the source IP address (IPS), the destination IP address (IPD), the (communication) protocol (P), the source port (TPS), and the destination port (TPD).
  • a field with a small match condition designation in each field is indicated by a hatched rectangle.
  • part (a) of FIG. 21 (FIG. 21A) indicates that one sub-rule set, "subset 5", exists as the rule group in which the designation of the match condition is large for all five fields (the 5 tuple). Further, part (b) of FIG. 21 (FIG. 21A) shows the case where five sub-rule sets, "subset 1-1" to "subset 1-5", exist as the rule groups having one field with a small match condition designation. For example, "subset 1-1" indicates that the source IP address field is the small field, as indicated by hatching.
  • part (c) of FIG. 21 shows the case where ten sub-rule sets, "subset 2-1" to "subset 2-10", exist as the rule groups having two fields with a small match condition designation. For example, in the case of "subset 2-1", the two fields of the source IP address and the destination IP address are the small fields.
  • part (d) of FIG. 21 shows the case where ten sub-rule sets, "subset 3-1" to "subset 3-10", exist as the rule groups having three fields with a small match condition designation. For example, in the case of "subset 3-1", as indicated by hatching, the three fields of the source IP address, the destination IP address, and the protocol are the small fields.
  • part (e) of FIG. 21 (FIG. 21B) shows the case where the rule groups having four or more fields with small match condition designations are accommodated together as the single sub-rule set "subset 4". For example, the rules in which the four fields of the source IP address, the destination IP address, the protocol, and the source port are the small fields, and even the rules in which all five fields of the source IP address, the destination IP address, the protocol, the source port, and the destination port are the small fields, are accommodated together in the one sub-rule set "subset 4".
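The division into sub-rule sets described above can be sketched compactly. This is an illustrative sketch only; the boolean-tuple pattern encoding and the function names are assumptions of the example, not the patent's implementation.

```python
# Sketch of step S101: reduce each rule's match condition to a 5-element
# small/large pattern, give each distinct pattern its own sub-rule set, and
# collapse every pattern with four or more "small" fields into one shared
# sub-rule set (corresponding to "subset 4" in FIG. 21).
def subset_key(pattern):
    """pattern: tuple of 5 booleans, True where the field is 'small'."""
    if sum(pattern) >= 4:
        return "4plus"           # the single shared sub-rule set
    return pattern               # one sub-rule set per distinct pattern

def split_rules(rules_with_patterns):
    """rules_with_patterns: iterable of (rule, pattern) pairs."""
    subsets = {}
    for rule, pattern in rules_with_patterns:
        subsets.setdefault(subset_key(pattern), []).append(rule)
    return subsets
```

Each resulting sub-rule set would then get its own search tree in step S102.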
  • the search tree initialization unit 11 next constructs a search tree for each sub-rule set (step S102). Details of the operation will be described later. Thereafter, the search tree initialization unit 11 determines a search tree group and a base search tree for each search tree group (step S103).
  • the search tree initialization unit 11 groups search trees according to a grouping criterion determined in advance using the characteristics of the rules included in each search tree.
  • the grouping criteria may be, for example, criteria based on the granularity of the rule match conditions (a kind of data identification information), as shown below.
  • Grouping criterion A: a search tree that accommodates the rule group in which all five fields of the match condition are specified as large (the search tree for "subset 5" in part (a) of FIG. 21) constitutes an independent search tree group.
  • Grouping criterion B: a search tree that accommodates the rule groups having four or more fields with a small match condition specification (the search tree for "subset 4" in part (e) of FIG. 21) may also constitute an independent search tree group.
  • Grouping criterion C: search trees each accommodating a rule group with a small number (for example, one) of fields with a small match condition specification (for example, the search tree for "subset 1-1" and the search tree for "subset 1-5" in part (b) of FIG. 21) are included in different search tree groups.
  • Grouping criterion D: for example, a field with a small match condition specification is represented as '0' and a large field as '1', and a 5-bit binary sequence is created for each search tree from the large/small pattern of the match condition specification. Then, using the Hamming distance between the 5-bit binary sequences created for the search trees, similarity determination for two search trees is performed, and similar search trees are placed in the same search tree group.
  • the binary sequence for “subset 2-5” in part (c) of FIG. 21 is “10011”, and the binary sequence for “subset 3-7” of FIG. 21 (d) is “10001”.
  • the Hamming distance between them is “1”.
  • on the other hand, the binary sequence for "subset 2-7" in part (c) of FIG. 21 is '10110', and its Hamming distance from the binary sequence for "subset 2-5" is '2'. Therefore, as a search tree accommodated in the same search tree group as the "subset 2-5" search tree, the "subset 3-7" search tree, which has the smaller Hamming distance, is more desirable than the "subset 2-7" search tree.
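The similarity test of grouping criterion D can be written directly from the description above; the bit-string encoding is the one given in the text ('0' for a small field, '1' for a large field), while the function name is the example's own.

```python
# Hamming distance between two equal-length bit strings: the number of
# positions at which they differ. Smaller distance = more similar patterns.
def hamming(a, b):
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))
```

Reproducing the text's example: 'subset 2-5' ('10011') is at distance 1 from 'subset 3-7' ('10001') but at distance 2 from 'subset 2-7' ('10110'), so 'subset 3-7' is the better companion in the same group.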
  • Grouping criterion E: the number of search trees included in each search tree group is made as equal as possible (except for the groups based on grouping criterion A and grouping criterion B).
  • FIG. 22 shows an example of a search tree group indicating the result of grouping by the search tree initialization unit 11 in the sub-rule set configuration example of FIG.
  • FIG. 22 is a table showing an example of a search tree group created by the search tree initialization unit 11 in the search tree control unit 10 of the data processing apparatus 1 shown in FIG.
  • in the sub-rule set configuration example of FIG. 21, the sub-rule sets are grouped into seven search tree groups, search tree group 1 to search tree group 7, as shown in the search tree group column 111 of FIG. 22.
  • the search tree group 1 includes search trees grouped by applying the grouping criteria C, D, and E to the sub-rule sets illustrated in FIG. 21, as shown in the use grouping criteria column 113 of FIG. 22. That is, as shown in the belonging search tree column 112 of FIG. 22, five search trees, those for "subset 1-1", "subset 2-1", "subset 2-2", "subset 3-1", and "subset 3-3", belong to the search tree group 1. Similarly, the search tree group 2 includes search trees grouped by applying the grouping criteria C, D, and E; that is, as shown in the belonging search tree column 112 of FIG. 22, five search trees, those for "subset 1-2", "subset 2-5", "subset 2-6", "subset 3-7", and "subset 3-9", belong to the search tree group 2.
  • since the grouping criterion A is applied to the search tree group 6, only the one search tree for "subset 5" belongs to it.
  • likewise, since the grouping criterion B is applied to the search tree group 7, only the one search tree for "subset 4" belongs to it.
  • the search tree initialization unit 11 selects, from each determined search tree group, for example the search tree having the smallest number of fields with a small match condition specification, as the base search tree for that search tree group. In the example of FIG. 22, the search tree initialization unit 11 selects, for example, the search tree for "subset 1-1" (the search tree whose number of small-match-condition fields, '1', is the smallest) as the base search tree for the search tree group 1.
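Using the bit-string encoding described above for grouping criterion D ('0' for a small field), the base search tree choice amounts to picking the pattern with the fewest '0' bits. The sketch below illustrates this under that assumption; tie-breaking is left to `min`'s first-wins behavior and is not specified by the text.

```python
# Sketch of base search tree selection within one search tree group:
# choose the tree whose pattern has the fewest 'small' fields ('0' bits).
def pick_base_tree(group):
    """group: {tree_name: 5-bit pattern string, '0' = small field}."""
    return min(group, key=lambda t: group[t].count("0"))
```

For search tree group 1 of FIG. 22, "subset 1-1" (one small field) would be chosen over "subset 2-1" (two) and "subset 3-1" (three).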
  • FIG. 8 is a flowchart showing an example of the operation when the search tree initialization unit 11 in the search tree control unit 10 of the data processing apparatus 1 shown in FIG. 1 performs the search tree construction process, that is, an example of the operation when the search tree initialization unit 11 constructs a search tree.
  • the search tree initialization unit 11 first assigns the root node of the search tree to be constructed (step S111). Specifically, the search tree initialization unit 11 assigns a data area for storing information on the root node to the search tree storage unit 20 and sets the following information for the root node.
  • the search tree initialization unit 11 executes a search tree construction sub-process A described later for the assigned root node (step S112). At that time, the search tree initialization unit 11 uses the root node and all the rules included in the sub-rule set to be processed as parameters.
  • FIG. 9 is a flowchart showing an example of the detailed operation of the search tree construction sub-process A executed in step S112 of the flowchart of FIG. 8.
  • the search tree construction sub-process A receives, as parameters, a node to be processed (for example, in node identifier format; hereinafter referred to as the "processing target node") and the set of rules included in that node (for example, in rule identifier list format; hereinafter referred to as the "processing target rule set").
  • the search tree initialization unit 11 first checks whether or not the number of rules included in the processing target rule set is equal to or less than a predetermined number (Binth) (step S121). If the number of rules is not less than or equal to a certain number (Binth) (NO in step S121), the search tree initialization unit 11 sets the dimension and the number of cuts to cut the subspace represented by the processing target node. Determine (step S122).
  • the dimension to be cut and the number of cuts may be determined using a known technique; for example, the division method described in Non-Patent Document 1 may be used.
  • the matching condition of each rule of the processing target rule set is limited to a common part with the partial space of the processing target node. For example, when the protocol number of a certain rule is 10 to 20 and the protocol number of the partial space of the processing target node is 15 to 25, the protocol number of the matching condition of the rule is determined to be 15 to 20 .
  • next, the division method calculates the number of unique ranges for each dimension (field). For example, if there are three rules and the protocol numbers of the match conditions of the rules are 15 to 20, 16 to 20, and 17 to 17, respectively, the number of unique ranges is 3. If instead the protocol numbers of the match conditions are 15 to 20, 15 to 20, and 17 to 17, respectively, the number of unique ranges is 2 (the first and second ranges are identical). Then, the division method selects, as the dimension to be cut, a dimension whose number of unique ranges is larger than the average of the numbers of unique ranges over all dimensions.
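The unique-range count and the above-average selection rule can be sketched as follows. This is an illustrative sketch; the function names and the dict-of-ranges input shape are assumptions of the example.

```python
# Sketch of the cut-dimension choice: count the unique match-condition
# ranges per dimension, then keep the dimensions whose count exceeds the
# average over all dimensions.
def unique_ranges(ranges):
    """ranges: list of (lo, hi) tuples, one per rule."""
    return len(set(ranges))

def pick_cut_dimensions(per_dim_ranges):
    """per_dim_ranges: {dimension_name: [(lo, hi), ...]}."""
    counts = {d: unique_ranges(r) for d, r in per_dim_ranges.items()}
    avg = sum(counts.values()) / len(counts)
    return [d for d, c in counts.items() if c > avg]
```

With the text's example, a protocol dimension with ranges 15-20, 16-20, 17-17 counts 3 unique ranges, while 15-20, 15-20, 17-17 counts only 2.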
  • further, the division method determines the upper limit of the number of cuts from spfac (Special Factor), a parameter for adjusting the memory usage, and from the size (number of rules) of the processing target rule set.
  • specifically, the upper limit of the number of cuts is the value obtained by multiplying the square root of the number of rules by the parameter spfac. The division method then obtains the average value and the maximum value of the number of rules included in each child node when the number of cuts is 2, and, while increasing the number of cuts, obtains the average value and the maximum value of the number of rules included in each child node at each step. When the number of cuts reaches the upper limit, or when increasing the number of cuts no longer produces a large change in the average value and the maximum value of the number of rules included in each child node, the division method determines the number of cuts at that point as the number of cuts of the processing target node.
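The cut-number search just described can be sketched as a simple loop. This is a hedged illustration: the text does not specify how the cut count is increased (the sketch doubles it) nor how "no large change" is measured (the sketch uses exact equality within a tolerance), and `rules_per_child` is a stand-in for actually cutting the node's subspace and counting the rules per child.

```python
import math

# Sketch of the cut-number selection: upper bound = sqrt(#rules) * spfac;
# grow the cut count until it hits the bound or the (average, maximum)
# rules-per-child statistics stop changing.
def choose_cut_number(num_rules, spfac, rules_per_child, eps=1e-9):
    """rules_per_child(n) -> (average, maximum) rules per child for n cuts."""
    limit = int(math.sqrt(num_rules) * spfac)
    best = 2
    prev = rules_per_child(best)
    n = best
    while n * 2 <= limit:
        n *= 2                           # assumed growth step (unspecified)
        cur = rules_per_child(n)
        if abs(cur[0] - prev[0]) < eps and cur[1] == prev[1]:
            break                        # no further change: stop growing
        best, prev = n, cur
    return best
```

For instance, with 100 rules and spfac = 2 the bound is 20, and the loop stops as soon as doubling the cuts no longer reduces the per-child statistics.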
  • next, the search tree initialization unit 11 assigns a child node to each partial space obtained when the partial space represented by the processing target node is cut based on the cut method (the dimension to be cut and the number of cuts) determined in step S122 (step S123).
  • the search tree initialization unit 11 assigns a data area for storing information of each child node to the search tree storage unit 20 and sets the following information for each child node.
  • when the number of rules included in a partial space obtained as a result of the cut is '0', a child node representing that partial space need not be assigned.
  • Leaf node flag: off
  • Parent node identifier: the identifier of the processing target node
  • Subspace information: the subspace represented by the child node
  • Update counter and shared update counter: 0
  • the search tree initialization unit 11 executes the main search tree construction sub-process A from steps S121 to S123 for each child node assigned in step S123 (step S124). At that time, the search tree initialization unit 11 uses a child node to be processed and a set of rules included in the child node as parameters.
  • step S121 if the number of rules is equal to or less than a certain number (Binth) (YES in step S121), the search tree initialization unit 11 sets up the processing target node as a leaf node (step S125). Specifically, the search tree initialization unit 11 performs the following setting change / addition for the processing target node.
  • Leaf node flag: on
  • Rule identifier list: a list of the identifiers of the rules included in the processing target node
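The recursion of the search tree construction sub-process A (steps S121 to S125) can be condensed into a short sketch. This is illustrative only: the dict-based node representation is an assumption, and `cut(node, rules)`, which must return the child nodes and the rules falling into each, abstracts steps S122 and S123.

```python
# Sketch of sub-process A: a node holding at most Binth rules becomes a
# leaf (step S125); otherwise its subspace is cut and the sub-process
# recurses into each child (steps S122-S124).
BINTH = 4  # assumed threshold value

def build(node, rules, cut, binth=BINTH):
    """cut(node, rules) -> iterable of (child_node, child_rules) pairs."""
    if len(rules) <= binth:
        node["leaf"] = True
        node["rule_ids"] = [r["id"] for r in rules]
        return node
    node["leaf"] = False
    node["children"] = [build(child, child_rules, cut, binth)
                        for child, child_rules in cut(node, rules)]
    return node
```

The recursion terminates because each cut distributes the rules over smaller subspaces until every node's rule count drops to Binth or below.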
  • the rule update unit 12 first selects a processing target sub-rule set that includes a processing target rule (step S131).
  • the selection of the sub-rule set is performed by the same method as that used in step S101 in the flowchart of FIG. 7 based on the characteristics of the processing target rule.
  • next, the rule update unit 12 confirms whether or not the operation requested by the parameter is a rule deletion (step S132).
  • when the operation is a rule deletion (YES in step S132), the rule update unit 12 sets the deletion flag of the processing target rule stored in the rule storage unit 30 to on (step S134), and the process proceeds to step S135.
  • otherwise (NO in step S132), the rule update unit 12 updates the processing target sub-rule set in accordance with the request given by the parameter (step S133), and the process proceeds to step S135.
  • in the case of a rule addition, the rule update unit 12 causes the rule storage unit 30 to newly store the processing target rule as an element of the processing target sub-rule set.
  • in the case of a rule change, the rule update unit 12 changes the information (priority and action) of the processing target rule stored in the rule storage unit 30 as one element of the processing target sub-rule set.
  • the rule update unit 12 confirms whether or not the current operation requested as a parameter is a rule addition (step S135).
  • when the operation is a rule addition (YES in step S135), the rule update unit 12 executes the rule update sub-process A1, described later, on the root node of the search tree for the processing target sub-rule set (that is, the processing target search tree) (step S136), and then the process proceeds to step S138.
  • otherwise (NO in step S135), the rule update unit 12 executes the rule update sub-process B, described later, on the root node of the processing target search tree (step S137), and then the process proceeds to step S138.
  • the rule update unit 12 selects a base search tree for the processing target search tree (step S138). Specifically, the rule update unit 12 refers to the search tree group list stored in the search tree storage unit 20 and acquires the identifier of the base search tree of the search tree group to which the processing target search tree belongs. Next, the rule update unit 12 executes a rule update sub-process C described later on the root node of the base search tree selected in step S138 (step S139).
  • the update counter of each node that includes the processing target rule in the processing target search tree is updated by the rule update sub-process A1 in step S136 or by the rule update sub-process B in step S137. Further, by the rule update sub-process C in step S139, the shared update counter of each node that includes the processing target rule in the base search tree of the search tree group to which the processing target search tree belongs is updated.
  • FIG. 11 is a flowchart showing an example of the detailed operation of the rule update sub-process A1 executed in step S136 of the rule update process described above.
  • the rule update sub-process A1 receives, as parameters, a node to be processed (the "processing target node") and a rule to be added to that node (the "processing target rule").
  • the rule update unit 12 first checks whether or not the processing target node is a leaf node (step S141); this is confirmed by checking whether or not the leaf node flag of the processing target node is on. When the processing target node is not a leaf node (NO in step S141), the rule update unit 12 selects the group of child nodes that include the processing target rule (step S142). Whether or not a certain child node includes the processing target rule is determined by whether or not there is an overlap (a common part) between the partial space represented by the child node and the partial space represented by the match condition of the processing target rule.
• the rule update unit 12 executes a rule update sub-process A2, described later, for each child node (including unassigned child nodes) selected in step S142 (step S143). At that time, the rule update unit 12 passes as parameters the selected child node (as a new processing target node), the current processing target node (as the parent node of the new processing target node), and the processing target rule.
• the rule update unit 12 checks whether or not a child node was newly assigned in any of the one or more executions of rule update sub-process A2 in step S143 (step S144). If no child node was newly assigned (NO in step S144), the rule update unit 12 ends the rule update sub-process A1. On the other hand, when there is a newly assigned child node (YES in step S144), the rule update unit 12 updates the update counter of the processing target node (step S145). Specifically, the rule update unit 12 changes the value of the update counter of the processing target node to a value obtained by adding "1" to the current value.
• if, in step S141, the processing target node is a leaf node (YES in step S141), the rule update unit 12 confirms whether the number of rules corresponding to the processing target node (leaf node) after the rule is added is equal to or less than a predetermined fixed number (Binth) (step S146). When the number of rules is equal to or less than the predetermined number (Binth) (YES in step S146), the rule update unit 12 adds the identifier of the processing target rule to the rule identifier list of the processing target node (leaf node) (step S147), and the process proceeds to step S145 described above.
• hereinafter, this predetermined fixed number is referred to as "Binth".
• when, in step S146, the number of rules is not equal to or less than the predetermined number (Binth) (NO in step S146), the rule update unit 12 changes the processing target node (leaf node) into an intermediate node and creates new child nodes (step S148), and then the process proceeds to step S145 described above.
• in step S148, specifically, the rule update unit 12 changes the leaf node flag of the processing target node to OFF.
• then, using as parameters the identifier of the processing target node and the rule set obtained by adding the identifier of the processing target rule to the rule identifier list of the processing target node, the rule update unit 12 executes the search tree construction sub-process A described above.
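The flow of steps S141 through S148 can be sketched as follows. The node layout, the Binth value, and the placeholder leaf split are illustrative assumptions; a real implementation would re-run the search tree construction sub-process A at step S148 instead of the placeholder shown here.

```python
from dataclasses import dataclass, field

BINTH = 4  # the predetermined fixed number of rules a leaf may hold (illustrative value)

@dataclass
class Node:
    """Minimal stand-in for a search tree node (names are illustrative)."""
    leaf: bool = True
    update_counter: int = 0
    rule_ids: list = field(default_factory=list)
    children: list = field(default_factory=list)
    subspace: tuple = ((0, 0xFFFF),)

def overlaps(a, b):
    return all(l1 <= h2 and l2 <= h1 for (l1, h1), (l2, h2) in zip(a, b))

def add_rule(node, rule_id, rule_space):
    """Sketch of rule update sub-process A1 (steps S141-S148), assuming every
    child node already has a data area (sub-process A2 is folded in)."""
    if not node.leaf:                                    # S141: intermediate node
        touched = False
        for child in node.children:                      # S142/S143
            if overlaps(child.subspace, rule_space):
                add_rule(child, rule_id, rule_space)
                touched = True
        if touched:
            node.update_counter += 1                     # S145 (S144 check simplified)
    elif len(node.rule_ids) + 1 <= BINTH:                # S146: room left in the leaf
        node.rule_ids.append(rule_id)                    # S147
        node.update_counter += 1                         # S145
    else:                                                # S148: leaf -> intermediate node
        node.leaf = False
        node.children = [Node(subspace=node.subspace)]   # placeholder rebuild; a real
        node.children[0].rule_ids = node.rule_ids + [rule_id]  # implementation re-runs
        node.rule_ids = []                               # construction sub-process A
        node.update_counter += 1                         # S145
```

Note how the update counter of a node is bumped on every path that actually changes the subtree, which is what later lets a cache entry detect staleness.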
  • FIG. 12 is a flowchart showing an example of detailed operation of the rule update sub-process A2 executed in step S143 of the flowchart of FIG.
• the rule update unit 12 receives the following as parameters: a node to be processed (processing target node), the identifier of the parent node of the processing target node, the partial space represented by the processing target node, and a rule to be added to the node (processing target rule).
• the rule update unit 12 first checks whether or not a data area for the processing target node has been allocated (step S151). If the data area has been allocated (YES in step S151), the rule update unit 12 executes the rule update sub-process A1 illustrated in FIG. 11 using the processing target node and the processing target rule as parameters (step S152). On the other hand, if the data area has not been allocated (NO in step S151), the rule update unit 12 allocates a data area for the processing target node and sets the node up as a leaf node (step S153). Specifically, the rule update unit 12 allocates, in the search tree storage unit 20, a data area for storing information on the processing target node, and sets the following information for the node.
• Leaf node flag: ON
• Parent node identifier: the parent node identifier given as a parameter
• Subspace: the subspace given as a parameter
• Update counter and shared update counter: 0
• Rule identifier list: the identifier of the processing target rule.
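The leaf node set-up of step S153 can be sketched as follows; the field names are illustrative stand-ins for the information listed above and are not taken from this disclosure.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LeafNode:
    """Illustrative stand-in for the per-node data area of step S153."""
    leaf_flag: bool = True                        # leaf node flag: ON
    parent: Optional[int] = None                  # parent node identifier (parameter)
    subspace: tuple = ()                          # subspace given as a parameter
    update_counter: int = 0                       # update counter: 0
    shared_update_counter: int = 0                # shared update counter: 0
    rule_ids: list = field(default_factory=list)  # rule identifier list

def setup_leaf(parent_id, subspace, rule_id):
    """Allocate and initialize a new leaf node as in rule update sub-process A2."""
    return LeafNode(parent=parent_id, subspace=subspace, rule_ids=[rule_id])
```

Both counters start at 0, so a cache entry created right after this set-up will see no counter difference until the node is actually updated.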
  • FIG. 13 is a flowchart showing an example of detailed operation of the rule update sub-process B executed in step S137 of the flowchart of FIG.
• the rule update unit 12 receives the following as parameters: a node to be processed (processing target node) and a rule to be deleted or changed (processing target rule).
• the rule update unit 12 first checks whether or not the processing target node is a leaf node (step S161). Whether or not the node is a leaf node is checked based on whether or not the leaf node flag of the processing target node is on. When the processing target node is not a leaf node (NO in step S161), the rule update unit 12 selects the group of child nodes that include the processing target rule (step S162). Whether a certain child node includes the processing target rule is determined by whether the partial space represented by the child node and the partial space represented by the matching condition of the processing target rule overlap (that is, whether they have a common part).
  • the rule update unit 12 executes the rule update sub-process B for each of the child nodes (only assigned child nodes) selected in step S162 (step S163).
• when, in step S161, the processing target node is a leaf node (YES in step S161), the rule update unit 12 updates the rule identifier list of the processing target node (leaf node) (step S164). Specifically, if the requested current operation is rule deletion, the rule update unit 12 deletes the identifier of the processing target rule from the rule identifier list; if the requested current operation is rule update, this step can be omitted. Thereafter, the rule update unit 12 updates the update counter of the processing target node (leaf node) (step S165). Specifically, the rule update unit 12 changes the value of the update counter of the processing target node (leaf node) to a value obtained by adding "1" to the current value.
  • FIG. 14 is a flowchart showing an example of detailed operation of the rule update sub-process C executed in step S139 of the flowchart of FIG.
• the rule update unit 12 receives the following as parameters: a node to be processed (processing target node) and a rule to be added to the node (processing target rule).
• the rule update unit 12 first checks whether or not the processing target node is a leaf node (step S171). Whether or not the node is a leaf node is checked based on whether or not the leaf node flag of the processing target node is on. If the processing target node is not a leaf node (NO in step S171), the rule update unit 12 selects the group of child nodes that include the processing target rule (step S172). Whether a certain child node includes the processing target rule is determined by whether the partial space represented by the child node and the partial space represented by the matching condition of the processing target rule overlap (that is, whether they have a common part).
  • the rule update unit 12 executes the rule update sub-process C for each of the child nodes (only assigned child nodes) selected in step S172 (step S173).
• the rule update unit 12 confirms whether or not the following conditions are satisfied (step S174): among the child nodes selected in step S172, there is a child node to which no data area is allocated (that is, a child node whose number of related rules is 0); the requested current operation is a rule addition; and the partial space represented by the unallocated child node overlaps the partial space represented by the matching condition of the processing target rule.
• when there is a child node to which no data area is allocated, the requested current operation is a rule addition, and the partial space represented by the unallocated child node overlaps the partial space represented by the matching condition of the processing target rule (YES in step S174), the rule update unit 12 updates the shared update counter of the processing target node (step S175) and ends this rule update sub-process C. Specifically, the rule update unit 12 changes the value of the shared update counter of the processing target node to a value obtained by adding "1" to the current value.
• if, in step S174, there is no child node to which a data area is unassigned, or if the requested current operation is not a rule addition (NO in step S174), the rule update unit 12 ends the rule update sub-process C.
• the rule update unit 12 may also end the rule update sub-process C when there is no overlapping portion between the partial space represented by the unallocated child node and the partial space represented by the matching condition of the processing target rule.
• if the processing target node is a leaf node in step S171 (YES in step S171), the rule update unit 12 updates the shared update counter of the processing target node (leaf node) (step S176) and ends this rule update sub-process C. Specifically, the rule update unit 12 changes the value of the shared update counter of the processing target node (leaf node) to a value obtained by adding "1" to the current value.
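Steps S171 through S176 can be sketched as follows. Representing children as (subspace, child) pairs, with None standing for an unallocated data area, is an assumption made for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Minimal stand-in for a base search tree node; children are held as
    (subspace, node-or-None) pairs, None meaning no data area is allocated."""
    leaf: bool = True
    shared_update_counter: int = 0
    children: list = field(default_factory=list)

def overlaps(a, b):
    return all(l1 <= h2 and l2 <= h1 for (l1, h1), (l2, h2) in zip(a, b))

def update_shared_counter(node, rule_space, op_is_add):
    """Sketch of rule update sub-process C (steps S171-S176)."""
    if node.leaf:                                          # S171 -> S176
        node.shared_update_counter += 1
        return
    for space, child in node.children:                     # S172/S173
        if child is not None and overlaps(space, rule_space):
            update_shared_counter(child, rule_space, op_is_add)
    # S174/S175: a rule added into a subspace whose child has no data area
    # yet can still affect future lookups, so bump this node's counter too
    if op_is_add and any(child is None and overlaps(space, rule_space)
                         for space, child in node.children):
        node.shared_update_counter += 1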
• the rule update sub-process C described above is executed in step S139 of the flowchart shown in FIG. 10.
• finally, the rule update unit 12 causes the rule storage unit 30 to delete, from among the stored rules, any rule whose deletion flag is on and whose reference counter is 0.
• FIG. 15 is a flowchart showing an example of an operation when searching for a rule that matches a given key.
• the operation of the rule search process illustrated in the flowchart of FIG. 15 is started by, for example, a request from the packet processing unit 60, and the search tree search unit 13 receives, as a parameter related to the rule search process, a key to be used for the search (search key).
• as a result, the search tree search unit 13 returns the identifier of the highest-priority rule matching the search key given as a parameter (when such a rule is found) to the request source (for example, the packet processing unit 60).
  • the search tree search unit 13 also returns a list of identifiers (for each search tree) of nodes (intermediate nodes or leaf nodes) finally reached in the search in the search tree to the request source.
  • the search tree search unit 13 first executes a rule search sub-process A described later for each search tree (step S181). Next, the search tree search unit 13 selects the highest priority rule from the set of rules found in each search tree obtained in step S181. The search tree search unit 13 returns the selected rule to the request source as a rule that matches the search key received as a parameter (step S182). Further, the search tree search unit 13 also returns a list of identifiers (for each search tree) of nodes (intermediate nodes or leaf nodes) finally reached by the search obtained for each search tree in step S181 to the request source.
  • FIG. 16 is a flowchart showing an example of detailed operation of the rule search sub-process A executed in step S181 of the flowchart of FIG.
• the search tree search unit 13 receives the following as parameters: a search tree to be processed (for example, in the form of a search tree identifier; hereinafter, "processing target search tree") and a key to be used for the search (search key).
  • the search tree search unit 13 returns the searched rule and the identifier of the node finally reached by the search to the request source as a result of the search.
  • a node that is a processing target in the rule search sub-process A is referred to as a “processing target node”.
  • the search tree search unit 13 first selects the root node of the processing target search tree as the processing target node (step S191). Next, the search tree search unit 13 checks whether or not the processing target node is a leaf node (step S192). Whether or not the node is a leaf node is checked based on whether or not the leaf node flag of the processing target node is on.
• when the processing target node is not a leaf node (NO in step S192), the search tree search unit 13 selects a child node corresponding to the search key received as a parameter (step S193). Next, the search tree search unit 13 checks whether or not a data area has been assigned to the child node selected in step S193 (step S194). If the data area has been allocated (YES in step S194), the search tree search unit 13 selects the child node as the processing target node (step S195), and returns to step S192.
• if the data area has not been allocated in step S194 (NO in step S194), there is no rule corresponding to the search key, so the search tree search unit 13 ends the rule search sub-process A with the current processing target node as the node finally reached by this search.
• when the processing target node is a leaf node in step S192 (YES in step S192), the search tree search unit 13 refers to the rule storage unit 30 based on the rule identifier list of the processing target node (leaf node) and acquires the rules related to the leaf node (step S196). Then, the search tree search unit 13 performs, for example, a linear search within the acquired rule group, selects the highest-priority rule that matches the search key, and outputs it as the rule that matches the search key. Thereafter, the search tree search unit 13 ends the rule search sub-process A. In this case, the node finally reached by this search is the current processing target node (leaf node).
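The descent of steps S191 through S196 can be sketched as follows. The single-bit child selection and the rule representation (a predicate plus a priority, with a smaller number treated as higher priority) are illustrative assumptions, not details fixed by this disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Minimal stand-in for a search tree node."""
    ident: int = 0
    leaf: bool = False
    rule_ids: list = field(default_factory=list)
    children: dict = field(default_factory=dict)  # child index -> Node; absent = no data area

def child_index(key):
    # How a key maps to a child is tree-specific; a single-bit cut is assumed here.
    return key & 1

def search_tree(root, key, rules):
    """Sketch of rule search sub-process A (steps S191-S196).

    `rules` maps rule_id -> (predicate, priority). Returns the best rule_id
    (or None) and the identifier of the node finally reached by the search."""
    node = root                                            # S191
    while not node.leaf:                                   # S192
        child = node.children.get(child_index(key))        # S193/S194
        if child is None:                                  # no data area assigned
            return None, node.ident
        node = child                                       # S195
    matching = [(prio, rid)                                # S196: linear search
                for rid in node.rule_ids
                for pred, prio in (rules[rid],)
                if pred(key)]
    best = min(matching)[1] if matching else None
    return best, node.ident
```

Returning the last node reached (even on a miss) is what allows a cache entry to record, per search tree, the node its flow belongs to.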
  • FIG. 17 is a flowchart illustrating an example of an operation when the packet processing unit 60 of the data processing device 1 illustrated in FIG. 1 receives a packet.
  • the operation of this packet processing illustrated in the flowchart of FIG. 17 is started when the communication IF unit 70 of the data processing apparatus 1 receives a packet (processing target packet) and the packet processing unit 60 is notified of packet reception.
• the packet processing unit 60 first extracts a key (search key) to be used for rule search from the received processing target packet (step S201). Specifically, the packet processing unit 60 extracts the transmission source IP address, transmission destination IP address, protocol number, transmission source port number, and transmission destination port number from the received packet as the search key. Next, the packet processing unit 60 requests the cache entry search unit 42 of the cache table control unit 40 shown in FIG. 3 to search for a cache entry, using the search key extracted in step S201 as a parameter (step S202).
• the packet processing unit 60 confirms, as the result of the request to the cache entry search unit 42, whether a cache entry (processing target cache entry) corresponding to the search key passed as a parameter has been detected (step S203).
• when the processing target cache entry is detected (YES in step S203), the packet processing unit 60 requests the cache entry re-evaluation unit 43 of the cache table control unit 40 shown in FIG. 3 to re-evaluate the processing target cache entry (step S204).
• as a result, the cache entry re-evaluation unit 43 will have updated the processing target cache entry (when a rule that matches the search key is detected) or deleted it (when no rule matches the search key).
• upon receiving the re-evaluation result from the cache entry re-evaluation unit 43, the packet processing unit 60 checks whether or not a rule that matches the search key was detected in step S204 (step S205). When a rule that matches the search key is detected (YES in step S205), the packet processing unit 60 processes the processing target packet according to the action of the matching rule (step S206).
• when no matching rule is detected (NO in step S205), the packet processing unit 60 executes a process for the case where the matching rule is unknown for the processing target packet (step S207). Specifically, for example, the packet processing unit 60 executes any one of, or a combination of, the processes shown below.
• if no processing target cache entry is detected in step S203 (NO in step S203), the packet processing unit 60 requests the search tree search unit 13 of the search tree control unit 10 shown in FIG. 2 to search for a rule that matches the search key (step S208). The packet processing unit 60 confirms whether or not a rule that matches the search key has been found as a result of the request to the search tree search unit 13 (step S209). If no rule that matches the search key is found (NO in step S209), the packet processing unit 60 proceeds to step S207 and executes the process for when the matching rule is unknown.
• if a rule that matches the search key is found (YES in step S209), the packet processing unit 60 requests the cache entry creation unit 41 of the cache table control unit 40 shown in FIG. 3 to create a cache entry for the processing target packet and store it in the cache table storage unit 50 (step S210).
• at that time, the packet processing unit 60 passes to the cache entry creation unit 41, as parameters, the search key extracted in step S201, the identifier of the rule found in step S208, and the list of node identifiers returned by the search tree search unit 13 in step S208.
  • the packet processing unit 60 proceeds to step S206, and processes the processing target packet according to the action indicated by the matching rule searched in step S208.
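The overall flow of steps S201 through S210 can be sketched as follows. The `cache` and `trees` objects are assumed interfaces standing in for the cache table control unit 40 and the search tree search unit 13; none of these method names is a fixed API of this disclosure.

```python
def process_packet(packet, cache, trees, unknown_action):
    """Sketch of the packet processing flow (steps S201-S210)."""
    key = packet["key"]                         # S201: extract the search key
    entry = cache.search(key)                   # S202
    if entry is not None:                       # S203: cache hit
        rule = cache.reevaluate(entry)          # S204: entry updated or deleted
        if rule is not None:                    # S205
            return rule["action"]               # S206
        return unknown_action                   # S207
    rule, last_nodes = trees.search(key)        # S208: full search tree lookup
    if rule is None:                            # S209: no matching rule
        return unknown_action                   # S207 (no cache entry is created)
    cache.create(key, rule, last_nodes)         # S210
    return rule["action"]                       # S206
```

Note that the cache entry is created only on the slow path (after a successful tree search), while a hit always goes through re-evaluation before its rule is trusted.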
  • FIG. 18 is a flowchart showing an example of an operation when the cache entry creation unit 41 in the cache table control unit 40 of the data processing apparatus 1 shown in FIG. 1 creates a cache entry.
  • the operation of this cache entry creation process illustrated in the flowchart of FIG. 18 is started based on a request from the packet processing unit 60, for example.
• the cache entry creation unit 41 receives, as parameters related to the cache entry creation process, a key (search key), a rule identifier, and the identifiers of the nodes finally reached in the search using the key (one for each search tree).
• the cache entry creation unit 41 first calculates a hash value of the key received as a parameter, and selects the hash table bucket corresponding to the key based on the calculation result (step S221). Next, the cache entry creation unit 41 confirms whether or not the bucket selected in step S221 has a free slot (step S222). Whether or not the bucket has a free slot can be confirmed by checking the in-use flag of each cache entry in the bucket.
• if the selected bucket has a free slot (YES in step S222), the process proceeds to step S224. If there is no free slot in the selected bucket (NO in step S222), the cache entry creation unit 41 deletes any one of the cache entries in the bucket (step S223), and the process proceeds to step S224.
• the cache entry creation unit 41 can delete a cache entry by setting the in-use flag of the cache entry to OFF. At that time, the cache entry creation unit 41 decrements the reference counter of the matching rule of the cache entry by 1. The cache entry creation unit 41 may select, for example, the cache entry with the oldest last use time as the deletion target.
• in step S224, the cache entry creation unit 41 selects one of the free cache entries in the bucket and sets it up (step S224). Specifically, the cache entry creation unit 41 sets the following information for the selected cache entry. Further, the cache entry creation unit 41 increments the reference counter of the rule given as a parameter by 1.
• Update counter cache (for each search tree): a value acquired by requesting the update counter acquisition unit 14 shown in FIG. 2 based on the node identifier.
  • Shared update counter cache (for each base search tree): A value acquired by requesting the update counter acquisition unit 14 shown in FIG. 2 based on the node identifier.
  • FIG. 19 is a flowchart showing an example of an operation when the cache entry search unit 42 in the cache table control unit 40 of the data processing apparatus 1 shown in FIG. 1 searches for a cache entry.
  • the operation of the cache entry search process illustrated in the flowchart of FIG. 19 is started based on a request from the packet processing unit 60, for example.
• the cache entry search unit 42 receives a key (search key) as a parameter related to the cache entry search process.
  • the cache entry search unit 42 first calculates the hash value of the key received as a parameter. Based on the calculation result, the cache entry search unit 42 selects a hash table bucket corresponding to the key (step S231). Next, the cache entry search unit 42 selects the first cache entry in the bucket selected in step S231 (step S232).
  • the cache entry search unit 42 determines whether or not the in-use flag of the selected cache entry is ON and whether the key of the cache entry matches the key given as a parameter of the cache entry search process. Confirmation is made (step S233). If the in-use flag of the cache entry is on and the key of the cache entry matches the key given as a parameter of the cache entry search process (YES in step S233), the cache entry search unit 42 The cache entry matching the key is determined to have been successfully searched. In this case, the cache entry search unit 42 ends the cache entry search process, and returns the identifier of the cache entry to the request source as a result of the search.
• when, in step S233, the in-use flag of the cache entry is on but the key of the cache entry does not match the key given as the parameter of the cache entry search process (NO in step S233), the cache entry search unit 42 confirms whether or not the selected cache entry is the last cache entry in the bucket (step S234). If the selected cache entry is the last cache entry in the bucket (YES in step S234), the cache entry search unit 42 ends the cache entry search process and, as the result of the search, notifies the request source that no cache entry matching the key could be found. Even when the in-use flag of the selected cache entry is OFF (NO in step S233), the processing from step S234 onward is executed as described above.
• if the selected cache entry is not the last cache entry in the bucket (NO in step S234), the cache entry search unit 42 selects the next cache entry in the bucket (step S235), and the process returns to step S233.
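The creation (steps S221 through S224) and search (steps S231 through S235) flows above can be sketched with a single hash-bucket table. The bucket size, the use of Python's built-in `hash()`, and the dictionary entry layout are illustrative assumptions; the oldest-last-use-time eviction follows the text.

```python
BUCKET_SIZE = 4  # number of cache entry slots per bucket (illustrative value)

class CacheTable:
    """Sketch of the hash-bucket cache table used by creation and search."""
    def __init__(self, n_buckets=16):
        self.n_buckets = n_buckets
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        # S221/S231: hash the key and select the corresponding bucket
        return self.buckets[hash(key) % self.n_buckets]

    def create(self, key, rule_id, node_ids, now=0.0):
        bucket = self._bucket(key)
        if len(bucket) >= BUCKET_SIZE:             # S222: no free slot
            # S223: evict the entry with the oldest last use time
            bucket.remove(min(bucket, key=lambda e: e["last_used"]))
        bucket.append({"key": key, "rule_id": rule_id,  # S224: set up the entry
                       "node_ids": node_ids, "last_used": now})

    def search(self, key):
        for entry in self._bucket(key):            # S232-S235: scan the bucket
            if entry["key"] == key:                # S233 (in-use flag implicit here)
                return entry
        return None                                # S234: no match in the bucket
```

Deletion by turning an in-use flag OFF is modeled here simply as removal from the bucket list; reference counter maintenance is omitted for brevity.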
  • FIG. 20 is a flowchart showing an example of an operation when the cache entry reevaluation unit 43 in the cache table control unit 40 of the data processing apparatus 1 shown in FIG. 1 reevaluates a cache entry.
  • the operation of the cache entry reevaluation process illustrated in the flowchart of FIG. 20 is started based on a request from the packet processing unit 60, for example.
  • the cache entry re-evaluation unit 43 receives an identifier of a re-evaluation target cache entry (processing target cache entry) as a parameter related to this cache entry re-evaluation process.
• the cache entry re-evaluation unit 43 first refers to the node state list of the processing target cache entry and requests the update counter acquisition unit 14 of the search tree control unit 10 illustrated in FIG. 2 to acquire the latest shared update counter value (for each base search tree) of the node to which the flow represented by the processing target cache entry belongs (step S241).
• next, the cache entry re-evaluation unit 43 confirms whether there is a difference between the latest shared update counter value acquired in step S241 and the shared update counter value of each base search tree related to the processing target cache entry stored in the cache table storage unit 50 (step S242).
• if there is no difference between the latest shared update counter value and the shared update counter value of each base search tree related to the processing target cache entry stored in the cache table storage unit 50 (NO in step S242), the process proceeds to step S247. On the other hand, if there is a difference (YES in step S242), the cache entry re-evaluation unit 43 executes the following processing. That is, the cache entry re-evaluation unit 43 determines that, within the processing target cache entry, the matching rule and the information related to the nodes of the search tree groups whose shared update counter values differ may be invalid.
• then, the cache entry re-evaluation unit 43 refers to the node state list of the processing target cache entry and requests the update counter acquisition unit 14 of the search tree control unit 10 shown in FIG. 2 to acquire the latest update counter value (for each search tree) of the node to which the flow belongs (step S243).
• at that time, the cache entry re-evaluation unit 43 limits the acquisition targets of the update counter values (for each search tree) to the search trees belonging to the search tree groups whose base search trees showed a difference in step S242.
• next, the cache entry re-evaluation unit 43 confirms whether there is a difference between the latest update counter value acquired in step S243 and the update counter value of each search tree related to the processing target cache entry stored in the cache table storage unit 50 (step S244). When there is no difference (NO in step S244), the process proceeds to step S247.
• if there is a difference between the latest update counter value and the update counter value of each search tree related to the processing target cache entry stored in the cache table storage unit 50 (YES in step S244), the cache entry re-evaluation unit 43 executes the following processing. That is, the cache entry re-evaluation unit 43 determines that, within the processing target cache entry, the matching rule and the information related to the nodes of the search trees whose update counter values differ are invalid. Then, the cache entry re-evaluation unit 43 requests the search tree search unit 13 of the search tree control unit 10 shown in FIG. 2 to search again for a rule that matches the key of the processing target cache entry (step S245).
  • the cache entry re-evaluation unit 43 passes to the search tree search unit 13 a list of search tree identifiers determined to have a difference in update counter values in step S244.
• the search tree search unit 13 that has received the list of search tree identifiers may limit the search targets to those search trees when performing the search process.
  • the cache entry re-evaluation unit 43 passes the list of node identifiers included in the node state list of the processing target cache entry to the search tree search unit 13.
  • the search tree search unit 13 that has received the list of node identifiers may start the search tree search process not from the root node but from the node identified by the received node identifier.
• the cache entry re-evaluation unit 43 updates the processing target cache entry based on the rule information found in step S245 (step S246). Specifically, if no rule that matches the key can be found in step S245, the cache entry re-evaluation unit 43 sets the in-use flag of the processing target cache entry to OFF, thereby deleting the processing target cache entry. At that time, the cache entry re-evaluation unit 43 decrements the reference counter of the previous matching rule by 1.
• on the other hand, if a matching rule is found in step S245, the cache entry re-evaluation unit 43 sets the identifier of the found matching rule as the rule identifier of the processing target cache entry. At that time, the cache entry re-evaluation unit 43 decrements the reference counter of the previous matching rule by 1 and increments the reference counter of the new matching rule by 1. In addition, the cache entry re-evaluation unit 43 sets the time of processing as the last use time of the processing target cache entry. Furthermore, the latest shared update counter value and the latest update counter value acquired in steps S241 and S243 are stored in the shared update counter cache and the update counter cache of the processing target cache entry, respectively.
• if there is no difference between the latest shared update counter value and the shared update counter value stored in the processing target cache entry (NO in step S242), or no difference between the latest update counter value and the stored update counter value (NO in step S244), the cache entry re-evaluation unit 43 evaluates the processing target cache entry as valid and sets the last use time of the processing target cache entry to the time when the process is executed (step S247).
• finally, the cache entry re-evaluation unit 43 ends the cache entry re-evaluation process and returns to the request source information indicating whether the processing target cache entry has been updated (when a rule that matches the key has been found) or deleted (when no rule that matches the key has been found).
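The two-level staleness check of steps S241 through S247 can be sketched as follows. The dictionary layout of the cache entry and of the latest counter values (which stand in for the update counter acquisition unit 14) is an illustrative assumption.

```python
def reevaluate_counters(entry, latest_shared, latest_per_tree):
    """Sketch of the staleness check of steps S241-S247.

    `entry` holds the cached counter values per base search tree / per search
    tree; `latest_shared` and `latest_per_tree` stand in for the update
    counter acquisition unit. Returns the list of search trees whose rules
    must be searched again (an empty list means the entry is still valid)."""
    # S241/S242: compare the shared update counter, one per base search tree
    stale_groups = [g for g, cached in entry["shared_cache"].items()
                    if latest_shared[g] != cached]
    if not stale_groups:
        return []                                  # S247: entry evaluated as valid
    # S243/S244: only for stale groups, compare the per-tree update counters
    return [t for g in stale_groups
            for t in entry["group_trees"][g]
            if latest_per_tree[t] != entry["counter_cache"][t]]
```

The point of the two levels is that the common case (no change anywhere) costs one comparison per base search tree rather than one per search tree.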
• the search tree initialization unit 11 of the search tree control unit 10 shown in FIG. 2 divides the rule set into sub-rule sets based on the characteristics of the rules, and builds a search tree for each sub-rule set. Further, the search tree initialization unit 11 groups the search trees based on the characteristics of each sub-rule set, and determines a base search tree for each search tree group. Further, each search tree node stored in the search tree storage unit 20 shown in FIG. 1 holds an update counter and a shared update counter.
• when a rule is updated (added, changed, or deleted), the rule update unit 12 of the search tree control unit 10 shown in FIG. 2 updates the update counter of the node (group) representing a partial space that overlaps the partial space represented by the matching condition of the processing target rule, in the search tree for the sub-rule set including the processing target rule. Furthermore, the search tree control unit 10 also updates the shared update counter of the corresponding node in the base search tree of the search tree group to which that search tree belongs. In addition, each cache entry stored in the cache table storage unit 50 shown in FIG. 1 holds information on the nodes to which the flow represented by the cache entry belongs (for each search tree; this information includes caches of the node update counter and the shared update counter). This information is updated each time the cache entry is handled by the cache entry creation unit 41 and the cache entry re-evaluation unit 43.
• when processing a received packet, if a cache entry for the packet (flow) exists, the packet processing unit 60 uses the cache entry after re-evaluating it.
• in the re-evaluation, the cache entry re-evaluation unit 43 of the cache table control unit 40 shown in FIG. 3 compares the stored value of the update counter cache (a kind of cache of update history information) for the node (group) to which the flow represented by the cache entry belongs with the latest update counter value of that node (a kind of latest update history information). If the two do not match, the cache entry re-evaluation unit 43 determines that a re-search for a matching rule is necessary. In doing so, the cache entry re-evaluation unit 43 first compares the stored value of the shared update counter cache (a kind of cache of shared update history information) with the latest shared update counter value of the node belonging to the base search tree (a kind of latest shared update history information).
  • the cache entry reevaluation unit 43 treats the value of the update counter cache (a type of cache of update history information) as invalid in the search tree group to which the node belongs only when there is a mismatch between the two. In this case, the cache entry reevaluation unit 43 compares the stored update counter cache value with the latest update counter value of the node.
  • the update counter cache a type of cache of update history information
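This two-stage comparison can be sketched as follows. The class and field names (`CacheEntry`, `groups`, `needs_research`) are illustrative assumptions, not names from the embodiment.

```python
# Hedged sketch of the cache-entry re-evaluation.  Per search tree
# group, the entry caches: the base-tree node reached, the nodes the
# flow belongs to, the cached shared counter, and cached node counters.

class Node:
    def __init__(self):
        self.update_counter = 0
        self.shared_update_counter = 0    # maintained only on base trees

class CacheEntry:
    def __init__(self, groups):
        # groups: list of (base_node, nodes, shared_cache, counter_cache)
        self.groups = groups

def needs_research(entry):
    """True if the cached rule match may be stale and must be re-searched."""
    for base_node, nodes, shared_cache, counter_cache in entry.groups:
        # Cheap first check: one shared counter per group, not per tree.
        if base_node.shared_update_counter == shared_cache:
            continue                       # nothing in this group changed
        # Shared counter moved: possibly a false positive, so fall back
        # to the exact per-node update counters.
        for node in nodes:
            if node.update_counter != counter_cache[node]:
                return True                # a node on the flow's path changed
    return False
```

The early `continue` is where the memory-access saving comes from: when the group's shared counter is unchanged, none of the per-tree counters in that group need to be read.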
  • With this configuration, the following effects can be expected even when the search structure used to find information entries for packet processing (a kind of data processing) is composed of a plurality of search trees.
  • When the rule set (a kind of set of information entries) is updated, the search result obtained from a search tree (a first information entry) and the cache entry that caches that search result (a second information entry) are kept synchronized (the cache entry re-evaluation process).
  • By using shared update counters (a kind of shared update history information), the number of update counter checks (a kind of update history information) can be reduced from the number of search trees to the number of base search trees. As a result, the number of memory accesses needed to process a packet (a kind of data to be processed) can be reduced.
  • In addition, the accessed memory address range is narrowed (the discreteness of accesses is reduced), so the memory access latency is more easily concealed by the cache memory. This stabilizes and speeds up the performance of packet processing (a kind of data processing).
  • As described above, the data processing device 1 of the present embodiment can reduce the load required to search for information entries used for data processing (for example, packet processing), even when a plurality of search trees is used for the search.
  • Note that when no rule matches a received packet (NO in step S209, NO in step S205), the data processing device 1 neither creates nor deletes a cache entry corresponding to the packet's flow (step S207).
  • However, the present invention, described using this embodiment as an example, is not limited to this behavior.
  • In addition to the above, the data processing apparatus 1 may create or maintain a cache entry recording that no matching rule exists. This allows the data processing device 1 to reduce the number of search tree searches executed when packets with no matching rule are received.
  • In the above embodiment, the search tree has a tree structure in which every node other than the root node has exactly one parent. However, a node may have multiple parent nodes; in that case, the search structure is not a pure tree but a graph.
  • A node comes to have a plurality of parent nodes when the partial spaces represented by those parent nodes overlap. Such overlap arises when, at the time a node's group of child nodes is created, the partial spaces represented by the child nodes partially or wholly overlap one another.
  • Creating child node groups in this way may be done as part of search processing optimization. In the search process, when the partial spaces of the child node group overlap and the search key is contained in an overlapping partial space, every child node containing that partial space may be searched.
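A search over such an overlapping-children structure might look like the following sketch. The 1-D interval subspaces and all names are illustrative assumptions, not the embodiment's actual data layout.

```python
# Hedged sketch: when child subspaces overlap, a search key falling in
# an overlapping region must descend into every child that contains it,
# so the traversal visits multiple branches instead of exactly one.

class Node:
    def __init__(self, subspace, rules=(), children=()):
        self.subspace = subspace      # (lo, hi) interval, 1-D for brevity
        self.rules = list(rules)      # rules stored at this node
        self.children = list(children)

def search(node, key):
    """Collect rules from every node whose subspace contains the key."""
    matched = list(node.rules)
    for child in node.children:
        lo, hi = child.subspace
        if lo <= key <= hi:           # may hold for several children
            matched.extend(search(child, key))
    return matched
```

A key outside any overlap follows a single path as in an ordinary tree; only keys inside an overlapping region pay the cost of descending into multiple children.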
  • In the above embodiment, the data processing apparatus 1 always updates the shared update counter of the base tree when updating rules. However, the data processing device 1 may refrain from updating the shared update counter of the base tree when a predetermined condition is satisfied.
  • In that case, a shared-update-counter non-update flag is provided in the information associated with each node, and the data processing apparatus 1 turns on that flag in the information of the updated node instead of updating the shared update counter.
  • The flag is also added to the node state held in the cache entry; when a cache entry is created, the data processing apparatus 1 stores the node's flag value in the cache entry.
  • When a cache entry is re-evaluated in step S204, the cache entry re-evaluation unit 43 refers to this flag. If any node has the flag turned on, the cache entry re-evaluation unit 43 omits steps S241 and S242 for the search tree group to which that node belongs and proceeds as if step S242 had yielded YES.
  • Using the shared update counter reduces the number of shared update counter and update counter checks that must be performed every time a packet is received, but it may produce false positives.
  • In a false-positive case, the search tree is not re-searched, but the load of acquiring and comparing the update counters still occurs.
  • As the predetermined condition, for example, any one of, or a combination of, the following conditions (1) and (2) can be used.
  • The number of cache entries using a certain node can be confirmed, for example, by the following method: a reference counter is added to the information associated with the node. The reference counter is incremented by 1 when a cache entry that uses and references the node is created, and decremented by 1 when that cache entry finishes using and referencing the node. The counter value thus gives the number of cache entries currently using the node.
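The reference-counting scheme can be sketched as follows. The names (`Node.ref_count`, `CacheEntry.release`) are illustrative assumptions.

```python
# Hedged sketch of per-node reference counting: the counter tracks how
# many cache entries currently use and reference the node.

class Node:
    def __init__(self):
        self.ref_count = 0    # number of cache entries referencing this node

class CacheEntry:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        for n in self.nodes:
            n.ref_count += 1  # entry creation: pin every referenced node

    def release(self):
        """Called when the entry stops using its nodes (e.g. on deletion)."""
        for n in self.nodes:
            n.ref_count -= 1  # entry deletion: unpin
        self.nodes.clear()
```

Reading `ref_count` at rule-update time then answers, in constant time, how many cache entries would be affected by skipping the shared counter update for that node.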
  • In the above embodiment, the data processing device 1 selects the base tree from among the search trees used for rule search. Alternatively, a dedicated search tree (a shared update counter search tree) may be used as the base tree. The shared update counter search tree contains no rules, and its construction may be independent of the initial rule set of the search tree group; for example, the dimension and the number of cuts at each node used in constructing the shared update counter search tree may be given in advance.
  • In the following description, the data processing apparatus 1 described in the above embodiment is simply referred to as the "data processing device", and each of its components is simply referred to as a "component of the data processing device".
  • The data processing device described in the above embodiment may be configured by one or more dedicated hardware devices.
  • In that case, each component shown in the drawings may be realized by hardware (an integrated circuit or a storage device on which the processing logic is mounted) that is partially or wholly integrated. Alternatively, the components of the data processing device may be realized by, for example, a circuit configuration capable of providing each function. Such a circuit configuration includes, for example, an integrated circuit such as an SoC (System on a Chip) and a chip set realized using such an integrated circuit.
  • The data held by the components of the data processing device may be stored, for example, in a RAM (Random Access Memory) area or a flash memory area integrated into the SoC, or in a storage device (such as a semiconductor storage device) connected to the SoC.
  • As the communication line connecting the components, a well-known communication network (for example, a communication bus) may be adopted, or the components may be connected to one another peer-to-peer.
  • Further, the above-described data processing apparatus may be configured by the general-purpose hardware 2600 exemplified in FIG. 26 and various software programs (computer programs) executed by that hardware.
  • the data processing device may be configured by an arbitrary number of general-purpose hardware devices and software programs. That is, an individual hardware device may be assigned to each component configuring the data processing device, or a plurality of components may be realized using a single hardware device.
  • the arithmetic device 2601 in FIG. 26 is an arithmetic processing device such as a general-purpose CPU (Central Processing Unit) or a microprocessor.
  • the arithmetic device 2601 may read various software programs stored in a nonvolatile storage device 2603, which will be described later, into the storage device 2602, and execute processing according to the software programs.
  • the functions of the components of the data processing device in the above embodiment are realized using a software program executed by the arithmetic device 2601.
  • the storage device 2602 is a memory device such as a RAM or a ROM that can be referred to from the arithmetic device 2601 and stores software programs, various data, and the like. Note that the storage device 2602 may be a volatile memory device or a nonvolatile memory device. The storage device 2602 may temporarily store data held by the components of the data processing device.
  • the storage device 2602 or the arithmetic device 2601 may be provided with a cache memory 2602a that can be accessed at a relatively higher speed than the storage device 2602.
  • the hardware 2600 can constitute a computer having a hierarchical memory configuration including a CPU and a cache memory.
  • the nonvolatile storage device 2603 is a nonvolatile storage device such as a magnetic disk drive or a semiconductor storage device using flash memory.
  • the nonvolatile storage device 2603 can store various software programs, data, and the like.
  • The network interface 2606 is an interface device connected to a communication network; for example, a wired or wireless LAN connection interface device may be employed.
  • the data processing apparatus may acquire a communication packet via the network interface 2606 and various communication networks.
  • the drive device 2604 is, for example, a device that processes reading and writing of data with respect to a recording medium 2605 described later.
  • the recording medium 2605 is an arbitrary recording medium capable of recording data, such as an optical disk, a magneto-optical disk, and a semiconductor flash memory.
  • the input / output interface 2607 is a device that controls input / output with an external device.
  • The data processing apparatus described using the above embodiment as an example, or a component thereof, may be realized by supplying the hardware device illustrated in FIG. 26 with a software program capable of realizing the functions described in the above embodiment. More specifically, for example, the present invention may be realized by the arithmetic device 2601 executing a software program supplied to such a hardware device. In this case, an operating system running on the hardware device, database management software, network software, or middleware such as a virtual environment platform may execute part of each process.
  • In the above embodiment, each unit shown in the drawings can be realized as a software module, that is, a function (processing) unit of a software program executed by the above-described hardware. However, the division into the software modules shown in these drawings is a configuration chosen for convenience of explanation, and various configurations can be assumed for implementation.
  • For example, when each component of the data processing device illustrated in FIGS. 1 to 3 is realized as a software module, these software modules are stored in the nonvolatile storage device 2603, and are read into the storage device 2602 when the arithmetic device 2601 executes each process.
  • these software modules may be configured to transmit various data to each other by an appropriate method such as shared memory or inter-process communication. With such a configuration, these software modules are connected so as to communicate with each other.
  • the software program may be recorded on the recording medium 2605.
  • the software program may be stored in the non-volatile storage device 2603 through the drive device 2604 as appropriate at the shipping stage or operation stage of the components of the data processing apparatus.
  • As a method of supplying the various software programs to the hardware, a method of installing them in the apparatus using an appropriate jig in the manufacturing stage before shipment or in the maintenance stage after shipment may be adopted.
  • As a method of supplying the various software programs, a procedure that is common at present, such as downloading from the outside via a communication line such as the Internet, may also be adopted.
  • In such a case, the present invention can be understood as being constituted by the code constituting the software program, or by a computer-readable recording medium on which that code is recorded.
  • the recording medium is not limited to a medium independent of the hardware device, but includes a recording medium in which a software program transmitted via a LAN or the Internet is downloaded and stored or temporarily stored.
  • Further, the components of the data processing apparatus described above may be configured by a virtualized environment in which the hardware device illustrated in FIG. 26 is virtualized, and by various software programs (computer programs) executed in that virtualized environment. In this case, the components of the hardware device illustrated in FIG. 26 are provided as virtual devices in the virtualized environment. With this configuration as well, the present invention can be realized in the same manner as when the hardware device illustrated in FIG. 26 is configured as a physical device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention relates to a data processing device capable of reducing the load even when there is a plurality of search trees that search for an information entry for data processing. A search tree control unit (10) selects a base search tree for each group into which the search trees have been grouped and, when there is an information entry instruction, updates the update history information of a node whose search space overlaps the data identification information of the information entry, and updates the shared update history information of a node whose search space overlaps the data identification information in the base search tree of the group to which the search tree containing the node belongs. A cache table control unit (40) stores, as cache entries, information including the arrival node resulting from searching for the information entry by means of the search trees, together with the node update history information and the shared update history information. When processing data using the cache entries, the cache table control unit (40) compares each cached value of the update history information and the shared update history information in the cache entries with the latest values, and invalidates the cache entries if there is a mismatch.
PCT/JP2016/000593 2015-02-06 2016-02-05 Dispositif de traitement de données, procédé de gestion d'entrée d'informations et support d'enregistrement doté d'un programme de gestion d'entrée d'informations enregistré sur ce dernier WO2016125501A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2016573232A JP6652070B2 (ja) 2015-02-06 2016-02-05 データ処理装置、情報エントリ管理方法及び情報エントリ管理プログラム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015022010 2015-02-06
JP2015-022010 2015-02-06

Publications (1)

Publication Number Publication Date
WO2016125501A1 true WO2016125501A1 (fr) 2016-08-11

Family

ID=56563853

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/000593 WO2016125501A1 (fr) 2015-02-06 2016-02-05 Dispositif de traitement de données, procédé de gestion d'entrée d'informations et support d'enregistrement doté d'un programme de gestion d'entrée d'informations enregistré sur ce dernier

Country Status (2)

Country Link
JP (1) JP6652070B2 (fr)
WO (1) WO2016125501A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10162013A (ja) * 1996-11-28 1998-06-19 Nippon Telegr & Teleph Corp <Ntt> デジタル探索装置
JP2001274837A (ja) * 2000-01-27 2001-10-05 Internatl Business Mach Corp <Ibm> データ・パケットを分類する方法および手段
JP2002325091A (ja) * 2000-08-17 2002-11-08 Nippon Telegr & Teleph Corp <Ntt> フロー識別検索装置および方法
JP2008244642A (ja) * 2007-03-26 2008-10-09 Nec Corp 検索装置と方法とプログラム


Also Published As

Publication number Publication date
JP6652070B2 (ja) 2020-02-19
JPWO2016125501A1 (ja) 2017-11-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16746332

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016573232

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16746332

Country of ref document: EP

Kind code of ref document: A1