WO2015032216A1 - 路由查找方法及装置、B-Tree树结构的构建方法 (Routing lookup method and device, and method for constructing a B-Tree structure) - Google Patents

路由查找方法及装置、B-Tree树结构的构建方法 (Routing lookup method and device, and method for constructing a B-Tree structure)

Info

Publication number
WO2015032216A1
WO2015032216A1 (PCT/CN2014/078055)
Authority
WO
WIPO (PCT)
Prior art keywords
hardware
route
node
module
lookup
Prior art date
Application number
PCT/CN2014/078055
Other languages
English (en)
French (fr)
Inventor
程晨
李彧
张炜
徐宝魁
陈伟
孙远航
Original Assignee
中兴通讯股份有限公司
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Priority to US14/917,773 (issued as US9871727B2)
Publication of WO2015032216A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/74: Address processing for routing
    • H04L45/745: Address table lookup; Address filtering
    • H04L45/748: Address table lookup; Address filtering using longest matching prefix
    • H04L45/60: Router architectures

Definitions

  • the present invention relates to the field of network switching, and in particular, to a route search method and apparatus, and a B-Tree tree structure construction method.
  • BACKGROUND OF THE INVENTION: With the rapid development of the Internet, core routers used for backbone-network interconnection have interface rates of up to 100 Gbps, which requires a core router to achieve a route lookup rate of several million lookups per second while supporting a large-capacity routing table.
  • Internet Protocol (IP) lookup must return the longest matching prefix. Because high-speed lookup is required, software lookup methods are no longer suitable; in recent years researchers have proposed various hardware lookup methods to raise the lookup rate, among which the Trie tree structure and the Ternary Content Addressable Memory (TCAM) are the most popular.
  • the Trie tree structure is the most widely used tree structure, and it is easy to implement pipeline operations on hardware, which is beneficial to improving the throughput of route lookup.
  • the Trie tree structure has certain limitations. Generally, the Trie tree structure has a large number of pipeline stages, which causes an excessive search delay.
  • A route lookup design based on a multi-bit Trie tree can greatly reduce the number of pipeline stages, but it brings a large amount of extra memory consumption, and this extra consumption depends on the prefix distribution of the routing table. As a result, the routing-table capacity of route-lookup hardware designs based on a multi-bit Trie fluctuates widely, and under an unfavorable route distribution the storage-space utilization is very low.
  • In view of the problem in the related art that route lookup schemes consume a large amount of memory and thereby degrade lookup efficiency, the present invention provides a route lookup method and device, and a method for constructing a B-Tree structure, to solve at least the above technical problem.
  • According to one aspect of the embodiments of the present invention, a route lookup device includes: a route lookup algorithm software module, a route update interface module, and a route lookup hardware module. The route lookup algorithm software module is configured to perform the software computation of routing entries and to issue update-entry instructions. The route update interface module is configured to receive the update-entry instructions issued by the route lookup algorithm software module, control the data flow of the route lookup hardware module according to the actual working state of the route lookup hardware module, and write the updated entries into the memory of the route lookup hardware module. The route lookup hardware module is configured to respond to route lookup requests from the hardware system and return the longest-prefix-match lookup result to the hardware system; the route lookup hardware module is a pipelined architecture.
  • Optionally, the route lookup algorithm software module includes: an insert operation submodule, configured to respond to a route insert instruction from the route forwarding system and insert the routing entry into the B-Tree structure; a delete operation submodule, configured to respond to a route delete instruction from the route forwarding system and remove the routing entry from the B-Tree structure; a software entry memory management submodule, configured to manage node data and result-table entry data in the route lookup algorithm; and an update-hardware operation submodule, configured to record in a cache the B-Tree nodes and result-table entries changed by an insert or delete operation and, after the operation finishes, convert the tree nodes from the software data format into the hardware data format and write the converted hardware data together with the corresponding mapped hardware addresses into the route update interface module through the software/hardware interaction interface.
  • Optionally, the route lookup hardware module includes: a lookup logic submodule, configured to, after receiving a route lookup request from the hardware system, send the lookup key information and the root node address stored in the memory submodule into the first stage of the tree-structure lookup pipeline; each stage then checks whether the node address belongs to that stage, and if it does, issues a read-node request to the memory submodule and waits for the node information to return, otherwise the current node information is kept unchanged. The submodule is further configured to compare the node information with the key to decide whether a routing entry is hit; on a hit, the result record of the hit entry replaces the previous hit result. The memory submodule contains multiple independent memory spaces, one for each lookup logic layer of the tree structure.
  • Optionally, the route update interface module includes: a buffer submodule, configured to receive node data, result-table entry data, and the corresponding mapped hardware addresses through the software/hardware interaction interface; and a logic processing submodule, configured to write the entry updates held in the buffer submodule into the memory submodule of the route lookup hardware module according to the working state of the lookup logic submodule of the route lookup hardware module.
  • Optionally, the insert operation submodule includes: a tree-structure management unit, configured to control the access order of a new entry in the B-Tree structure; a node parsing and comparison unit, configured to control, together with the tree-structure management unit, the ordered query of the new entry in the B-Tree structure and to find the insertion position; a node split operation unit, configured to split a full node when the new entry must be inserted into a node that is already full; and a result-table update unit, configured to store the result information carried by the new entry in the result table and record the address of that result information in the corresponding position of the node holding the new entry.
  • Optionally, the software entry memory management submodule includes: a software node management unit, configured to manage node allocation in the route lookup algorithm and to quickly allocate and manage software nodes through a memory management algorithm; a software result-table management unit, configured to manage the entry-address allocation of the result table in the route lookup algorithm and to quickly allocate and manage the result table through the memory management algorithm; and a hardware address mapping management unit, configured to map the actual hardware memory space to software addresses and to manage the hardware nodes and result table through the software memory management algorithm, where software nodes correspond one-to-one to hardware nodes and the software result table corresponds one-to-one to the hardware result table.
  • According to another aspect, the present invention further provides a route lookup method, which includes: defining the order M and the maximum height N of the B-Tree structure, the maximum number of nodes per layer, and the number of result-table entries; from these, determining the corresponding number of hardware pipeline stages, the space occupied by the nodes of each layer, and the space occupied by the result table, and thereby constructing the B-Tree structure; and performing route lookup operations based on that B-Tree structure.
  • According to a further aspect, the present invention also provides a method for constructing a B-Tree structure that grows bottom-up as the hardware lookup structure, which includes: defining, according to the actual routing-table capacity requirement and delay requirement, the order M of the B-Tree structure and the maximum height N it requires, and setting the corresponding number of pipeline stages N+1 according to the maximum height N; defining, according to how the algorithm fills the tree, the maximum number of nodes per layer of the B-Tree structure, and setting the corresponding memory space according to that maximum; and setting the memory space of the result table according to the actual capacity requirement of the routing table.
  • The embodiments of the present invention solve the problem in the related art that route lookup schemes consume a large amount of memory and degrade route lookup efficiency.
  • The technical solution of the embodiments designs the software algorithm module, the hardware data structure, the update flow, the lookup pipeline structure, and the memory structure of the route lookup system as a whole; it can meet the high-performance lookup requirements of a large-capacity routing table, supports hardware pipelining with a small number of pipeline stages, and is insensitive to the route-prefix distribution.
  • FIG. 1 is a structural block diagram of a route lookup device according to an embodiment of the present invention;
  • FIG. 2 is a schematic architecture diagram of the route lookup algorithm software module according to an embodiment of the present invention;
  • FIG. 3 is a schematic architecture diagram of the hardware-related modules of the route lookup device according to an embodiment of the present invention;
  • FIG. 4 is a schematic flowchart of the hardware lookup pipeline according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of the route-prefix insertion flow according to an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of the route-prefix deletion flow according to an embodiment of the present invention;
  • FIG. 7 is a flowchart of a route lookup method according to an embodiment of the present invention;
  • FIG. 8 is a flowchart of a method, according to an embodiment of the present invention, for constructing a B-Tree structure that grows bottom-up as the hardware lookup structure;
  • FIG. 9 is a schematic operation diagram of a route lookup device according to an embodiment of the present invention.
  • The embodiments of the present invention provide a route lookup method and device, and a method for constructing a B-Tree structure, which are further described in detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely intended to explain, not to limit, the embodiments of the present invention.
  • the embodiment of the present invention provides a B-Tree-based route search method and a hardware architecture, which can support a large-capacity forwarding table and perform high-speed search.
  • B-Tree is a software algorithm widely used for database file management. Its characteristic is that a tree node holds up to M-1 keys and M child nodes; the depth of a B-Tree is determined by the order M and the number of keys, independent of how the keys are distributed. These properties allow the B-Tree to overcome the drawbacks of the Trie structure described above and to achieve a route lookup design with few pipeline stages that is insensitive to the route-prefix distribution.
  • Note that when the B-Tree is used for route lookup, the returned result must be the longest prefix match. Because lookup backtracking cannot be implemented in a hardware pipeline, the B-Tree lookup algorithm needs special handling; known approaches include copying a parent prefix into all of its child prefixes, or merging the parent prefix into the uppermost child node. The present invention uses the latter method of merging the parent prefix with the uppermost child node of the tree structure.
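  • As a quick aside (not part of the patent text), the claim that the depth depends only on the order M and the number of keys can be checked by counting: a B-Tree of order M and height N holds at most M^N - 1 keys, so a capacity of C prefixes needs N at least ceil(log_M(C+1)) tree levels. With an illustrative order M = 17 and a target of C = 10^6 prefixes, 5 tree-node stages suffice, i.e. 6 pipeline stages including the result-table stage.

$$
\text{keys}(M,N) \;\le\; \sum_{i=0}^{N-1} M^{i}\,(M-1) \;=\; M^{N}-1
\qquad\Longrightarrow\qquad
N \;\ge\; \bigl\lceil \log_{M}(C+1) \bigr\rceil .
$$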
  • FIG. 1 is a structural block diagram of a route lookup device according to an embodiment of the present invention. As shown in FIG. 1, the device includes: a route lookup algorithm software module 10, a route update interface module 20, and a route lookup hardware module 30. Each module is described in detail below.
  • The route lookup algorithm software module 10 is configured to perform the software computation of routing entries and to issue update-entry instructions.
  • The route update interface module 20 is connected to the route lookup algorithm software module 10 and is configured to, after receiving an update-entry instruction issued by the route lookup algorithm software module 10, control the data flow of the route lookup hardware module 30 according to its actual working state and write the updated entry into the memory of the route lookup hardware module 30.
  • The route lookup hardware module 30 is connected to the route update interface module 20 and is configured to respond to route lookup requests from the hardware system and return the longest-prefix-match result to the hardware system.
  • the route lookup hardware module 30 is a pipelined architecture.
  • Through this embodiment, the software algorithm module, the hardware data structure, the update flow, the lookup pipeline structure, and the memory structure of the route lookup system are designed as a whole, which can meet the high-performance lookup requirements of a large-capacity routing table and enables hardware pipelining with a small number of pipeline stages and a capacity that is insensitive to the route-prefix distribution.
  • The design of each module is described in detail below. FIG. 2 is a schematic architecture diagram of the route lookup algorithm software module according to an embodiment of the present invention. As shown in FIG. 2, the entire route lookup algorithm software module is implemented on the basis of the B-Tree algorithm, described above, that requires no lookup backtracking; the overall framework is divided into four parts: the insert operation submodule (201), the delete operation submodule (202), the software entry memory management submodule (203), and the update-hardware operation submodule (204).
  • The insert operation submodule (201) mainly responds to route insert instructions from the route forwarding system and inserts the delivered routing entry into the B-Tree structure.
  • The tree-structure management submodule (205) in this module controls the access order of the new entry within the tree structure; together with the node parsing and comparison submodule (206), it ensures that the new entry is queried in order in the B-Tree structure and that a suitable insertion position is found.
  • The main function of the node split operation submodule (207) is to perform a B-Tree split when a new entry must be inserted into a node that is already full. A split can only be triggered by inserting a new entry.
  • After the new entry has been successfully inserted into the tree, the result-table update module (208) stores the result information carried by the new entry in the result table and records the address of that result in the corresponding position of the node holding the new entry.
  • The delete operation submodule (202) mainly responds to route delete instructions from the route forwarding system and removes the delivered routing entry from the B-Tree structure.
  • The tree-structure management and node parsing and comparison submodules within this module have the same functions as in the insert operation submodule (201): they ensure that the entry to be deleted is queried in order in the B-Tree structure, that the matching target entry is found, and that it is deleted.
  • The function of the node merge submodule (209) is that, when an entry has been deleted from a node and the number of keys remaining in that node is too small, the node is merged with a sibling node. A node merge can only be triggered by deleting an entry. After the entry is deleted, the result-table update module removes the actual result corresponding to that entry from the result table.
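  • The patent only says the merge is triggered when the number of keys "is too small"; a minimal sketch of that trigger, assuming the standard B-Tree minimum-fill rule of ceil(M/2)-1 keys per non-root node, is shown below (C++, illustrative types only).

```cpp
#include <cstddef>
#include <vector>

// Order M used in the insertion/deletion examples of FIG. 5 and FIG. 6.
constexpr std::size_t kOrderM = 3;
// Standard B-Tree minimum fill for a non-root node: ceil(M/2) - 1 keys.
constexpr std::size_t kMinKeys = (kOrderM + 1) / 2 - 1;

struct SoftNode {
    std::vector<unsigned> keys;       // route prefixes held by this node
    std::vector<SoftNode*> children;  // at most M child pointers
};

// Decides, after a key has been removed from 'node', whether the node-merge
// submodule (209) must merge it with a sibling (or collapse an empty root).
bool mergeNeeded(const SoftNode& node, bool isRoot) {
    if (isRoot) return node.keys.empty() && node.children.size() == 1;
    return node.keys.size() < kMinKeys;
}
```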
  • The function of the software entry memory management submodule (203) is to manage the nodes and the result table in the route lookup algorithm; it contains the software node management submodule (210), the software result-table management submodule (211), the hardware node address mapping management submodule (212), and the hardware result-table address mapping management submodule (213).
  • the function of the software node management sub-module (210) is to manage node assignments in the route lookup algorithm, and to quickly allocate and manage software nodes through memory management algorithms.
  • the function of the software result table management sub-module (211) is to manage the entry address allocation of the result table in the route lookup algorithm, and quickly allocate and manage the result table through the memory management algorithm.
  • The hardware node address mapping management submodule (212) and the hardware result-table address mapping management submodule (213) map the actual hardware memory space to addresses inside the software module and manage the hardware nodes and result table through the software memory management algorithm.
  • Software nodes are required to correspond one-to-one to hardware nodes, and the software result table corresponds one-to-one to the hardware result table; in effect, one software node is associated not only with a software memory node address but also with an actual hardware memory node address.
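  • A minimal sketch of this one-to-one mapping is shown below (illustrative C++; the class and field names are not from the patent): each allocated node carries both its software pool index and the mapped hardware node-RAM address, so converting an updated node for the hardware write needs no extra address lookup.

```cpp
#include <cstdint>
#include <vector>

struct NodeHandle {
    uint32_t swIndex;  // index of the node in the software node pool
    uint32_t hwAddr;   // address of the same node in the hardware node memory
};

// Free-list allocator shared by the software node management unit and the
// hardware node address mapping unit, keeping both address spaces in lock step.
class NodeAllocator {
public:
    NodeAllocator(uint32_t capacity, uint32_t hwBase) {
        for (uint32_t i = 0; i < capacity; ++i)
            freeList_.push_back({i, hwBase + i});
    }
    NodeHandle allocate() {           // caller must ensure capacity is not exhausted
        NodeHandle h = freeList_.back();
        freeList_.pop_back();
        return h;
    }
    void release(NodeHandle h) { freeList_.push_back(h); }
private:
    std::vector<NodeHandle> freeList_;
};
```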
  • The main function of the update-hardware operation submodule (204) is to record in a cache the B-Tree nodes and result-table entries changed by an insert or delete operation; after the operation ends, it converts the tree nodes from the software data format into the agreed hardware data format and writes the converted hardware data together with the corresponding mapped hardware addresses into the route entry update module in a continuous sequence through a software/hardware interaction interface (which may be, but is not limited to, a LocalBus or PCIe interface).
  • The purpose of this submodule is to reduce software/hardware interface interactions, save update time, and reduce the impact of entry updates on the actual hardware lookup pipeline.
  • Based on the above description of the route lookup algorithm software module, this embodiment provides a preferred implementation: the route lookup algorithm software module may include an insert operation submodule, configured to respond to route insert instructions from the route forwarding system and insert the routing entry into the B-Tree structure; a delete operation submodule, configured to respond to route delete instructions from the route forwarding system and remove the routing entry from the B-Tree structure; a software entry memory management submodule, configured to manage node data and result-table entry data in the route lookup algorithm; and an update-hardware operation submodule, configured to record in a cache the B-Tree nodes and result-table entries changed by an insert or delete operation and, after the operation ends, convert the tree nodes from the software data format into the hardware data format and write the converted hardware data and the corresponding mapped hardware addresses into the route update interface module continuously through the software/hardware interaction interface.
  • Optionally, the insert operation submodule may include: a tree-structure management unit, configured to control the access order of the new entry in the B-Tree structure; a node parsing and comparison unit, configured to control, together with the tree-structure management unit, the ordered query of the new entry in the B-Tree structure and find the insertion position; a node split operation unit, configured to split a full node when the new entry must be inserted into a node that is already full; and a result-table update unit, configured to store the result information carried by the new entry in the result table and record the address of that result information in the corresponding position of the node holding the new entry.
  • Optionally, the software entry memory management submodule may include: a software node management unit, configured to manage node allocation in the route lookup algorithm and quickly allocate and manage software nodes through a memory management algorithm; a software result-table management unit, configured to manage the entry-address allocation of the result table in the route lookup algorithm and quickly allocate and manage the result table through the memory management algorithm; and a hardware address mapping management unit, configured to map the actual hardware memory space to software addresses and manage the hardware nodes and result table through the software memory management algorithm, where software nodes correspond one-to-one to hardware nodes and the software result table corresponds one-to-one to the hardware result table.
  • The route lookup algorithm software module runs on a main-control central processing unit (CPU) and can be a B-Tree algorithm program written in a high-level language (such as C or C++). Its input consists of routing-entry update instructions (insert or delete routing-entry instructions), which typically come from a protocol platform or a driver. When an update instruction is delivered to the route lookup algorithm software module, the module processes it according to the B-Tree algorithm; when the operation completes, all tree nodes changed by the computation are written, in the agreed hardware format, into the route update interface module in one pass through the software/hardware interaction interface (which may be, but is not limited to, a PCIe or LocalBus interface).
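  • The batching described above can be modelled roughly as follows (C++ sketch; writeBurst stands in for the real PCIe/LocalBus driver call, whose name the patent does not give):

```cpp
#include <cstdint>
#include <vector>

// One tree node or result-table entry already converted to the agreed
// hardware data format, together with its mapped hardware address.
struct HwImage {
    uint32_t hwAddr;
    std::vector<uint8_t> bytes;
};

class UpdateBatch {
public:
    // Called by the insert/delete code for every node or result entry it changes.
    void markDirty(HwImage image) { pending_.push_back(std::move(image)); }

    // Called once after the whole insert or delete operation has finished:
    // everything collected is written to the route update interface module
    // in one pass over the software/hardware interaction interface.
    template <typename WriteBurstFn>
    void flush(WriteBurstFn&& writeBurst) {
        for (const HwImage& image : pending_) writeBurst(image.hwAddr, image.bytes);
        pending_.clear();
    }
private:
    std::vector<HwImage> pending_;
};
```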
  • FIG. 3 is a schematic architecture diagram of the hardware-related modules of the route lookup device according to an embodiment of the present invention. As shown in FIG. 3, the hardware-related modules include the route update interface module and the route lookup hardware module, where the route update interface module (201) is a hardware module divided into two parts: a buffer area (202) and a logic processing area (203).
  • The buffer area stores the update-node information written by the algorithm module; the logic processing area writes the update-node data into the node memory of the hardware module in a defined order, according to the relevant state of the route lookup hardware module. Each part is described below.
  • The main function of the buffer area (302) is to receive and store the node data, result-table entry data, and their corresponding hardware addresses written by the software through the software/hardware interaction interface; these data have already been converted by the software into the hardware data format.
  • The main function of the logic processing area (303) is to update the entry updates held in the buffer area into the memory area (306) of the route lookup hardware module in real time, according to the actual working state of the lookup logic submodule (305) in the route lookup hardware module.
  • The key point for the logic processing area is that the lookup pipeline of the B-Tree algorithm has data dependence: the node accessed by the next pipeline stage is part of the lookup result of the previous stage. Moreover, because lookup requests may be present at every pipeline stage at any moment, the hardware memory may only be updated under suitable conditions; otherwise some lookups could miss or even hit incorrectly.
  • The simplest handling scheme is as follows. When an update request arrives, the logic processing area first takes control of the lookup pipeline and blocks all lookup requests at the entrance of the lookup logic area (305); blocked requests are temporarily held in a first-in-first-out (FIFO) buffer queue. It then waits until the lookup requests at every stage of the pipeline have been fully serviced and their results returned, so that no lookup request packets remain in the lookup logic. Next, the nodes and result-table entries to be updated are read from the buffer area and written into the memory area (306) at the corresponding hardware addresses. Finally, once all updates have completed, the block on lookup requests to the lookup logic area is released and the lookup logic resumes normal operation.
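  • A cycle-level software model of this "block, drain, write, release" sequence might look like the sketch below (C++; this is an illustration of the control flow, not RTL from the patent):

```cpp
#include <cstdint>
#include <vector>

struct NodeUpdate {
    uint32_t hwAddr;               // target address in the memory area (306)
    std::vector<uint8_t> data;     // node or result-table contents to write
};

class UpdateSequencer {
public:
    // The lookup entrance checks this flag; while it is false, incoming lookup
    // requests stay queued in the input FIFO instead of entering the pipeline.
    bool acceptingLookups() const { return !blocking_; }

    // Latches a batch of updates read from the buffer area and blocks new lookups.
    void requestUpdate(std::vector<NodeUpdate> updates) {
        pending_ = std::move(updates);
        blocking_ = true;
    }

    // Called once per cycle with the number of lookup requests still in flight.
    void step(unsigned inflightLookups, void (*writeMemory)(const NodeUpdate&)) {
        if (!blocking_ || inflightLookups != 0) return;       // wait for the pipeline to drain
        for (const NodeUpdate& u : pending_) writeMemory(u);  // apply all updates
        pending_.clear();
        blocking_ = false;                                    // release the lookup entrance
    }

private:
    std::vector<NodeUpdate> pending_;
    bool blocking_ = false;
};
```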
  • Based on the above description of the route update interface module, this embodiment provides a preferred implementation: the route update interface module includes a buffer submodule, configured to receive node data, result-table entry data, and the corresponding mapped hardware addresses through the software/hardware interaction interface; and a logic processing submodule, configured to write the entry updates held in the buffer submodule into the memory submodule of the route lookup hardware module according to the working state of the lookup logic submodule of the route lookup hardware module.
  • The other hardware-related module of the route lookup device, the route lookup hardware module, is described next.
  • The route lookup hardware module (304) is the main module for hardware route lookup. It is a pipelined design consisting mainly of the lookup logic area (305) and the memory area (306).
  • The lookup logic area (305) is a pipelined design: each pipeline stage has its own lookup logic. The information carried through the pipeline includes the lookup key, the node address, the hit information (an address in the result table), and so on.
  • A hardware lookup request first enters the first pipeline stage through the hardware interface and at the same time obtains the root node address (307) stored in the hardware module. Each pipeline stage accesses the memory area corresponding to that stage according to the node address to obtain the node information (if the node to be accessed does not belong to this stage, the stage performs no lookup comparison); it then parses the key information in the node, compares it with the lookup key passed from the previous stage to obtain the comparison result (on a hit, new hit information is obtained; on a miss, the miss is propagated), and obtains the node address that the next pipeline stage should access. The final pipeline stage accesses the result table once according to the hit information, reads the corresponding hit result, and returns it to the lookup requester through the hardware interface.
  • The memory area (306) is organized in blocks, and each block corresponds to one pipeline stage.
  • The node space for each pipeline stage is defined by the algorithm according to actual needs, and the memory of each stage is fixed in the hardware design.
  • the root node address (307) is a register that is configured by the software module to mark the root node location of the tree structure.
  • The result table (309) corresponds to the last pipeline stage; the algorithm defines the actual space occupied by the result table according to the actual entry-capacity requirement.
  • The memory area (306) is the main data storage area of the route lookup hardware module; its structure is shown in FIG. 3. The memory regions within it are independent of each other and are divided into a root node address region, per-layer node regions, and a result-table region.
  • The root node address stores the address of the root node of the B-Tree structure; the root node may belong to any layer.
  • Each layer's tree-node region is independent and does not overlap with the other tree-node regions, which avoids conflicts when the pipeline stages access memory simultaneously.
  • The result-table region is likewise a separate region, accessed only by the last stage of the lookup pipeline.
  • the root node address can be implemented in registers; each layer node and result table area can be implemented with static random access memory (SRAM), dynamic random access memory (DRAM), or other types of memory depending on the actual memory size required and access latency requirements.
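  • The memory area described above can be summarised with a small data-layout sketch (C++; the level count and sizes are illustrative placeholders, not values fixed by the patent):

```cpp
#include <array>
#include <cstdint>
#include <vector>

constexpr int kTreeLevels = 5;  // N tree-node pipeline stages, chosen at design time

struct MemoryArea {
    // Root node address region: a single software-configured register (307).
    uint32_t rootNodeAddr = 0;
    // One independent node region per tree layer, so the pipeline stages can
    // read their own layer simultaneously without conflicting.
    std::array<std::vector<uint64_t>, kTreeLevels> layerNodeRam;
    // Result table region (309), accessed only by the last pipeline stage.
    std::vector<uint64_t> resultTable;
};
```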
  • Based on the above description of the route lookup hardware module, this embodiment provides a preferred implementation: the route lookup hardware module may include a lookup logic submodule and a memory submodule. The lookup logic submodule is configured to, after receiving a route lookup request from the hardware system, send the lookup key information and the root node address stored in the memory submodule into the first stage of the tree-structure lookup pipeline, and then determine whether the root node address belongs to the current stage: if it does, a read-node request is issued to the memory submodule and the stage waits for the node information to return; if not, the current node information is kept unchanged. It is further configured to compare the node information with the key to decide whether a routing entry is hit, and on a hit to replace the previous hit result with the result record of the hit entry. The memory submodule contains multiple independent memory spaces, one for each lookup logic layer of the tree structure.
  • The lookup logic area is the main implementation of the hardware lookup pipeline.
  • The lookup pipeline has N+1 stages in total, of which LV_1 to LV_N are the tree-node lookup logic layers and LV_N+1 is the result-table lookup logic layer.
  • Each search logic layer corresponds to a separate memory space in the memory area, ensuring that the accesses of the layers to the memory area do not conflict with each other.
  • The lookup flow of the lookup logic area is described below.
  • FIG. 4 is a schematic flowchart of the hardware lookup pipeline according to an embodiment of the present invention. As shown in FIG. 4, the flow includes the following steps (step S401 to step S410). Step S401: the route lookup hardware module receives an input lookup request.
  • Step S402: the lookup key information and the root node address information from the memory area are sent simultaneously into the first pipeline stage.
  • Step S403: the stage logic first determines whether the node address belongs to the current stage. If it does, a read-node request is issued to the memory and the stage waits for the memory to return the node data; if it does not, no request is issued to the memory and the current node information is kept unchanged.
  • Step S404: the obtained node information is parsed and compared with the key.
  • Step S405: if the node of the current stage was not accessed, no parsing or comparison is needed and the stage simply waits the corresponding clock cycles. If the comparison produces a hit, the result corresponding to the hit routing entry is recorded, overwriting the previously recorded hit.
  • Step S406: if the node of the current stage was not accessed, nothing needs to be recorded and the stage simply waits. If a comparison was performed, the node address that the next stage should access is taken from the comparison result and recorded in the node-address field; if the node of the current stage was not accessed, the current node information is kept unchanged.
  • Step S407: the lookup pipeline carries the key information, the node address, and the hit result into the processing logic of the next pipeline stage.
  • Step S408: if the next pipeline stage is not the last stage, its processing is the same as steps S403 to S407 above; if it is the last stage, the processing logic accesses the result table.
  • Step S409: in the result-table pipeline stage, the logic accesses the result table in the memory area according to the hit result (the result-table entry address) to obtain the actual hit result; on a miss, the result table is not accessed and miss information is output.
  • Step S410: the actual hit result is returned through the hardware interface; on a miss, the miss information is returned.
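  • A compact software model of one tree-node stage of this flow (steps S403 to S407) is sketched below; the node layout is simplified to plain sorted keys, so it only illustrates the control flow, not the real hardware node format:

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

struct PipelineWord {                        // information carried between stages
    uint32_t key = 0;                        // lookup key
    uint32_t nodeAddr = 0;                   // address of the next node to read
    int nodeLevel = 0;                       // which layer owns nodeAddr (-1: none)
    std::optional<uint32_t> hitResultAddr;   // best hit so far (result-table address)
};

struct Node {
    std::vector<uint32_t> keys;              // up to M-1 sorted keys
    std::vector<uint32_t> resultAddr;        // result-table address per key
    std::vector<uint32_t> childAddr;         // up to M child node addresses
    std::vector<int> childLevel;             // layer of each child
};

PipelineWord stage(int level, PipelineWord in, const std::vector<Node>& layerRam) {
    if (in.nodeLevel != level) return in;            // S403: not this stage's node, just wait
    const Node& n = layerRam[in.nodeAddr];           // S403: read the node from this layer's RAM
    std::size_t i = 0;
    while (i < n.keys.size() && in.key > n.keys[i]) ++i;   // S404: parse and compare
    if (i < n.keys.size() && in.key == n.keys[i])
        in.hitResultAddr = n.resultAddr[i];          // S405: record hit, overwrite previous hit
    if (i < n.childAddr.size()) {                    // S406: next node address from the comparison
        in.nodeAddr = n.childAddr[i];
        in.nodeLevel = n.childLevel[i];
    } else {
        in.nodeLevel = -1;                           // leaf: no further tree node to visit
    }
    return in;                                       // S407: the word moves to the next stage
}
```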
  • This embodiment includes a description of the process of inserting and deleting a route prefix. It is assumed that the order M of the B-Tree is 3, and there are at most 2 sets of prefixes and 3 child node pointers in each tree node.
  • FIG. 5 is a schematic diagram of a route prefix insertion process according to an embodiment of the present invention.
  • In the initial state (501), the routing table is empty and no prefix actually exists in the memory area of the route lookup hardware module; a default empty node is shown as an illustration, located at the lowest lookup-pipeline level LV_N.
  • the root node address register can now point to the default empty node address.
  • the 4-bit prefix 1001 is first inserted, as shown at 502.
  • The prefix is inserted into the first empty slot of the initial node; bit 4 of the prefix-length Vector is set to 1, written as 0001, indicating a prefix length of 4.
  • the corresponding result is stored in the result table address A1, and A1 is saved in the node.
  • Next the prefix 1000 is inserted, as shown at 503. Because 1000 is smaller than 1001, this prefix is placed in the first slot of the initial node; its prefix-length Vector is 0001, and the corresponding result is stored at result-table address A2.
  • the prefix 1110 is inserted, as shown at 504. Because the node is full at this time, node splitting is required to generate a new root node and a new sibling node.
  • the new root node is on the LV_N-1 layer and the new sibling node is on the LV_N layer.
  • the root node stores the prefix 1001/0001/A1. Its left child node stores the prefix 1000/0001/A2, and the right child node stores the prefix 1110/0001/A3.
  • The tree has now grown one level upward, so the root node address must be updated to point to the new root node (the node containing 1001).
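  • The prefix-length Vector in this walkthrough can be reproduced with a few lines of code (C++ sketch). Note that, per the original description, a parent prefix 10** is also inserted later and merged into the slot of 1001, which is why that slot's Vector becomes 0101 (lengths 2 and 4) and appears as 1001/0101/A1 in the deletion example below:

```cpp
#include <cassert>
#include <cstdint>

constexpr int kKeyBits = 4;  // 4-bit example prefixes, as in FIG. 5

// Written as b1 b2 b3 b4: bit k (counted from the left) marks that a prefix of
// length k shares this slot's key.
uint8_t markLength(uint8_t vec, int prefixLen) {
    return static_cast<uint8_t>(vec | (1u << (kKeyBits - prefixLen)));
}

int main() {
    uint8_t vec = 0;
    vec = markLength(vec, 4);   // insert 1001       -> Vector 0001
    assert(vec == 0b0001);
    vec = markLength(vec, 2);   // merge parent 10** -> Vector 0101
    assert(vec == 0b0101);
    return 0;
}
```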
  • FIG. 6 is a schematic diagram of a process of deleting a route prefix according to an embodiment of the present invention.
  • the final insertion result that has been constructed above is the initial state of the deletion process (601).
  • First delete 1000 as shown in 602.
  • The 1000 prefix is removed from its node together with the result stored at address A2 in the result table. Because deleting 1000 leaves that node empty, a node merge is required; the result is that the root node at layer LV_N-1 is deleted and the one remaining node at layer LV_N becomes the root node, storing, in order, the prefixes 1001/0101/A1 and 1110/0001/A3. Because the root node has changed, the root node address must again be updated to point to the new root.
  • After the remaining prefixes (1001, 10**, and 1110) are deleted in turn in steps 603 to 605, the routing table becomes empty.
  • the data formats involved in this embodiment are mainly divided into a node data format and a result table data format.
  • the node data includes: key value, tag vector, result table address, and next level pointer.
  • the key value is the route prefix in the routing table;
  • the tag vector marks, within the B-Tree algorithm described above, the entries of different prefix lengths that share the same prefix;
  • the result table address is the address information of the search result corresponding to the routing entry in the result table;
  • the next level pointer is the address of the node that should be accessed by the next level after the level is searched.
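  • An illustrative packing of these four node fields and of a result entry is shown below (C++; the field widths are assumptions, since the patent does not fix a bit layout):

```cpp
#include <cstdint>

// One key slot inside a hardware tree node.
struct NodeEntry {
    uint32_t key;          // route prefix stored in the routing table
    uint32_t tagVector;    // one bit per prefix length sharing this key
    uint32_t resultAddr;   // where the matching lookup result sits in the result table
    uint32_t nextNodePtr;  // node the next pipeline stage should access after this stage
};

// One entry of the result table, read by the last pipeline stage.
struct ResultEntry {
    uint32_t nextHopInfo;  // actual forwarding result returned to the hardware system
};
```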
  • The hardware lookup structure involved in this embodiment is a B-Tree structure that grows bottom-up. First, the order M of the B-Tree and the maximum height N it requires are defined according to the actual routing-table capacity requirement and delay requirement, and the corresponding number of pipeline stages N+1 is set according to the maximum height. Then the maximum number of nodes required in each layer of the tree is defined according to how the algorithm fills the tree, and the corresponding memory space is set according to those maximum node counts. Finally, the memory space of the result table is set according to the actual capacity requirement of the routing table.
  • Bottom-up growth means that, during dynamic insertion into the B-Tree, the first node is allocated at the lowest node layer; since this first node is also the root node, its address is written into the root-node-address memory. Whenever the root node changes while routing entries are being updated, the software must update the root node address synchronously so that the lookup pipeline keeps working correctly.
  • Based on the above analysis, the present invention provides a route lookup method. FIG. 7 is a flowchart of a route lookup method according to an embodiment of the present invention. As shown in FIG. 7, the flow includes the following steps (step S702 to step S704). Step S702: define the order M and the maximum height N of the B-Tree structure, the maximum number of nodes per layer, and the number of result-table entries, and from these determine the corresponding number of hardware pipeline stages, the space occupied by the nodes of each layer, and the space occupied by the result table, thereby constructing the B-Tree structure. Step S704: perform route lookup operations based on the B-Tree structure.
  • Based on the above analysis, the present invention also provides a method for constructing a B-Tree structure that grows bottom-up as the hardware lookup structure. FIG. 8 is a flowchart of this construction method; as shown in FIG. 8, the flow includes the following steps (step S802 to step S804).
  • Step S802: define the order M of the B-Tree structure and the maximum height N required by the B-Tree structure according to the actual routing-table capacity requirement and delay requirement, and set the corresponding number of pipeline stages N+1 according to the maximum height N.
  • Step S804: define the maximum number of nodes per layer of the B-Tree structure according to how the algorithm fills the tree, set the corresponding memory space according to that maximum number of nodes per layer, and set the memory space of the result table according to the actual capacity requirement of the routing table.
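  • A worked sizing sketch for these steps is given below (C++). All the inputs (M, N, node and entry sizes, the worst-case fill assumption) are illustrative choices, not numbers taken from the patent:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const uint64_t M = 9;                  // order chosen from capacity and delay needs
    const uint64_t N = 6;                  // maximum tree height -> N + 1 pipeline stages
    const uint64_t routeCapacity = 500000; // required routing-table capacity
    const uint64_t nodeBytes = 64;         // assumed size of one hardware node
    const uint64_t resultBytes = 8;        // assumed size of one result-table entry

    // Worst-case fill assumption: every node keeps only ceil(M/2) - 1 keys, so
    // the bottom layer LV_N (where the tree starts growing) needs the most
    // nodes, and each layer above it needs roughly 1/ceil(M/2) as many.
    const uint64_t minKeys = (M + 1) / 2 - 1;
    const uint64_t minChildren = (M + 1) / 2;

    std::vector<uint64_t> maxNodes(N);
    maxNodes[N - 1] = (routeCapacity + minKeys - 1) / minKeys;
    for (int i = static_cast<int>(N) - 2; i >= 0; --i)
        maxNodes[i] = maxNodes[i + 1] / minChildren + 1;

    uint64_t nodeRamBytes = 0;
    for (uint64_t n : maxNodes) nodeRamBytes += n * nodeBytes;
    const uint64_t resultRamBytes = routeCapacity * resultBytes;

    std::printf("pipeline stages : %llu\n", static_cast<unsigned long long>(N + 1));
    std::printf("node RAM        : %llu bytes\n", static_cast<unsigned long long>(nodeRamBytes));
    std::printf("result table    : %llu bytes\n", static_cast<unsigned long long>(resultRamBytes));
    return 0;
}
```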
  • In the technical solution of the present invention, the route lookup device includes three parts: the route lookup algorithm software module, the route update interface module, and the route lookup hardware module.
  • FIG. 9 is a schematic operation diagram of the route lookup device according to an embodiment of the present invention. As shown in FIG. 9, the route lookup algorithm software module performs the software computation of routing entries and issues update-entry instructions; after receiving an update-entry instruction, the route update interface module controls the data flow of the route lookup hardware module according to its actual working state and writes the updated entry into the memory of the route lookup hardware module; the route lookup hardware module, a pipelined architecture, responds to route lookup requests from the hardware system and returns the longest-prefix-match lookup result.
  • The specific implementation of the present invention is divided into the following three steps. In the first step, the B-Tree order M, the tree height N, the number of nodes per layer, and the number of result-table entries are determined according to actual requirements, which in turn determines the corresponding number of hardware pipeline stages, the space occupied by the nodes of each layer, and the space occupied by the result table.
  • In the second step, the route lookup algorithm software module, the route update interface module, and the route lookup hardware module are designed according to the framework of the first step.
  • In the third step, the modules completed in the second step are combined into a route lookup system according to the actual application scenario and connected into the route forwarding system.
  • The route lookup algorithm software module can run on the main-control CPU of the route lookup system, while the hardware-related modules (the route update interface module and the route lookup hardware module) can be implemented in a Field-Programmable Gate Array (FPGA) or an Application-Specific Integrated Circuit (ASIC) and connected through the hardware data interfaces of the actual application scenario, such as the Interlaken-Lookaside interface commonly used in routing and forwarding systems.
  • The connection between the software module and the hardware modules can be, but is not limited to, a standard LocalBus or PCIe interface.
  • From the above description it can be seen that the technical solution of the present invention designs the software algorithm module, the hardware data structure, the update flow, the lookup pipeline structure, and the memory structure of the route lookup system as a whole; it can meet the high-performance lookup requirements of a large-capacity routing table, enables hardware pipelining with a small number of pipeline stages, and has a capacity that is insensitive to the route-prefix distribution. While the preferred embodiments of the present invention have been disclosed for purposes of illustration, those skilled in the art will recognize that various modifications, additions, and substitutions are possible; the scope of the invention should therefore not be limited to the embodiments described above.
  • INDUSTRIAL APPLICABILITY: The technical solution provided by the embodiments of the present invention can be applied to the field of network switching. It designs the software algorithm module, the hardware data structure, the update flow, the lookup pipeline structure, and the memory structure of the route lookup system as a whole; it can meet the high-performance lookup requirements of a large-capacity routing table, enables hardware pipelining with a small number of pipeline stages, and has a capacity that is insensitive to the route-prefix distribution.

Abstract

The present invention discloses a routing lookup method and device, and a method for constructing a B-Tree structure. The device includes: a route lookup algorithm software module, a route update interface module, and a route lookup hardware module. The route lookup algorithm software module is configured to perform the software computation of routing entries and to issue update-entry instructions. The route update interface module is configured to, after receiving an update-entry instruction issued by the route lookup algorithm software module, control the data flow of the route lookup hardware module according to its actual working state and write the updated entry into the memory of the route lookup hardware module. The route lookup hardware module is configured to respond to route lookup requests from the hardware system and return the longest-prefix-match lookup result to the hardware system; the route lookup hardware module is a pipelined architecture. The present invention achieves hardware pipelined operation with a small number of pipeline stages and a capacity that is insensitive to the route-prefix distribution.

Description

路由査找方法及装置、 B-Tree树结构的构建方法 技术领域 本发明涉及网络交换领域, 特别是涉及一种路由查找方法及装置、 B-Tree树结构 的构建方法。 背景技术 随着互联网 Internet 的迅猛发展, 用于主干网络互联的核心路由器的接口速率达 到 lOOGbps, 该速率要求核心路由器在支持大容量路由表的情况下路由查找速率达到 每秒几百万次。 互联网协议 (Internet Protocol, 简称为 IP) 查找需要得到最长匹配前 缀, 由于高速查找的需要, 软件查找方法已经不适用, 近年来研究人员提出了多种硬 件查找方法以提高查找速率, 其中以 Trie 树结构和三态内容寻址存储器 (Ternary Content Addressable Memory, 简称为 TCAM) 最为流行。
Trie树结构是应用最为广泛的树结构, 容易实现硬件上的流水操作, 利于提升路 由查找的吞吐率。 但是 Trie树结构具有一定的局限性。 一般 Trie树结构的流水级数较 多, 会带来过大的查找延时。 采用多比特 Trie树结构的路由查找设计可以大大减少流 水级数, 但会带来大量额外内存的消耗, 并且这种额外的内存消耗量与路由表的前缀 分布相关, 使得基于多比特 Trie树的路由查找硬件设计的路由表容量存在较大起伏, 在比较坏的路由分布的情况下, 存储空间利用率很低。 针对相关技术中路由查找方案内存消耗较大, 影响路由查找效率的问题, 目前尚 未提出有效的解决方案。 发明内容 针对相关技术中路由查找方案内存消耗较大, 影响路由查找效率的问题, 本发明 实施例提供了一种路由查找方法及装置、 B-Tree树结构的构建方法, 用以至少解决上 述技术问题。 根据本发明实施例的一个方面, 提供了一种路由查找装置, 其中, 该装置包括: 路由查找算法软件模块、 路由更新接口模块和路由查找硬件模块, 其中, 路由查找算 法软件模块, 设置为执行对路由条目的软件计算和更新条目指令的下发; 路由更新接 口模块, 设置为在接收到上述路由查找算法软件模块下发的更新条目指令后, 根据路 由查找硬件模块的实际工作状态, 控制上述路由查找硬件模块的数据流, 并将更新条 目写入上述路由查找硬件模块的存储器中; 路由查找硬件模块, 设置为响应硬件系统 的路由查找请求, 并将最长前缀匹配的查找结果返回至上述硬件系统; 其中, 上述路 由查找硬件模块为流水线型架构。 优选地, 上述路由查找算法软件模块包括: 插入操作子模块, 设置为响应路由转 发系统的路由插入指令,并将上述路由条目插入到 B-Tree树结构中;删除操作子模块, 设置为响应上述路由转发系统的路由删除指令, 并将上述路由条目在上述 B-Tree树结 构中删除; 软件表项内存管理子模块, 设置为对路由查找算法中的节点数据和结果表 条目数据进行管理; 更新硬件操作子模块, 设置为将在上述插入操作子模块的插入操 作或上述删除操作子模块的删除操作中涉及到改变的 B-Tree树节点和结果表条目记录 在缓存中, 并在插入操作或删除操作结束后, 将树节点从软件数据格式转换成硬件数 据格式, 并将转换后的硬件数据和对应的硬件映射地址通过软硬件交互接口连续写入 上述路由更新接口模块中。 优选地, 上述路由查找硬件模块包括: 查找逻辑子模块, 设置为在接收到上述硬 件系统的路由查找请求后, 将查找键值信息和内存子模块中的根节点地址信息发送至 树结构的第一级查找流水中; 然后判断上述根节点地址是否为本级节点; 如果是, 则 向上述内存子模块发起读节点请求, 等待上述内存子模块返回节点信息; 如果不是, 则保留当前节点信息不变; 还设置为根据比较上述节点信息与键值, 判断路由条目是 否命中; 如果命中, 则将命中的路由条目对应的结果记录替代之前的命中结果; 内存 子模块, 包括多个独立内存空间, 分别与上述树结构的每个查找逻辑层相对应。 优选地, 上述路由更新接口模块包括: 缓存子模块, 设置为通过上述软硬件交互 接口接收节点数据和结果表条目数据以及对应的硬件映射地址; 逻辑处理子模块, 设 置为根据上述路由查找硬件模块的上述查找逻辑子模块的工作状态, 将上述缓存子模 块的条目更新内容更新至上述路由查找硬件模块的上述内存子模块中。 优选地, 上述插入操作子模块包括: 树结构管理单元, 设置为对新条目在上述
B-Tree树结构中的访问顺序进行控制; 节点解析及比较单元, 设置为与上述树结构管 理单元共同控制新条目在上述 B-Tree树结构中的有序查询, 并找到插入位置; 节点分 裂操作单元, 设置为在新条目需要插入到已满节点时, 对该已满节点进行分裂操作; 结果表更新单元, 设置为将新条目携带的结果信息存入结果表, 并将该结果信息的地 址记录在上述新条目所在节点的对应位置中。 优选地, 上述软件表项内存管理子模块包括: 软件节点管理单元, 设置为管理路 由查找算法中的节点分配, 通过内存管理算法迅速分配和管理软件节点; 软件结果表 管理单元, 设置为管理路由查找算法中的结果表的条目地址分配, 通过内存管理算法 快速分配和管理结果表; 硬件地址映射管理单元, 设置为将硬件的实际内存空间进行 软件地址映射, 通过软件的内存管理算法管理硬件节点和结果表; 其中, 软件节点与 硬件节点一一对应, 软件结果表与硬件结果表一一对应。 根据本发明的另一方面, 本发明还提供了一种路由查找方法, 其中, 该方法包括: 定义 B-Tree树结构的阶数 M和最大高度 N、 每层最大节点数目和结果表条目数, 进 而确定硬件相应的流水级数、 每层节点占用空间和结果表占用空间, 进而构建 B-Tree 树结构; 基于上述 B-Tree树结构, 执行路由查找操作。 根据本发明的又一方面, 本发明还提供了一种硬件查找结构为自底向上生长的 B-Tree树结构的构建方法, 其中, 该方法包括: 根据实际路由表容量需求和延时需求 定义出 B-Tree树结构的阶数 M和 B-Tree树结构所需要的最大高度 N, 按照上述最大 高度 N设定相应的流水级数 N+1;根据算法填充情况定义出 B-Tree树结构的每层最大 节点数目, 按照上述每层最大节点数目设定相应的内存空间; 根据路由表实际容量需 求设定结果表的内存空间。 通过本发明实施例, 解决了相关技术中路由查找方案内存消耗较大, 影响路由查 找效率的问题, 本发明实施例的技术方案从整体上设计了路由查找系统的软件算法模 块、 硬件数据结构、 更新流程、 查找流水结构和内存结构, 能够满足大容量路由表的 高性能查找需求, 能够实现硬件流水操作, 流水级数少, 并且容量对路由前缀分布不 敏感。 上述说明仅是本发明实施例技术方案的概述, 为了能够更清楚了解本发明实施例 的技术手段, 而可依照说明书的内容予以实施, 并且为了让本发明实施例的上述和其 它目的、 特征和优点能够更明显易懂, 以下特举本发明实施例的具体实施方式。 附图说明 图 1是根据本发明实施例的路由查找装置的结构框图; 图 2是根据本发明实施例的路由算法软件模块的架构示意图; 图 3是根据本发明实施例的路由查找装置的硬件相关模块的架构示意图; 图 4是根据本发明实施例的硬件查找流水的流程示意图; 图 5是根据本发明实施例的路由前缀插入流程示意图; 图 6是根据本发明实施例的路由前缀删除流程示意图; 图 7是根据本发明实施例的路由查找方法的流程图; 图 8是根据本发明实施例的硬件查找结构为自底向上生长的 B-Tree树结构的构建 方法的流程图; 图 9是根据本发明实施例的路由查找装置的运行示意图。 具体实施方式 为了解决相关技术路由查找方案内存消耗较大, 影响路由查找效率的问题, 本发 明实施例提供了一种路由查找方法及装置、 B-Tree树结构的构建方法, 以下结合附图 以及实施例, 对本发明实施例进行进一步详细说明。 应当理解, 此处所描述的具体实 施例仅仅用以解释本发明实施例, 并不限定本发明实施例。 本发明实施例提出一种基于 B-Tree的路由查找方法和硬件架构, 能够支持大容量 转发表, 并进行高速查找, 最重要的是本发明实施例提出的方法能够实现硬件流水操 作, 流水级数少, 并且容量对路由前缀分布不敏感。 B-tree 是一种被广泛应设置为数 据库文件管理的软件算法, 它的特点是一个树节点拥有 M-1个关键字和 M个子节点, B-Tree 的深度由阶数 M和关键字的数目决定, 与关键字的分布无关。 这些特点使得 B-Tree能够克服上述相关技术中 Trie树结构的缺点, 实现流水级数少且对路由前缀分 布不敏感的路由查找设计。 需要注意的是, 在应用 B-Tree进行路由查找时, 需要确保返回的结果为最长前缀 匹配结果。 由于查找回溯是无法在硬件流水操作中实现的, 因此需要对 B-Tree的路由 查找算法进行特殊的处理。 这类处理有很多种公开的操作方法, 例如可以将父前缀复 制成多份合并到其所有子前缀上, 或者采用将父前缀与树结构最上层的子节点合并的 方法。 本发明利用了将父前缀与树结构最上层子节点合并的方法, 设计出基于 B-Tree 的路由查找方法和装置。 本实施例提供了一种路由查找方法装置, 图 1是根据本发明实施例的路由查找装 置的结构框图, 如图 1所示, 该装置包括: 路由查找算法软件模块 10、 路由更新接口 模块 20和路由查找硬件模块 30, 下面对各个模块进行具体介绍。 路由查找算法软件模块 10, 设置为执行对路由条目的软件计算和更新条目指令的 下发; 路由更新接口模块 20,连接至路由查找算法软件模块 10,设置为在接收到上述路 由查找算法软件模块 10下发的更新条目指令后, 根据路由查找硬件模块 30的实际工 作状态,控制上述路由查找硬件模块 30的数据流, 并将更新条目写入上述路由查找硬 件模块 30的存储器中; 路由查找硬件模块 30,连接至路由更新接口模块 20,设置为响应硬件系统的路由 查找请求, 并将最长前缀匹配的查找结果返回至上述硬件系统; 其中, 上述路由查找 硬件模块 30为流水线型架构。 通过本实施例, 从整体上设计了路由查找系统的软件算法模块、 硬件数据结构、 更新流程、 查找流水结构和内存结构, 能够满足大容量路由表的高性能查找需求, 能 够实现硬件流水操作, 流水级数少, 并且容量对路由前缀分布不敏感。 下面对各模块的设计进行详细描述。 图 2是根据本发明实施例的路由算法软件模块的架构示意图, 如图 2所示, 整个 路由查找算法软件模块基于前文所述的不需要进行查找回溯的 B-Tree算法实现,整体 框架分为 4个部分: 插入操作子模块 (201 )、 删除操作子模块 (202)、 软件表项内存 管理子模块 (203 ) 和更新硬件操作子模块 (204)。 插入操作子模块(201 )主要工作是响应路由转发系统的路由插入指令, 并将下发 的路由条目插入到 B-Tree树结构中。 该模块中的树结构管理子模块 (205 ) 对新条目 在树结构中的访问顺序进行控制, 它与节点解析及比较子模块(206 )共同作用, 保证 新条目在 B-Tree树结构中有序查询, 并找到合适的插入位置。 节点分裂操作子模块 ( 207 ) 主要功能是当新条目需要插入到一个已经满的节点时, 将该节点进行 B-Tree 的分裂操作。分裂操作仅在插入新条目时可能被触发。当新条目被成功插入树结构后, 结果表更新模块(208 )将新条目携带的结果信息存入结果表, 并将该结果的地址记录 在新条目所在的节点对应位置中。 删除操作子模块(202 )主要工作是响应路由转发系统的路由删除指令, 并将下发 的路由条目从 B-Tree树结构中删除。该模块中的树结构管理子模块和节点解析及比较 子模块功能与插入操作子模块 (201 ) 相同, 保证删除条目在 B-Tree树结构中有序查 询, 并找到匹配的目标条目, 并进行删除。 节点合并子模块(209 ) 的功能是当一个条 目被从节点中删除后, 如果该节点中键值数目过小, 则将该节点与兄弟节点进行合并 操作。 节点合并操作仅在删除条目时可能被触发。 当条目被删除后, 结果表更新模块 将该条目对应的实际结果从结果表中删除。 软件表项内存管理子模块(203 )的功能是对路由查找算法中的节点和结果表进行 管理, 包括内容有: 软件节点管理子模块 (210)、 软件结果表管理子模块 (211 )、 硬 件节点地址映射管理子模块 (212) 和硬件结果表地址映射管理子模块 (213 )。 软件节点管理子模块(210)的功能是管理路由查找算法中的节点分配, 通过内存 管理算法快速分配和管理软件节点。 软件结果表管理子模块(211 )的功能是管理路由查找算法中的结果表的条目地址 分配, 通过内存管理算法快速分配和管理结果表。 硬件节点地址映射管理子模块 (212) 
和硬件结果表地址映射管理子模块 (213 ) 是将硬件的实际内存空间在软件模块中进行地址映射, 通过软件的内存管理算法来管 理硬件节点和结果表。 要求软件节点与硬件节点一一对应, 软件结果表与硬件结果表 也一一对应, 因此实际上一个软件节点不仅对应了一个软件内存节点地址, 还对应了 一个实际的硬件内存节点地址。 更新硬件操作子模块(204)的主要功能是将在插入操作或删除操作中涉及到改变 的 B-Tree树节点和结果表条目记录在缓存中, 并在插入操作或删除操作结束后, 将树 节点从软件数据格式转换成约定的硬件数据格式, 并将转换后的硬件数据和对应的硬 件映射地址通过软硬件交互接口 (可以但不限于 LocalBus或 PCIe接口) 连续写入路 由条目更新模块中。 该模块的意义在于减少软硬件的接口交互操作, 节省更新时间, 并减少条目更新对硬件实际查找流水的影响。 基于上述路由查找算法软件模块的描述, 本实施例提供了一种优选实施方式, 即 上述路由查找算法软件模块可以包括: 插入操作子模块, 设置为响应路由转发系统的 路由插入指令, 并将上述路由条目插入到 B-Tree树结构中; 删除操作子模块, 设置为 响应上述路由转发系统的路由删除指令, 并将上述路由条目在上述 B-Tree树结构中删 除; 软件表项内存管理子模块, 设置为对路由查找算法中的节点数据和结果表条目数 据进行管理; 更新硬件操作子模块, 设置为将在上述插入操作子模块的插入操作或上 述删除操作子模块的删除操作中涉及到改变的 B-Tree树节点和结果表条目记录在缓存 中, 并在插入操作或删除操作结束后,将树节点从软件数据格式转换成硬件数据格式, 并将转换后的硬件数据和对应的硬件映射地址通过软硬件交互接口连续写入上述路由 更新接口模块中。 优选地, 上述插入操作子模块可以包括: 树结构管理单元, 设置为对新条目在上 述 B-Tree树结构中的访问顺序进行控制; 节点解析及比较单元, 设置为与上述树结构 管理单元共同控制新条目在上述 B-Tree树结构中的有序查询, 并找到插入位置; 节点 分裂操作单元, 设置为在新条目需要插入到已满节点时,对该已满节点进行分裂操作; 结果表更新单元, 设置为将新条目携带的结果信息存入结果表, 并将该结果信息的地 址记录在上述新条目所在节点的对应位置中。 优选地, 上述软件表项内存管理子模块可以包括: 软件节点管理单元, 设置为管 理路由查找算法中的节点分配, 通过内存管理算法迅速分配和管理软件节点; 软件结 果表管理单元, 设置为管理路由查找算法中的结果表的条目地址分配, 通过内存管理 算法快速分配和管理结果表; 硬件地址映射管理单元, 设置为将硬件的实际内存空间 进行软件地址映射, 通过软件的内存管理算法管理硬件节点和结果表; 其中, 软件节 点与硬件节点一一对应, 软件结果表与硬件结果表一一对应。 路由查找算法软件模块运行在主控中央处理器 (CentralProcessingUnit, 简称为 CPU) 上, 可以是高级算法语言 (如 C、 C++语言) 编写的 B-Tree算法软件程序。 它 以路由条目更新指令 (插入、 删除路由条目指令) 作为其输入源, 一般来自于协议平 台或驱动。 当一条更新指令下发到路由查找算法软件模块后, 路由查找算法软件模块 对这条更新指令根据 B-Tree算法进行插入操作; 当插入操作完成后, 将本次计算所涉 及的所有改变的树节点按照约定的硬件格式, 通过软硬件交互接口 (可以但不限于 PCIe接口、 LocalBus接口等) 一次性写入路由更新接口模块中。 图 3是根据本发明实施例的路由查找装置的硬件相关模块的架构示意图, 如图 3 所示, 硬件相关模块包括路由更新接口模块和路由查找硬件模块, 其中, 路由更新接 口模块 (201 ) 是一个硬件模块, 分为缓存区 (202) 和逻辑处理区 (203 ) 两个部分, 如图 3所示。 缓存区设置为存放算法模块写入的更新节点信息; 逻辑处理区根据路由 查找硬件模块的相关状态, 按照一定的顺序, 将更新节点数据写入硬件模块的节点内 存中。 下面进行具体介绍。 缓存区(302)主要功能是接收存储软件通过软硬件交互接口写入的节点数据、 结 果表条目数据和它们对应的硬件地址, 这些数据已经由软件转换为硬件数据格式。 逻辑处理区(303 )主要功能是根据路由查找硬件模块中的查找逻辑子模块(305 ) 的实际工作状态, 将缓存区的条目更新内容实时更新至路由查找硬件模块的内存区 (306)。 逻辑处理区的重点在于, B-Tree算法的查找流水具有相关性, 即下一级流水 访问的节点是上一级流水的查找结果中的一部分; 并且因为每一级流水的查找请求时 刻都可能存在, 所以必须在合适的情况下才能对硬件内存进行更新, 否则会造成某些 查找出现不命中甚至是错误命中的情况。 最简单的一种处理方案是: 当出现更新请求 后, 首先由逻辑处理区对查找流水逻辑进行控制, 阻断查找逻辑区(305 )入口的所有 查找请求, 所有查找请求将被暂存在缓存先入先出队列(FirstlnputFirstOutput, 简称为 FIFO) 中; 然后等到查找流水逻辑中所有级的查找请求都被完全响应并返回结果, 此 时查找逻辑中不再存在任何查找请求数据包; 再将需要更新的节点和结果表从缓存区 读出, 根据相应的硬件地址写入内存区 (306 ); 最后完成所有更新操作后, 解除对查 找逻辑区的查找请求的锁定, 查找逻辑恢复正常工作状态。 基于上述对路由查找装置的硬件相关模块 (路由更新接口模块) 的描述, 本实施 例提供了一种优选实施方式, 即路由更新接口模块包括: 缓存子模块, 设置为通过上 述软硬件交互接口接收节点数据和结果表条目数据以及对应的硬件映射地址; 逻辑处 理子模块, 设置为根据上述路由查找硬件模块的上述查找逻辑子模块的工作状态, 将 上述缓存子模块的条目更新内容更新至上述路由查找硬件模块的上述内存子模块中。 下面对对路由查找装置的另一个硬件相关模块 (路由查找硬件模块) 进行介绍, 路由查找硬件模块(304)为硬件路由查找的主要模块, 是流水线设计, 主要由查找逻 辑区 (305 ) 和内存区 (306) 构成。 查找逻辑区(305 )为流水线设计, 每一级流水都有一个查找逻辑, 流水线中的信 息包括: 查找键值信息、 节点地址、 命中信息 (结果表中的地址) 等。 硬件的查找请 求通过硬件接口,首先发送给首级流水,同时获得硬件模块中存放的根节点地址 ( 307); 每一级流水根据节点地址去访问本级相对应的内存区, 获得相应的节点信息 (如果访 问的节点不是本级节点, 则本级不进行查找比较); 再将节点中的键值信息进行解析, 与上一级传递来的查找键值进行比较, 获得比较结果(如果命中, 获得新的命中信息; 如果未命中, 输出未命中), 并获得下一级流水应访问的节点地址; 最后一级流水将根 据命中信息访问一次结果表, 将结果表中相应的命中结果读出, 通过硬件接口返回给 查找请求端。 内存区(306)为分块设计, 每一块对应一条流水。 每一级流水对应的节点空间由 算法根据实际需求定义, 通过硬件设计固化每一级的内存。根节点地址(307)是一个 寄存器, 由软件模块配置, 标记树结构的根节点位置。 结果表(309)对应最后一级查 找流水, 由算法根据实际的表项容量需求定义结果表的实际占用空间。 内存区 (306) 是路由查找硬件模块中重要的数据存储区域, 其结构如图 3所示。 内存区中的各内存区域相互独立, 分为根节点地址区域、各层节点区域和结果表区域。 根结点地址设置为存储 B-Tree树结构的根节点的地址, 它可能属于任一层节点。 每一 层树节点区域都相互独立, 不与其他树节点区域交叠,避免查找流水同时访问的冲突。 结果表区域也是一块独立区域, 由查找流水的最后一级访问。 根节点地址可以用寄存器实现; 各层节点和结果表区域可以根据所需实际内存空 间大小和访问延时要求用静态随机访问存储器 (SRAM )、 动态随机访问存储器 (DRAM) 或其它类型存储器实现。 基于上述对路由查找装置的硬件相关模块 (路由查找硬件模块) 的描述, 本实施 例提供了一种优选实施方式, 即路由查找硬件模块可以包括: 查找逻辑子模块, 设置 为在接收到上述硬件系统的路由查找请求后, 将查找键值信息和内存子模块中的根节 点地址信息发送至树结构的第一级查找流水中; 
然后判断上述根节点地址是否为本级 节点; 如果是, 则向上述内存子模块发起读节点请求, 等待上述内存子模块返回节点 信息; 如果不是, 则保留当前节点信息不变; 还设置为根据比较上述节点信息与键值, 判断路由条目是否命中; 如果命中, 则将命中的路由条目对应的结果记录替代之前的 命中结果; 内存子模块, 包括多个独立内存空间, 分别与上述树结构的每个查找逻辑 层相对应。 在图 3中可以看出, 查找逻辑区是硬件查找流水的主要实现模块, 查找流水共有 N+1级, 其中 LV_1至1^_^^为树节点查找逻辑层, LV_N+1为结果表查找逻辑层。每 一个查找逻辑层都对应了内存区中的一块独立内存空间, 保证各层对内存区的访问不 会相互冲突。 下面对查找逻辑区的查找流程进行介绍, 图 4是根据本发明实施例的硬 件查找流水的流程示意图,如图 4所示,该流程包括以下步骤(步骤 S401-步骤 S410): 步骤 S401 , 路由查找硬件模块收到输入的查找请求。 步骤 S402,将查找键值信息和内存区中的根节点地址信息同时发送至第一级查找 流水中。 步骤 S403 , 在流水的逻辑中首先对根节点地址是否为本级节点进行判断, 如果是 本级节点, 则向内存发起读节点请求, 等待内存返回节点数据; 如果不是本级节点, 则不向内存发起请求, 保留当前节点信息不改变。 步骤 S404, 将获得的节点信息进行解析, 并与键值进行比较。 步骤 S405, 如果未访问本级节点, 则不需要进行解析比较, 打拍等待即可。 比较 的结果如果存在命中, 则将命中的路由条目对应的结果记录, 覆盖之前命中的结果。 步骤 S406, 如果未访问本级节点, 则不需要进行记录, 打拍等待。 如果进行了比 较, 则从比较结果中获取下一级应访问的节点地址记录在节点地址中; 如果未访问本 级节点, 保留当前节点信息不改变。 步骤 S407, 查找流水携带键值信息、节点地址和命中结果进入下一级流水处理逻 辑。 步骤 S408, 如果下一级流水不是最后一级, 则过程与上述步骤 S403-步骤 S407 流程相同, 如果下一级流水是最后一级, 则为访问结果表的处理逻辑。 步骤 S409, 在结果表流水级中, 逻辑将根据命中结果 (结果表条目地址)访问内 存区的结果表, 获得命中实际结果; 如果未命中, 则不访问结果表, 输出未命中信息。 步骤 S410,将命中的实际结果通过硬件接口返回,如果未命中,返回未命中信息。 下面通过具体实施例进行详细介绍。 本实施例包括对路由前缀的插入和删除流程的描述, 假定 B-Tree的阶数 M为 3, 每个树节点中存在最多 2组前缀和 3个子节点指针。 图 5 是根据本发明实施例的路由前缀插入流程示意图, 如图 5 所示, 初始情况 ( 501 ), 路由表为空, 在路由查找硬件模块的内存区中实际不存在任何前缀, 图中使 用了一个默认的空节点作为示意, 处于查找流水的最底层 LV_N。 此时根节点地址寄 存器可以指向默认空节点地址。 首先插入 4-bit前缀 1001, 如 502所示。 该前缀被插入初始节点的第一个空位, 标记前缀长度的 Vector矢量的第 4比特位为 1, 写为 0001, 表示前缀长度为 4。 同时 将对应的结果存放在结果表地址 A1中, 并将 A1保存在节点中。 接着插入前缀 1000, 如 503所示。 因为 1000小于 1001, 所以该前缀被插入初始 节点的第一个空位, 标记前缀长度的 Vector矢量为 0001, 同时将对应的结果存放在结 果表地址 A2中。 插入前缀 1110, 如 504所示。 因为此时节点满了, 所以需要进行节点分裂, 产生 一个新的根节点和一个新的兄弟节点。 新的根节点位于 LV_N-1层, 新的兄弟节点位 于 LV_N 层。 根节点中存放的是前缀 1001/0001/A1 , 它的左子节点存放的是前缀 1000/0001/A2, 右子节点存放的是前缀 1110/0001/A3。 此时树结构上层一层, 需要将 根节点地址更新指向新的根节点 (1001所在的节点)。 插入前缀 10**, 如 505所示。 因为 10**是根节点中前缀 1001的父前缀, 我们采 用的算法要保持父前缀合并在最上层的子前缀中, 因此将 10**在前缀 1001 的 Vector 上进行标记。 因为 10**前缀长度为 2, 因此标记 1001的 Vector的第 2比特位为 1, 此 时 1001的 Vector变为 0101, 同时表示了两种前缀长度 (2和 4)。 将 10**对应的结果 也存放在 A1地址中, 此时 A1地址中存在两个查找结果。 图 6是根据本发明实施例的路由前缀删除流程示意图, 如图 6所示, 以上述已经 构建的最终插入结果为删除流程的初始状态 (601 )。 首先删除 1000, 如 602所示。 删除节点中的 1000前缀和结果表中对应地址 A2 的结果内容。 因为删除 1000 导致节点变空, 需要进行节点合并操作。 操作结果是 LV_N-1层的根节点被删除,剩下 LV_N层的一个节点作为根节点,其中依次存放的是 前缀 1001/0101/A1和 1110/0001/A3。 此时因为根节点发生改变, 因此需要将根节点地 址指向新的根节点。 接着删除 1001, 如 603所示。 将节点中的前缀 1001删除, 但由于该前缀中还保 存在他的父前缀 10**, 因此不能将该前缀位置空出, 而是将该前缀位置的前缀改写为 1000, Vector改写为 0100 (表示前缀长度为 2的一个前缀), 并将 A1地址中 1000对 应的结果删除。 删除 10**, 如 604所示。 将节点中的前缀 1000/0100/A1删除, 并删除结果表 A1 地址的结果。 将前缀 1110/0001/A3移动到节点的第一个位置。 最后删除 1110, 如 605所示。 将节点中的前缀 1110/0001/A3删除, 并删除结果表 A3地址的结果。 此时路由表变为空。 本实施例中涉及的数据格式主要分为节点数据格式、 结果表数据格式。 节点数据 中包括: 键值、 标记矢量、 结果表地址、 下一级指针四个部分。 键值为路由表中的路 由前缀;标记矢量为上述 B-Tree算法中的标记同一前缀中不同前缀长度条目的标记矢 量; 结果表地址为本路由条目对应的查找结果在结果表的地址信息; 下一级指针是经 过本级查找后, 下一级应访问的节点地址。 本实施例中涉及的硬件查找结构为自底向上生长的 B-Tree树结构。首先根据实际 路由表容量需求和延时需求定义出 B-Tree的阶数 M和 B-Tree需要的最大高度 N, 按 照最大高度设定相应的流水级数 N+1 ; 然后根据算法填充情况定义出每层树结构最大 的节点数目需求, 按照最大节点数目设定相应的内存空间; 最后根据路由表实际容量 需求设定结果表的内存空间。 所谓自底向上生长, 就是 B-Tree的动态插入过程中, 首个节点分配在节点层的最 底层 LVn (209), 同时由于首个节点为根节点, 需要将根节点的地址写入根节点地址 内存中 (207)。 当更新路由条目过程中根节点发生改变时, 需要软件同步更新根节点 地址, 保证查找流水正常工作。 基于上述分析, 本发明提供了一种路由查找方法, 图 7是根据本发明实施例的路 由查找方法的流程图, 如图 7所示, 该流程包括以下步骤 (步骤 S702-步骤 S704): 步骤 S702, 定义 B-Tree树结构的阶数 M和最大高度 N、 每层最大节点数目和结 果表条目数, 进而确定硬件相应的流水级数、 每层节点占用空间和结果表占用空间, 进而构建 B-Tree树结构; 步骤 S704, 基于上述 B-Tree树结构, 执行路由查找操作。 基于上述分析,本发明提供了一种硬件查找结构为自底向上生长的 B-Tree树结构 的构建方法, 图 8是根据本发明实施例的硬件查找结构为自底向上生长的 B-Tree树结 构的构建方法的流程图, 如图 8所示, 该流程包括以下步骤 (步骤 S802-步骤 S804): 步骤 S802, 根据实际路由表容量需求和延时需求定义出 B-Tree树结构的阶数 M 和 B-Tree树结构所需要的最大高度 N,按照上述最大高度 N设定相应的流水级数 N+1 ; 步骤 S804, 根据算法填充情况定义出 B-Tree树结构的每层最大节点数目, 按照 
上述每层最大节点数目设定相应的内存空间; 根据路由表实际容量需求设定结果表的 内存空间。 在本发明的技术方案中, 路由查找装置包括路由查找算法软件模块、 路由更新接 口模块和路由查找硬件模块三部分。 图 9是根据本发明实施例的路由查找装置的运行 示意图, 如图 9所示, 路由查找算法软件模块完成对路由条目的软件计算和更新条目 指令下发, 路由更新接口模块在接收了更新条目指令之后, 根据路由查找硬件模块的 实际工作状态, 控制路由查找硬件模块的数据流, 并将更新条目写入路由查找硬件模 块的存储器中, 路由查找硬件模块为流水线型架构, 响应硬件系统的路由查找请求, 并返回最长前缀匹配的查找结果。 本发明的具体实施方法分为如下三步:第一步根据实际需求确定 B-Tree的树结构 阶数 M、 树结构高度 N、 每层节点的数目和结果表条目数, 从而确定硬件相应的流水 级数、 每层节点占用空间和结果表占用空间。 第二步根据第一步的框架来设计路由算 法软件模块、 路由更新接口模块和路由查找硬件模块。 第三步为将将第二步完成的各 模块根据实际应用场景组合成路由查找系统, 并连接入路由转发系统中。 其中的路由查找算法软件模块可以运行在路由查找系统的主控 CPU上,硬件相关 模块 (路由更新接口模块和路由查找硬件模块) 可以在现场可编程门阵列 ( Field-Programmable Gate Array, 简称为 FPGA) 或 (Application Specific Integrated Circuit,简称为 ASIC)芯片中实现,并通过实际应用场景中的硬件数据接口进行连接, 例如路由转发系统中常用的 Intei aken-Lookaside接口。 软件模块和硬件模块之间的连 接可以但不限于通过标准的 LocalBus或 PCIe接口进行连接。 从以上的描述中可以看出, 本发明的技术方案从整体上设计了路由查找系统的软 件算法模块、 硬件数据结构、 更新流程、 查找流水结构和内存结构, 能够满足大容量 路由表的高性能查找需求, 能够实现硬件流水操作, 流水级数少, 并且容量对路由前 缀分布不敏感。 尽管为示例目的, 已经公开了本发明的优选实施例, 本领域的技术人员将意识到 各种改进、 增加和取代也是可能的, 因此, 本发明的范围应当不限于上述实施例。 工业实用性 本发明实施例提供的技术方案可以应用于网络交换领域, 从整体上设计了路由查 找系统的软件算法模块、 硬件数据结构、 更新流程、 查找流水结构和内存结构, 能够 满足大容量路由表的高性能查找需求, 能够实现硬件流水操作, 流水级数少, 并且容 量对路由前缀分布不敏感。

Claims

权 利 要 求 书 、 一种路由查找装置, 所述装置包括: 路由查找算法软件模块、 路由更新接口模 块和路由查找硬件模块, 其中,
所述路由查找算法软件模块, 设置为执行对路由条目的软件计算和更新条 目指令的下发; 所述路由更新接口模块, 设置为在接收到所述路由查找算法软件模块下发 的更新条目指令后, 根据所述路由查找硬件模块的实际工作状态, 控制所述路 由查找硬件模块的数据流, 并将更新条目写入所述路由查找硬件模块的存储器 中;
所述路由查找硬件模块, 设置为响应硬件系统的路由查找请求, 并将最长 前缀匹配的查找结果返回至所述硬件系统; 其中, 所述路由查找硬件模块为流 水线型架构。 、 如权利要求 1所述的装置, 其中, 所述路由查找算法软件模块包括: 插入操作子模块, 设置为响应路由转发系统的路由插入指令, 并将所述路 由条目插入到 B-Tree树结构中; 删除操作子模块, 设置为响应所述路由转发系统的路由删除指令, 并将所 述路由条目在所述 B-Tree树结构中删除; 软件表项内存管理子模块, 设置为对路由查找算法中的节点数据和结果表 条目数据进行管理; 更新硬件操作子模块, 设置为将在所述插入操作子模块的插入操作或所述 删除操作子模块的删除操作中涉及到改变的 B-Tree树节点和结果表条目记录 在缓存中, 并在插入操作或删除操作结束后, 将树节点从软件数据格式转换成 硬件数据格式, 并将转换后的硬件数据和对应的硬件映射地址通过软硬件交互 接口连续写入所述路由更新接口模块中。 、 如权利要求 1所述的装置, 其中, 所述路由查找硬件模块包括: 查找逻辑子模 块、 内存子模块, 其中, 所述查找逻辑子模块, 设置为在接收到所述硬件系统的路由查找请求后, 将查找键值信息和所述内存子模块中的根节点地址信息发送至树结构的第一级 查找流水中; 然后判断所述根节点地址是否为本级节点; 如果是, 则向所沭内 存子模块发起读节点请求, 等待所述内存子模块返回节点信息; 如果不是, 则 保留当前节点信息不变; 还设置为根据比较所述节点信息与键值, 判断路由条 目是否命中; 如果命中, 则将命中的路由条目对应的结果记录替代之前的命中 结果;
所述内存子模块, 包括多个独立内存空间, 分别与所述树结构的每个查找 逻辑层相对应。 、 如权利要求 3所述的装置, 其中, 所述路由更新接口模块包括: 缓存子模块, 设置为通过所述软硬件交互接口接收节点数据和结果表条目 数据以及对应的硬件映射地址;
逻辑处理子模块, 设置为根据所述路由查找硬件模块的所述查找逻辑子模 块的工作状态, 将所述缓存子模块的条目更新内容更新至所述路由查找硬件模 块的所述内存子模块中。 、 如权利要求 2所述的装置, 其中, 所述插入操作子模块包括: 树结构管理单元,设置为对新条目在所述 B-Tree树结构中的访问顺序进行 控制;
节点解析及比较单元, 设置为与所述树结构管理单元共同控制新条目在所 述 B-Tree树结构中的有序查询, 并找到插入位置; 节点分裂操作单元, 设置为在新条目需要插入到已满节点时, 对该已满节 点进行分裂操作;
结果表更新单元, 设置为将新条目携带的结果信息存入结果表, 并将该结 果信息的地址记录在所述新条目所在节点的对应位置中。 、 如权利要求 2所述的装置, 其中, 所述软件表项内存管理子模块包括: 软件节点管理单元, 设置为管理路由查找算法中的节点分配, 通过内存管 理算法迅速分配和管理软件节点; 软件结果表管理单元, 设置为管理路由查找算法中的结果表的条目地址分 配, 通过内存管理算法快速分配和管理结果表;
硬件地址映射管理单元,设置为将硬件的实际内存空间进行软件地址映射, 通过软件的内存管理算法管理硬件节点和结果表; 其中, 软件节点与硬件节点 一一对应, 软件结果表与硬件结果表一一对应。 一种路由查找方法, 包括:
定义 B-Tree树结构的阶数 M和最大高度 N、 每层最大节点数目和结果表 条目数,进而确定硬件相应的流水级数、每层节点占用空间和结果表占用空间, 进而构建 B-Tree树结构;
基于所述 B-Tree树结构, 执行路由查找操作。 一种硬件查找结构为自底向上生长的 B-Tree树结构的构建方法, 所述方法包 括:
根据实际路由表容量需求和延时需求定义出 B-Tree树结构的阶数 M和 B-Tree树结构所需要的最大高度 N,按照所述最大高度 N设定相应的流水级数 N+1 ;
根据算法填充情况定义出 B-Tree树结构的每层最大节点数目,按照所述每 层最大节点数目设定相应的内存空间; 根据路由表实际容量需求设定结果表的 内存空间。
PCT/CN2014/078055 2013-09-09 2014-05-21 路由查找方法及装置、B-Tree树结构的构建方法 WO2015032216A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/917,773 US9871727B2 (en) 2013-09-09 2014-05-21 Routing lookup method and device and method for constructing B-tree structure

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310408340.3 2013-09-09
CN201310408340.3A CN104426770A (zh) 2013-09-09 2013-09-09 路由查找方法及装置、B-Tree树结构的构建方法

Publications (1)

Publication Number Publication Date
WO2015032216A1 true WO2015032216A1 (zh) 2015-03-12

Family

ID=52627771

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/078055 WO2015032216A1 (zh) 2013-09-09 2014-05-21 路由查找方法及装置、B-Tree树结构的构建方法

Country Status (3)

Country Link
US (1) US9871727B2 (zh)
CN (1) CN104426770A (zh)
WO (1) WO2015032216A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106789672A (zh) * 2017-01-18 2017-05-31 北京经纬恒润科技有限公司 一种报文路由处理方法及装置
CN111881064A (zh) * 2020-07-24 2020-11-03 北京浪潮数据技术有限公司 一种全闪存储系统中访问请求的处理方法、装置及设备

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794196B (zh) * 2015-04-21 2018-07-31 浙江大学 一种树形结构数据的采集和更新方法
US9992097B2 (en) * 2016-07-11 2018-06-05 Cisco Technology, Inc. System and method for piggybacking routing information in interests in a content centric network
CN107204926B (zh) * 2017-05-16 2021-06-11 上海博达数据通信有限公司 预处理cache的路由快速查找方法
CN108132893A (zh) * 2017-12-06 2018-06-08 中国航空工业集团公司西安航空计算技术研究所 一种支持流水的常量Cache
CN108200221B (zh) * 2017-12-25 2021-07-30 北京东土科技股份有限公司 一种网络地址转换环境中转换规则同步方法及装置
CN109657456B (zh) * 2018-12-12 2022-02-08 苏州盛科通信股份有限公司 Tcam验证方法及系统
US11240355B2 (en) * 2019-05-17 2022-02-01 Arista Networks, Inc. Platform agnostic abstraction for forwarding equivalence classes with hierarchy
US11645428B1 (en) * 2020-02-11 2023-05-09 Wells Fargo Bank, N.A. Quantum phenomenon-based obfuscation of memory
CN112134805B (zh) * 2020-09-23 2022-07-08 中国人民解放军陆军工程大学 基于硬件实现的快速路由更新电路结构及更新方法
CN114911728A (zh) * 2021-02-07 2022-08-16 华为技术有限公司 一种数据查找方法、装置及集成电路
US11700201B2 (en) 2021-07-26 2023-07-11 Arista Networks, Inc. Mechanism to enforce consistent next hops in a multi-tier network
CN114238343B (zh) * 2021-12-23 2022-10-28 南京华飞数据技术有限公司 基于大数据的多维度可变性自动化造数据模型的实现方法
CN117494587B (zh) * 2023-12-29 2024-04-09 杭州行芯科技有限公司 芯片封装结构的空间关系管理方法、电子设备及存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101009656A (zh) * 2006-01-25 2007-08-01 三星电子株式会社 路由系统及其管理规则条目的方法
US7260096B2 (en) * 2002-07-09 2007-08-21 International Business Machines Corporation Method and router for forwarding internet data packets
CN101834802A (zh) * 2010-05-26 2010-09-15 华为技术有限公司 转发数据包的方法及装置

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5261088A (en) * 1990-04-26 1993-11-09 International Business Machines Corporation Managing locality in space reuse in a shadow written B-tree via interior node free space list
JP2001191567A (ja) 1999-11-04 2001-07-17 Fuji Photo Film Co Ltd 記録方法及び記録装置
JP2003516666A (ja) * 1999-12-10 2003-05-13 モサイド・テクノロジーズ・インコーポレイテッド 最長一致アドレスルックアップのための方法および装置
US6934252B2 (en) * 2002-09-16 2005-08-23 North Carolina State University Methods and systems for fast binary network address lookups using parent node information stored in routing table entries
US7551609B2 (en) * 2005-10-21 2009-06-23 Cisco Technology, Inc. Data structure for storing and accessing multiple independent sets of forwarding information
KR100888261B1 (ko) * 2007-02-22 2009-03-11 삼성전자주식회사 뱅크 id를 이용할 수 있는 메모리 서브 시스템과 그 방법
CN101286935A (zh) * 2008-05-07 2008-10-15 中兴通讯股份有限公司 一种基于ip地址范围的路由查找方法
KR101699779B1 (ko) * 2010-10-14 2017-01-26 삼성전자주식회사 플래시 메모리의 색인 방법

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7260096B2 (en) * 2002-07-09 2007-08-21 International Business Machines Corporation Method and router for forwarding internet data packets
CN101009656A (zh) * 2006-01-25 2007-08-01 三星电子株式会社 路由系统及其管理规则条目的方法
CN101834802A (zh) * 2010-05-26 2010-09-15 华为技术有限公司 转发数据包的方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TAN, MINGFENG ET AL.: "An IPv6 Routing Lookup Algorithm for Large Route Tables Based on Range Representation B-tree", JOURNAL OF NATIONAL UNIVERSITY OF DEFENSE TECHNOLOGY, vol. 27, no. 5, 31 May 2005 (2005-05-31) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106789672A (zh) * 2017-01-18 2017-05-31 北京经纬恒润科技有限公司 一种报文路由处理方法及装置
CN106789672B (zh) * 2017-01-18 2020-08-04 北京经纬恒润科技有限公司 一种报文路由处理方法及装置
CN111881064A (zh) * 2020-07-24 2020-11-03 北京浪潮数据技术有限公司 一种全闪存储系统中访问请求的处理方法、装置及设备

Also Published As

Publication number Publication date
US20160294693A1 (en) 2016-10-06
CN104426770A (zh) 2015-03-18
US9871727B2 (en) 2018-01-16

Similar Documents

Publication Publication Date Title
WO2015032216A1 (zh) 路由查找方法及装置、B-Tree树结构的构建方法
JP6356675B2 (ja) 集約/グループ化動作:ハッシュテーブル法のハードウェア実装
JP5529976B2 (ja) 高速ipルックアップのためのシストリック・アレイ・アーキテクチャ
CN100550847C (zh) 一种解决Hash冲突的方法及装置
US8208408B2 (en) Tree-based node insertion method and memory device
US8295286B2 (en) Apparatus and method using hashing for efficiently implementing an IP lookup solution in hardware
US8868926B2 (en) Cryptographic hash database
US7805427B1 (en) Integrated search engine devices that support multi-way search trees having multi-column nodes
CN110808910B (zh) 一种支持QoS的OpenFlow流表节能存储架构及其方法
US20120215752A1 (en) Index for hybrid database
CN110119425A (zh) 固态驱动器、分布式数据存储系统和利用键值存储的方法
US9841913B2 (en) System and method for enabling high read rates to data element lists
JP3570323B2 (ja) アドレスに関するプレフィクスの格納方法
US20170364291A1 (en) System and Method for Implementing Hierarchical Distributed-Linked Lists for Network Devices
WO2020125630A1 (zh) 文件读取
WO2015032214A1 (zh) 一种同时支持IPv4和IPv6的高速路由查找方法及装置
US20170017419A1 (en) System And Method For Enabling High Read Rates To Data Element Lists
CN102736986A (zh) 一种内容可寻址存储器及其检索数据的方法
CN103077099B (zh) 一种块级快照系统及基于该系统的用户读写方法
US8539135B2 (en) Route lookup method for reducing overall connection latencies in SAS expanders
CN113377689A (zh) 一种路由表项查找、存储方法及网络芯片
US7953721B1 (en) Integrated search engine devices that support database key dumping and methods of operating same
CN113138859A (zh) 一种基于共享内存池的通用数据存储方法
WO2006000138A1 (fr) Tampon et procede associe
KR102101419B1 (ko) 라우팅 테이블 검색 방법 및 이를 구현하는 메모리 시스템

Legal Events

121 (EP): The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 14842939; country of ref document: EP; kind code of ref document: A1.
NENP: Non-entry into the national phase. Ref country code: DE.
WWE (WIPO information: entry into national phase): Ref document number: 14917773; country of ref document: US.
122 (EP): PCT application non-entry in European phase. Ref document number: 14842939; country of ref document: EP; kind code of ref document: A1.