CN104679854B - Routing table storage and lookup method - Google Patents

Routing table storage and lookup method

Info

Publication number
CN104679854B
CN104679854B (Application CN201510081214.0A)
Authority
CN
China
Prior art keywords
layer
routing table
hop
node
trie trees
Prior art date
Legal status
Active
Application number
CN201510081214.0A
Other languages
Chinese (zh)
Other versions
CN104679854A (en)
Inventor
Tong Yang (杨仝)
Gaogang Xie (谢高岗)
Current Assignee
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN201510081214.0A priority Critical patent/CN104679854B/en
Publication of CN104679854A publication Critical patent/CN104679854A/en
Application granted granted Critical
Publication of CN104679854B publication Critical patent/CN104679854B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention provides a routing table storage and lookup method. The storage method includes: placing the data structures of levels 0 to n of a routing-table trie that are used to determine prefix length in on-chip memory; placing the data structures of levels 0 to m that are used to look up the next hop in off-chip memory; and placing the data structures of levels n+1 to m that are used to determine prefix length in off-chip memory; where m+1 is the number of levels of the routing-table trie, n < m, and both are positive integers. For lookup in a single routing table, the present invention achieves O(1) lookup time and O(1) on-chip memory simultaneously; for lookup in virtual routing tables, it not only guarantees O(1) lookup time and O(1) on-chip memory, but is also independent of the number, characteristics and size of the virtual routing tables, so that periodic upgrading or replacement of routers can be avoided.

Description

Routing table storage and lookup method
Technical field
The present invention relates to the technical field of computer networks, and more particularly to a routing table storage and lookup method.
Background technology
Routers are the core equipment of the Internet, and their forwarding performance directly affects the performance of the whole Internet. Most of a router's forwarding time is spent on routing-table lookup, so routing-table lookup has become a classical research problem.
Many routing-table lookup methods exist, divided into software methods and hardware methods. Classical examples include the Lulea algorithm, the PBF algorithm, the LC-trie algorithm used in the Linux kernel, and the Tree Bitmap algorithm used in Cisco routers. However, existing software methods either sacrifice update speed to accelerate lookup, or focus only on updates and cannot meet the lookup-speed requirements of backbone links; existing hardware methods can achieve high-speed lookup but are often hard to update quickly, and, more importantly, their high power consumption and high cost severely constrain the practical deployment of hardware algorithms.
In addition, a major problem of existing routers is aging: as the routing table grows rapidly (by roughly 15% per year), a deployed router either can no longer hold the routing table or becomes slower and slower at lookup, so routers have to be periodically upgraded or replaced. Specifically, every routing-table lookup algorithm, actively or passively, splits a carefully designed data structure between on-chip and off-chip memory. As the routing table keeps growing, however, less and less of the data structure fits in on-chip memory; unless the on-chip memory is upgraded, the lookup speed keeps degrading.
Only by achieving O(1) lookup time and O(1) on-chip memory at the same time can the performance of a router be made independent of changes in the routing table, so that periodic upgrading or replacement of routers can be avoided. However, no prior-art method achieves O(1) lookup time and O(1) on-chip memory simultaneously.
Summary of the invention
To solve the above problems, the present invention provides a routing table storage method, including:
placing the data structures of levels 0 to n of a routing-table trie that are used to determine prefix length in on-chip memory;
placing the data structures of levels 0 to m that are used to look up the next hop in off-chip memory; and
placing the data structures of levels n+1 to m that are used to determine prefix length in off-chip memory; where m+1 is the number of levels of the routing-table trie, n < m, and n and m are positive integers.
In the above method, for an IPv4 routing-table trie, n = 24.
In the above method, the data structures of levels 0 to n of the routing-table trie that are used to determine prefix length are n+1 bitmaps, each bitmap corresponding to one level of the trie; and the data structures of levels 0 to n that are used to look up the next hop are n+1 arrays, each array corresponding to one level of the trie. The bitmap corresponding to each of levels 0 to n indicates the prefixes present at that level of the trie, and indicates the position of each prefix's next-hop information in the array corresponding to that level.
In the above method, the bitmap and the array corresponding to level i each have 2^i entries, with i = 0, ..., n. In the bitmap corresponding to level i, each 1 bit marks one prefix of that level of the trie, and the remaining bits are 0; in the array corresponding to level i, the element at the same position as a prefix's 1 bit in the level-i bitmap holds the next-hop port number of that prefix.
In the above method, the method further includes performing a pivot push on the internal nodes of level n according to the following steps:
For an internal node of level n of the routing-table trie that has next-hop information: if it has only one child node, its other child node at level n+1 is generated and the next-hop information of the internal node is used as the next-hop information of the generated child; if it has two child nodes, the next-hop information of the internal node is assigned to any child whose next hop is empty; the next hop of the internal node is then set to empty.
For an internal node of level n of the routing-table trie that has no next-hop information: the next-hop information of its nearest ancestor that has next-hop information is obtained; if the node has only one child node, its other child node at level n+1 is generated and the ancestor's next-hop information is used as the next-hop information of the generated child; if it has two child nodes, the ancestor's next-hop information is assigned to any child whose next hop is empty.
In the above method, for an internal node or leaf node of level n of the routing-table trie after the pivot push, the corresponding bit of the level-n bitmap is 1 and the other bits are 0; for a prefix indicated by an internal node of level n, the corresponding element of the level-n array is set to 0, and the other elements of that array remain the same as before the pivot push.
In the above method, for an IPv4 routing-table trie, the trie is pushed to levels 6, 12, 18, 24 and 32, and a pivot push is performed on the internal nodes of level 24.
In the above method, for an IPv4 routing-table trie, the trie is pushed to levels 16 and 24, and leaf pushing is performed.
In the above method, for multiple IPv4 routing-table tries, the tries are merged and leaf pushing is performed to obtain an overlay trie; the overlay trie is then pushed to levels 16 and 24, and leaf pushing is performed.
In the above method, the data structures of levels n+1 to m that are used to determine prefix length and to look up the next hop are hash tables or offset arrays.
According to one embodiment of the present invention, a routing table lookup method based on the above routing table storage method is also provided, including:
Step 1): starting from the bitmap corresponding to level n, locate a position in the bitmap according to the first n bits of the IP address; if the bit at that position is 0, set n = n-1 and repeat the locating process until the located bit in some bitmap is 1, then look up the array corresponding to the current level at that position to obtain the next-hop port number; if the bit is 1, perform step 2);
Step 2): look up the array corresponding to level n at that position; if the corresponding element is non-zero, it gives the next-hop port number; otherwise, search the off-chip offset array or hash table to obtain the next-hop port number.
For lookup in a single routing table, the present invention achieves O(1) lookup time and O(1) on-chip memory simultaneously; for lookup in virtual routing tables, it not only guarantees O(1) lookup time and O(1) on-chip memory, but is also independent of the number, characteristics and size of the virtual routing tables, so that periodic upgrading or replacement of routers can be avoided.
Brief description of the drawings
The following drawings are only schematic illustrations and explanations of the present invention and are not intended to limit its scope, in which:
Fig. 1 is a schematic diagram of the two-dimensional splitting of a routing table according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the SAIL_B method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the SAIL_L method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the SAIL_M method according to an embodiment of the present invention.
Detailed description of the embodiments
In order that the technical features, objects and effects of the present invention may be understood more clearly, embodiments of the present invention are now described with reference to the drawings.
According to one embodiment of the present invention, a routing table storage method is provided.
In general, the method includes: placing the data structures of levels 0 to n of the routing-table trie that are used to determine prefix length in on-chip memory; placing the data structures of levels 0 to m that are used to look up the next hop in off-chip memory; and placing the data structures of levels n+1 to m that are used to determine prefix length in off-chip memory; where n < m, n is a positive integer, and m+1 is the number of levels of the routing-table trie.
The routing table storage method is described in more detail below with reference to Fig. 1. The method includes the following steps:
Step 1: split the routing table in two dimensions.
1. Splitting in the first dimension
Splitting in the first dimension splits the routing-table lookup process: the length of the matching prefix is found first, so that longest-prefix matching becomes exact matching; the next hop is then looked up according to the length of the matching prefix.
2. Splitting in the second dimension
Splitting in the second dimension splits the trie built from the routing table (for example, a trie with m+1 levels), i.e. the trie is split between level n and level n+1 (0 < n < m).
In one embodiment, an IPv4 routing-table trie with levels 0-32 is preferably split between level 24 and level 25, yielding levels 0-24 and levels 25-32. The reason for splitting between levels 24 and 25 is that, according to the inventors' tests on real traffic and routing tables, almost 100% of the traffic hits levels 0-24.
Unless otherwise specified, the following embodiments take an IPv4 routing-table trie split between levels 24 and 25 as an example.
Step 2: store the data structures in on-chip and off-chip memory respectively.
After the two-dimensional splitting, the data structures of levels 0 to n that are used to determine prefix length are placed in on-chip memory, and the other data structures are placed in off-chip memory.
As shown in Fig. 1, in the embodiment of an IPv4 routing-table trie, the data structures of levels 0-24 that are used to determine prefix length are placed in on-chip memory; the data structures of levels 0-32 that are used to look up the next hop, and the data structures of levels 25-32 that are used to determine prefix length, are placed in off-chip memory.
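For illustration only, the placement described above can be pictured with the following C declarations; the names, the types and the use of pointer arrays are assumptions made for this sketch and are not part of the claimed method.

#include <stdint.h>

#define SPLIT_LEVEL 24   /* n: level at which the second-dimension split is made */
#define MAX_LEVEL   32   /* m: deepest level of the IPv4 routing-table trie      */

/* On-chip memory (e.g. SRAM): prefix-length bitmaps for levels 0..24.
 * bitmap[i] points to a bit string of 2^i bits, one bit per level-i trie node. */
uint8_t *bitmap[SPLIT_LEVEL + 1];

/* Off-chip memory (e.g. DRAM): next-hop (port) arrays for levels 0..32.
 * port[i] points to 2^i next-hop entries, one per level-i trie node. */
uint8_t *port[MAX_LEVEL + 1];

/* Off-chip memory: prefix-length structure for levels 25..32,
 * implemented as a hash table or an offset array as described below. */
void *deep_prefix_index;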
Four embodiments of the on-chip and off-chip data structures, referred to below as the SAIL_B, SAIL_U, SAIL_L and SAIL_M methods, are described with reference to Figs. 2 to 4:
1. SAIL_B method
i) On-chip data structure
For the SAIL_B method, the on-chip data structure consists of n+1 bitmaps (for example, n = 24 in the above IPv4 routing-table trie embodiment); each of levels 0 to n of the routing-table trie corresponds to one bitmap, and the bitmap of level i has 2^i bits.
Each prefix in the routing table that has next-hop information corresponds to a node at some level of the routing-table trie (a solid node in Fig. 2) and to a 1 bit in the bitmap corresponding to that level, where the position of the node within its level determines the position of the corresponding 1 bit in that level's bitmap. Each level of the trie may contain several prefix nodes with next-hop information, so the bitmap corresponding to a level may contain several 1s; all other bits of the bitmap are 0.
In the example given in Fig. 2, the routing-table entries whose prefixes 001 and 111 carry next-hop information correspond to two solid nodes at level 3 of the trie, and the bitmap corresponding to level 3 has 2^3 = 8 bits. Those skilled in the art will understand that, in the routing-table trie, prefix 001 corresponds to the 2nd node from the left at level 3 (counting as if the level were full) and prefix 111 corresponds to the 8th node at level 3, so the bitmap corresponding to level 3 is 01000001, read from left to right.
ii) Off-chip data structure
1. The data structures of levels 0 to n of the routing-table trie that are used to look up the next hop can be full arrays: for level i (0 ≤ i ≤ n), an array (port array) of length 2^i is created; for each routing-table entry of that level, if the entry has a next hop, the element at the corresponding position of the array holds the next-hop information, and otherwise the element is 0. Specifically, in the port array of each level, the next-hop information of each routing-table entry is written at the same position as the corresponding 1 bit in that level's bitmap.
For example, in the example shown in Fig. 2, the bitmap corresponding to level 3 is 01000001 and the port array corresponding to level 3 is 0 3 0 0 0 0 0 7, where 3 is the next-hop information of prefix 001 and 7 is the next-hop information of prefix 111; the positions of 3 and 7 in the port array coincide with the positions of prefixes 001 and 111 in the bitmap.
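The indexing rule of this example can be reproduced with a few lines of C. This is a hedged sketch only: the bit numbering (most significant bit first, matching the left-to-right reading above) and the identifiers are assumptions made for illustration.

#include <stdint.h>
#include <stdio.h>

/* Level-3 example from Fig. 2: prefix 001 -> port 3, prefix 111 -> port 7. */
static uint8_t bitmap3 = 0x41;                      /* 01000001, read left to right   */
static uint8_t ports3[8] = {0, 3, 0, 0, 0, 0, 0, 7};

static int lookup_level3(uint8_t prefix_bits)       /* prefix_bits: value of the 3 bits */
{
    int idx = prefix_bits;                          /* 001 -> 1, 111 -> 7              */
    if (bitmap3 & (1u << (7 - idx)))                /* 1 bit => a prefix ends here     */
        return ports3[idx];                         /* same position in the port array */
    return 0;                                       /* 0 = no next hop at this level   */
}

int main(void)
{
    printf("%d %d\n", lookup_level3(1), lookup_level3(7));   /* prints "3 7" */
    return 0;
}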
2. For each of levels n+1 to m (e.g. levels 25-32) of the routing-table trie, the data structure used to determine prefix length and the data structure used to look up the next hop can take the form of a hash table or of an offset array; since the choice has little effect on performance, the user is free to choose either.
In a further embodiment, to avoid a lookup having to search the first n+1 levels (e.g. levels 0-24) and then also search the remaining levels n+1 to m (e.g. levels 25-32), a pivot push may be performed on level n after the trie has been built from the routing table.
With continued reference to Fig. 2, solid nodes in the trie represent prefixes that have a next hop, hollow nodes represent prefixes without a next hop, and the number in a solid node is its next hop. When a pivot push is performed on level 24, both the hollow internal nodes of level 24 (hollow nodes that are not leaves) and the solid internal nodes of level 24 (solid nodes that are not leaves) have to be pushed to level 25. The pivot push of level 24 distinguishes the following two cases:
1) For any solid internal node P of level 24, the two child nodes of P (at level 25) are examined. If only one child exists at level 25, the other child is generated and its next hop is set to the next hop of P. If both children exist at level 25, then for each child whose next hop is empty, the next hop of P is assigned to it; a child whose next hop is not empty is left unchanged. Finally, the next hop of the solid internal node P is set to empty.
2) For any hollow internal node Q of level 24, let h be the next hop of Q's nearest solid ancestor, and examine the two child nodes of Q (at level 25). If only one child exists at level 25, the other child is generated and its next hop is set to h. If both children exist at level 25, then for each child whose next hop is empty, h is assigned to it; a child whose next hop is not empty is left unchanged. Finally, the next hop of the hollow node Q remains empty.
In either case, after the pivot push the next hops of the internal nodes of level 24 (both hollow and solid) are empty, and both children of each such node are solid nodes.
After the pivot push of level 24: for a level-24 trie node that is an internal node or a leaf node, the corresponding bit of the level-24 bitmap is set to 1, and the other bits of that bitmap are set to 0; for a level-24 trie node that is an internal node, the corresponding element of the off-chip level-24 port array is set to 0, and the other elements remain as before the pivot push. Thus, if the bit of the level-24 bitmap is found to be 1 but the corresponding element of the level-24 port array is 0, the next-hop port information lies at levels 25-32. After the pivot push of level 24, the contents stored in the data structures corresponding to level 25 change accordingly.
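A minimal C sketch of the level-24 pivot push follows; the trie-node layout, the helper new_node() and the convention that a next hop of 0 means "empty" are assumptions made for this sketch, and the function is intended to be called only for internal (non-leaf) nodes of level 24.

struct trie_node {
    struct trie_node *child[2];
    int next_hop;                              /* 0 means "no next hop" */
};

/* Assumed helper: allocates a level-25 leaf carrying next hop nh. */
static struct trie_node *new_node(int nh);

static void pivot_push_level24(struct trie_node *v, int ancestor_nh)
{
    /* Case 1 (solid internal node): inherit v's own next hop.
     * Case 2 (hollow internal node): inherit the nearest solid ancestor's next hop. */
    int inherit = v->next_hop ? v->next_hop : ancestor_nh;

    for (int b = 0; b < 2; b++) {
        if (v->child[b] == NULL)               /* generate the missing level-25 child    */
            v->child[b] = new_node(inherit);
        else if (v->child[b]->next_hop == 0)   /* existing child with an empty next hop  */
            v->child[b]->next_hop = inherit;
        /* a child that already has a next hop is left unchanged */
    }
    v->next_hop = 0;                           /* the level-24 internal node becomes empty */
}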
2. SAIL_U method
The SAIL_U method builds on the method described above; its goal is to continue to accelerate lookup while guaranteeing the condition that an update requires only one memory access.
Since a 64-bit processor is generally used, 64 bits can be read and written in one memory access. Based on this observation, and since log2(64) = 6, the trie can be pushed to levels 6, 12, 18, 24 (i.e. multiples of 6) and 32. A pivot push is still performed on level 24, in the same way as in the SAIL_B method. In this way each update affects at most 2^6 bits, which guarantees that an update requires only one memory access.
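The reasoning can be made concrete as follows: because consecutive pushed levels differ by at most 6, a changed prefix expands to at most 2^6 = 64 consecutive bits on the next pushed level, and because the start index of that range is aligned to its length, the whole range falls inside one aligned 64-bit word. The sketch below illustrates such a single-word update; the word layout of the bitmaps is an assumption for illustration.

#include <stdint.h>

/* Rewrites the masked bits of the one 64-bit bitmap word that an update touches.
 * level_bitmap: the affected level's bitmap, stored as aligned 64-bit words;
 * first_bit:    index of the first affected bit; mask/value: the new bit pattern. */
static void update_one_word(uint64_t *level_bitmap, uint32_t first_bit,
                            uint64_t mask, uint64_t value)
{
    uint64_t *word = &level_bitmap[first_bit / 64];   /* the single affected word */
    *word = (*word & ~mask) | (value & mask);         /* one read-modify-write    */
}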
3. SAIL_L method
Referring to Fig. 3, the purpose of the SAIL_L method is to pursue high-speed lookup while keeping the update speed reasonable. If the trie were pushed entirely to level 24, the on-chip memory would be too large and the update cost too high; if too many levels were kept, an on-chip lookup would require too many memory accesses. The trie is therefore pushed to levels 16 and 24. Unlike the SAIL_B and SAIL_U methods, SAIL_L also performs leaf pushing during the push (leaf pushing: a level-i node whose next-hop port is h can be converted into two level-(i+1) child nodes whose ports are h; the children can in turn be converted in the same way, and so on). The on-chip bitmap arrays and the off-chip port arrays are built from the trie after leaf pushing. In this way levels 16 and 24 each have three tables: a bitmap array (B16, B24), a port array (N16, N24) and an offset array (C16, C24). The offset array records the offset address into the port array of the next level, which avoids the collision problems introduced by hashing. As a result of the two-dimensional splitting, only B16 and B24 are placed on chip. The update procedure of this method is essentially the same as that of classical leaf pushing and is therefore not described further.
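A possible lookup flow over these six tables is sketched below in C. The text above fixes only which tables exist and that B16 and B24 are kept on chip; the exact encoding used here, with C16[i] holding the 256-entry chunk of the level-24 arrays that node i expands into, and C24 likewise pointing to a level-32 chunk, is an assumption made so that the sketch is self-consistent, not the patent's definition.

#include <stdint.h>

extern uint8_t  B16[1 << 16];   /* on-chip bitmap, level 16 (one entry per node, shown as a byte) */
extern uint8_t  B24[1 << 24];   /* on-chip bitmap, level 24                                       */
extern uint8_t  N16[1 << 16];   /* port array, level 16: next hop if the node is a leaf           */
extern uint32_t C16[1 << 16];   /* offset array, level 16: chunk of the level-24 arrays           */
extern uint8_t  N24[1 << 24];   /* port array, level 24                                           */
extern uint32_t C24[1 << 24];   /* offset array, level 24: chunk of the level-32 array            */
extern uint8_t  N32[];          /* off-chip port array, level 32                                  */

static uint8_t sail_l_lookup(uint32_t ip)
{
    uint32_t i16 = ip >> 16;                              /* first 16 bits      */
    if (B16[i16])                                         /* leaf at level 16   */
        return N16[i16];
    uint32_t i24 = (C16[i16] << 8) | ((ip >> 8) & 0xFF);  /* descend one chunk  */
    if (B24[i24])                                         /* leaf at level 24   */
        return N24[i24];
    uint32_t i32 = (C24[i24] << 8) | (ip & 0xFF);         /* off-chip, level 32 */
    return N32[i32];
}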
4. SAIL_M method
When the SAIL_L method is applied to virtual routing tables, it is referred to as SAIL_M. Multiple routing tables are first merged into one, and leaf pushing is performed during the merge, i.e. leaves are pushed to levels 16 and 24. The advantage of merging is that a great deal of memory is saved and the on-chip memory has a very small upper bound. As shown in Fig. 4, trie1 and trie2 become an overlay trie after merging. In one embodiment, the SAIL_L method is applied to the overlay trie, with B16 and B24 still placed on chip.
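The overlay idea can be pictured with the following sketch: the merged trie is built over the union of all virtual routing tables, so the on-chip bitmaps describe a single trie whose size is bounded by 2^16 + 2^24 bits regardless of how many tables are merged, while each leaf keeps one next hop per virtual router. The field names and the per-router indexing below are assumptions for illustration.

#include <stdint.h>

#define MAX_VROUTERS 16                 /* illustrative bound on the number of virtual tables */

/* Off-chip leaf record of the overlay trie: one next-hop port per virtual router. */
struct overlay_leaf {
    uint8_t next_hop[MAX_VROUTERS];     /* next_hop[v] = port for virtual router v, 0 = none */
};

/* A lookup proceeds exactly as in SAIL_L over the overlay trie and finishes by selecting
 * leaf->next_hop[v], where v identifies the virtual router handling the packet. */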
Based on the above routing table storage method, according to one embodiment of the present invention, a routing table lookup method is also provided.
Taking an IPv4 routing-table trie as an example, the lookup starts from the level-24 bitmap: the first 24 bits of the IP address, interpreted as an integer, give the position to examine in the level-24 bitmap. If that bit is 0, the next-hop information lies at levels 0-23; the level-23 bitmap is then examined, and if the corresponding bit is again 0, level 22 is examined, and so on, until the corresponding bit of some level's bitmap is 1; the port array of that level is then queried at that position and the next hop is read. If the bit of the level-24 bitmap corresponding to the first 24 bits of the IP address is 1, the next hop lies at levels 24-32; the level-24 port array is looked up, and if the corresponding element is non-zero it is the next-hop port; otherwise the next hop lies at levels 25-32 and the off-chip data structure is searched. For levels 25-32, since an offset array or hash table is used in off-chip memory, the usual offset-array or hash-table lookup method is applied.
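A minimal C sketch of this lookup (the SAIL_B variant) is given below, reusing the illustrative bitmap[] and port[] arrays declared earlier; lookup_deep() stands for the off-chip hash-table or offset-array search of levels 25-32, and all identifiers remain assumptions for illustration.

#include <stdint.h>

extern uint8_t *bitmap[25];    /* bitmap[i]: 2^i bits, one per level-i node (on-chip) */
extern uint8_t *port[25];      /* port[i]:   2^i next-hop ports (off-chip)            */
extern uint8_t  lookup_deep(uint32_t ip);   /* off-chip structure for levels 25..32   */

static int bit_at(const uint8_t *bm, uint32_t idx)   /* MSB-first bit addressing      */
{
    return (bm[idx >> 3] >> (7 - (idx & 7))) & 1;
}

static uint8_t ip_lookup(uint32_t ip)
{
    uint32_t idx = ip >> 8;                           /* first 24 bits of the address    */
    if (bit_at(bitmap[24], idx)) {                    /* next hop is at levels 24..32    */
        uint8_t p = port[24][idx];
        return p ? p : lookup_deep(ip);               /* 0 after the pivot push => deeper */
    }
    for (int level = 23; level >= 0; level--) {       /* next hop is at levels 0..23     */
        idx = (level == 0) ? 0 : (ip >> (32 - level));
        if (bit_at(bitmap[level], idx))
            return port[level][idx];
    }
    return 0;                                         /* no matching prefix              */
}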
With any of the above four routing table storage and lookup methods, since off-chip memory is very large and cheap, two copies of the off-chip data structures can be used: one copy serves lookups while the other is updated, and the two are exchanged and synchronized periodically. As for on-chip updates, since the on-chip part of the trie is not transformed, any single update changes at most one bit.
To verify the effectiveness of the routing table storage and lookup methods provided by the present invention, the inventors tested each of the above embodiments; the experimental results are shown in Table 1.
Table 1
Method | On-chip memory size | On-chip lookup performance | On-chip update performance | Off-chip lookup performance
SAIL_B | = 4 MB | at most 25 memory accesses | at most 1 memory access | at most 2 memory accesses
SAIL_U | ≤ 2.03 MB | at most 4 memory accesses | at most 1 memory access | at most 2 memory accesses
SAIL_L | ≤ 2.13 MB | at most 2 memory accesses | -- | at most 2 memory accesses
SAIL_M | ≤ 2.13 MB | at most 2 memory accesses | -- | at most 2 memory accesses
Table 1 compares the performance of the four embodiments. The on-chip memory of all four methods is at most 4 MB, smaller than existing on-chip memory sizes. Since off-chip memory accesses are slow, the bottleneck of lookup performance lies off chip, and the worst-case off-chip lookup is 2 memory accesses for all four methods. The routing table lookup methods provided by the present invention therefore achieve O(1) on-chip memory and O(1) lookup time simultaneously.
Furthermore, since the method provided by the present invention uses only array accesses, and the only operations involved are basic additions, subtractions and AND operations, the present invention is cross-platform. The inventors implemented and tested the SAIL_L method on four platforms; the CPU test results show that the routing table lookup method provided by the present invention is 7 to 50 times faster than classical methods.
It should be understood that although this specification is described in terms of individual embodiments, not every embodiment contains only one independent technical solution; this manner of presentation is adopted only for clarity. Those skilled in the art should take the specification as a whole, and the technical solutions in the various embodiments may be suitably combined to form other embodiments that can be understood by those skilled in the art.
The foregoing is only a schematic description of embodiments of the present invention and is not intended to limit the scope of the present invention. Any equivalent variations, modifications and combinations made by those skilled in the art without departing from the concept and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (11)

1. A routing table storage method, comprising:
placing the data structures of levels 0 to n of a routing-table trie that are used to determine prefix length in on-chip memory;
placing the data structures of levels 0 to m that are used to look up the next hop in off-chip memory; and
placing the data structures of levels n+1 to m that are used to determine prefix length in off-chip memory; wherein m+1 is the number of levels of the routing-table trie, n < m, and n and m are positive integers.
2. The method according to claim 1, wherein, for an IPv4 routing-table trie, n = 24.
3. The method according to claim 1 or 2, wherein the data structures of levels 0 to n of the routing-table trie that are used to determine prefix length are n+1 bitmaps, each bitmap corresponding to one level of the trie; and the data structures of levels 0 to n that are used to look up the next hop are n+1 arrays, each array corresponding to one level of the trie;
wherein the bitmap corresponding to each of levels 0 to n indicates the prefixes present at the corresponding level of the trie, and indicates the position of each prefix's next-hop information in the array corresponding to that level.
4. The method according to claim 3, wherein the bitmap and the array corresponding to level i each have 2^i entries, with i = 0, ..., n; in the bitmap corresponding to level i, each 1 bit marks one prefix of that level of the trie and the remaining bits are 0; in the array corresponding to level i, the element at the same position as a prefix's 1 bit in the level-i bitmap indicates the next-hop port number of that prefix.
5. The method according to claim 3, further comprising performing a pivot push on the internal nodes of level n according to the following steps:
for an internal node of level n of the routing-table trie that has next-hop information: if it has only one child node, generating its other child node at level n+1 and using the next-hop information of the internal node as the next-hop information of the generated child; if it has two child nodes, assigning the next-hop information of the internal node to any child whose next hop is empty; the next hop of the internal node then being empty;
for an internal node of level n of the routing-table trie that has no next-hop information: obtaining the next-hop information of its nearest ancestor that has next-hop information; if it has only one child node, generating its other child node at level n+1 and using the next-hop information of the ancestor as the next-hop information of the generated child; if it has two child nodes, assigning the next-hop information of the ancestor to any child whose next hop is empty.
6. The method according to claim 5, wherein, for an internal node or leaf node of level n of the routing-table trie after the pivot push, the corresponding bit of the level-n bitmap is 1 and the other bits are 0; for a prefix indicated by an internal node of level n, the corresponding element of the level-n array is set to 0, and the other elements of that array are the same as before the pivot push.
7. The method according to claim 3, wherein, for an IPv4 routing-table trie, the trie is pushed to levels 6, 12, 18, 24 and 32; and
a pivot push is performed on the internal nodes of level 24.
8. The method according to claim 3, wherein, for an IPv4 routing-table trie, the trie is pushed to levels 16 and 24, and leaf pushing is performed.
9. The method according to claim 3, wherein, for multiple IPv4 routing-table tries, the multiple routing-table tries are merged and leaf pushing is performed, yielding an overlay trie; and
the overlay trie is pushed to levels 16 and 24, and leaf pushing is performed.
10. The method according to claim 1 or 2, wherein the data structures of levels n+1 to m that are used to determine prefix length and to look up the next hop are hash tables or offset arrays.
11. A routing table lookup method based on the routing table storage method according to any one of claims 1 to 10, comprising:
step 1): starting from the bitmap corresponding to level n, locating a position in the bitmap according to the first n bits of the IP address; if the bit at that position is 0, setting n = n-1 and repeating the locating process until the located bit in some bitmap is 1, then looking up the array corresponding to the current level at that position to obtain the next-hop port number; performing step 2) if the bit is 1;
step 2): looking up the array corresponding to level n at that position; obtaining the next-hop port number if the corresponding element is non-zero; otherwise, searching the off-chip offset array or hash table to obtain the next-hop port number.
CN201510081214.0A 2015-02-15 2015-02-15 Routing table storage and lookup method Active CN104679854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510081214.0A CN104679854B (en) 2015-02-15 2015-02-15 Routing table storage and lookup method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510081214.0A CN104679854B (en) 2015-02-15 2015-02-15 Routing table storage and lookup method

Publications (2)

Publication Number Publication Date
CN104679854A CN104679854A (en) 2015-06-03
CN104679854B (en) 2018-01-26

Family

ID=53314896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510081214.0A Active CN104679854B (en) 2015-02-15 2015-02-15 Routing table stores and lookup method

Country Status (1)

Country Link
CN (1) CN104679854B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106656816B (en) * 2016-09-18 2019-09-24 首都师范大学 Distributed ipv6 method for searching route and system
CN110995876B (en) * 2019-10-11 2021-02-09 中国科学院计算技术研究所 Method and device for storing and searching IP
CN111461751B (en) * 2020-04-02 2024-03-29 武汉大学 Real estate information chain organization method based on block chain, historical state tracing method and device
CN113343034A (en) * 2021-06-08 2021-09-03 湖南大学 IP searching method, system and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1504912A (en) * 2002-12-05 2004-06-16 �Ҵ���˾ Performance and memory bandwidth utilization for tree searches using tree fragmentation
CN101577662A (en) * 2008-05-05 2009-11-11 华为技术有限公司 Method and device for matching longest prefix based on tree form data structure
CN102333036A (en) * 2011-10-17 2012-01-25 中兴通讯股份有限公司 Method and system for realizing high-speed routing lookup
CN103530337A (en) * 2013-09-30 2014-01-22 北京奇虎科技有限公司 Device and method for recognizing invalid parameters in URL


Also Published As

Publication number Publication date
CN104679854A (en) 2015-06-03

Similar Documents

Publication Publication Date Title
JP5529976B2 (en) Systolic array architecture for high-speed IP lookup
US8780926B2 (en) Updating prefix-compressed tries for IP route lookup
US7668890B2 (en) Prefix search circuitry and method
US6522632B1 (en) Apparatus and method for efficient prefix search
CN104679854B (en) Routing table storage and lookup method
Waldvogel et al. Scalable high-speed prefix matching
US8325721B2 (en) Method for selecting hash function, method for storing and searching routing table and devices thereof
JP5960863B1 (en) SEARCH DEVICE, SEARCH METHOD, PROGRAM, AND RECORDING MEDIUM
US20070177512A1 (en) Method of Accelerating the Shortest Path Problem
Le et al. Scalable tree-based architectures for IPv4/v6 lookup using prefix partitioning
CN103051543A (en) Route prefix processing, lookup, adding and deleting method
WO2011124030A1 (en) Method and device for storing routing table entry
WO2014047863A1 (en) Generating a shape graph for a routing table
Pao et al. IP address lookup using bit-shuffled trie
US20130117504A1 (en) Embedded memory and dedicated processor structure within an integrated circuit
CN110995876B (en) Method and device for storing and searching IP
CN105824927B (en) Domain name matching method based on tree automata
Yang et al. High performance IP lookup on FPGA with combined length-infix pipelined search
JP4726310B2 (en) Information retrieval apparatus, information retrieval multiprocessor and router
Pao et al. Bit-shuffled trie: IP lookup with multi-level index tables
Park et al. An efficient IP address lookup algorithm based on a small balanced tree using entry reduction
Erdem et al. Value-coded trie structure for high-performance IPv6 lookup
Jiang et al. Towards green routers: Depth-bounded multi-pipeline architecture for power-efficient IP lookup
JP6205463B2 (en) SEARCH DEVICE, SEARCH METHOD, PROGRAM, AND RECORDING MEDIUM
JP2020017147A (en) Packet search device, packet search method, and packet search program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant